Recognizing Neighborhood Satisfaction: Significant Dimensions and Assessment Factors


This study examines the relationship between neighborhood attributes and residents’ satisfaction with them in order to evaluate overall neighborhood satisfaction. The concept of neighborhood has been severely blurred, if not lost, as a result of the development practices of recent decades, so the research must first settle on a definition of neighborhood. It then examines the concept of satisfaction and its meaning at the neighborhood scale. Since neighborhood satisfaction refers to residents’ overall evaluation of their neighborhood, and the dimensions of satisfaction comprise the aspects, characteristics, and features of the residential environment, the essay concludes by introducing, in several categories, the factors that influence neighborhood satisfaction.

Keywords: neighborhood, satisfaction, neighborhood satisfaction, dimensions of neighborhood satisfaction


Neighborhoods are the localities in which people live and are an appropriate scale of analyzing local ways of living. They can have an enormous influence on our health, wellbeing, and quality of life (Hancock 1997; Barton 2000; Srinivasan, O’Fallon, and Dearry 2003; Barton, Grant and Guise 2003).

Urban neighborhoods were once thriving communities with a variety of residents. Although racial segregation was prevalent in the majority of neighborhoods, many communities offered economic diversity (Bright, 2000). In the industrial era, they could be characterized as early establishments of quaint villages or, in some instances, attractive old suburbs of the cities. As cities grew and annexed these communities, they continued to thrive as a homogeneous part of the city, resulting in a habitat of diverse choices and opportunities. However, as the economy changed, they experienced decline and reduced attention. The phenomenon called suburbanization, and later “edge cities”, made center cities less attractive, at least for living in the urban neighborhoods. Just as there were policies that created this situation, there were also efforts to sustain interest in neighborhoods. Despite the efforts at revitalization, though, the neighborhoods continue to be in distress. The process of continued decline points out the deficiencies in the approaches and programs (Vyankatesh, 2004: 22-23).

 Islamic Azad University, Salmas Branch

A good neighborhood is often described as a healthy, quiet, widely accessible, and safe community for its residents. Neighborhood satisfaction refers to residents’ overall evaluation of their neighborhood, and researchers from many disciplines have examined it. A neighborhood, however, is more than just a physical unit: one chooses a housing unit after careful consideration of the many factors that comprise the surrounding environment. Desirability of a neighborhood is determined by factors such as location relative to jobs, shopping, and recreation; accessibility; availability of transportation; and “quality of life”.

We aim to discover the factors that influence residents’ satisfaction with their neighborhoods. The basic question is as follows:

Which neighborhood elements influence satisfaction, and how do they do so?

Literature Review

Literature on Neighborhoods

Neighborhood Settings

Ebenezer Howard (1898) based his design of the Garden City on neighborhood units: relatively self-sufficient units that merged into a larger whole. While Howard’s idea focused on the suburbs, Clarence Perry (1929) applied it within the city. His neighborhood unit was a self-contained residential area bounded by major streets, with shopping districts at the periphery and a community center and an elementary school at the center of the unit. Clarence Stein (1966) adapted Perry’s ideal concept in the design of Radburn. It also placed an elementary school at the center, with park spaces flowing through the neighborhood, but it was larger than Perry’s concept and introduced the residential street design with cul-de-sacs to eliminate through traffic.

After World War II, massive suburbs developed and the concept of the neighborhood as a basic unit of land development changed. Since 2000, New Urbanists have called for traditional neighborhood development (TND) and transit-oriented development (TOD) models. They propose a neighborhood unit with a center and a balanced mix of activities, and they give priority to the creation of public space.

Defining Neighborhood

The literature defines neighborhood in many ways. While there is little broad agreement on the concept of neighborhood, “few geographers would contradict the idea that neighborhood is a function of the inter-relationships between people and the physical and social environments” (Knox & Pinch, 2000, p. 8). Brower (1996) explains that its form is derived from a particular pattern of activities, the presence of a common visual motif, an area with continuous boundaries, or a network of often-traveled streets. Soja (1980, p. 211) coined the term sociospatial dialectic for this phenomenon, in which “people create and modify urban spaces while at the same time being conditioned in various ways by the spaces in which they live and work.” Research appears to use multiple definitions of a neighborhood simultaneously, reflecting the fact that neighborhood is not a static concept but a dynamic one (Talen & Shah, 2007).

Park states that “Proximity and neighborly contact are the basis for the simplest and most elementary form of association which we have in the organization of city life. Local interests and associations breed local sentiment, and, under a system which makes residence the basis for participation in the government, the neighborhood becomes the basis of political control … it is the smallest local unit … The neighborhood exists without formal organization” (Park, 1925, p. 7).

Keller emphasizes boundaries, social character, unity or belonging, and local facility use. He states: “The term neighbourhood … refers to distinctive areas into which larger spatial units may be subdivided such as gold coast and slums … middle class and working class areas. The distinctiveness of these areas stems from different sources whose independent contributions are difficult to assess: geographical boundaries, ethnic or cultural characteristics of the inhabitants, psychological unity among people who feel that they belong together, or concentrated use of an area’s facilities for shopping, leisure and learning… Neighborhoods containing all four elements are very rare in modern cities … geographical and personal boundaries do not always coincide” (Keller, 1968, p. 87).

Wilkenson’s definition of neighborhood rests on place-oriented process, partial social relations, and shared interests: “Community is not a place, but it is a place-orientated process. It is not the sum of social relationships in a population but it contributes to the wholeness of local social life. A community is a process of interrelated actions through which residents express their shared interest in the local society” (Wilkenson, 1989, p. 339). Kitagawa and Taeubeur, by contrast, emphasize area history, name, local awareness, local organizations, and local business. They argue that “When community area boundaries were delimited… the objective was to define a set of sub-areas of the city each of which could be regarded as having a history of its own as a community, a name, an awareness on the part of its inhabitants of community interests, and a set of local businesses and organizations orientated to the local community” (Kitagawa and Taeubeur, 1963, p. xiii).

Glass holds that physical and social characteristics together define a territorial group, which he calls a neighborhood: “A neighbourhood is a distinct territorial group, distinct by virtue of the specific physical characteristics of the area and the specific social characteristics of the inhabitants” (Glass, 1948, p. 18).

Research commissions, as well as individual authors, have offered their own definitions of neighborhood. The US National Research Commission on Neighborhoods and the US National Research Council define it as follows:

 “A community consists of a population carrying on a collective life through a set of institutional arrangements. Common interests and norms of conduct are implied in this definition” (US National Research Commission on Neighborhoods, 1975, p. 2).

 “In last analysis each neighborhood is what the inhabitants think it is. The only genuinely accurate delimitation of neighborhood is done by people who live there, work there, retire there, and take pride in themselves as well as their community” (US National Research Council, 1975, p. 2).

Forrest and Kearns (2004, p. 2126) examine the concept of neighborhood in an increasingly globalizing society and note the impact of the information/technological age on it: “new virtuality in social networks and a greater fluidity and superficiality in social contact are further eroding the residual bonds of spatial proximity and kinship.”

Different definitions serve different interests, so that the neighborhood may be seen as a source of place-identity, an element of urban form, or a unit of decision making. This codependence between the spatial and social aspects of neighborhood is arguably one of the main reasons why the concept is so difficult to define.

Categorizing Neighborhood

Blowers conceptualizes neighborhood not as a static spatial entity but as existing along a continuum yielding five neighborhood types (Figure 1). Proceeding left to right along the continuum, additional characteristics or dimensions are cumulatively added, yielding more complex neighborhoods:

Figure 1 – The Neighborhood Continuum (Blowers 1973)

1. Arbitrary neighborhood: Blowers describes these neighborhoods as having “no integrating feature other than the space they occupy.” These districts have few homogeneous qualities and exhibit low social interaction (Blowers, 1973: p.55).

2. Physical neighborhood: In contrast to the ill-defined boundaries of the arbitrary neighborhood, the boundaries of physical neighborhoods are delineated by natural or built barriers such as major roads, railways, waterways, or large tracts of non-residential land use (e.g., industrial parks, airports, etc.). The inhabitants residing within the boundaries of a physical neighborhood may share few characteristics in common; Blowers cautions that occupying the same physical area does not automatically imply a high degree of social interaction (Butler, 2008: 8).

3. Homogeneous neighborhood: These are the most familiar type in Blowers’ typology: they have distinct spatial boundaries, and their residents share common demographic, social, or class characteristics.

4. Functional neighborhood: Blowers describes these as “areas within which activities such as shopping, education, worship, leisure, and recreation take place.” Like any functional region in geography, they are organized around a central node, with the surrounding area linked to it through activities, service interchanges, and associations (Blowers, 1973, p. 59).

5. Community neighborhood: Blowers sees the community neighborhood as a “close-knit, socially homogeneous, territorially defined group engaging in primary contacts” (Blowers, 1972, p. 60). Chaskin defines neighborhood as “clearly a spatial construction denoting a geographical unit in which residents share proximity and the circumstances that come with it… communities are units in which some set of connections is concentrated, either social connections (as in kin, friend or acquaintance networks), functional connections (as in the production, consumption, and transfer of goods and services), cultural connections (as in religion, tradition, or ethnic identity), or circumstantial connections (as in economic status or lifestyle)” (Chaskin, 1997, p. 522). Blowers (1972, p. 61) contends that the community neighborhood can be seen as a culmination of the preceding neighborhood types on the continuum, stating that “the distinctiveness of the geographical environment, the socio-economic homogeneity of the population, and the functional interaction that takes place will contribute to the cohesiveness of the community neighbourhood.”

Some studies propose other classifications of neighborhoods. For instance, Ladd (1970), Lansing and Marans (1969), Lansing et al. (1970), Marans (1976), and Zehner (1971) introduce micro- and macro-neighborhoods based on walkability. They agree that a neighborhood should comprise a walkable distance, although the actual walkable distance considered has varied from a quarter-mile to one mile from center to edge (Calthorpe, 1993; Choi et al., 1994; Colabianchi et al., 2007; Congress for the New Urbanism, 2000; Hoehner et al., 2005; Hur & Chin, 1996; Jago, Baranowski, Zakeri, & Harris, 2005; Lund, 2003; Perry, 1939; Pikora et al., 2002; Stein, 1966; Talen & Shah, 2007; Western Australian Planning Commission, 2000). The micro-neighborhood is the area a resident can see from his or her front door, that is, the five or six homes nearest the house. Similarly, Appleyard (1981) used the term home territory. He examined residents’ conceptions of personal territory on three streets with different levels of traffic hazard. Residents drew their territorial boundaries at a maximum of a street block (between intersections, with approximately 6-10 buildings on each side) and at a minimum of their own apartment building. Research has shown that the micro-neighborhood concerns social relationships among neighbors more than the physical environment.
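The walkable-distance bands above can be illustrated with a minimal sketch: given a neighborhood center, it checks whether a location falls inside a walkable edge drawn somewhere between the quarter-mile and one-mile definitions. The coordinates, function names, and the choice of the quarter-mile default are illustrative assumptions, not part of the studies cited.

```python
import math

# Walkable-distance band reported in the literature (center to edge),
# converted to meters. The endpoints come from the text; everything
# else in this sketch is an illustrative assumption.
QUARTER_MILE_M = 1609.344 / 4   # ~402 m
ONE_MILE_M = 1609.344           # ~1609 m

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_walkable_edge(center, point, edge_m=QUARTER_MILE_M):
    """True if `point` lies inside the chosen walkable neighborhood edge."""
    return haversine_m(*center, *point) <= edge_m

# Hypothetical points: one ~300 m east of the center (inside the
# quarter-mile edge), one ~2 km north (outside even the one-mile edge).
center = (40.0000, -83.0000)
near = (40.0000, -82.9965)
far = (40.0180, -83.0000)
```

Swapping `edge_m` between the two constants shows how sensitive any neighborhood delineation is to which walkable-distance definition a study adopts.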

In a slight adaptation of Suttles’ (1972) schema, we might say that the neighbourhood exists at three different scales (Table 1):

Table 1. Scales of Neighborhood

Scale | Predominant function | Mechanism(s)
Home area | Psycho-social benefits (for example, identity; belonging) | –
Locality | Residential activities; social status and position | Planning; service provision; housing market
Urban district or region | Landscape of social and economic opportunities | Employment connections; leisure interests; social networks

The smallest unit of neighbourhood, here referred to as the ‘home area’, is typically defined as an area within a 5–10 minute walk of one’s home. Here, we would expect the psycho-social purposes of neighbourhood to be strongest. As shown elsewhere (Kearns et al., 2000), the neighbourhood, in terms of the quality of environment and perceptions of co-residents, is an important element in the derivation of psycho-social benefits from the home. In terms of Brower’s (1996) outline of the ‘good neighbourhood’, the home area can serve several functions, most notably relaxation and re-creation of self; making connections with others; fostering attachment and belonging; and demonstrating or reflecting one’s own values.

Some neighbourhoods and localities (in addition to individuals and groups) can be seen to be subject to discrimination and social exclusion as places and communities (Madanipour et al., 1998; Turok et al., 1999).

Once the urban region (the third level of neighbourhood in Table 1) is viewed as a landscape of social and economic opportunities with which some people are better engaged than others (for example, by reasons of employment, leisure activities or family connections), then the individual’s expectations of the home area can be better understood (Kearns & Parkinson, 2001: 2104-2105).

Researchers have not only described several categories of neighborhoods; stratifications of neighborhood consumers have also been developed. Four distinct types of user potentially reap benefits from the consumption of neighbourhood: households, businesses, property owners and local government. Households consume neighbourhood through the act of occupying a residential unit and using the surrounding private and public spaces, thereby gaining some degree of satisfaction or quality of residential life. Businesses consume neighbourhood through the act of occupying a non-residential structure (store, office, factory), thereby gaining a certain flow of net revenues or profits associated with that venue. Property owners consume neighbourhood by extracting rents and/or capital gains from the land and buildings owned in that location. Local governments consume neighbourhood by extracting tax revenues, typically from owners based on the assessed values of residential and non-residential properties (Galster, 2001: 2113).

Literature on satisfaction

Mesch and Manor (1998) define satisfaction as the evaluation of features of the physical and social environment.

Canter and Rees have argued that people interact with the environment at different levels— from the bedroom to the neighborhood and to the entire city. In their model of housing satisfaction, Canter and Rees (1982) referred to these levels of environment as levels of environmental interaction and defined them as scales of the environment that have a hierarchical order. They specified different levels at which people may experience satisfaction such as the house and the neighborhood. They also argued that the experience of satisfaction is similar and yet distinct at different levels of the environment. Similarly, Oseland (1990) and Gifford (1997, p. 200) stressed that other responses such as the experience of space and privacy also vary in different rooms in a home. Oseland’s study supported the hypothesis that users’ conceptualization of space depends on the location of the space. Some models of residential satisfaction (Weidemann & Anderson, 1985; and Francescato, Wiedemann, & Anderson, 1989) have also suggested that it is important to consider different levels of environment in the study of satisfaction.

Some studies, however, have examined how residential satisfaction varies at different levels of the environment (Paris & Kangari, 2005; Mccrea, Stimson, & Western, 2005). Most of these studies have examined residential satisfaction at two or three levels, namely the housing unit and the neighborhood. For example, Mccrea et al. (2005) examined residential satisfaction at three levels: the housing unit, the neighborhood, and the wider metropolitan region. Although the manner in which levels of environment have been defined in these studies has depended on the context of the research and the interests of the researcher, the most common levels have been the housing unit and the neighborhood (Amole, 2009: 867).


Neighborhood satisfaction

What is a good neighborhood? A common answer describes it as a healthy, quiet, widely accessible, and safe community for its residents, wherever they may live, in the suburbs or in the city. However, Brower (1996) argues that a good neighborhood is not an ideal neighborhood but a place with minimal problems and defects. Practically, a neighborhood is defined by the psychology of its four types of consumers, described above: households, businesses, property owners, and local government. The boundaries drawn are often based on these and other factors such as history, politics, geography, and economics.

Relative homogeneity of socioeconomic character, historical conditions such as annexations, the political boundaries of wards and councils, and division by natural geographic features, rail lines, or streets all count in deciding the ‘goodness’ of a neighborhood (Vyankatesh, 2004: 20).

Neighborhood satisfaction refers to residents’ overall evaluation of their neighborhood. Researchers from many disciplines have examined neighborhood satisfaction (Amerigo, 2002; Amerigo & Aragones, 1997; Carvalho et al., 1997; Francescato, 2002; Hur & Morrow-Jones, 2008; Lipsetz, 2001; Marans, 1976; Marans & Rodgers, 1975; Mesch & Manor, 1998; Weidemann & Anderson, 1985). They have used a variety of terms for it, such as residential satisfaction, community satisfaction, or satisfaction with residential communities (Amerigo & Aragones, 1997; Cook, 1988; Lee, 2002; Lee et al., 2008; Marans & Rodgers, 1975; Miller et al., 1980; Zehner, 1971) (Hur, 2008a: 8).

High neighborhood satisfaction increases households’ sense of community and vice versa (Brower, 2003; Mesch & Manor, 1998). Studies often mention that residential and neighborhood satisfaction also influences people’s intentions to move (Brower, 2003; Droettboom, McAllister, Kaiser, & Butler, 1971; Kasl & Harburg, 1972; Lee, Oropesa, & Kanan, 1994; Nathanson, Newman, Moen, & Hiltabiddle, 1976; Newman & Duncan, 1979; Quigley & Weinberg, 1977). High satisfaction among residents encourages them to stay on and induces others to move in, and low satisfaction with the neighborhood environment urges current residents to move out. Marans and Rodgers (1975) and Marans and Spreckelmeyer (1981) find that the relationship between neighborhood satisfaction, decisions to move, and quality of life is a sequential process, with neighborhood satisfaction predicting mobility and mobility affecting quality of life (Hur, 2008b: 620).

Francescato et al. (1989) noted that “the construct of residential satisfaction can be conceived as a complex, multidimensional, global appraisal combining cognitive, affective, and conative facets, thus fulfilling the criteria for defining it as an attitude” (p. 189).

Dimensions of Neighborhood Satisfaction

Dimensions of satisfaction are similar at the different levels of the environment. The term “dimensions of satisfaction” refers to the aspects, characteristics, and features of the residential environment (such as design aspects, social characteristics, facilities provided, or management issues) to which the users respond in relation to satisfaction (Francescato, 2002). This is important because it would inform researchers about the important dimensions and relevant research questions at different levels of the environment.

A neighborhood is thus more than just a physical unit. One chooses to live in a housing unit after careful consideration of the many factors that comprise the surrounding environment. Desirability of a neighborhood is determined by factors such as location relative to jobs, shopping, and recreation; accessibility; availability of transportation; and “quality of life”, however ambiguous that term may be, expressed in countless ways in terms of public and private services: sewer, water, police, schools, neighbors, entertainment facilities, etc. (Ahlbrandt and Brophy, 1975). Availability of housing of a desirable type is yet another factor influencing the choice of neighborhood; desired lot sizes and architectural styles play their role in the choice. These livability features hold a key to the future viability of a neighborhood (Vyankatesh, 2004: 22).

Residents of neighborhoods where most homeowners are satisfied focus on different aspects of their neighborhoods than residents of neighborhoods where most are dissatisfied; we therefore hypothesize that the two neighborhood groups differ in the features that affect neighborhood satisfaction.

The findings of neighborhood satisfaction research are sometimes contradictory because of the compound nature of “satisfaction.”

Since neighborhood characteristics vary, there are spatial differences in satisfaction across areas. Length of residence, amount of social interaction, satisfaction with traffic, and satisfaction with appearance or aesthetics are also important variables in neighborhood satisfaction. The following complex characteristics of neighborhood satisfaction are addressed in our research:

Where Residents Live

Research has found that different circumstances affect neighborhood satisfaction depending on where the residents live (Cook, 1988; Hur & Morrow-Jones, 2008; Zehner, 1971). For example, Zehner (1971) examined residents’ neighborhood satisfaction in new towns and in less planned areas. New town residents were more likely to mention attributes of the larger area and its physical factors, while residents of less planned towns focused on micro-residential features, with emphasis on the social characteristics of the neighborhood (Hur, 2008a: 17).

Socio-Demographic Characteristics

A number of studies indicate the importance of sociodemographic characteristics for neighborhood satisfaction. They have found positive influences of longer tenure in the neighborhood (Bardo, 1984; Galster, 1987; Lipsetz, 2001; Potter & Cantarero, 2006; Speare, 1974) and of homeownership (Lipsetz, 2001). Young, educated, and wealthy urban residents were found to be more satisfied than others (Miller et al., 1980). St. John (1984a, 1984b, 1987) found no evidence of racial differences in neighborhood evaluation, but Morrow-Jones, Wenning, and Li (2005) found that satisfaction with a community’s racial homogeneity is another predictor of residential satisfaction.

Social Factors in Neighborhood

Social and psychological ties to a place, such as having friends or family living nearby (Brower, 2003; Lipsetz, 2001; Speare, 1974), are an important social factor in neighborhood satisfaction. Brower (2003) finds that having friends and relatives living nearby increases neighborhood satisfaction; Lipsetz (2000), on the other hand, finds that it has a largely negative effect on urbanites’ satisfaction but no effect on that of suburbanites.

The findings agree that residents were satisfied when they considered their neighbors friendly, trusting, and supportive. Reported satisfaction was higher when people talked to their neighbors often and supported each other formally and informally, especially among residents who had lived in the neighborhood longer (Potter & Cantarero, 2006).

Alongside these positive social interaction factors, factors that decrease neighborhood satisfaction include the crime rate and social incivilities such as harassing neighbors, teenagers hanging out, noise, fighting, and arguing.

Physical Factors in Neighborhood

I. Physical environmental characteristics

Planners can shape a neighborhood’s physical features more directly, and policy can act on physical features effectively. Yet although planners stress the importance of physical characteristics, residents consider social factors more important in judging a neighborhood (Lansing & Marans, 1969).

Research often finds physical characteristics a strong influence on neighborhood satisfaction compared to social or economic characteristics (Sirgy & Cornwell, 2002). Neotraditional and New Urbanist approaches focus on physical features as a medium to decrease dependence on the automobile, foster pedestrian activity, and provide opportunities for interaction among residents (Marans & Rodgers, 1975; Rapoport, 1987).

Research has considered several physical environmental features. Some relate directly to neighborhood satisfaction, and others connect to factors that may link to it. Hur (2008a) categorizes physical environmental characteristics into three types:

1. Physical disorder (incivilities):

Physical disorder promotes fear of crime, makes people want to leave the area, and diminishes residents’ overall neighborhood satisfaction. Physical incivilities can be grouped into three kinds:

• Fixed-feature elements (such as a vacant house or dilapidated building): fixed-feature elements “change rarely and slowly” (Rapoport, 1982, p. 88). Individual housing and the building lot are fixed-feature elements of the neighborhood.

• Semi-fixed feature elements (such as graffiti and broken features on public property): semi-fixed feature elements “can, and do, change fairly quickly and easily” (p. 89), and, Rapoport says, they “become particularly important in environmental meaning…where they tend to communicate more than fixed-feature elements” (p. 89).

• Non-fixed (movable) elements (such as, litter and abandoned cars): Rapoport (1982) also suggested non-fixed feature elements, which include people and their nonverbal behaviors (p. 96).

2. Defensible space features:

“Defensible Space” is a program that “restructures the physical layout of communities to allow residents to control the areas around their homes” (U.S. Department of Housing and Urban Development, 1996, p. 9). The program supports actions that foster territoriality, natural surveillance, a safe image, and a protected milieu:

• Foster territoriality: Territoriality involves territorial symbols such as yard barriers (G. Brown et al., 2004; Perkins et al., 1993), block watch signs, security alarm stickers, and evidence of dogs (Perkins et al., 1993). Although these may reduce crime and fear of crime, research has not examined their connection to residents’ neighborhood satisfaction. Litter and graffiti, which are also incivilities, affect image and milieu.

• Natural surveillance: Natural surveillance involves windows facing the street and places to sit outside (front porches). These provide eyes on the street (B. Brown et al., 1998; MacDonald & Gifford, 1989; Perkins et al., 1992, 1993), give residents opportunities for informal contact with neighbors that helps form local ties (Bothwell, 1998; B. Brown et al., 1998; Plas & Lewis, 1996), and send non-verbal messages of monitoring (Easterling, 1991; Taylor & Brower, 1985). Research has reported that streets less visible from neighboring houses had more crime (G. Brown et al., 2004; Perkins et al., 1993), indicating the importance of surveillance in a neighborhood. Despite its significance, Bothwell et al. (1998) is the only study to examine natural surveillance as an influence on neighborhood satisfaction. It showed how public housing residents in Diggs Town became known to each other, restored a sense of belonging, and built strong neighborhood satisfaction via front porches.

• A safe image: The safe image conveys an impression of a safe and invulnerable neighborhood. If the image is negative, “the project will be stigmatized and its residents castigated and victimized” (Newman, 1972, p. 102).

• A protected milieu: A safe milieu is a neighborhood situated in the middle of a wider crime-free area, which is thus insulated from the outside world by a moat of safety (Burke, 2005, p.202).

3. Built or natural characteristics:

The third type of physical environmental feature is the degree to which a place looks built or natural. Studies have measured residential density, land use, and vegetation. Lansing et al. (1970) was the only study to examine density-related characteristics (e.g., frequency of hearing neighbors and privacy in the yard from neighbors) in relation to neighborhood satisfaction, but those elements were more social than physical and thus may capture physical density only indirectly. Lee et al. (2008) found that residents’ neighborhood satisfaction was associated with natural landscape structure: tree patches in the neighborhood environment that were less fragmented, less isolated, and well connected positively influenced neighborhood satisfaction. Some research has examined associations among multiple attributes. Ellis et al. (2006) looked at relationships between land use, vegetation, and neighborhood satisfaction: while the amount of nearby retail land use correlated negatively with neighborhood satisfaction, the amount of trees moderated the negative effect (Hur, 2008a: 19-22).

II. Perceived and evaluative physical environmental characteristics

One set of studies identifies physical appearance as the most important factor for increasing neighborhood satisfaction and quality of life (Kaplan, 1985; Langdon, 1988, 1997; Sirgy & Cornwell, 2002). Nasar’s (1988) survey of residents and visitors found that their visual preferences related to five likable features: naturalness, upkeep/civilities, openness, historic significance, and order. People liked the visual quality of areas that had those attributes and they disliked the visual quality of areas that did not have them. Newly arrived residents point out that physical appearance is the most important factor for residential satisfaction, but long-time residents mention stress factors (e.g., tension with neighbors, level of income of the neighborhood, inability to communicate with others, racial discrimination, crime, etc.) as the most important factors (Potter & Cantarero, 2006).

 Emotional and temporal dimensions of the environmental experience

These are recognized as a component part of the people–environment relationship and therefore residential satisfaction. Residential satisfaction is indeed strongly associated with one’s attachment to the living space.


Several studies have constructed comprehensive models of residential satisfaction. The complex attributes of neighborhoods can be categorized into seven types, each with several characteristics. These are the main features that should be studied, measured, and rated to estimate residents’ satisfaction with their neighborhoods.

We must note that each group of neighborhood satisfaction dimensions has to be considered separately for each of the four types of neighborhood consumers mentioned earlier. The total rank will indicate the neighborhood satisfaction status.

As the result of this essay, we introduce a classification of the satisfaction dimensions. This provides a comprehensive basis for evaluating almost all of the features that influence residential satisfaction at the neighborhood scale.
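To illustrate how such a classification could be scored in practice, the sketch below aggregates factor ratings into dimension scores, per-group scores, and a total rank. The dimension names follow the classification presented here; the consumer group names, the 1-5 rating scale, and the equal weighting are illustrative assumptions, not part of any cited study.

```python
# Hypothetical scoring sketch for the seven satisfaction dimensions.
# Ratings, groups, and equal weights are illustrative assumptions.

DIMENSIONS = [
    "spatial", "physical", "environmental", "sentimental",
    "social", "demographic_economic", "management_political",
]

def dimension_score(ratings):
    """Average the assessment-factor ratings (e.g., on a 1-5 scale) for one dimension."""
    return sum(ratings) / len(ratings)

def group_score(ratings_by_dimension):
    """Satisfaction for one consumer group: mean of its seven dimension scores."""
    scores = [dimension_score(ratings_by_dimension[d]) for d in DIMENSIONS]
    return sum(scores) / len(scores)

def total_rank(groups):
    """Neighborhood satisfaction status: mean score across all consumer groups."""
    return sum(group_score(g) for g in groups.values()) / len(groups)
```

In use, each consumer group supplies a dictionary mapping every dimension to its list of factor ratings, and `total_rank` returns the overall status; a weighted rather than equal aggregation would be a natural refinement.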

The seven types of neighborhood attributes and satisfaction dimensions are presented in the table below (Table 2):

Table 2. Complex Attributes of Neighborhoods

Satisfaction dimensions, with their assessment factors and sub-factors:

1. Spatial characteristics
• Proximity characteristics: access to major destinations of employment (both distance and transport infrastructure)
• Local facility use: local interests; open spaces; access to recreational opportunities; entertainment, shopping, etc.
• Mass and void
• Neighborhood boundaries
• Pedestrian access to stores
• Place-oriented design process

2. Physical characteristics
• Structural characteristics of the residential and non-residential buildings: type; design; state of repair; density; landscaping, etc.
• Infrastructural characteristics: roads; utility services, etc.
• Traffic
• Aesthetics/appearance: naturalness; upkeep/civilities; historic significance
• Density of housing
• Building type: apartment; villa, etc.

3. Environmental characteristics
• Degree of land, topographical features, views, etc.
• Cleanliness
• Climatic design: architecture; wind tunnels; sunny/too hot

4. Sentimental characteristics
• Place identification: historical significance of buildings or district, etc.; length of residence; proximity to problem areas; name/area pride; local awareness
• Living space: new towns; less planned areas
• Place identity: sense of place; sense of belonging to place

5. Social characteristics
• Local friend and kin networks: degree of interhousehold familiarity; type and quality of interpersonal associations
• Residents’ perceived commonality: participation in locally based voluntary associations; strength of socialization and social control forces; social support; racial homogeneity
• Neighborhood cohesion: collective life; interaction with communities; interaction through favors; interaction through social activity; amount of social interaction
• Territorial group: common interests; participation (informal social participation and participation in formal neighborhood organizations); common conduct
• Physical disorder (incivilities): fixed-feature elements (such as a vacant house or a dilapidated building); semi-fixed-feature elements (such as graffiti or broken features on public property); non-fixed (movable) elements
• Defensible space features: fostering territoriality (such as block-watch signs, security-alarm stickers, and evidence of dogs); natural surveillance (such as windows facing the streets and places to sit outside)
• Built or natural characteristics: residential density; land use

6. Demographic-economic characteristics
• Age distribution; family composition; religious types; tenure period/home ownership; wealthy/poor; ratio of owners to renters; marital status; cultural characteristics; children under 18; education composition; local business workers/retired; family or friends nearby; friendly, trusting, and supportive neighbors; crime rate; teenagers hanging out; fighting/arguing

7. Management-political characteristics
• The quality of safety forces; public schools; public administration; parks and recreation, etc.
• Residents’ ability to exert influence in local affairs through spatially rooted channels or elected representatives
• Local government services; local associations; political control; local organizations


References

Amole, Dolapo, 2009, Residential Satisfaction and Levels of Environment in Students’ Residences, Environment and Behavior, Volume 41, No. 6, P 867.
Barton, Hugh, 2000, Sustainable Communities: The Potential for Eco; neighborhoods- Earthscan Publications Ltd.
Blowers, A. (1973). The neighbourhood: exploration of a concept, Open Univ. Urban Dev. Unit 7, Pp 49-90.
Bright, Elise M., 2000, Reviving America’s Forgotten Neighborhoods: An Investigation of Inner City Revitalization Efforts. New York: Garland Publishing, Inc.
Brower, Sidney. (1996). Good neighborhoods: Study of in-town and suburban residential environments. Westport, CT: Praeger Publishers.
Butler, Kevin A., 2008, A Covariance Structural Analysis of a Conceptual Neighborhood Model, A dissertation for the degree of Doctor of Philosophy submitted to Kent State University, P 8.
Canter, D, & Rees, K.A. (1982). Multivariate model of housing satisfaction. International Review of Applied Psychology, 32, Pp 185-208.
Chaskin, Robert J., 1997, Perspectives on Neighborhood and Community: A Review of the Literature, The Social Service
Review, Vol. 71, No. 4, pp. 521-547. p. 522.
Churchman, A. (1999, May). Disentangling the concept of density. Journal of Planning Literature, 13(4), Pp 389-411.
Ellis, C. D., Lee, S. W., & Kweon, B. S. (2006). Retail land use, neighborhood satisfaction and the urban forest: An investigation into the moderating and mediating effects of trees and shrubs. Landscape and Urban Planning, 74, Pp 70-78.
Fleury-Bahi, Ghozlane & Félonneau, Line, 2008, Processes of Place Identification and Residential Satisfaction, Environment and Behavior, Volume 40, No.5, pp 669-682.
Forrest, R., & Kearns, A. (2004). Who Cares About Neighbourhood? Paper presented at the Community, Neighbourhood, Responsibility.
Forrest, Ray & Kearns, Ade, 2001, Social Cohesion, Social Capital and the Neighbourhood, Urban Studies, Vol. 38, No. 12, pp.2125–2143.
Galster, George, 2001, On the Nature of Neighbourhood, Urban Studies, Vol. 38, No. 12, Pp 2113.
Gifford, R. (1997). Environmental psychology: Principles and practices. Boston: Allyn and Bacon, p. 200
Hur, Misun & Morrow-Jones, Hazel, 2008, Factors That Influence Residents’ Satisfaction with Neighborhoods, Environment and Behavior, Volume 40, No. 5, Pp 620.
Hur, Misun, 2008a, Neighborhood satisfaction, physical and perceived characteristics, A dissertation for the degree of Doctor of Philosophy submitted to Ohio State University, Pp 8, 17, 19-22.
Johnson, Philip, 2008, Comparative Analysis of Open-Air and Traditional Neighborhood Commercial Centers, A dissertation for the degree of Master of Community Planning submitted to the University of Cincinnati.
Kaplan, R. (1985). Nature at the doorstep: Residential satisfaction and the nearby environment. Journal of Architectural and Planning Research, 2, Pp 115-127.
Kearns, Ade & Parkinson, Michael, 2001, The Significance of Neighbourhood, Urban Studies, Vol. 38, No. 12, pp. 2103–2110.
Keller, Suzanne, 1968, The Urban Neighborhood: A Sociological Perspective, Random House, p. 87
Ladd, F. C. (1970). Black youths view their environment: Neighborhood maps. Environment and Behavior, 2, Pp 74-99.
Lansing, J. B., Marans, R. W., & Zehner, R. B. (1970). Planned residential environments. Ann Arbor, Michigan: Institute for Social Research, The University of Michigan.
Lee, B. A., Oropesa, R. S., & Kanan, J. W. (1994). Neighborhood context and residential mobility. Demography, 31, Pp 249-270.
Mesch, G. S., & Manor, O. (1998). Social ties, environmental perception, and local attachment. Environment and Behavior, 30, Pp 504-519.
Morrow-Jones, H.,Wenning, M. V., & Li,Y. (2005). Differences in neighborhood satisfaction between African American and White homeowners. Paper presented at the Association of Collegiate Schools of Planning (ACSP46), Kansas City, MO.
Nasar, J. L. (1988). Perception and evaluation of residential street scenes. In J. L. Nasar (Ed.), Environmental aesthetics: Theory, research, and applications (pp. 275- 289). New York: Cambridge University Press.
Newman, O. (1972). Defensible space; crime prevention through urban design. New York: The MacMillian Company.
Oseland, N. A. (1990). An evaluation of space in new homes. Proceedings of the IAPS Conference Ankara, Turkey, Pp 322-331.
Park, Robert E. & Burgess, Ernest W., (1967) or (1984) or (1992), The City; Suggestions for Investigation of Human Behavior in the Urban Environment,
Potter, J., & Cantarero, R. (2006). How does increasing population and diversity affect resident satisfaction? A small community case study. Environment and Behavior, 38, Pp 605-625.
Rapoport, A. (1982). The meaning of the built environment: a nonverbal communication approach. Beverly Hills: Sage Publications.
Sizemore, Steve, 2004, Urban Eco-villages as an Alternative Model to Revitalizing Urban Neighborhoods: The Eco-village Approach of the Seminary Square/Price Hill Eco-village of Cincinnati, Ohio, A dissertation for the degree of Master of Community Planning submitted to the University of Cincinnati.
Soja, E. (1980). The socio-spatial dialectic. Annals of the Association of American Geographers, 70, Pp 207-225.
Talen, E., & Shah, S. (2007). Neighborhood evaluation using GIS: An exploratory study. Environment and Behavior, 39(5), Pp 583-615.
Vyankatesh, Terdalkar Sunil, 2004, Revitalizing Urban Neighborhoods: A Realistic Approach to Develop Strategies, A dissertation for Master of Community Planning submitted to University of Cincinnati, Pp 20-23.
Wilkinson, Derek, 2007, The Multidimensional Nature of Social Cohesion: Psychological Sense of Community, Attraction, and Neighboring, Springer Science+Business Media, pp. 214–229.
Zehner, R. B. (1971, November). Neighborhood and community satisfaction in new towns and less planned suburbs. Journal of the American Institute of Planners (AIP Journal), Pp 379-385.


Vernacular Architecture

01.1 Background

How sensitive are you to the built environment that you live in? Have you ever come across a building that is rather ordinary but fascinating, with a story behind it? Have you ever wondered why people build the way they do, why they choose one material over others, or even why a building faces in a particular direction?

Fig 1- Palmyra House Nandgaon, India (Style-Contemporary; Principles-Vernacular)

In answering these questions we need to look at communities, their identities, and their traditions over time, and this in essence is what is called “vernacular architecture”.

The purest definition of vernacular architecture is simple: it is architecture without architects. It is the pure response to a particular person’s or society’s building needs, and it fulfils these needs because it is crafted by the individual and the society it serves. The building methods are tested through trial and error by the society in which they are built until, over time, they near perfection, tailored to the climatic, aesthetic, functional, and sociological needs of that society. Because the person constructing the structure tends to be the person who will use it, the architecture is tailored to that individual’s particular wants and needs.

Much of the assimilation of the vernacular architecture that we see today in India comes from the trading countries. India has many different cultures and has seen rapid economic growth over the past few decades, which not only transforms people’s lives but also changes the everyday environment in which they live. People in the nation face a dual challenge daily: modernization on one hand and, on the other, preserving their heritage, including all of their built heritage. This gives us multiple perspectives on vernacular environments and the pure heritage of the country.

Fig 2-A modern adaptation of brick façade along with the contemporary design of the building.

Gairole House, Gurgaon, Haryana, India

“Vernacular buildings’’ across the globe provide instructive examples of sustainable solutions to building problems. Yet, these solutions are assumed to be inapplicable to modern buildings. Despite some views to the contrary, there continues to be a tendency to consider innovative building technology as the hallmark of modern architecture because tradition is commonly viewed as the antonym of modernity. The problem is addressed by practical exercises and fieldwork studies in the application of vernacular traditions to current problems.

The humanistic desire to be culturally connected to ones surroundings is reflected in a harmonious architecture, a typology which can be identified with a specific region. This sociologic facet of architecture is present in a material, a color scheme, an architectural genre, a spatial language or form that carries through the urban framework. The way human settlements are structured in modernity has been vastly unsystematic; current architecture exists on a singular basis, unfocused on the connectivity of a community as a whole.

Fig 3-Traditional jali screens, Rajasthan, India

Vernacular architecture adheres to basic green architectural principles of energy efficiency and utilizing materials and resources in close proximity to the site. These structures capitalize on the native knowledge of how buildings can be effectively designed as well as how to take advantage of local materials and resources. Even in an age where materials are available well beyond our region, it is essential to take into account the embodied energy lost in the transportation of these goods to the construction site.

Fig 4- Anagram Architects, Brick screen wall: SAHRDC building, Delhi, India

The effectiveness of climate responsive architecture is evident over the course of its life, in lessened costs of utilities and maintenance. A poorly designed structure which doesn’t consider environmental or vernacular factors can ultimately cost the occupant – in addition to the environment – more in resources than a properly designed building. For instance, a structure with large windows on the south façade in a hot, arid climate would lose most of its air conditioning efforts to the pervading sun, ultimately increasing the cost of energy. By applying vernacular strategies to modern design, a structure can ideally achieve net zero energy use, and be a wholly self-sufficient building.


Buildings use twice the energy of cars and trucks, consuming 30% of the world’s total energy and 16% of its water; by 2050 these figures could go beyond 40%. They emit 3008 tons of carbon, a main cause of global warming.

In India a quarter of the energy consumed goes into making and operating buildings. Almost half of the materials that we dig out of the ground go into the construction of buildings, roads, and other projects. Buildings are therefore a very large cause of the environmental problems we face today, and it is important to demonstrate that good, comfortable, sustainable buildings can play a major role in improving our environment, can keep pace with modern designs, and can even perform better than them.

The form and structure of the built environment is strongly shaped by factors such as local architectural tradition and climate. In such situations we need to study built forms in relation to our environment.

India has a wide variety of climates and a constant need to develop architecture that supports the environment. We as architects need to study modern designs as well as the functioning of the built form in relation to the local climate and cultural context.

Vernacular architecture, the simplest form of addressing human needs, is seemingly forgotten in modern architecture. But the amalgamation of the two can certainly lead to a more efficient built form.

However, due to recent rises in energy costs, the trend has sensibly swung the other way. Architects are embracing regionalism and cultural building traditions, given that these structures have proven to be energy efficient and altogether sustainable. In this time of rapid technological advancement and urbanization, there is still much to be learned from the traditional knowledge of vernacular construction. These low-tech methods of creating housing which is perfectly adapted to its local area are brilliant, for the reason that these are the principles which are more often ignored by prevailing architects. Hence, the study of this subject is much needed for better architects of future that are sensitive to the built form and the environment as well.

01.3 AIM

This study aims to explore the balance between contemporary architectural practices vis-à-vis vernacular architectural techniques. This work hinges on ideas and practices such as ecological design, modular and incremental design, standardization, and flexible and temporal concepts in the design of spaces. The blurred edges between the traditional and modern technical aspects of building design, as addressed by both vernacular builders and modern architects, are explored.


The above aim has been divided into the following objectives:

• Study of vernacular architecture in modern context.

• Study of parameters that make a building efficient.

• To explore new approaches towards traditional techniques.

• Study of the built environment following this concept.

• To explore approaches to achieve form follows energy.



Hence, the need to study this approach is becoming more relevant with the modern times.


Fusion of the vernacular and contemporary architecture will help in the design of buildings which are more sustainable and connect to the cultural values of people.



• Is vernacular architecture actually sustainable in today’s context in terms of durability and performance?

• How has vernacular architecture influenced the urban architecture of India?

• Which is more loved by locals living in cities compared with those living in rural areas: local architecture or modern architecture?

• Will passive design techniques from vernacular architecture contribute to reducing the environmental crisis caused by increasing pollution and other threats?

• Modern architecture has evolved from the use of concrete to steel, glass, and other modern materials. Why was the sustainability of local materials compromised during this period, leading to the underrated perception, now a norm for the majority, that vernacular architecture is village architecture?


The discussion and debate about the value of vernacular traditions in the architecture and formation of settlements in today’s world is no longer polarized.

India undoubtedly has a great architectural heritage, which conjures images of the Taj Mahal, Fatehpur Sikri, South Indian temples, and the forts of Rajasthan. But what represents modern architecture in India?

India has been a country of long history and deep rooted traditions. Here history is not a fossilized past but a living tradition. The very existence of tradition is proof in itself of its shared acceptance over changed time and circumstance, and thus its continuum.

This spirit of adaptation and assimilation continues to be an integral aspect of Indian architecture in the post-independence era as well. Post-independence India voluntarily embraced modernism as a political statement by inviting the world-renowned modern architect Le Corbusier to design a capital city for a young and free nation with a democratic power structure.

Despite the strong continuum of classical architecture in Indian traditions, these new interventions gained currency and became the preferred models for emulation by architects of the following generation. Not only Le Corbusier but also Louis Kahn, Frank Lloyd Wright, and Buckminster Fuller had their stints in India; Indian masters also trained and apprenticed overseas under international masters and carried the legacy forward.

Figure 1 Terracotta Façade –A traditional material used to create a modern design for a façade

02.2 Vernacular architecture

02.2.1 Definition

Vernacular architecture is an architectural style that is designed based on local needs, availability of construction materials and reflecting local traditions. Originally, vernacular architecture did not use formally-schooled architects, but relied on the design skills and tradition of local builders.

Figure 2 A Traditional Kerala house

Later, in the late 19th century, many professional architects started exploring this architectural style and worked with elements from it. They included Le Corbusier, Frank Gehry, and Laurie Baker.

Vernacular architecture can also be defined as the “architecture of the people”, with its ethnic, regional, and local dialects. It is a style of architecture developed by local builders through practical knowledge and experience gained over time. Hence, vernacular architecture is the architecture of the people, by the people, for the people.

02.2.2 Influences on the vernacular

Vernacular architecture is influenced by a great range of different aspects of human behavior and environment, leading to differing building forms for almost every different context; even neighboring villages may have subtly different approaches to the construction and use of their dwellings, even if they at first appear the same. Despite these variations, every building is subject to the same laws of physics, and hence will demonstrate significant similarities in structural forms.


One of the most significant influences on vernacular architecture is the macro climate of the area in which the building is constructed. Buildings in cold climates invariably have high thermal mass or significant amounts of insulation. They are usually sealed in order to prevent heat loss, and openings such as windows tend to be small or non-existent. Buildings in warm climates, by contrast, tend to be constructed of lighter materials and to allow significant cross-ventilation through openings in the fabric of the building.

Buildings for a continental climate must be able to cope with significant variations in temperature, and may even be altered by their occupants according to the seasons.

Buildings take different forms depending on precipitation levels in the region – leading to dwellings on stilts in many regions with frequent flooding or rainy monsoon seasons. Flat roofs are rare in areas with high levels of precipitation. Similarly, areas with high winds will lead to specialized buildings able to cope with them, and buildings will be oriented to present minimal area to the direction of prevailing winds.

Climatic influences on vernacular architecture are substantial and can be extremely complex. Mediterranean vernacular, and that of much of the Middle East, often includes a courtyard with a fountain or pond; air cooled by water mist and evaporation is drawn through the building by the natural ventilation set up by the building form. Similarly, Northern African vernacular often has very high thermal mass and small windows to keep the occupants cool, and in many cases also includes chimneys, not for fires but to draw air through the internal spaces. Such specializations are not designed, but learned by trial and error over generations of building construction, often existing long before the scientific theories which explain why they work.


The way of life of building occupants, and the way they use their shelters, is of great influence on building forms. The size of family units, who shares which spaces, how food is prepared and eaten, how people interact and many other cultural considerations will affect the layout and size of dwellings.

For example, in the city of Ahmedabad, the dense urban fabric is divided into pols, dense neighborhoods developed on the basis of community and cohesion. Traditionally, the pols are characterized by intricately carved timber-framed buildings built around courtyards, with narrow winding streets that ensure a comfortable environment within Ahmedabad’s hot, arid climate. The design of these settlements also included stepped wells and ponds to create a cooler microclimate; they are a great example of ecological sustainability shaped by cultural influences.

Figure 3 Mud house, Gujarat: traditional mirror work done on the elevation of the hut

Culture also has a great influence on the appearance of vernacular buildings, as occupants often decorate buildings in accordance with local customs and beliefs.

For example, Warli art, which represents stories through simple forms such as circles, triangles, and squares, is both a form of decoration and a cultural tradition.

02.2.3 The Indian vernacular architecture

India is a country of great cultural and geographical diversity. Encompassing distinct zones such as the great Thar Desert of Rajasthan, the Himalayan mountains, the Indo-Gangetic Plains, the Ganga delta, the tropical coastal regions along the Arabian Sea and the Bay of Bengal, the Deccan plateau, and the Rann of Kutch, each region has its own cultural identity and its own distinctive architectural forms and construction techniques, which have evolved over the centuries as a response to its environmental and cultural setting. A simple dwelling unit takes many distinct forms depending on the climate, the materials available, and the social and cultural needs of the community.

Indian vernacular architecture, the informal, functional architecture of structures built by unschooled local builders, reflects the rich diversity of the Indian climate, locally available building materials, and intricate variations in local social customs and craftsmanship. It has been estimated that worldwide close to 90% of all building is vernacular, meaning that it is for the daily use of ordinary local people and built by local craftsmen. The term vernacular architecture in general refers to informal building structures created through traditional building methods by local builders without the services of a professional architect. It is the most widespread form of building.

Indian vernacular architecture has evolved over time through the skillful craftsmanship of the local people. Despite the diversity, this architecture can be broadly divided into three categories.

• Kachcha

• Pukka

• Semi-pukka

“Vernacular traditions are dynamic and creative processes through which people, as active agents, interpret past knowledge and experience to face the challenges and demands of the present. Tradition is an active process of transmission, interpretation, negotiation and adaptation of vernacular knowledge, skills and experience.”

- Asquith and Vellinga (2006)

IMG-Vellore house, Chennai, India

The architecture that has evolved over the centuries may be defined as “architecture without architects.”


Kachcha

Kachcha buildings are the simplest and most honest form of building, constructed using materials as per their availability. The practical limitations of the available building materials dictate the specific form. The advantage of kachcha construction is that materials are cheap and easily available and relatively little labor is required. It can be said that kachcha architecture is not built for posterity but with a certain lifespan in mind, after which it will be renewed.

According to Dawson and Cooper (1998), the beauty of kachcha architecture lies in the practice of developing practical and pragmatic solutions that use local materials to counter the environment in the most economically effective manner.

For example, in the North East, bamboo is used to combat a damp, mild climate, while in Rajasthan and Kutch, mud, sun-baked bricks, and other locally available materials are used to mould structures; in the Himalayas people often use stone and sunken structures to protect themselves from the harsh cold, while in the south, thatch and coconut palms are used to create pitched roofs to confront a fierce monsoon.

MATERIALS- Mud, grass, bamboo, thatch or sticks, stone, lime

TECHNIQUE OF CONSTRUCTION: These houses are constructed with earth or soil as the primary construction material; mud is used for plastering the walls.

IMG-House dwellings in the Himalayas, with sunken construction and stone used as insulating material to block winds during harsh winters, Himachal Pradesh


Pukka

The architectural expression of pukka is often determined by the establishments or art forms developed by the community, such as Warli paintings. Pukka buildings are generally built with permanence in mind, often using locally available materials, and pukka architecture has evolved to produce architectural typologies that are again region-specific.

MATERIALS- Stone, brick, clay, etc.

TECHNIQUE OF CONSTRUCTION- These houses are built with masonry structures of brick or stone, depending on the material locally available in the region; construction requires much more manual labor than for kachcha houses.


Semi-pukka

A combination of the kachcha and pukka styles together forms the semi-pukka. It has evolved as villagers have acquired the resources to add elements constructed of the durable materials characteristic of a pukka house; its architecture has always evolved organically with the needs and resources of the local people residing in the region. The characteristic feature of semi-pukka houses is that their walls are made of pukka materials such as brick in cement or lime mortar, stone, or clay tile, while the roof is constructed in the kachcha way, using thatch, bamboo, etc. as the principal materials. Construction of these houses employs less manual labor than that of pukka houses. A typical example is thatch roofing over mud adobe walls with lime plaster.


The climate of India comprises a wide range across its terrain. Five zones can be identified in India on the basis of climate: cold, hot and dry, composite, temperate, and warm and humid.

Figure 4 Climate zones of India


These zones can be further narrowed down to three on the basis of passive techniques used and architectural styles of different regions.





The hot and dry zones of India include Ahmedabad, Rajasthan, Madhya Pradesh and Maharashtra.

A hot and dry climate is characterized by a mean monthly maximum temperature above 30 ºC. The region in this climate is usually flat with sandy or rocky ground conditions.

In this climate, it is imperative to control solar radiation and movement of hot winds. The building design criteria should, thus, provide appropriate shading, reduce exposed area, and increase thermal capacity.

Design considerations for building in the hot and dry climate:

The hot and dry climate is characterized by very high radiation levels and ambient temperatures, accompanied by low relative humidity. Therefore, it is desirable to keep the heat out of the building, and if possible, increase the humidity level. The design objectives accordingly are:

(A) Resist heat gain by:

• Decreasing the exposed surface

• Increasing the thermal resistance

• Increasing the thermal capacity

• Increasing the buffer spaces

• Decreasing the air-exchange rate during daytime

• Increasing the shading

(B) Promote heat loss by:

• Ventilation of appliances

• Increasing the air exchange rate during cooler parts of the day or night-time

• Evaporative cooling (e.g. roof surface evaporative cooling)

• Earth coupling (e.g. earth-air pipe system)


(1) Site

(a) Planning: Indigenous planning layouts were followed for palaces and simple small dwellings, as seen in Shahjahanabad, Jaisalmer and many other cities in India. This type of dense clustering layout ensured that buildings were not directly exposed to the sun. It limits solar gain, prevents hot winds from entering the premises, and allows cooler air to circulate within the buildings.

Figure 6: Hot and dry region settlement

(b) Waterbodies: Waterbodies such as ponds and lakes not only act as heat sinks, but can also be used for evaporative cooling. Hot air blowing over water gets cooled and can then be allowed to enter the building. Fountains and water cascades in the vicinity of a building aid this process.

Figure 7: Amber Fort, Rajasthan, India. A garden is positioned amidst the lake to provide a cooler microclimate for outdoor sitting.


Figure 8: Earth berming technique; evaporative cooling through a water feature

(c) Street width and orientation: Streets are narrow so that they cause mutual shading of buildings. They need to be oriented in the north-south direction to block solar radiation.

Figure 9: Design techniques in hot and dry regions

(d) Open spaces and built form: Open spaces such as courtyards and atria are beneficial as they promote ventilation. In addition, they can be provided with ponds and fountains for evaporative cooling.

Courtyards act as heat sinks during the day and radiate the heat back to the ambient at night. The size of the courtyards should be such that the mid-morning and the hot afternoon sun are avoided. Earth-coupled building (e.g. earth berming) can help lower the temperature and also deflect hot summer winds.

Figure 10: Courtyard planning in the hot and dry region

(2) Orientation and planform

An east-west orientation (i.e. longer axis along the east-west), should be preferred. This is due to the fact that south and north facing walls are easier to shade than east and west walls.

It may be noted that during summer, it is the north wall which gets significant exposure to solar radiation in most parts of India, leading to very high temperatures in north-west rooms. For example, in Jodhpur, rooms facing north-west can attain a maximum temperature exceeding 38 ºC. Hence, shading of the north wall is also important in this climate.


The surface-to-volume (S/V) ratio should be kept as low as possible to reduce heat gains.
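As a rough illustration (not from the source), the effect of plan form on the S/V ratio can be sketched numerically. The dimensions below are hypothetical; the point is only that a compact form exposes less surface per unit of enclosed volume than an elongated one.

```python
# Illustrative sketch: comparing the exposed surface-to-volume (S/V)
# ratio of two hypothetical built forms of equal volume. A compact
# form has a lower S/V ratio and therefore a smaller exposed area
# through which heat can be gained.

def s_by_v(length, width, height):
    """Exposed surface area (walls + roof; floor excluded) over volume."""
    walls = 2 * (length + width) * height
    roof = length * width
    return (walls + roof) / (length * width * height)

# Two forms enclosing the same 1000 cubic metres:
compact = s_by_v(10.0, 10.0, 10.0)    # 10 m cube
elongated = s_by_v(40.0, 5.0, 5.0)    # long, narrow block

print(f"Compact S/V:   {compact:.2f} per metre")
print(f"Elongated S/V: {elongated:.2f} per metre")
assert compact < elongated  # the compact form gains less heat
```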

Cross-ventilation must be ensured at night as ambient temperatures during this period are low.

(3) Building envelope

(a) Roof: The diurnal range of temperature being large, the ambient night temperatures are about 10 ºC lower than the daytime values and are accompanied by cool breezes. Hence, flat roofs may be considered in this climate as they can be used for sleeping at night in summer as well as for daytime activities in winter.

Figure 11: Flat roof used for night-time heat loss

The material of the roof should be massive; a reinforced cement concrete (RCC) slab is preferred to asbestos cement (AC) sheet roof. External insulation in the form of mud phuska with inverted earthen pots is also suitable. A false ceiling in rooms having exposed roofs can help in reducing the discomfort level.

Evaporative cooling of the roof surface and night-time radiative cooling can also be employed. In case the former is used, it is better to use a roof having high thermal transmittance (a high U-value roof rather than one with lower U-value). The larger the roof area, the better is the cooling effect.

The maximum requirement of water per day for a place like Jodhpur is about 14.0 kg per square meter of roof area cooled. Spraying of water is preferable to an open roof pond system. One may also consider using a vaulted roof, since it provides a larger surface area for heat loss compared to a flat roof.
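The water figure above lends itself to a quick estimate. The only number taken from the text is the ~14 kg per square metre per day rate for Jodhpur; the roof area and function name are illustrative assumptions.

```python
# Illustrative sketch: estimating the daily water needed for roof
# surface evaporative cooling, using the ~14 kg per square metre per
# day peak-summer requirement quoted for Jodhpur. The 100 m2 roof
# below is a hypothetical example.

WATER_PER_M2_PER_DAY = 14.0  # kg of water per m2 of roof, per day

def daily_water_kg(roof_area_m2, rate=WATER_PER_M2_PER_DAY):
    """Total water (kg, roughly litres) needed per day for the roof."""
    return roof_area_m2 * rate

print(f"{daily_water_kg(100.0):.0f} kg of water per day for a 100 m2 roof")
```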

(b) Walls: In multi-storeyed buildings, walls and glazing account for most of the heat gain. It is estimated that they contribute to about 80% of the annual cooling load of such buildings. So, the control of heat gain through the walls by shading is an important consideration in building design.

(c) Fenestration: In hot and dry climates, minimizing the window area (in terms of glazing) can definitely lead to lower indoor temperatures. It is found that providing a glazing size of 10% of the floor area gives better performance than 20%. More windows should be provided in the north facade of the building than in the east, west and south, as it receives less radiation during the year. All openings should be protected from the sun by using external shading devices such as chajjas and fins.

Moveable shading devices such as curtains and venetian blinds can also be used. Openings are preferred at higher levels (ventilators) as they help in venting hot air. Since daytime temperatures are high during summer, the windows should be kept closed to keep the hot air out and opened during night-time to admit cooler air.

Figure 12 Louvers for providing shade and diffused lighting

The use of ‘jaalis’ (lattice work) made of wood, stone or RCC may be considered, as they allow ventilation while blocking solar radiation.
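The glazing guideline above (roughly 10% of floor area rather than 20% in this climate) can be sketched as a simple sizing calculation. The floor area used is a hypothetical example, not from the source.

```python
# Illustrative sketch: sizing total glazing to ~10% of floor area, as
# suggested for hot and dry climates, and comparing it with the 20%
# alternative that the text reports performs worse.

def glazing_area(floor_area_m2, fraction=0.10):
    """Total glazing area for a given floor area and glazing fraction."""
    return floor_area_m2 * fraction

floor = 120.0  # hypothetical dwelling floor area, m2
print(f"10% rule: {glazing_area(floor):.1f} m2 of glazing")
print(f"20% rule: {glazing_area(floor, 0.20):.1f} m2 of glazing")
```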

(d) Color and texture: Change of color is a cheap and effective technique for lowering indoor temperatures. Colors having low absorptivity should be used to paint the external surfaces. Darker shades should be avoided for surfaces exposed to direct solar radiation. The surface of the roof can be of white broken glazed tiles (china mosaic flooring). The surface of the wall should preferably be textured to facilitate self-shading.

Remarks: As the winters in this region are uncomfortably cold, windows should be designed such that they encourage direct gain during this period. Deciduous trees can be used to shade the building during summer and admit sunlight during winter. There is a general tendency to think that well-insulated and very thick walls give a good thermal performance. This is true only if the glazing is kept to a minimum and windows are well-shaded, as is found in traditional architecture.

However, in the case of non-conditioned buildings, a combination of insulated walls and a high percentage of glazing will lead to very uncomfortable indoor conditions. This is because the building will act like a greenhouse or oven: the insulated walls prevent the radiation admitted through the windows from escaping back to the environment. Indoor plants can be provided near the windows, as they help in evaporative cooling and in absorbing solar radiation. Evaporative cooling and earth-air pipe systems can be used effectively in this climate. Desert coolers are extensively used in this climate and, if properly sized, can considerably alleviate discomfort.


• Warm and humid

The warm and humid climate is characterized by high temperatures accompanied by very high humidity, leading to discomfort. Thus, cross-ventilation is both desirable and essential.

Protection from direct solar radiation should also be ensured by shading.

The main objectives of building design in this zone should be:

(A) Resist heat gain by:

• Decreasing exposed surface area

• Increasing thermal resistance

• Increasing buffer spaces

• Increasing shading

• Increasing reflectivity

(B) Promote heat loss by:

• Ventilation of appliances

• Increasing air exchange rate (ventilation) throughout the day

• Decreasing humidity levels

The general recommendations for building design in the warm and humid climate are as follows:

(1) Site

(a) Landform: The consideration of landform is immaterial for a flat site. However, if there are slopes and depressions, then the building should be located on the windward side or crest to take advantage of cool breezes.

(b) Waterbodies: Since humidity is high in these regions, water bodies are not essential.

(c) Open spaces and built form: Buildings should be spread out with large open spaces for unrestricted air movement. In cities, buildings on stilts can promote ventilation and cause cooling at the ground level.

(d) Street width and orientation: Major streets should be oriented parallel to or within 30º of the prevailing wind direction during summer months to encourage ventilation in warm and humid regions. A north-south direction is ideal from the point of view of blocking solar radiation. The width of the streets should be such that the intense solar radiation during late morning and early afternoon is avoided in summer.
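The 30-degree rule above can be sketched as a small check. The function name and the example bearings are illustrative assumptions; the only constraint taken from the text is that the street axis should lie within 30 degrees of the prevailing summer wind.

```python
# Illustrative sketch: checking whether a street axis lies within 30
# degrees of the prevailing summer wind direction, as recommended for
# warm and humid regions. Bearings are in degrees measured from north.

def within_30_degrees(street_axis, wind_direction):
    """True if the street axis is within 30 deg of the wind direction.

    A street axis is bidirectional, so the angular difference is
    reduced to the 0-90 degree range before comparison.
    """
    diff = abs(street_axis - wind_direction) % 180
    return min(diff, 180 - diff) <= 30

print(within_30_degrees(0, 20))    # axis 20 deg off the wind
print(within_30_degrees(90, 20))   # axis 70 deg off the wind
```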

(2) Orientation and planform

Since the temperatures are not excessive, free plans can be evolved as long as the house is under protective shade. An unobstructed air path through the interiors is important. The buildings could be long and narrow to allow cross-ventilation. For example, a singly loaded corridor plan (i.e. rooms on one side only) can be adopted instead of a doubly loaded one. Heat- and moisture-producing areas must be ventilated and separated from the rest of the structure (Fig. 5.21) [8]. Since temperatures in the shade are not very high, semi-open spaces such as balconies, verandahs and porches can be used advantageously for daytime activities. Such spaces also give protection from rainfall. In multi-storeyed buildings a central courtyard can be provided with vents at higher levels to draw away the rising hot air.

(3) Building envelope

(a) Roof: In addition to providing shelter from rain and heat, the form of the roof should be planned to promote air flow. Vents at the roof top effectively induce ventilation and draw hot air out. As diurnal temperature variation is low, insulation does not provide any additional benefit for a normal reinforced cement concrete (RCC) roof in a non-conditioned building.

However, very thin roofs having low thermal mass, such as asbestos cement (AC) sheet roofing, do require insulation as they tend to rapidly radiate heat into the interiors during the day.


Figure: Padmanabhapuram Palace

A double roof with a ventilated space in between can also be used to promote air flow.

(b) Walls: As with roofs, the walls must also be designed to promote air flow. Baffle walls, both inside and outside the building, can help to divert the flow of wind inside. They should be protected from the heavy rainfall prevalent in such areas. If adequately sheltered, exposed brick walls and mud-plastered walls work very well by absorbing the humidity and helping the building to breathe. Again, as for roofs, insulation does not significantly improve the performance of a non-conditioned building.

(c) Fenestration: Cross-ventilation is important in the warm and humid regions. All doors and windows are preferably kept open for maximum ventilation for most of the year. These must be provided with venetian blinds or louvers to shelter the rooms from the sun and rain, as well as for the control of air movement.

Openings of a comparatively smaller size can be placed on the windward side, while the corresponding openings on the leeward side may be bigger for facilitating a plume effect for natural ventilation. The openings should be shaded by external overhangs. Outlets at higher levels serve to vent hot air. A few examples illustrating how the air movement within a room can be better distributed, are shown in figures below-

(d) Color and texture: The walls should be painted with light pastel shades or whitewashed, while the surface of the roof can be of broken glazed tile (china mosaic flooring). Both techniques help to reflect the sunlight back to the ambient, and hence reduce the heat gain of the building. The use of appropriate colors and surface finishes is a cheap and very effective technique to lower indoor temperatures. It is worth mentioning that the surface finish should be protected from, or resistant to, the effects of moisture, as this can otherwise lead to growth of mould and result in the decay of building elements.

Remarks: Ceiling fans are effective in reducing the level of discomfort in this type of climate. Desiccant cooling techniques can also be employed as they reduce the humidity level. Careful water proofing and drainage of water are essential considerations of building design due to heavy rainfall. In case of air-conditioned buildings, dehumidification plays a significant role in the design of the plant.

Source: “Architecture for hot and humid climate”, Asmita Rawool

Figure 13 Traditional Kerala house

Parameters for sustainability in Warm and Humid Climate

Ecological site planning: The house is generally designed in response to the ecology (the backwaters, plantations, etc.), allowing the building to blend effortlessly into the landscape of coconut, palm and mango trees.

The house is divided into quadrants according to “Vastu Shastra”. It is generally considered desirable to build the house in the south-west corner of the north-west quadrant. The south-east corner is reserved for cremation purposes, while the north-east corner has a bathing pool.

Local materials: The building is made from locally available stone and timber, with terracotta tiles for the roof.

Physical response to climate: The plan is generally square or rectangular in response to the hot and humid climate. The central courtyard and the deep verandas around the structure ensure cross-ventilation. The south-west orientation of the house prevents harsh sun rays from penetrating it. Sloping roofs are designed to combat the heavy monsoon of the region, and the overhanging roofs with projecting eaves help to provide shade and shield the walls from the rain.

Embodied energy: The building uses materials like stone and timber, which are reservoirs of embodied energy and have the potential to be recycled or reused.

Socio-economic adaptability: Toilets have been integrated into the design of the house, and RCC (reinforced cement concrete) has been introduced to build houses with larger spans.


Thomas D’Arcy McGee – Canadian Figure

Thomas D’Arcy McGee is a historical figure who, as Charles Macnab states, “was the first political leader in Canada to be assassinated.” McGee is referred to by historian Alexander Brady as “[having] a unique place among the Canadian statesmen of his time.” The Canadian Archives state that Thomas D’Arcy McGee “was born in Carlingford, Ireland, the son of James McGee, and Dorcas Catherine Morgan.” It was during his childhood that McGee’s intelligence became known outside the family. As explained by author T.P. Slattery, “a hedge schoolmaster, Michael Donnelly, helped him along with his books and fertilized his dreams.” When asked about McGee as a student, Donnelly called him the “brightest scholar [he] ever taught.” McGee did not live in Ireland his entire life, as “in 1842 McGee left Ireland and travelled to North America.” T.P. Slattery details the trip when he writes, “McGee left Ireland with his sister Dorcas to go and live in Providence Rhode Island.” Once in America, McGee became a publisher; Charles Macnab states, “he was publishing his New York Nation at New York, Boston, and Philadelphia, and shipping it to Ireland, assuming a nationalist leadership as best he could over the remnants of Ireland.” He would spend a long time in America before, “in the spring of 1857 McGee moved to Montreal, at the invitation of leaders of that city’s Irish community who expected him to promote their interest.” McGee then entered political life in Canada: “he was elected to the Legislative assembly in December of 1857… He joined the cabinet of John Sandfield MacDonald in 1862, and chaired that year’s Intercolonial Railway conference at Quebec City.” That career ended tragically; as author James Powell states, “Thomas D’Arcy McGee, the much revered Canadian statesman, and orator, died by an assassin’s bullet on April 7, 1868 entering his boarding house on Sparks Street.” The death is recorded in a letter by “Lady Agnes MacDonald, the prime minister’s wife… ‘McGee is murdered.. lying in the street.. shot through the head.’” Thomas D’Arcy McGee was a great public speaker and a highly intelligent figure who played a role in Canadian history as the victim of one of the first political assassinations in the history of the country.

When thinking about famous Canadian historical figures, one name that factors into the story of Canada is Thomas D’Arcy McGee. As historian T.P. Slattery explains, “Thomas D’Arcy McGee was born on Wednesday, April 13, 1825, in Carlingford Ireland on the Rosstrevor coast.” Thomas was raised by his parents, “James McGee and Dorcas Catherine Morgan.” Carlingford would be just McGee’s first residence, as “When D’Arcy was eight, the family moved south to Wexford.” It was tragically during this time that “[McGee’s] mother was the victim of an accident and died on August 22, 1833. This was a heavy blow.” Thomas D’Arcy McGee was an Irish-born citizen who at a young age lost a central figure in his life.

The importance of Thomas D’Arcy McGee’s mother in his life lay in her influence on his ideological beliefs. One factor of influence was a nationalist belief; as Alexander Brady states, “she was a woman who cherished the memory of her father’s espousal of the national cause and preserved all his national enthusiasms which she sedulously fed to her son.” McGee also developed his knowledge of Irish literature from his mother, as Brady states: “She was interested in all of the old Irish myths and traditions and poetry, and these she related to her [son].” Thomas D’Arcy McGee’s nationalist ideologies were impressed upon him at a young age, and that led him to become “an ardent idealist for the nationality of his country.”

Thomas D’Arcy McGee was highly intelligent from a young age. T.P. Slattery writes, “A hedge schoolmaster, named Michael Donnelly, helped him along with his books.” Donnelly was a mentor figure to McGee, helping him with his schooling. When asked about McGee as an academic, Donnelly replied that McGee was “the brightest scholar I ever taught.” McGee was also a great public speaker. As T.P. Slattery writes, “In Wexford, D’Arcy had a boyish moment of triumph when he gave a speech before the Juvenile Temperance society, and Father Matthew, who happened to be there, reached over and tousled his hair.” Alexander Brady writes of McGee’s performance that he “delivered before the society a spell-binding oration, on which he received the hearty congratulations… This was [McGee’s] first public speech.” Thomas D’Arcy McGee was a unique individual whose high intelligence and gift for public speaking were both noticeable at a young age.

Not a lifelong resident of Ireland, Thomas D’Arcy McGee moved on to another chapter in his life. There were multiple reasons for his leaving Ireland, one being that “McGee’s father had married again, and the stepmother was not popular with the children.” Another reason concerned the economic realities of Ireland, as explained by Alexander Brady: “The economic structure of Irish society was diseased. Approximately seven million were vainly endeavouring to wring a lean subsistence from the land, and hundreds of thousands were on the verge of famine.” With that in mind, Robin Burns explains that “D’Arcy McGee left for North America in 1842, one of almost 93,000 Irishmen who crossed the Atlantic that year.” Thomas set out from Ireland “on April 7th McGee [who] was not yet seventeen… with his sister Dorcas to go and live with their aunt, in Providence Rhode Island.” Thomas D’Arcy McGee had just set out on a new chapter in his life.

Thomas D’Arcy McGee arrived in America with “few material possessions beyond the clothes on his back.” One of the first things he did upon landing was deliver a speech; as T.P. Slattery explains, “he was on his feet speaking at an Irish assembly.” It was at this speech that McGee stated his feelings toward “British rule in Ireland”: “the sufferings which the people of that unhappy country have endured at the hands of a heartless, bigoted despotic government are well known… Her people are born slaves, and bred in slavery from the cradle; they know not what freedom is.” This message condemning British rule over the Irish made an impact, as it brought McGee into a new profession in the United States as a writer when he “joined the staff of the Boston Pilot.”

Thomas D’Arcy McGee had just moved to America, and “within weeks he was a journalist with the Boston Pilot, the largest Irish Catholic paper in the [United] [States].” In his new role, McGee was described as “the Pilot’s traveling agent, [who] for the next two years travelled through New England collecting overdue accounts and new subscribers.” Through these trips, McGee became connected with a group known as the “Young Ireland militants in Dublin.” One of the key figures in that circle was “Daniel O’Connell [who] held to a non violent political philosophy, but in 1843 he followed a change in strategy when he allowed some of the young militants who had joined the association after 1841 to plan and manage a series of rallies of hundreds of thousands across Ireland to hear him.” Another member of the Young Ireland group was “a young Ireland moderate Gavan Duffy who was the publisher of the Nation.” It was through the Nation that McGee became connected to the Young Irelanders, as Gavan Duffy took an interest in him. Duffy had long admired McGee, as Charles Macnab writes: “Duffy had been impressed enough with young McGee to have engaged with him almost immediately to write a volume for Duffy’s library of Ireland series.” Hereward Senior details Duffy’s interest in McGee’s ability: “the talents of D’Arcy McGee were recognized by Duffy, editor of the Nation who invited McGee to join its staff and McGee subsequently became part of the ‘Young Ireland’ group.” In just a short amount of time, Thomas D’Arcy McGee had gone from arriving in the United States to being publicly recognized for his ability as a writer.

Outside of his professional life, Thomas D’Arcy McGee made a deeply personal decision. As explained by historian David Wilson, “On Tuesday, 13 July he married Mary Theresa Caffrey, whom he met at an art exhibition in Dublin.” The two expressed “their love in romantic poems, and letters show that she cared deeply about him.” McGee’s travels, however, took a toll on the marriage, as Wilson states: “They were torn apart by exile and continually uprooted as McGee moved from Dublin to New York, Boston, Buffalo, back to New York… When McGee was on the road Mary experienced periods of intense loneliness; when he was at home she often had to deal with his heavy drinking.” The family suffered through tragedy, as “of their five children, only two survived into adulthood.” Yet amid the tension and tragedy there remained a connection within the family, as Wilson writes: “there was great affection and tenderness within the family, as McGee’s letters to his children attest. Mary continued to write of ‘my darling Thomas’, until the end of her days.” Outside of his work as a writer, Thomas D’Arcy McGee had a personal life with a family whom he evidently cared for.

McGee’s final departure from Ireland arose from events he witnessed while in the country. The trouble began in 1847, when “the Irish confederation was frustrated in the general election, and a radical faction developed calling for armed action.” As explained by historian Hereward Senior, “The young Irelanders were converted to the idea of a barricade revolution carried out by a civilized militia. They conspired to re-enact the French revolution on Irish soil. These young Irelanders were more attracted by the romance of revolution than by the republican form of government.” Thomas D’Arcy McGee took part in this revolution; as Alexander Brady explains, “he [consulted] the Irish revolutionists in Edinburgh, and Glasgow and enrolled four hundred volunteers.” McGee’s time in the revolution ended when “he was arrested for sedition on the eve of his first wedding anniversary; the charges, though, were dismissed the next day.” This led to McGee ultimately leaving Ireland, as “with a sad heart [McGee] boarded a brig at the mouth of the Foyle and sailed for the United States… In America he began at the age of twenty-three a new life destined to plead for causes that would prove more successful than Irish independence.” This was the end of Thomas D’Arcy McGee’s life in Ireland.

Upon returning to the United States, Thomas D’Arcy McGee moved on to a different chapter in his life. He began writing papers, one of which was known as the New York Nation; as Charles Macnab explains, McGee “was publishing his New York Nation at New York, Boston, and Philadelphia.” McGee also made sure that his paper reached Ireland, as Macnab writes: “McGee shipped the paper to Ireland assuming a nationalist voice as best he could over the remnants of Young Ireland and the future political and cultural directions of the Irish world.” McGee made it clear in this paper that he was willing to take a radical approach, one of his targets, as explained by David Wilson, being the Catholic Church. Wilson writes, “the reference [McGee] [makes] to ‘priestly preachers of cowardice’ was pivotal; the catholic church had transformed heroic Celtic warriors into abject slaves. ‘The present generation of Irish Priests,’ he wrote, ‘have systematically squeezed the spirit of resistance out of the hearts of the people.’” In response to this criticism, the church condemned Thomas D’Arcy McGee through Bishop John Hughes, who described McGee’s writings as having transferred the “odium of oppression” from the British government to the Catholic clergy. Hughes demanded that “unless the Nation shall purify its tone.. let every diocese, every parish, every catholic door be shut against it.” The eventual fate of the Nation was described by T.P. Slattery: “The McGees were just in time to witness the collapse of his New York Nation… He moved on to Boston planning to sail back to Ireland.”

Thomas D’Arcy McGee did not end up returning to Ireland, as T.P. Slattery writes: “McGee postponed his return to Ireland and remained with his young family in Boston. There he picked up a few fees lecturing.” As explained in the Quebec history, the next chapter of McGee’s life began when “in 1850 McGee moved to Boston and founded the American Celt, and in 1852 he moved to Buffalo where he published the American Celt for five years.” The purpose of the Celt, as explained by T.P. Slattery, was to focus on “aid for the ancient missionary schools; encourage the Irish industrial enterprise, develop literature, and revive the music of Ireland.” Its intended audience was “Irish workers who were irritated by the unexciting views of the Boston Pilot, and took for granted that McGee would be more to their taste as a rebel.” While in America, McGee was also an author, publishing multiple works about the Irish people. Examples include “A history of Irish settlers in North America (1851) to demonstrate that the Irish had made significant contribution to the history of North America.” McGee also wrote three other books: “A history of the attempts to establish protestant reformation in Ireland (1853), the Catholic history of North America (1855), and the life of Rt. Rev Edward Maginn (1857).” In the same year as his last book, a new chapter of McGee’s life opened: “In 1857 he moved from Buffalo to Montreal, Lower Canada at the invitation of some Irish Canadians.” Thomas D’Arcy McGee was now moving to his third country.

While in Canada, Thomas D’Arcy McGee continued writing. As Hereward Senior notes, “Upon his arrival in Montreal McGee started to publish the New Era.” McGee’s new paper was quite significant to Canadian history, as “a series of editorials and speeches by D’Arcy McGee had become historic. They constitute the evidence that McGee was the first of all the fathers of confederation to advocate a federal basis for a new nation.” What Slattery is implying is that McGee was the first major endorser of the formation of what would become known as Canada. Slattery continues: “It began unnoticed in an article of June 27 called ‘Queries for Canadian Constituencies,’ with an acute analysis of some of the practical issues. This led the way to three important editorials… written on August 4, 6, and 8, 1857.” McGee’s writings in the New Era led to the next major decision of his life, as “In December 1857 D’Arcy McGee was one of three members elected to represent Montreal in the Legislative Assembly. He had been nominated by the St Patrick’s society of Montreal.”

In regard to what McGee discussed in his editorials for the New Era, T.P. Slattery states, “The first editorial stressed the need for union as distinct from uniformity. The second was on the role of the French language, and the third, was on confederation.” In the first editorial McGee explained that “Uniform currency was needed; so were a widespread banking and credit system, the establishment of courts of last resort and an organized postal system ‘one is much more certain of his letters from San Francisco.’” The next editorial discusses “the quality of Quebec,” which McGee addressed in an editorial of April 6th, 1858, “urging parliament to adopt the proposals for federation which were to be introduced by Alexander Galt.. ‘we are in Canada two nations, and must mutually respect each other. Our political union must, to this end be made more explicit if we are to continue for the most general purpose as a united people.’” The third editorial states, “‘the federation of feeling must precede the federation of fact’. That epigram not only exposed the weakness of previous unions; it expressed McGee’s passion to arouse such a spirit, so a new people could come together in the north.” To specify McGee’s overall political philosophy, Slattery states, “[McGee] was a devoted student of Edmund Burke for theory, and of Daniel O’Connell for practice. His studies sharpened by his intelligence, and corrected as he matured through his sharper experiences.” With his political ideology in the open, Thomas D’Arcy McGee had his springboard for his start in Canadian politics: in December of 1857 he was elected to the Legislative Assembly of the Province of Canada.

Thomas D’Arcy McGee had entered a new profession: politics. As the Quebec history states, “in 1858 McGee was elected as an Irish Roman Catholic to the Legislative assembly of Canada for Montreal west, a constituency which he represented until 1867, and he was re-elected to the house of commons of the new dominion.” He sat with the Reform government of George Brown in 1858. As Alexander Brady explains his reasoning for supporting Brown, “McGee was won by Brown’s frank, fearless character. Moreover, he believed that the Irish catholics could subscribe with little reservation to the reform leader’s principles.” One of the principles McGee shared with Brown was “a hostility to the intolerant Toryism of the old school,” and he entertained faith in “the extension of popular suffrage, economy in public expenditure and reduction of taxes.” When the government returned in “March 1858, the parliamentary session began. From the outset McGee hurried into the leading debates and attacked the corruptionists, as the government party was described, with all the weapons of wit and searching sarcasm.” What McGee was known for during his early years in government was what he had been great at his whole life. Alexander Brady notes that a reporter from the Globe wrote that [McGee] “was undoubtedly the most finished orator in the house… he had the power of impressing an audience which can only be accounted for by attributing to those who possess it some magnetic influence not common to everyone.” McGee may have moved from writer to politician, but his childhood gift for public speaking had stayed with him.

Life for Thomas D’Arcy McGee in Brown’s political party was not always phenomenal. As explained by David Wilson, “the reform party began to alarm its French Canadian wing. Sensing an opportunity, the Liberal Conservatives moved a non-confidence resolution against the government.” This led to an area of debate in which “all the leading figures in government defended its record – all of them except McGee, who was getting drunk with friends when he was scheduled to speak. His erratic behaviour was symptomatic of deeper disillusionment with the reform party.” With McGee’s behaviour in question, the government made its move to deal with him, as agreements were reached between the leaders of the reform party that “a new reform government must abandon the Intercolonial railway,” and that there would be no place for McGee in the new cabinet. What also damaged McGee’s standing was his political ideology, as David Wilson explains: “McGee was a loose cannon whose position on separate schools alienated the Clear Grits and the Rouges. For the members of the Reform party McGee had become a liability.” This was the beginning of the end for McGee in the reform party, as “McGee felt that he had been strongly stabbed in the back by his own colleagues.” Feeling alienated by the members of his party, Thomas D’Arcy McGee “transferred his allegiance to the conservatives, where he became minister of agriculture in the Macdonald Government of 1864.” McGee had thus crossed the political aisle, embracing a new party.

As a member of John A. Macdonald’s party, McGee saw his status increase. As explained in the Canadian archives:

In 1864 McGee helped to organize the Canadian visit, a diplomatic goodwill tour of the Maritimes that served as a prelude to the first confederation conference. During this tour, McGee delivered many speeches in support of union and lived up to his reputation as the most talented politician of the era. He was a delegate to the Charlottetown conference and the Quebec conference. In 1865 he delivered two speeches on the union of the provinces, which were subsequently bound and published.

Moments from McGee at the two conferences are explained by T.P. Slattery, who writes that during the Quebec conference, “McGee speaking with an ease of manner moved an amendment. He proposed that the provision be added to the provincial power over education… Andrew Archibald MacDonald, sitting at the far end of the table to McGee’s left, seconded the amendment.” In explaining the logic behind his amendment, McGee described it as “saving the rights and privileges which the protestant or catholic minority in both Canadas may possess as to their denomination schools when the constitutional act goes into operation.” In regards to the Charlottetown conference, as explained by David Wilson, “his principal contribution to the Charlottetown conference lay not in the formal proceedings but in the whirl of social events that surrounded the meetings – the dinner parties and luncheons, and the grand ball at the government house.” The effect that McGee had on these meetings was noticeable, as “historians of confederation have pointed out, these events were important in creating a climate of camaraderie and allowing new friendships to form. At a liquid lunch on board the Victoria ‘McGee’s wit sparkled as brightly as the wine,’ and the mood was so euphoric that the delegates proclaimed the banns of matrimony among the provinces.” Though Wilson describes McGee’s part at Charlottetown as “a secondary and often marginal role in the negotiation between Canada and the Maritimes,” he also notes that “no other Canadian politician knew the Maritimes better than McGee.” Hence McGee acted more as an advertiser to the Maritime colonies, with the goal of convincing them to join Confederation.

The goal that McGee had played a role in pursuing was finally accomplished. However, as Alexander Brady states, “In November 1866, the delegation of ministers appointed to represent Canada at the final drafting of the federal constitution sailed for England. McGee was not a member of that party.” This began the decline of McGee’s role in government, as Hereward Senior explains: “John A. Macdonald found it more convenient to draw the representative of the Irish Catholic community from the Maritimes.” With that reality in mind, Thomas D’Arcy McGee “prepared to run in his old constituency in Montreal West.” It was here that Thomas D’Arcy McGee faced off with a new foe.

The Fenian movement is explained by author Fran Reddy, who writes:

The Irish Fenian Brotherhood movement spurred along the idea of union among the British North American colonies, due to increasing skirmishes along the border as the Fenians tried to move in from the United States to capture British North American colonies, believing that they could hold these as ransom to bargain for Ireland’s independence from British rule.

The Fenians are relevant to Thomas D’Arcy McGee because he made an enemy of them when, “in 1866 he condemned with vehemence the Irish American Fenians who invaded Canada; and in doing so he incurred the enmity of the Fenian Organization of the United States.” This played a role in the election that McGee was trying to win in Montreal, as “In Montreal the Fenians were able to find allies amongst the personal and political enemies of McGee.” The movement affected McGee’s political life, as “At the opening of the election campaign, McGee wrote to John A. Macdonald that he had decided not to go to Toronto, as it would provide the ‘Grit Fenians’ with an opportunity to offer him insults.” The attempt to stop McGee from getting elected failed, as “McGee won by a slight majority in Montreal West,” thus regaining his old seat in government. The Fenians’ feelings toward McGee influenced his time in government, as explained by the Canadian archives: “Thomas D’Arcy McGee was seen as a traitor by the very Irish community that he sought to defend, and by 1867 [McGee] expressed a desire to leave politics.”

However, Thomas D’Arcy McGee would not get his wish of leaving the political scene, and Alexander Brady describes in detail the final moments of his life. As Brady writes, “[McGee] spoke at midnight. Shortly after one on the morning of the 7th the debate closed. The members commented generally on McGee’s speech; some thought it was the most effective that they had ever heard him deliver.” After the evening concluded there was a new, positive mood around McGee: “Perhaps part of the lightheartedness was caused by this reflection that on the morrow he would return to Montreal, where his wife and daughters were within a few days to celebrate his forty-third birthday.” McGee ended the evening as “he left his friend and walked to his lodging on Sparks Street. As he entered a slight figure glided up and at close range fired a bullet into his head. His assassin dashed away in the night, but left telltale steps in the snow later to assist in his conviction.” The news of McGee’s death spread quickly across Canada. One person who received the news was “Lady Agnes Macdonald, the prime minister’s wife,” who recorded: “The answer came up clear and hard through the cold moonlit morning: ‘McGee is murdered… lying in the street shot through the head.’” The scene of the death was described by a witness, Dr. Donald McGillivray, who states, “about half past two I was called and told that D’Arcy McGee had been shot at the door of his boarding house. I went at once. I found his body lying on its back on the sidewalk.” Thomas D’Arcy McGee’s life had come to an end.

The search for McGee’s killer led authorities to a man named Patrick James Whelan, “who was convicted and hanged for the crime.” As Slattery explains, “The police moved fast. Within twenty hours of the murder they had James Whelan in handcuffs. In Whelan’s pocket they found a revolver fully loaded. One of its chambers appeared to have been recently discharged.” There was also more evidence against Whelan, as explained by Charles Macnab: “Minutes before his execution, Patrick James Whelan admitted that he was present when McGee was shot.” Also presented during the trial was Whelan’s pursuit of McGee during his campaign, as written by Hereward Senior: “his presence in Prescott during McGee’s campaign there, his return to Montreal when McGee returned, and his taking up employment in Ottawa when McGee took his seat in parliament all suggest he was stalking McGee.”

The main theory during the trial was that Whelan was a Fenian, which would make sense as they were the major enemy of McGee. However, as Senior explained, “Whelan insisted he wasn’t a Fenian.” What Whelan was in fact identified with was “the Ribbonmen; however Whelan was unquestionably under the influence of Fenian propaganda and engaged in clandestine work on their behalf.” There was a controversial moment in the trial, as explained by T.P. Slattery: “The prisoner had come back from court and was telling what had happened. James Whelan did not say ‘he shot McGee like a dog,’ but that Turner had sworn he heard Whelan say ‘he’d shoot McGee like a dog.’ The prisoner asserts that his words have been twisted.” The trial resulted in a guilty verdict, as “Whelan maintained his innocence throughout his trial and was never proven to be a Fenian. Nonetheless he was convicted of murder and hanged before more than 5000 onlookers on February 11th, 1869.”

Whelan’s burial was far from luxurious, as Charles Macnab states: “The body was not handed over for a proper catholic burial. Instead it was buried in a shallow grave in the jail yard. There was fear of a massive Fenian demonstration at Whelan’s funeral.” McGee’s status as a public figure, by contrast, was made evident by the number of attendees at his funeral. As explained by T.P. Slattery, “The population of the city was then one hundred thousand, but there were so many visitors for D’Arcy McGee’s funeral that the population had practically doubled.” Newspaper reporters “who estimated the number marching and gathered along the long route wrote that a hundred thousand people participated in the demonstration of mourning.” In regards to the legacy of Thomas D’Arcy McGee, Alexander Brady states, “such material bases of union must fail to hold together different sects and races inhabiting the dominion, unless Canadians cherish what McGee passionately advanced, the spirit of toleration and goodwill, as the best expression of Canadian nationality.” David Wilson gives a fitting summary of who Thomas D’Arcy McGee was when he writes, “For the myth makers, here was the ideal symbol of the Celtic contribution to Canadian nationality – an Irish catholic Canadian who became the youngest of the fathers of confederation, who was widely regarded as an inspirational and visionary Canadian nationalist and who articulated the concept of unity in diversity a century before it became the dominant motif of Canadian identity.” Thomas D’Arcy McGee was a very important public figure in Canadian history who met a tragic and unfortunate demise by assassination.

Thomas D’Arcy McGee was an Irishman born in Carlingford, Ireland. He moved at a young age, and during that time he dealt with the tragic loss of his mother, who was killed in an accident. McGee’s Irish nationalist ideology was inspired by his mother, an ideology which played a major role throughout his life. Thomas D’Arcy McGee was a highly intelligent individual; while in his new home of Wexford, a man who helped McGee in his studies called him “the brightest scholar I have ever taught.” During his teenage years McGee moved to the United States, where over the next few years he published multiple papers which helped him catch the eye of an Irish nationalist organization. During this time Thomas D’Arcy McGee began his family by getting married in Dublin, Ireland. McGee’s time in Ireland, however, came to an end when he was nearly arrested, a threat serious enough to send him back to America. While in America, McGee went from New York to Boston publishing papers with a pro-Ireland ideology. These papers led to the next chapter of McGee’s life: his move to Canada, specifically Montreal. In Montreal McGee founded a new paper, the New Era, in which he promoted what became known as Confederation. This led to Thomas D’Arcy McGee entering politics in Montreal, where he became a member of the Reform party of George Brown. While in the Reform party, McGee was exposed as a loose cannon with views that split the party ideology, and he was also known for being an alcoholic. Angered at being sidelined by his colleagues, McGee joined the party in power under the leadership of John A. Macdonald. Thomas D’Arcy McGee played a role in Canadian Confederation as he attended both the Quebec and Charlottetown conferences, which led to the formation of the country of Canada. However, McGee was left off the delegation that would deliver the document of confederation to London.
This development led McGee to run for a seat in political office, a campaign in which McGee was attacked by a faction of Irish nationalists known as the Fenians. Thomas D’Arcy McGee won his seat; however, on April 7th, 1868, McGee was murdered at the hands of a man named Patrick Whelan. Whelan was convicted of the crime and hanged as a result. McGee remains one of Canadian history’s great public speakers, as there are several instances throughout his life where he swayed an audience with his speaking ability. Thomas D’Arcy McGee was an important figure in history and in the formation of the country of Canada who tragically met his demise at the hands of a political assassin.


Powell, James. “The Hanging of Patrick Whelan.” Today in Ottawa’s History. August 22, 2014. Accessed November 28, 2018. the-last-drop/.

Archives Canada. “Thomas D’Arcy McGee (April 13, 1825 – April 7, 1868).” Library and Archives Canada. April 22, 2016. Accessed November 28, 2018. https://www.bac- mcgee.aspx.

Block, Niko, and Robin Burns. “Thomas D’Arcy McGee.” The Canadian Encyclopedia. April 22, 2013. Accessed November 28, 2018. thomas-darcy-mcgee.

Burns, Robin B. “Biography – McGEE, THOMAS D’ARCY – Volume IX (1861-1870) – Dictionary of Canadian Biography.” Home – Dictionary of Canadian Biography. 1976. Accessed November 28, 2018. mcgee_thomas_d_arcy_9E.html.

Bélanger, Claude. “Quebec History.” Economic History of Canada – Canadian Economic History. January 2005. Accessed November 28, 2018. QuebecHistory/encyclopedia/ThomasDArcyMcGee-HistoryofCanada.htm.

Reddy, Fran. “The Fenians & Thomas D’Arcy McGee: Irish Influence in Canadian Confederation.” The Wild Geese. June 30, 2014. Accessed November 29, 2018. http:// canadian.

Canada, Archives. “Common Menu Bar Links.” ARCHIVED – Daily Life: Shelter – Inuit – Explore the Communities – The Kids’ Site of Canadian Settlement – Library and Archives Canada. May 02, 2005. Accessed November 28, 2018. https://

Senior, Hereward. The Fenians and Canada. Toronto, Ontario: The Macmillan Company of Canada Limited, 1978.

Macnab, Charles. Understanding the Thomas D’Arcy McGee Assassination: A Legal and Historical Analysis. Ottawa, Ontario: Stonecrusher Press, 2013.

Brady, Alexander. Thomas D’Arcy McGee. Toronto, Ontario: The Macmillan Company of Canada Limited, 1925.

Slattery, T.P. The Assassination of D’Arcy McGee. Garden City, New York: Doubleday & Company, Inc., 1968.

Wilson, David A. Thomas D’Arcy McGee: Volume I. Passion, Reason, and Politics, 1825-1857. Montreal, Quebec: McGill-Queen’s University Press, 2008.

Wilson, David A. Thomas D’Arcy McGee: Volume II. The Extreme Moderate, 1857-1868. Montreal, Quebec: McGill-Queen’s University Press, 2011.

Slattery, T.P. They Got To Find Me Guilty Yet. Garden City, New York: Doubleday & Company, Inc., 1972.


Hadrian’s Works

Architecture that has withstood the test of time gives us an insight into the culture and values of civilizations from the past. Ancient Roman architecture is widely known as some of the most evocative and prominent work because the emperors who ruled used building designs to convey their strength and enrich the pride of their people. Hadrian was not a man of war like the emperors who preceded him. Instead, he dedicated his time to fortifying his nation’s infrastructure and politicking his way into the hearts of provinces far beyond the walls of Rome. I fell in love with the story of Hadrian for two reasons: his architectural contributions have withstood the test of time, and even though he is so well studied, there is so much about his life we do not know. This research paper will zero in on the life of the Roman emperor Hadrian and how his upbringing and experiences influenced his architectural works. Hadrian struggled during his reign, and within his own mind, with an enthusiasm for Classical Greek culture that was fused with the Roman pride his mentors had instilled in him. A description and discussion of the architectural works of Hadrian that I have found most interesting will illustrate this fusion.

Publius Aelius Hadrianus was born in Italica, Spain on the 24th of January, year 76 A.D. He was born to a family that was proud to be among the original Roman colonists in a province considered one of Rome’s prized possessions. The land offered gold, silver, and olive oil of higher quality than that of Italy. Additionally, Hadrian was born during a period when Italica dominated the Roman literary scene. The city also boasted being the birthplace of Hadrian’s predecessor, mentor, and guardian Trajan. Hadrian’s upbringing in Italica gave him a unique perspective on Rome’s ruling of expansive territory as well as the artistic and intellectual qualities of Roman tradition. While growing up his “gaze would fall upon statues of Alexander, of the great Augustus, and on other works of art, which…were all of the highest quality.” He developed a sense of pride in being Roman, and this would translate into his future actions as emperor and architect.

Hadrian was strong in both mind and body. He was tall and handsome, and kept in shape through his love of hunting. In the words of H.A.L. Fisher, Hadrian was also “the universal genius.” He was a poet, singer, sculptor, and lover of the classics, so he became known by many of his peers as a Greekling. The synergy between Greek and Roman ideals within Hadrian enabled him to approach his nation’s opportunities and struggles from multiple angles, which is also why he would become such a successful emperor. By the time he came to power, “Hadrian had seen more of the Roman dominion than any former emperor had done at the time of his accession. He knew not only Spain, but France and Germany, the Danube lands, Asia Minor, the Levant and Mesopotamia, and thus had a personal acquaintance with the imperial patrimony that no one else in Rome could rival.”

During Hadrian’s reign as emperor, he aligned himself with a military policy that was controversial at the time, but inspired by his upbringing in the province of Italica. He believed that the provinces should be guarded by a locally recruited military, while his Roman legions would stay in a single region for decades. The personal interest of provincial residents in protecting themselves was his goal. The only Roman descendants who would aid in the protection of provinces were part of the corps d’elite – the best of the best – and would be sent only to train the recruited military men. During his reign, however, Hadrian experienced the loss of two full legions. The thinning of his military meant he would rely heavily on recruited provincial men as well as physical barriers. One of these – his most famous – was located in Britain: Hadrian’s Wall.

Hadrian’s arrival in Britain was a spark that ignited a fire of progress and development. During the second century, much of London was destroyed by fire, and when the city was rebuilt to an area of about 325 acres, it became Rome’s largest northern city by a long shot. Britons have historically always valued the countryside more than city life, as evident in their plain cities and attractive gardens, and for this reason many of the other cities rebuilt by the Romans ended up shrinking rather than expanding. The inhabitants simply wanted to live in the beauty of nature, and moved out of their towns in great numbers as the countryside was developed. The most significant and long-lasting accomplishment during the time that Rome rebuilt its English territory was the design and completion of Hadrian’s Wall.

Hadrian foresaw a symbiotic relationship that he and the British territory could share. It was based on his need for manpower, of which Britain had plenty to loan out. In return, Hadrian would fortify the territory and protect it from the northern savages. His past defensive campaigns had usually presented him with an expansive stretch of land to account for; but since Britain was surrounded by water in most directions, his first inspiration was to build a wall. Looking back on his struggle in the Rhine-Danube region, Hadrian knew that if a military force were compromised, a stronghold built for retreat would only lead the men to their death. His strategic mind led him to believe that mobility was crucial to remaining tactically offensive, so a system of fortifications spread out to increase the area of control and communication was his ideal option.

Hadrian’s Wall began near the River Tyne and stretched all the way to the Solway. It wasn’t meant to be manned at every point along its length, but rather to act as a system that would drive the traffic of his enemies. “Because its course was plotted from one natural advantage to the next, the wall seems to have chosen the most difficult route across the English countryside.” It climbs steep crags and clings to dangerous ridges. Enemy forces would not only deal with a man-made wall in their path, but in many cases found themselves faced with natural structures that made traversing the wall even more difficult; not to mention the ditch on the north side of the wall, twenty-seven feet wide and nine feet deep. “The gateways allowed the passage of troops for operations to the north and were points where civilian traffic between north and south could be controlled.” The wall was intended to be of mortared masonry up until the River Irthing, where limestone was no longer available locally; from there it continued in turf. Gates were built along the wall roughly every Roman mile (about 1.5 km). Behind each gate was a reinforced guard tower that would house the patrol.

Another reason Hadrian’s construction of the wall is such an astonishing feat is that the entire project was done by hand. Roman legionaries would spend time completing a pre-specified length of the wall, and then allow the next legion to come along and continue where they left off. Unlike most Roman architecture, the stones used to build the wall were small, about eight inches in width and nine inches in length. Historians attribute the use of small stones to the work that was required to get them to the wall. Every stone had to be carried on the backs of men or animals across a distance of eight miles, all the way from a quarry in Cumberland. Then, without the aid of pulleys or ropes, legionaries would place each stone one by one.

As time went on, the wall was rebuilt and fortified by Hadrian’s successors and became a permanent fixture in the British provincial landscape; far more than just a military structure. Romanesque townships were built along the wall situated near the guard forts. The townships would be fully equipped with bath houses, temples, and even full marketplaces.

In the modern world, we do not see Hadrian’s Wall as it was during the height of Roman rule, though it is clear that influential proprietors of the wall over time tried their best to maintain the “symbolism and materiality of the Roman remains.” The years of the wall’s existence have allowed man and weather to tear it down so that its stones could be used to build churches, roads and farmhouses. Experienced architects have worked to rebuild the wall over time, and John Clayton is responsible for one of the most significant rebuilds. He purchased a long stretch of farms along the central portion of the wall, and used the original stones that had fallen over time to reconstruct it. Clayton also moved many of the inhabitants and communities near the wall to locations further away so as to increase the wall’s visibility.

It is refreshing to know, though, that modern-day Roman enthusiasts can see a virtually untouched portion of the wall between Chollerford and Greenhead known as “Britain’s Wall Country.” It is “an unspoiled region of open fields, moors and lakes in the county of Northumberland.” Chesters, a site about a half mile west of Chollerford, is home to one of the best excavated wall forts. It touts remains of towers, gates, steam rooms, cold baths, the commandant’s house, and chambers where soldiers relaxed. The best preserved wall fort in all of Europe to date is located at Housesteads in the same region. The fort is in the shape of a rectangle with rounded edges, and “along its grid of streets are foundations marking the commandant’s house, administrative buildings, workshops, granaries, barracks, hospitals” and more. One of the most Romanesque features of the fort is the presence of latrines, complete with wooden seats, running water, and a flushing system to carry waste away. Britain would not see such luxuries again until the 19th century, when Roman standards were finally equaled. Modern museums along the wall feature many artifacts from the original dwellers and attract tourists from all around the world.

When Hadrian was ten years old, his father passed away. Ancient documentation lends us virtually no details about his mother, but a father figure would have been the most important influence in Hadrian’s upbringing. Fortunately for him, he had two men who would play that role in his life. The first was Acilius Attianus, with whom Hadrian would spend the next five years and receive his first introduction to the capital city. Attianus also introduced Hadrian to his first formal education. He would return home to Italica for a year or two, only to be summoned back by his other guardian, Trajan.

In order to truly understand the character and reasoning behind more of Hadrian’s architectural works, one must look closely at the influence his cousin, mentor and guardian, Trajan, had on him. From an objective point of view, Trajan paved the way for Hadrian by becoming the first emperor to ever be born outside of Italy, and proved to the people of Rome that “loyalty and ability were of more importance than birth.” Trajan also moved young Hadrian from place to place whenever he saw his perspective become too narrow or close-minded.

At the age of forty and prior to becoming emperor, Trajan developed relationships with men like Domitian and his predecessor Nerva. The latter would eventually adopt him as his own heir. His status allowed him to usher Hadrian into political positions that would give him the opportunity to interact with powerful people and make a positive impression. Trajan led both Hadrian and Rome into the light as a positive example. Moderation and justice were at the forefront of all of his decision making, as exemplified in his declaring that all honest men were not to be put to death or disfranchised without trial. Trajan brought Hadrian along with him to fight the Dacian wars, and it is here that Hadrian learned how the Roman army was organized and led. He witnessed Trajan tearing “up his own clothes to supply dressing for the wounded when the supply of bandages ran out.” During the outbreak of the second Dacian war, he granted Hadrian the gift of serving as commanding officer. After Hadrian proved his worth to his guardian and to Rome, Trajan granted him a gift of even greater importance – a diamond ring originally owned by Trajan’s predecessor Nerva, which symbolized the fact that Hadrian would absolutely be his successor.

At the age of forty-two, Hadrian for the first time showed Rome that he was an innovator and a man who marched to the beat of his own drum: he wore a beard. In the later days of the Roman Republic, beards had gone out of style; in fact, no emperor prior to Hadrian had worn one. Some historians credit his beard to wanting to look like a philosopher, while others think he did so to hide a scar running from his chin to the left corner of his mouth. A simpler explanation is that Hadrian realized there was no point in carrying on with the custom without reason: during his lifetime, shaving was practically torture for men, because they had no access to soap or to steel. Hadrian’s reintroduction of the beard among Romans would also foreshadow his eventual distaste for all things Roman.

Hadrian adopted Trajan’s sense of modesty and moderation. He did not accept titles bestowed upon him immediately, and would only accept one when he felt he had truly earned it. One of the best examples of this is demonstrated by the titles he chose to be printed on Roman currency during his reign. Historical records from the period that document Hadrian’s reign incorporate each and every one of the titles that he was ever given. “But on the emperor’s own coins the full official titulature occurs only in the first year. After that, first imperator was dropped, then even Caesar. Up to the year 123, he is pontifex maximus…holder of the tribunician power…For the next five years his coins proclaim him simply as Hadrianus Augustus.”

As if paying homage to Augustus, the founder of the empire whose title he had come to honor, Hadrian set off to see that the infrastructure of his Roman state was intact and fortified under his direction. After five years of travel to improve Corinth, Mantinea, and Sicily, Hadrian returned to Rome. He had laid down excellent groundwork for his governmental policy, so he finally had time to improve the infrastructure of his nation’s capital. He would soon realize his visions for structures like the Temple of Venus, and his most significant architectural accomplishment of all: the Pantheon.

Rome’s Pantheon was originally built by Marcus Vipsanius Agrippa. After it was destroyed by fire in the year 80, Hadrian had it completely redesigned and reconstructed. “The very character of the Pantheon suggests that Hadrian himself was its architect…an impassioned admirer of Greek culture and art and daring innovator in the field of Roman architecture, could have conceived this union of a great pedimental porch in the Greek manner and of a vast circular hall, a masterpiece of architecture typically Roman in its treatment of curvilinear space, and roofed with the largest dome ever seen.” In keeping with his inherent modesty, he decided not to put his own name on the façade of the building. Instead he gave credit to the original designer by inscribing it with M. Agrippa. Though there is no hard proof that Hadrian was its only designer, it is reasonable to believe that his mind, infused with Roman and Greek culture, could conjure its design – one of the most renowned structural feats in human history. The most significant difference between typical Roman and Greek architecture was the importance of height: Romans believed in reaching for the heavens with their architecture. The bigger and more grandiose a building or monument was, the better.

It is unusual that we find so little ancient documentation on the building despite its historical importance. In fact, the only written report from the time is from Dio Cassius, who thought the building had been constructed by its original designer, M. Agrippa. He referred to the building as a temple of many gods. “A rectangular forecourt to the north provided the traditional approach, its long colonnades making the brick rotunda, so conspicuous today, appear less dominating; a tall, octastyle pedimented porch on a high podium with marble steps also created the impression of a traditional Roman temple.” The building’s southern exposure revealed to an onlooker the Baths of Agrippa; to the east lay the Saepta Julia, and to the west the Baths of Nero.

The Pantheon is basically composed of a columned porch and a cylindrical space, called a cella, covered by a dome. Some would argue that the cella is the most essential aspect of the Pantheon, while the porch is present only to give the building a façade. “Between these is a transitional rectangular structure, which contains a pair of large niches flanking the bronze doors. These niches probably housed the statues of Augustus and Agrippa and provided a pious and political association with the original Pantheon.” Once inside, a worshipper would find himself in a magnificently large space illuminated only by a large oculus centered in the ceiling. The walls of the chamber are punctuated with eight deep recesses, alternating between semicircular and rectangular in shape. At the south end of the interior is the most elaborate recess, complete with a barrel-vaulted entrance. “The six simple recesses are screened off from the chamber by pairs of marble columns, while aediculae (small pedimented shrines) raised on tall podia project in front of the curving wall between the recesses.” Encircling the entire room just above the recesses is an elaborate, classically styled entablature. The upper portion of the dome was decorated as well, but what remains is mostly from an 18th-century restoration. “The original decoration of the upper zone was a row of closely spaced, thin porphyry pilasters on a continuous white marble plinth.” The floor is decorated in a checkerboard pattern of squares and circles within squares. The square tiles are made of porphyry, marbles, and granites, while the circles are made of gilt bronze.

The Pantheon was built almost entirely of concrete, save the porch, which was constructed of marble. From the outside, the domed section would appear to an onlooker to be made of brick, but this is not the case: the bricks in this section are only a veneer, a thin decorative layer. The simple lime mortar popular during the period was made by combining sand, quicklime, and water; it set as the water evaporated. The Roman concrete used in the construction of the Pantheon, made with pozzolana, behaved much like modern Portland cement and would set even while the mixture was still wet. Hadrian designed the Pantheon’s dome to be 43.3 meters in diameter, which is also the exact height of the interior room. A cross section of the rotunda reveals that it was based on the dimensions of a perfect circle, and that is what makes the interior space seem so majestic. The sheer span of the dome was never surpassed until the adoption of steel and other modern reinforcements. What made Hadrian’s dome possible was his use of concentric rings laid down one after another over a wooden framework to create the basic shape of the dome during construction. The rings applied pressure to one another, stabilizing the structure. The lower portion of the dome was thick and made of heavy concrete and brick, while the upper portion was built thin and incorporated pumice to keep it lightweight.
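The “perfect circle” proportions described above can be checked with a few lines of arithmetic. This is a quick sketch: the only figure taken from the text is the 43.3-meter diameter, and the rest follows from the inscribed-sphere geometry of the rotunda.

```python
# Pantheon rotunda proportions: the interior height equals the dome's
# diameter because a full sphere of that diameter can be inscribed in
# the space. Only the 43.3 m diameter comes from the text above.

DOME_DIAMETER_M = 43.3
radius = DOME_DIAMETER_M / 2          # 21.65 m

# The cylindrical drum rises one radius, and the hemispherical dome
# adds one more radius on top of it:
drum_height = radius
dome_rise = radius
interior_height = drum_height + dome_rise

print(f"interior height = {interior_height} m")  # equals the 43.3 m diameter
```

In other words, the cross section of the interior inscribes a circle: drum and dome each contribute one radius, so height and diameter coincide exactly.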

The exact purpose of the front porch is unknown; as mentioned before, it may have been added only to give the building a façade. “It consists of a pedimented roof, supported by no less than sixteen monolithic columns, eight of grey Egyptian granite across the front, three on either flank, and two behind them on each side.” By adding this colonnade Hadrian proved that he saw past the traditional uses of temples. Traditionally, a temple’s cella was never entered by the public, so architects honed and focused their craft on the exterior elements of the temple. Hadrian effectively anticipated the Christian church by several centuries in the design of his “House of Many Gods.”

The Pantheon embodies everything Hadrian was as a person during the early portion of his rule. It was very much a fusion of Greek and Roman principles that mirrored Hadrian’s inner character. He shared grand Roman pride with the people he served, and they would forever see the Pantheon as a symbol of that pride. However, as Hadrian matured as a ruler, saw more of the world, and returned to Rome only for short periods at a time, there was a monumental shift in his opinion of his own capital.

Not unlike Trajan, another man played an integral role in Hadrian’s life. His name was Antinous, and although few specifics are known about his life and his relationship with Hadrian, we do know that he was from Bithynia. The two met there when, historians believe, Antinous was about eighteen. “To say that he was ‘like a son’ to Hadrian is to put a charitable slant on their rapport. It was customary for a Roman emperor to assume the airs, if not the divine status, of the Olympian god Jupiter.” Though it was never explicitly confirmed or denied, it is widely believed that Hadrian and Antinous were not merely friends but lovers.

As a part of Hadrian’s entourage, Antinous naturally went on all of the journeys that let him see the world. It was on one of these expeditions, along the Nile River, that Antinous lost his life, forever plaguing the mind of the now devastated emperor. Some say Antinous was murdered by his shipmates, while others speculate that Hadrian may have sacrificed him to an Egyptian mystery cult as a way to gain immortality. Nevertheless, Hadrian went on to express his admiration for the boy to the world at large. He ordered the production of his image in full-scale statues, busts, and miniature portraits on coins and other items. “Full lips, slightly pouting; a fetching cascade of curls around his soft yet squared-off face; somewhat pigeon-breasted, but winningly athletic, his backside making an S-curve that begs to be stroked… one could rhapsodise further, but it is more telling to stress the sheer quantity of production.”

The most fascinating explanation I have discovered for Hadrian’s mass production of Antinous’ image is that of classical religious revival. “Hadrian knew about the Christians, whom he regarded as harmless idiots; he waged war against the Jews, who challenged his authority.” He presented Antinous as Dionysos, as Pan, and as a second Apollo. Each of these guises is intricately portrayed in images of Antinous in order to instill his personal beliefs in the people he ruled.

Today the image of Antinous survives even in Western culture. What we perceive as beauty in both men and women has been remarkably stable for millennia: the symmetrical features and calibrated proportions that Antinous embodied so thoroughly. Across other world cultures, much the same holds true; even populations largely secluded from the Western world perceive beauty in similar ways. As one inspects the image of Antinous methodically, one can only deduce that Hadrian was a man of fine taste.

After a stint in Africa, Hadrian returned to Rome for a short period, but felt as if he belonged there no more. “In Rome he hated the court etiquette, at the same time as he insisted on it: the wearing of the toga, the formal greetings, the ceremonies, the endless pressure of business.” So he left for Athens, and felt at home there. His distaste for the capital of his empire foreshadowed his political decline and eventual downfall, but his positive contributions to Roman society and to historical architecture were far from over.

While in Athens, Hadrian had the opportunity to express his inner Greekling more strongly than ever. He could talk the talk and walk the walk so well in Athens that he undertook the final round of initiation at Eleusis. The Panhellenic council offered him a place to continue leading the people who so dearly looked up to him. Though the council had no formal political power, it unified the public because it was the only body that could certify a new territory as truly Hellenic. While serving the council, and being referred to as Panhellenios, Hadrian constantly immersed himself in the local culture and enjoyed watching the best athletes in Greece perform at the Panhellenic games. The Athenians even granted Hadrian the title of “Olympian.”

At this point in his reign, Hadrian seems to have forgotten the lessons of moderation and justice taught to him by Trajan. He was once an emperor reluctant to accept praise from his people, but in Athens he did just the opposite. He designed and ordered the building of a new city called Hadrianopolis. As a testament to his distaste for Rome, a statue of Hadrian was erected at Olympia. The statue wears a lorica, a breastplate engraved with symbols that depict the character of its wearer. “Hadrian’s lorica shows Athene, flanked by her owl and her snake, being crowned by two graces, and standing atop the Wolf of Rome which suckles Romulus and Remus.” Clearly Hadrian believed deep down that Athens was a city superior to Rome, and the sight of the statue would surely leave a bitter taste in the mouth of any Roman who traveled to Olympia and gazed upon it.

Even after all Hadrian had done for the welfare and protection of Rome, he failed his people in one great respect. He began his rule as an outsider, and remained one because he spent so little time in the capital city. Near the end of his days, the mounting tension became a great source of stress, so much so that he became a tyrant. Hadrian had no mercy on anyone who stepped on his toes. On one hand, the senate understood that he had outsmarted them, and its Italian members were fully aware that they were outnumbered by provincially born citizens. They had additional reason to dislike him because he had intentionally spent Roman resources to benefit the provinces he visited. On the other hand, “he had given them a fine new city, purged of old abuses, enriched and embellished with magnificent buildings…He had given them cleaner airier houses.” In the eyes of the Romans, though, Hadrian had crossed a line. It was no secret that he had come to shy away from Rome and that he preferred Athens. Fortunately for him, he had seen this end to his reign coming. Eight years prior, he had begun building a villa at Tivoli, the classical Tibur, so that he could spend the end of his days in his own version of paradise.

The most extensive of Hadrian’s architectural works is without a doubt his villa near Tivoli, built at the base of the town on a plain about 18 miles from Rome. Critics argue over why Hadrian chose this spot for his villa. He had an entire empire to choose from, and places like the town of Tivoli itself offered fantastic views as well as better weather. Though Hadrian’s choice of location is criticized from a picturesque point of view, he chose it for more practical reasons. For one, he built his villa on the healthiest ground he could find: on the breezy lowlands of the Apennines, within reach of wind from the west, and protected by hills. The plain was naturally uneven, but the architect leveled it, excavating obstacles in some places and paving others. All eight to ten square miles were eventually completely level, partly natural ground and partly poured masonry. Another reason Hadrian may have chosen the location is that the land belonged to his wife Sabina, though she played a negligible part in his life. For all practical purposes, Hadrian chose the spot because he could so easily make the land into anything he wanted with little effort.

Not unlike Versailles, Hadrian’s Villa imposes a formal order through a system of axes, so that nature is dominated by geometry. The architecture is composed of spaces both closed and open. The entire site was built on and around the north, west, and south sides of a giant mound, in some cases cutting well below ground level. A large multistoried wall rose against the mound and contained cubicles that housed guards and slaves. As has been well established, Hadrian’s architectural mind drew from both Greek and Roman styles, and his villa likewise illustrates a fusion of organic and man-made principles. “At Tivoli, it occurs, as it does perhaps even more powerfully on the arcades which form the face of the Palatine hill above the roman forum, that the scale of natural formations and of man-made structures coincides, so that the hills become in a sense man-made, and the structures take on the quality of natural formation.” For the representation of Canopus, a recreation of a resort near Alexandria, Hadrian designed a system of subterranean passages within a ravine to symbolize the River Styx. Hadrian truly felt that he held control of the world in his hands, and saw no bounds to what his works could be or represent.

A modern tourist enters the villa through an area in the north, moving toward the Poikile, yet Hadrian intended his visitors to enter from an area between the Canopus and the Poikile, forcing them to walk beneath the huge mound walls filled with servants. The entrance to the villa illustrates Hadrian’s juxtaposition of circles and squares, a recurring geometric theme in the rest of its architecture and layout. The Canopus lies to the right of the entrance, with the Poikile to the left and, further on, two baths in view. Although a fuller descriptive tour would help immensely in painting the picture of Hadrian’s Villa for the reader, it would take far too many words, so I will focus on only a few of the features I find most fascinating about the structure.

There is a space in Hadrian’s Villa known as a cryptoporticus. At its center was a raised pool, about the size and shape of an average American swimming pool. Because the pool was raised, it seemed to hang in the middle of the court, while the double portico that surrounded it gave the structure a heavier feel. The Hall of Doric Pillars beside it is neither Roman nor Greek in design, and feels as though Hadrian was experimenting with an architectural style all his own. The large field at the top of the hill is perfectly level up to the point where it drops off, supported by the Hundred Chambers above a vast valley. It is rectangular in shape with concave ends, and once again we find a pool at its center. Around that, what was once a hippodrome has been recreated as a garden.

The sculptures found at Hadrian’s Villa are so numerous that it is nearly impossible to study ancient sculpture without mention of the monument. Hadrian furnished his villa not only with all the luxuries Rome had to offer, but with the best artwork. Egyptian figures and sculptures of his friends and family have been found in the ruins. Since each new excavation of the grounds reveals new artifacts, museums around the world have works from the villa on display. Two statues of Antinous have been found in the ruins: one clearly of Greek design, while the other emanates Egyptian symbolism. Hadrian also had a fondness for portraits, so many were found in the ruins as well. He even went on to change Roman law and popularize self-portraits within the homes of the Roman nobility and upper class.

My overall goal with this paper was to dive headfirst into Hadrian’s life and, hopefully, to see why he built the things he did. Personally, seeing Rome through the eyes of Hadrian has given me a newfound appreciation for what inspires architects to design the things they do. All of Hadrian’s works mentioned in this document divulge both his inner and outer struggles as emperor and, more importantly, have influenced the decisions of architects long beyond his time. Just like the emperors before him, Hadrian made architecture that spoke of Roman strength and of the Romans’ everlasting objective to emulate their gods. Hadrian’s title set him at the head of the Roman military, and his strategic and tactical sense is demonstrated by his design of his wall in Britain. He was not an emperor set on conquering as much land as possible, but on fortifying the land he already ruled. I set out to illustrate two sides of Hadrian that were prominent in his works: his love for classical Greek culture and the Roman pride he was brought up with. We saw these two aspects outlined in his designs of the Pantheon and of his villa. The two designs also show how, at the beginning of his reign and directly after the influence of Trajan, Hadrian was still true to his Roman origin; by the end of his term, he had almost completely disregarded the culture of his capital city and fully embraced his Hellenistic tendencies. Hadrian’s Pantheon and villa compare and contrast his Greco-Roman outlook within their own designs. What captivates me even more about Hadrian is that there are still so many mysteries about his life to uncover. Fortunately for us, he left behind artifacts and even entire monuments for us to interpret, and through them we can imagine what life in ancient Rome would have been like.


What were Prisoner of War camps like during the Civil War?

What were Prisoner of War camps like during the Civil War, what were the conditions, and how did they affect the prisoners?

During the Civil War, prisoner of war camps were used when enemy soldiers were captured outside of their territory; those camps were overcrowded, disease-ridden, and in terrible condition. The statistics on the prisoner of war camps have been compiled from multiple sources and records. In the four years of the Civil War, more than 150 POW camps were established in the North and South combined (“Prisons”). That number of camps may seem large, but it clearly was not enough, considering the issues with overcrowding. Though the exact number of deaths is not certain, records state that 347,000 men died in camps in total, 127,000 from the Union and 220,000 from the Confederacy (“Prisons”). Of the men who died in the Civil War, more than half were prisoners of war. The camps should not have been so deadly compared with the battlefield itself; men in camps were usually left to die, and they suffered mental trauma and health complications as bad as, if not worse than, those of soldiers fighting the war. Belle Isle offers an example of a prison valuing extraneous things over its prisoners. From 1862 to 1865, Belle Isle held prisoners in Virginia under terrible conditions, according to the poet Walt Whitman. The prisoners endured biting cold, filth, hunger, loss of hope, and despair (“Civil…Prison”). Belle Isle had an iron factory and a hospital on the island, yet barracks were never built (Zombek); the prisoners had only small tents to protect them from the elements. The lack of shelter shows where the prisoners’ needs ranked against the hospital and the iron factory. As an open-air stockade, Belle Isle was exceedingly difficult to escape (Zombek). The disregard for prisoners’ safety and protection from the elements at Elmira was just as striking. Elmira prison opened in July of 1864 and was known for its terrible death rate of 25% and for holding 12,123 men when its regulated capacity was 4,000 (“Civil…Prison”).
The urgent need for medical supplies was ignored by the capital (“Elmira”). When winter came to Elmira, the prisoners’ clothing was taken, and when Southerners were sent packages, anything that was not grey was burned (“Elmira”). The mistreatment of prisoners was intentional at Elmira as well as at other prisons. Even after this glimpse at some prisons and the overall statistics of the camps, the following is still quite shocking: Andersonville, a Confederate prisoner of war camp, is painted as the worst one in history.

Prisoners at Andersonville were so malnourished they looked like walking bones. They began to lose hope and turned to their Lord. In Andersonville, shelter, or the lack thereof, was another issue: prisoners had to make do with twigs and blankets due to inflation in lumber prices (“Civil…Deadliest”). This shows how every material’s price added up and contributed to the conditions. Within 14 months, 13,000 of the 45,000 prisoners died. The prison was low on beef, cornmeal, and bacon rations, meaning the prisoners lacked vitamin C; therefore, most got scurvy (“Civil…Prison”). With the guards turning a blind eye, prisoners had to fend for themselves. Some took this lack of authority too far, and those were the “Andersonville Raiders.” They stole food, attacked their fellow prisoners, and stole belongings from their shelters (Serena). Andersonville especially made people turn violent and caused them to lose faith in humanity. A 15-foot-high stockade guarded the camp, though the true threat was a line drawn 19 feet inside the stockade to keep prisoners away from the walls. If a prisoner was caught crossing the line, he would be shot and killed (Serena). This technique was honestly unnecessary and a waste of resources. Beyond the conditions, the location of Andersonville was itself a problem. A swamp ran through the camp, and with little access to running water or toilets, prisoners used the swamp. This polluted the water, making it even less drinkable (Serena). In the process of building Andersonville prison, slave labor was used to build the stockade and trenches (Davis). The camps abused their power not only to harm prisoners but to exploit slaves. As the numbers of prisoners swelled, they had trouble finding space to sleep (Davis). With the population increasing and the conditions so foul, the camp was a breeding ground for disease.
Andersonville was assumed to be an optimal position for a POW camp because of the surrounding food supply; the only problem was that farmers did not wish to sell crops to the Confederacy (“Myths”). This is just another example of how Andersonville would have fared better if given more assistance.
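The figures quoted above can be put side by side with a short calculation. This is a sketch using only the numbers cited in this essay (from “Civil…Prison” and Serena), not independent records.

```python
# Mortality and overcrowding at two camps, using only figures quoted
# in the text above.

def mortality_rate(deaths: int, held: int) -> float:
    """Deaths as a percentage of prisoners held."""
    return 100 * deaths / held

# Andersonville: 13,000 of the 45,000 prisoners died within 14 months.
andersonville_rate = mortality_rate(13_000, 45_000)

# Elmira: held 12,123 men against a regulated capacity of 4,000,
# with a reported death rate of 25%.
elmira_overcrowding = 12_123 / 4_000

print(f"Andersonville mortality: {andersonville_rate:.1f}%")        # ~28.9%
print(f"Elmira overcrowding: {elmira_overcrowding:.1f}x capacity")  # ~3.0x
```

The arithmetic makes the essay’s point concrete: Andersonville’s quoted figures imply that more than one in four prisoners died, and Elmira held roughly three times the men it was regulated to hold.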

Was any justice ever served to the men who ran the camp?

James Duncan and Henry Wirz were both officers at Andersonville; after the prison closed, both were charged with war crimes (Davis). Wirz’s two-month trial started in August 1865 and included 160 witnesses. Though Wirz had not shown open distaste toward prisoners, he served as a scapegoat for many of the allegations and was charged with harming the lives and health of Union soldiers, and with murder (“Henry”). As commander, Henry Wirz had witnessed all the mistreatment at Andersonville, which made him liable for the thousands of prisoners who died. Wirz was then executed (“Henry”). Unlike Wirz, Duncan was lucky: after his trial he was sentenced to 15 years, and after spending a year at Fort Pulaski he escaped (Davis). Duncan was never truly punished for his actions. With the logistics of Andersonville established, it is important to understand the arguments of both the Union and the Confederacy. Why were prisoners treated so poorly when the necessary supplies were available? The North had access to a surplus of medical supplies, food, and other resources, meaning it could have treated prisoners better (“Prisons”). It had no reason beyond wanting to save resources and torture Confederate soldiers. In the North, the camps simply let soldiers live shelterless, without protection from the elements (Macreverie). In contrast, the South did not intend to have such poor conditions. For example, at Andersonville the prisoners and the guards were fed the same rations (Macreverie). The South struggled far more with food than the North. Those tending the fields had no shoes and only a handful of cornmeal or a few peanuts (“Prisons”). The prisoners went unfed due to a lack of preparation. Both sides tried to reduce the reasons for neglect in the camps to food shortages and a desire for vengeance.
Both sides ran the camps differently, but they faced the same problem: shortages of supplies (“Myth”). The South tried its best, though its best was not good enough; the North had the luxury of a choice in how it treated prisoners, and it chose the wrong one. During the Civil War, prisoner of war camps were used when enemy soldiers were captured outside of their territory, and those camps were overcrowded, disease-ridden, and in terrible condition. It is safe to say Andersonville was a memorable prison, but for all the wrong reasons. Neither side’s arguments were convincing when it came to justifying its actions. The statistics involving the camps are striking and, honestly, shocking. All in all, prisoner of war camps were unsafe and had terrible conditions, but they served their purpose of holding soldiers captured from the opposing side during the war.


Formation of Magmatic-Hydrothermal Ore Deposits


Magmatic-hydrothermal ore deposits provide the main source for many trace elements such as Cu, Ag, Au, Sn, Mo, and W. These deposits form in a tectonic setting, by fluid-dominated magmatic intrusions in Earth’s upper crust, along convergent plate margins where volcanic arcs are created. Vapor and hypersaline liquid are the two forms of magmatic fluid important to the ore deposits. The term ‘fluid’ as used here means a non-silicate, aqueous liquid or vapor; hypersaline liquid is also known as brine and denotes a salinity of >50 wt%. The salinities of magmatic environments that can form ore deposits span a substantial range, from a very low 0.2-0.5 wt% to the hypersaline >50 wt%. The salinity of a fluid was long thought to be one of the main controls on which elements formed under specific conditions; however, recent developments support a newer view that is discussed later. There are multiple types of ore deposits, such as skarn, epithermal (high and low sulfidation), porphyry, and pluton-related veins. However, two of them, porphyry and epithermal, produce the greatest abundance of trace elements around the world (Hedenquist and Lowenstern, 1994).

Porphyries, one type of ore deposit occurring adjacent to or hosted by intrusions, typically develop in hypersaline fluid and are associated with Cu ± Mo ± Au, Mo, W, or Sn. Another type of ore deposit, which occurs either above the parent intrusion or distant from the magmatic source, is known as epithermal and relates to Au-Cu, Ag-Pb, and Au (Ag, Pb-Zn). The term epithermal rightly refers to ore deposits formed at low temperatures of <300 °C and at shallow depths of 1-2 km (Hedenquist and Lowenstern, 1994). Epithermal ore deposits can be further separated into two types, high sulfidation and low sulfidation, as shown in Figure 1. High sulfidation epithermal deposits form above the parent intrusion, near the surface, from oxidized, highly acidic fluids. These systems are rich in SO2- and HCl-bearing vapor that is absorbed into near-surface waters, causing argillic alteration (kaolinite, pyrophyllite, etc.); the highly acidic waters are then progressively neutralized by the host rock. Low sulfidation deposits also occur near the surface, but away from the source rock, as seen in Figure 1, and are dominated by meteoric waters. These fluids are reduced, with a neutral pH and CO2, H2S, and NaCl as the main fluid species. The main difference between the two epithermal fluids is how far they have equilibrated with their host rocks before ore deposition (White and Hedenquist, 1995). In addition to the two main types of ore-forming deposits, there are certain environments in which they occur.
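The contrast between the two epithermal end-members can be restated as a small lookup table. This is a sketch summarizing only the characteristics listed above (after White and Hedenquist, 1995); the field names are chosen for illustration, not taken from the source.

```python
# Two epithermal end-members, summarized from the text above.
# Field names ("position", "fluid", ...) are illustrative labels.
EPITHERMAL = {
    "high sulfidation": {
        "position": "above the parent intrusion, near the surface",
        "fluid": "oxidized, highly acidic magmatic vapor",
        "main_species": ["SO2", "HCl"],
        "alteration": "argillic (kaolinite, pyrophyllite)",
    },
    "low sulfidation": {
        "position": "near the surface, away from the source rock",
        "fluid": "reduced, near-neutral pH, meteoric-water dominated",
        "main_species": ["CO2", "H2S", "NaCl"],
        "alteration": "fluids largely equilibrated with host rocks",
    },
}

for name, props in EPITHERMAL.items():
    print(f"{name}: {props['fluid']} ({', '.join(props['main_species'])})")
```

Laying the two types out this way makes the key distinction explicit: the degree to which each fluid has equilibrated with its host rocks before ore deposition.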

There are three important recurring ore-forming environments around the globe that produce these trace elements. The first is the deep crust, where gold deposits form owing to mixing and phase separation among aquo-carbonic fluids. The second is granite-related Sn-W veins, where the interaction of hot magmatic vapor and hypersaline magmatic liquid with cool, surface-derived meteoric water is a widespread mechanism for ore mineral precipitation by fluid mixing in the upper crust. The third is porphyry-epithermal Cu-Mo-Au systems, where the varying density and degree of miscibility of saline fluids between surface and magmatic conditions point to the role of fluid phase separation in ore-metal fractionation and mineral precipitation (Heinrich et al., 2007).

Figure 1 (Hedenquist and Lowenstern, 1994)


The formation of magmatic-hydrothermal ore deposits is a complicated but geologically short-lived process that passes through numerous phases. A general depiction of the components involved in the system is given in Figure 2. Hydrothermal ore deposits are initiated by the “generation of hydrous silicate magmas, followed by their crystallization, the separation of volatile-rich magmatic fluids, and finally, the precipitation of ore minerals in veins or replacement deposits” (Audetat, Gunther, and Heinrich, 1998). Porphyry magma chambers have been dated using individual zircon grains. Since the magma reservoirs in which porphyry deposits form occur in the upper crust, they are found to have a maximum life span of <1 Ma. The porphyry stocks struggle to remain “at the temperature of mineralization (>350 °C) for more than even a few tens of thousands of years, even with massive heat advection by magmatic fluids” (Quadt et al., 2011). The hosted zircons analyzed show significantly different ages ranging over millions of years, indicating multiple pulses of porphyry emplacement and mineralization. Diffusive equilibration between magmatic fluids and altered rocks occurs even faster than mineralization. Thermal constraints suggest that the porphyries and their constituent ore fluids underwent the ore-forming process in multiple spurts of as little as 100 yr each. The methods behind these estimates are discussed later (Quadt et al., 2011).

Figure 2. Illustration of an ore-forming magmatic-hydrothermal system, emphasizing the scale and transient nature of hybrid magma with variable mantle (black) and crustal (gray) components. Interacting processes operate at different time scales, depending on the rate of melt generation in the mantle, the variable rate of heat loss controlled by ambient temperature gradients, and the exsolution of hydrothermal fluids and their focused flow through a vein network, where Cu, Au, or Mo are enriched 100-fold to 1000-fold compared to magmas and crustal rocks (combining Dilles, 1987; Hedenquist and Lowenstern, 1994; Hill et al., 2002; Richards, 2003). (Quadt et al., 2011)

Chemical and temperature gradients are important because the selective dissolution and re-precipitation of minerals enriches rare elements to form ore deposits. Most ore deposits form in the upper crust, where the advection of magma and hot fluids into cooler rocks creates rather steep temperature gradients. Transient steep gradients in pressure, density, and miscibility, created where brittle deformation of rocks forms vertical vein networks, show that the physical properties of miscible fluids are of equal importance. The system H2O-CO2-NaCl controls the composition of crustal fluids, causing variations in physical properties that in turn affect the chemical stability of dissolved species (Heinrich 2007).

Evidence from fluid inclusions suggests the interaction of multiple fluids in volcanic arcs through fluid mixing as well as fluid phase separation. These fluid inclusions can provide insight into the substantial role the geothermal gradient plays in the formation of these ore deposits and why they occur only under certain environmental conditions. Salinity was thought to be the primary control on which elements precipitated, but it is now argued that vapor transport and sulfur play a key role, especially for Cu-Au deposits. Supporting evidence suggests that one likely bears greater significance than the other, so both are discussed and compared. The addition of sulfur causes Cu and Au to prefer the vapor phase. Figure 3 shows that vapor/liquid concentration ratios surpass 1, allowing these elements to shift more easily into the vapor phase, where they can then be transported.

Figure 3 (Left). Experimental data for the partitioning of a range of elements between NaCl-H2O-dominated vapor and hypersaline liquid, plotted as a function of the density ratio of the two phases coexisting at variable pressures (modified from Pokrovski et al. 2005; see also Liebscher 2007, Figs. 13, 14). As required by theory, the fractionation constant of all elements approaches 1 as the two phases become identical at the critical point for all conditions and bulk fluid compositions. Chloride-complexed elements, including Na, Fe, Zn but also Cu and Ag, are enriched to similar degrees in the saline liquid, according to these experiments in S-free fluid systems. Hydroxy-complexed elements including As, Si, Sb, and Au reach relatively higher concentrations in the vapor phase, but never exceed their concentration in the liquid (m_vapor/m_liquid < 1). Preliminary data by Pokrovski et al. (2006a,b) and Nagaseki and Hayashi (2006) show that the addition of sulfur as an additional complexing ligand increases the concentration ratios for Cu and Au in favor of the vapor (arrows); in near-neutral pH systems (short arrows) the increase is minor, but in acid and sulfur-rich fluids (long arrows) the fractionation constant reaches ~1 or more, explaining the fractionation of Cu and Au into the vapor phase as observed in natural fluid inclusions. (Heinrich 2007)


The solubility of ore minerals increases as water vapor density increases with the transient pressure rise along the liquid-vapor equilibrium curve. The nature of this occurrence suggests 'that increasing hydration of aqueous volatile species is a key chemical factor determining vapor transport of metals and other solute compounds' (Heinrich et al., 2007). The high salinity in hypersaline fluid systems allows vapor and liquid to coexist beyond water's supercritical point. Increasing water vapor density, accompanied by an increase in temperature, leads to higher metal concentrations as an inherent result of increased solubility of the minerals in vapor. '[Observed] metal transport in volcanic fumaroles and even higher ore-metal concentrations in vapor inclusions from magmatic-hydrothermal ore deposits' (Heinrich et al., 2007) has led to research aimed at quantifying vapor transport. Fractionation is of key importance because all elements behave differently when it occurs between the coexisting vapor and the hypersaline liquid. Certain elements such as 'Cu, Au, As, and B partition into the low-density vapor phase while other ore metals including Fe, Zn, and Pb preferentially enter the hypersaline liquid' (Heinrich et al., 2007). In short, vapor is now known to contain higher concentrations of ore metals than any other known geological fluid.
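The density dependence of vapor solubility described above is often summarized by an empirical relation; the following is a schematic form only (the coefficients a and b are metal- and temperature-dependent fit parameters, and neither the symbols nor their values come from this text):

```latex
% Schematic density model for metal solubility in water vapor:
% solubility rises with vapor density along the liquid-vapor curve.
\log m_{\text{metal}} \;\approx\; a(T) \;+\; b(T)\,\log \rho_{\mathrm{H_2O}}
```

The positive slope b captures why a transient pressure (hence density) rise along the boiling curve enhances metal transport in the vapor.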


'Sulfur is a major component in volcanic fluids and magmatic-hydrothermal ores including porphyry-copper, skarn, and polymetallic vein deposits, where it is enriched to a greater degree than any of the ore metals themselves' (Seo, Guillong, and Heinrich, 2009). Sulfur is necessary for the precipitation of sulfur-bearing minerals such as pyrite and anhydrite. Reduced sulfur (sulfide) is an essential ligand in metal-transporting fluids, increasing the solubility of Cu and Au. Introducing sulfur to Cu and Au in the vapor phase can also make them relatively volatile. Sulfur changes the conditions under which Cu and Au enter the vapor phase, as seen in Figure 3, and sheds light on why Cu and Au can partition into low-density magmatic vapor (Heinrich et al., 2007). In short, sulfur makes it easier for Cu and Au in particular to enter the vapor phase, where they can be transported more readily, making sulfur key to the high ore-metal concentrations in the vapor phase.

Methods and Results:

ZIRCON DATING using LA-ICP-MS and ID-TIMS

Figure 4 (Above, Left). Rock slab from Bajo de la Alumbrera, showing early andesite porphyry (P2, left part of picture and xenolith in lower right corner) that solidified before becoming intensely veined and pervasively mineralized by hydrothermal magnetite + quartz with disseminated chalcopyrite and gold. After this first pulse of hydrothermal mineralization, a dacite porphyry intruded along an irregular subvertical contact (EP3, right part of picture), before both rocks were cut by a second generation of quartz veins (diagonal toward lower right). (Quadt et al., 2011)

Figure 5 (Above, Right). A: Concordia diagram with isotope dilution-thermal ionization mass spectrometry (ID-TIMS) results from the first (red ellipses, P2) and second (blue ellipses, EP3) Cu-Au mineralizing porphyry of Bajo de la Alumbrera. B, C: For comparison, published laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) analyses and their interpreted mean ages and uncertainties on the same age scale (replotted from Harris et al., 2004, 2008; LP3 is petrographically indistinguishable from EP3, but also cuts the second phase of ore veins). All errors are 2σ. MSWD: mean square of weighted deviates. (Quadt et al., 2011)

'Porphyry Cu ± Mo ± Au deposits form by hydrothermal metal enrichment from fluids that immediately follow the emplacement of porphyritic stocks and dikes at 2-8 km depth' (Quadt et al., 2011). Samples were taken from two porphyry Cu-Au deposits, the first from Bajo de la Alumbrera, a volcanic complex located in northwestern Argentina. Uranium-lead LA-ICP-MS (laser ablation-inductively coupled plasma-mass spectrometry) and ID-TIMS (isotope dilution-thermal ionization mass spectrometry) analyses were performed on zircons from the samples to obtain concordant single-crystal ages for two mineralizing porphyry intrusions. The LA-ICP-MS data were taken previously and are represented in Figure 5, B and C.
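As background to both dating methods, LA-ICP-MS and ID-TIMS ultimately convert a measured radiogenic 206Pb/238U ratio into an age via the standard U-Pb decay equation. A minimal sketch, using the conventional 238U decay constant and an illustrative ratio chosen to land near the ~7.2 Ma age scale of the Alumbrera porphyries (the ratio itself is not a value from the paper):

```python
import math

# Standard U-Pb age equation: t = ln(1 + 206Pb*/238U) / lambda_238,
# with lambda_238 = 1.55125e-10 per year (conventional Jaffey value).
LAMBDA_238 = 1.55125e-10

def u_pb_age(pb206_u238):
    """Age in years implied by a radiogenic 206Pb/238U ratio."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238

# An illustrative ratio of ~1.1175e-3 corresponds to roughly 7.2 Ma.
age_ma = u_pb_age(1.1175e-3) / 1e6
print(round(age_ma, 2))  # 7.2
```

The precision differences between the two techniques come from how this ratio is measured, not from the age equation itself.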
ID-TIMS was used to analyze samples from the two intrusions. One sample, known as BLA-P2, is quartz-magnetite(-K-feldspar-biotite) altered P2 porphyry, while the other, taken 5 m from the EP3 contact to exclude contamination, is known as BLA-EP3. BLA-EP3 'truncates the first generation of hydrothermal quartz-magnetite veinlets associated with P2, and is in turn cut by a second generation of quartz veins' (Quadt et al., 2011). The results were compared with the previously existing data, and the P2 porphyry grain ages range from 7.772 ± 0.135 Ma to 7.212 ± 0.027 Ma. The maximum age for subvolcanic intrusion, solidification, and first hydrothermal veining of P2 is as late as 7.216 ± 0.018 Ma (P2-11 is the most precise of the young group), when the zircons crystallized from the parent magma. The EP3 porphyry truncated these veins and yielded concordant single-grain ages ranging from 7.126 ± 0.016 Ma to 7.164 ± 0.057 Ma. It is concluded that the two intrusions are separated in age by 0.090 ± 0.034 Ma. With these data, it can be said that the two porphyries intruded within 0.124 m.y. of each other.

Figure 6. Concordia diagrams with isotope dilution-thermal ionization mass spectrometry (ID-TIMS) results from three porphyries (A: KM10, KM2, 5091-400; B: KM5; C: D310) bracketing two main pulses of Cu-Au mineralization at Bingham Canyon (Utah, USA); Re-Os (molybdenite) data are from Chesley and Ruiz (1997). (Quadt et al., 2011)

The second set of samples was taken from Bingham Canyon in Utah, USA, from pre-ore, syn-ore, and post-ore porphyry intrusions. All three porphyry intrusions were dated using ID-TIMS analysis and yielded the results seen in Figure 6. It was found that two Cu-Au mineralization pulses occurred. The first is associated with a quartz monzonite porphyry that existed prior to the mineralization of the Cu-Au in the porphyry.
A second pulse of Cu-Au is known to have occurred because it cuts through the latite porphyry and truncates the first veins. Thirty-one concordant ages were taken collectively from the three intrusions, and the most precisely dated grains show that all the porphyries overlap in an age range of 38.10-37.78 Ma. A single outlying grain of younger age is present in the oldest intrusion and is attributed to residual Pb loss. Interpretation of the three porphyries and the two Cu-Au pulses yields a window of 0.32 m.y. for their occurrence. All three intrusions contain significantly older concordant grains, dated as far back as 40.5 Ma, which implies a minimum lifetime of the magmatic reservoir of 0.8-2 million years (Quadt et al., 2011). Errors in the analyzed zircon grains can be minimized if crystals that have undergone Pb loss are avoided or have been removed by chemical abrasion. The lifetime of mineralization of a single porphyry is important for alternative physical models of magmatic-hydrothermal ore deposits, which are expected to be constrained to less than 100 k.y. Comparison of the porphyry intrusions at both sites provided substantial evidence of the relatively short lifespan of their formation. At both sites, the two consecutive pulses occur less than 1 m.y. apart, 0.09 m.y. and 0.32 m.y. respectively.
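The separation between the two Alumbrera intrusions can be checked from the single-grain ages quoted above. A minimal sketch, propagating the two quoted uncertainties in quadrature; this assumes independent errors, so it need not reproduce the published ± 0.034 Ma exactly, which may include additional systematic terms:

```python
import math

# Difference between two ages with independent 1-sigma-style uncertainties,
# propagated in quadrature. Ages in Ma, from the youngest P2 grain (7.216
# +/- 0.018) and the oldest EP3 grain group (7.126 +/- 0.016) quoted above.

def age_difference(a1, s1, a2, s2):
    """Return (a1 - a2) and its quadrature-propagated uncertainty."""
    return a1 - a2, math.sqrt(s1**2 + s2**2)

diff, sigma = age_difference(7.216, 0.018, 7.126, 0.016)
print(round(diff, 3), round(sigma, 3))  # 0.09 0.024
```

The central value reproduces the 0.090 Ma separation; the smaller quadrature uncertainty illustrates why published uncertainties can exceed a naive independent-error estimate.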


Mineral deposits of Sn-W are commonly formed by the mixing of magmatic fluids with external fluids along the contact zones of granitic intrusions (Heinrich, 2007). Tin precipitation was shown to be driven by the mixing of hot magmatic brine with cooler meteoric water, by using LA-ICP-MS to measure fluid inclusions trapped before, during, and after the deposition of cassiterite (SnO2) (Audetat, Gunther, and Heinrich, 1998). The fluid inclusions that formed in minerals at the time of ore formation recorded temperatures between 500 and 900 °C at several kilometers depth. The inclusions average between 5 and 50 micrometers in size. To demonstrate the importance of fluid-fluid interaction in the formation of magmatic-hydrothermal ore deposits, the Yankee Lode was analyzed. The Yankee Lode is a magmatic-hydrothermal vein deposit in eastern Australia and is part of the Mole Granite intrusion. The vein consists primarily of quartz and cassiterite that is well preserved in open cavities. Two quartz crystals were analyzed; they share the same pattern of hydrothermal growth and precipitation, represented by successive zones of inclusions, as seen in Figure 7.

Fig. 7 (A) Longitudinal section through a quartz crystal from the Yankee Lode Sn deposit, showing numerous trails of pseudosecondary fluid inclusions and three growth zones recording the precipitation of ilmenite, cassiterite, and muscovite onto former crystal surfaces. The fluid inclusions shown in the right part of the figure represent four different stages in the evolution from a magmatic fluid toward a meteoric water-dominated system. Thtot corresponds to the final homogenization temperature. (Audetat, Gunther, and Heinrich, 1998)

There are indications of boiling fluid throughout the entire history of the quartz precipitation, due to the presence of both low-density vapor inclusions and high-density brine inclusions. Apparent salinities of both inclusion types were obtained by microthermometric measurements, and 'pressure for each trapping stage was derived by fitting NaCl equiv values and homogenization temperatures (Thtot) of each fluid pair into the NaCl-H2O model system' (Audetat, Gunther, and Heinrich, 1998). These data show that three pulses of extremely hot fluid were injected into the system before cool-water mixing began, each accompanied by a temporary increase in pressure. The pressure increases are noted, along with some of the fluid inclusions analyzed, in Figure 7. In this system, tin is the main precipitating ore-forming element, as represented in Figure 8. The initial Sn concentration of 20 wt% starts to drop drastically at the onset of cassiterite precipitation. By stage 23, represented in Figure 8C, only 5% of the initial Sn concentration remains. At this same stage, the non-precipitating elements show that the fluid mixture still contains 35% magmatic fluid, indicating that the chemical and thermal (cooling) effects of fluid mixing caused the precipitation of cassiterite. Three pulses of magmatic fluid occurred before cassiterite formation was initiated by Sn precipitation; however, the onset of mixing with cool meteoric groundwater did not occur until the third pulse. This demonstrates that fluid-fluid mixing is critical to the precipitation of the ore-forming elements (Audetat, Gunther, and Heinrich, 1998).
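The stage-23 mass balance above can be reproduced with a simple two-component mixing sketch. It assumes, for illustration only, that the meteoric end-member carries a negligible concentration of the conservative (non-precipitating) tracer:

```python
# Two-component mixing: estimate the magmatic fluid fraction from a
# conservative tracer, then the Sn deficit attributable to cassiterite
# precipitation. Concentrations are normalized to the magmatic value.

def magmatic_fraction(c_obs, c_magmatic, c_meteoric=0.0):
    """Mass-balance fraction of magmatic fluid in a binary mixture."""
    return (c_obs - c_meteoric) / (c_magmatic - c_meteoric)

c0 = 1.0                                 # normalized magmatic concentration
f = magmatic_fraction(0.35 * c0, c0)     # conservative tracer gives f = 0.35
sn_expected = f * c0                     # Sn if it behaved conservatively
sn_observed = 0.05 * c0                  # Sn actually measured at stage 23
precipitated = 1 - sn_observed / sn_expected
print(f, round(precipitated, 2))         # 0.35 0.86
```

Dilution alone would leave 35% of the initial Sn; the observed 5% implies that roughly 86% of the Sn still expected in the mixture was removed as cassiterite.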

There is another process occurring in this system alongside the precipitation of Sn: the magmatic vapor phase selectively transports copper and boron into the liquid mixture, as represented in Figure 8D. Boron's first marked reduction occurs at stage 25 in Figure 8D, exactly where tourmaline begins to precipitate. Note that the concentration of B remained near its original magmatic value in stages 23 and 24, even as the non-precipitating elements underwent substantial dilution. B also decreased in stages 26 and 27 relative to its initial value, but not as much as would be expected given the continual growth of tourmaline extracting B from the fluid. Copper follows the same trend as boron, retaining its original magmatic value in stages 23 and 24, indicating an excess of these two elements. The vapor inclusions were found to be selectively enriched in Cu and B. The excess is explained by condensation of magmatic vapor into the mixing liquids, as Cu and B prefer to partition into the vapor phase rather than the saline liquid, unlike the other elements. It has been suggested that Cu is stabilized in a sulfur-enriched vapor phase, whereas other metals are stabilized in brine by chloro-complexes. Gold (Au) is thought to behave similarly to Cu, which could explain why it is selectively coupled with Cu and As in high-sulfidation epithermal deposits (Audetat, Gunther, and Heinrich, 1998).

Fig. 8. (Left) Evolution of pressure, temperature, and chemical composition of the ore-forming fluid, plotted on a relative time scale recorded by the growing quartz crystal. (A) Variation in temperature and pressure, calculated from microthermometric data. Hot, magmatic fluid was introduced into the vein system in three distinct pulses before it started to mix with cooler meteoric groundwater. (B) Concentrations of non-precipitating major and minor elements in the liquid-dominant fluid phase, interpreted to reflect progressive groundwater dilution to extreme values. (C) A sharp drop in Sn concentration is controlled by the precipitation of cassiterite. (D) B and Cu concentrations reflect not only mineral precipitation (tourmaline) but also the selective enrichment of the brine-groundwater mixture by vapor-phase transport. (Audetat, Gunther, and Heinrich, 1998)

Fig. 9 (Right) Partitioning of 17 elements between magmatic vapor and coexisting brine, calculated from analyses of four vapor and nine brine inclusions in two ‘boiling assemblages.’ At both pressure and temperature conditions recorded in these assemblages, Cu and B strongly fractionate into the magmatic vapor phase. (Audetat, Gunther, and Heinrich, 1998)


Fluids are released from the upper crustal plutons associated with magmatic-hydrothermal systems. These fluids are usually saline, and phase separation splits them into very low salinity vapors and high-salinity brines, as discussed earlier. Salt precipitation can have a major impact on the permeability of a system and on ore formation along the liquid-vapor-halite curve, causing certain ore minerals to precipitate preferentially. Halite-bearing fluid inclusions from porphyry deposits were analyzed using microthermometry, revealing that the inclusions can homogenize by halite dissolution (Lecumberri-Sanchez et al., 2015).

Based on the hypothesis, formed from the examination of fluid inclusions, that halite saturation is widespread in magmatic-hydrothermal fluids, further data were collected and studied. Roughly 11,000 fluid inclusions from 57 different porphyry systems were used to identify halite-bearing inclusions; about 6,000 halite-bearing inclusions were found in the data set. These inclusions were then subdivided by mode of homogenization, either vapor-bubble disappearance or halite dissolution, and roughly 90% of the porphyry systems (52 out of 57) were found to contain inclusions that homogenize by halite dissolution. The pressure at homogenization was then calculated from the PVTX (pressure-volume-temperature-composition) properties of H2O-NaCl, and the pressures at fluid-inclusion homogenization were found to exceed 300 MPa. If significant fluid-inclusion migration (several millimeters) occurred, water loss could result in salinity as well as density changes. This is not the proposed explanation, however, because migration of no more than a few micrometers is common. With no migration evident, the more plausible explanation is heterogeneous entrapment of halite under highly variable temperatures (on the order of 100 °C), meaning that halite saturation occurred at the time of trapping. The coexistence of vapor inclusions with brine inclusions that homogenize by halite dissolution is a result of halite saturation along the liquid-vapor-halite curve. Halite trapped on the surface of another growing mineral has also been observed, meaning that 'heterogeneous entrapment of solid halite inside FIs is a natural consequence of halite saturation' (Lecumberri-Sanchez et al., 2015).
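The headline statistic of the compilation is simple arithmetic on the system counts quoted above; the quotient is closer to 91% than the rounded 90% usually cited:

```python
# Share of porphyry systems in which brine inclusions homogenize by
# halite dissolution, using the counts from the compilation (52 of 57).
systems_total = 57
systems_halite = 52
fraction = systems_halite / systems_total
print(round(100 * fraction, 1))  # 91.2
```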

Figure 10. Left: Pressure-salinity projection of the H2O-NaCl phase diagram at 400 °C (Driesner and Heinrich, 2007), showing a potential mechanism for copper sulfide mineralization via halite (H) saturation. Destruction of the liquid (L) phase results in partitioning of H2O to the vapor (V), and Cu and Fe to the solid phase. The right side shows the same process schematically. (Lecumberri-Sanchez et al., 2015)

Halite saturation usually occurs at the eutectic, where vapor, liquid, and halite all coexist. Since halite precipitates at shallow crustal levels, other ore minerals are able to precipitate out of the liquid. The Na-, Cl-, Fe-, and Cu-rich liquid + vapor assemblage traverses the phase boundary to the more stable vapor + halite field, as seen in Figure 10. Once this point is reached, the liquid fraction decreases and the Cu-Fe sulfides (±Au) dissolved in the liquid begin to precipitate. It can be concluded that salt saturation acts as a precipitation mechanism in magmatic-hydrothermal fluids. This allows the rapidly ascending vapor phase to transport sulfur and gold upward; however, the mechanism is limited by the availability of reduced sulfur. The disproportionation of SO2 occurs at around the same temperatures as halite saturation, providing the needed sulfur. This indicates that salinity is not the only key component in the formation of magmatic-hydrothermal deposits; sulfur is of equal if not greater importance (Lecumberri-Sanchez et al., 2015).

SULFUR in a Porphyry Cu-Au-Mo System

In order to better understand the role sulfur plays in high-temperature metal segregation by fluid phase separation, two porphyry Cu-Au-Mo deposits were examined along with two granite-related Sn-W veins and barren miarolitic cavities. The fluid inclusion assemblages underwent microthermometric analysis to measure salinities. No modification after entrapment occurred, and the homogenization temperatures of the brine inclusions ranged from 323 to 492 °C. This temperature variability (on the order of 100 °C) indicates heterogeneous entrapment, signifying halite saturation at the time of fluid-inclusion trapping. LA-ICP-MS was used to measure absolute element concentrations with Na as an internal standard. The results were coupled with the microthermometry data to estimate the P-T conditions of brine + vapor entrapment (Seo et al., 2009).

Sulfur quantification in fluid inclusions was done using two different ICP-MS instruments, a sector-field MS and a quadrupole MS, on homogeneous inclusions with similar salinities (42.4 ± 1.2 NaCl equiv. wt%). The size of the inclusions being analyzed can inhibit the ability to detect sulfur. The quantification shows that the dominant components of the coexisting brine-vapor inclusions are NaCl, KCl, FeCl2, Cu, and S. The concentrations of Cu and S are very similar and follow the same trend, as seen in Figure 11, when normalized to Na (the dominant cation). Figure 11 shows the correlation of S/Na to Cu/Na, with a slope of 1 and a molar S:Cu ratio of 2:1. Figure 12 represents the fractionation behavior, with some elements preferring the brine and some the vapor. The elements are normalized to Pb, which prefers brine, and the figure shows that Au, Cu, and S are clearly correlated in their partitioning into the vapor. Figures 11 and 12 also indicate the significance of the environment in which the samples formed. The Sn-W samples show Cu and S preferring the vapor, whereas the porphyry Cu-Mo-Au samples show Cu and S enrichment in the vapor phase relative to the salt components, although the absolute concentrations in the vapor are lower than in the brine. Overall, the combined fluid phases in the porphyry Cu-Mo-Au samples are much richer in S, Cu, and Au than the Sn-W mineralizing fluids. The importance of sulfur and chloride as complexing agents in both fluid phases can be represented by the exchange equilibria:
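The numbered equilibria referred to here are not reproduced in this copy of the text. The following is only a schematic reconstruction of ligand-exchange reactions of the kind described (the exact species and phase labels in the original may differ):

```latex
% Schematic ligand-exchange equilibria (reconstruction, not the original
% numbered equations): Cu shifts to hydrosulfide complexes in the vapor,
% while K, Na, and Fe remain stabilized as chloride complexes in the brine.
\mathrm{CuCl_{(brine)} + HS^{-}_{(vapor)} \;\rightleftharpoons\; CuHS_{(vapor)} + Cl^{-}_{(brine)}} \tag{1}
\mathrm{K^{+} + Cl^{-} \;\rightleftharpoons\; KCl_{(brine)}} \tag{2}
\mathrm{Na^{+} + Cl^{-} \;\rightleftharpoons\; NaCl_{(brine)}} \tag{3}
\mathrm{Fe^{2+} + 2\,Cl^{-} \;\rightleftharpoons\; FeCl_{2\,(brine)}} \tag{4}
```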

Exchange equilibrium (1) shows the preferred shift toward Cu-S complexes in the vapor, and (2-4) show the stabilization of K, Na, and Fe as chloride complexes in the brine. The main significance is that Cu is preferentially stabilized in the vapor with the addition of S (Seo et al., 2009).

This means that salinity is not the main contributing factor to the formation of Cu deposits. S is now known to be important, since the 'efficiency of copper extraction from the magma is determined by the sulfur concentration in the exsolving fluids' (Seo et al., 2009). Magmatic sulfide melt inclusions have been observed and may have formed at the time of fluid saturation in the magma. Copper is precipitated out of brine and vapor as chalcopyrite (CuFeS2) and/or bornite (Cu5FeS4) once cooled. The Cu- and S-enriched vapor phase makes the greatest contribution (Seo et al., 2009).

Fig. 11 (Next Page). Concentrations of sulfur and copper in natural magmatic-hydrothermal fluid inclusions. Co-genetic pairs of vapor + brine inclusions ('boiling assemblages') in high-temperature hydrothermal veins from porphyry Cu-Au-Mo deposits (orange to red symbols), granite-related Sn-W deposits (blue-green), and a barren granitoid (black-gray) are shown. All vapor (a) and brine (b) inclusions have sulfur concentrations equal to copper or contain an excess of sulfur (the S:Cu = 1:1 line approximates a 2:1 molar ratio). Element ratios (c), which are not influenced by uncertainties introduced by analytical calibration (Heinrich et al., 2003), show an even tighter correlation along and to the right of the molar 2:1 line, with Cu/Na as well as S/Na systematically higher in the vapor inclusions (open symbols) than in the brine inclusions (full symbols). Averages of 3-14 single fluid inclusions in each assemblage from single healed fractures are plotted, with error bars of one standard deviation. Scale bars in the inclusion micrographs represent 50 μm. (Seo et al., 2009)

Fig. 12 (Above). Partitioning of elements between co-genetic vapor and brine inclusions. Fluid analyses including sulfur and gold are normalized to Pb, which is most strongly enriched in the saline brine (Seward, 1984). S, Cu, Au, As, and sometimes Mo preferentially fractionate into the vapor relative to the main chloride salts of Pb, Fe, Cs, K, and Na. A close correlation between the degrees of vapor fractionation of S, Cu, and generally also Au indicates preferential sulfur complexation of these metals in the vapor. The two boxes distinguish assemblages in which absolute concentrations of Cu and S are higher or lower in vapor compared with brine. This grouping correlates with geological environment, i.e., the redox state and pH of the source magmas and the exsolving fluids. (Seo et al., 2009)


Throughout many years of research, multiple types of analysis have been performed, such as LA-ICP-MS, sector-field MS, microthermometry, quadrupole MS, and ID-TIMS. Zircon crystals were dated to provide ages of the magmatic systems in which the ore deposits formed and to show that multiple pulses can occur within the same system less than 1 m.y. apart. Fluid inclusions have been examined in great detail to bring further insight into the magmatic pulses. These pulses are critical to fluid-fluid mixing, which in turn affects the precipitation of Sn, forming cassiterite, in Sn-W veins. There are, however, multiple environments in which deposits form, and porphyry-epithermal Cu-Au-Mo deposits precipitate different elements. Vapor-liquid fractionation between coexisting brine and vapor in the porphyry-epithermal system is due to the increased transport of Cu and Au in sulfur-enriched acidic magmatic-hydrothermal vapors (Pokrovski et al., 2008).

The formation of magmatic-hydrothermal ore deposits was once thought to depend mainly on the salinity of the fluid, hypersaline or vapor. Salinity can be used to recognize an element's preferred fluid. For example, Cu and Au prefer low-salinity vapors over coexisting hypersaline fluid, while elements such as Pb and Fe prefer hypersaline conditions (Williams-Jones and Heinrich, 2005). Salinity can also serve as a precipitation mechanism for Cu and Au in the vapor phase; however, it has been discovered that reduced sulfur must be present. Fluid phase separation is critical for Cu and Au to partition into the vapor phase, which is aided by sulfur-enriched acidic magmatic-hydrothermal vapors. Sulfur is in turn essential for metal transport in fluids, increasing the solubility of Cu and Au. The low-salinity, Cu-Au-Mo-rich vapor phase is the greatest contributor to Cu-Au deposits (Pokrovski et al., 2008).


Hedenquist, Jeffrey W., and Jacob B. Lowenstern. "The role of magmas in the formation of hydrothermal ore deposits." Nature 370.6490 (1994): 519-527.
Audetat, Andreas, Detlef Günther, and Christoph A. Heinrich. "Formation of a magmatic-hydrothermal ore deposit: Insights with LA-ICP-MS analysis of fluid inclusions." Science 279.5359 (1998): 2091-2094.
Heinrich, Christoph A. "Fluid-fluid interactions in magmatic-hydrothermal ore formation." Reviews in Mineralogy and Geochemistry 65.1 (2007): 363-387.
Seo, Jung Hun, Marcel Guillong, and Christoph A. Heinrich. "The role of sulfur in the formation of magmatic-hydrothermal copper-gold deposits." Earth and Planetary Science Letters 282.1 (2009): 323-328.
Von Quadt, Albrecht, et al. "Zircon crystallization and the lifetimes of ore-forming magmatic-hydrothermal systems." Geology 39.8 (2011): 731-734.
White, Noel C., and Jeffrey W. Hedenquist. "Epithermal gold deposits: styles, characteristics and exploration." SEG Newsletter 23.1 (1995): 9-13.
Lecumberri-Sanchez, Pilar, et al. "Salt precipitation in magmatic-hydrothermal systems associated with upper crustal plutons." Geology 43.12 (2015): 1063-1066.
Pokrovski, Gleb S., Anastassia Yu Borisova, and Jean-Claude Harrichoury. "The effect of sulfur on vapor-liquid fractionation of metals in hydrothermal systems." Earth and Planetary Science Letters 266.3 (2008): 345-362.
Williams-Jones, Anthony E., and Christoph A. Heinrich. "100th Anniversary special paper: Vapor transport of metals and the formation of magmatic-hydrothermal ore deposits." Economic Geology 100.7 (2005): 1287-1312.
Simmons, Stuart F., and Kevin L. Brown. "Gold in magmatic hydrothermal solutions and the rapid formation of a giant ore deposit." Science 314.5797 (2006): 288-291.


Improving agricultural productivity (focus on Tanzania)


Agriculture is the mainstay of Tanzania's economy. The sector accounts for 26.8% of GDP and about 80% of the workforce. However, only a quarter of Tanzania's 44 million hectares of land is used for agriculture. The biggest contributors to Tanzania's low agricultural productivity are the lack of response to changing weather patterns, the lack of a consistent farming system, and the lack of awareness of different farming systems. Therefore, in this meta-analysis, the possibility of improving agricultural productivity was examined by evaluating the effectiveness of GM crops, assisted by either nitrogen fertilizers or legumes for biological nitrogen fixation. Original studies for inclusion in this meta-analysis were identified through keyword searches in relevant literature databanks such as Deerfield Academy's Ebscohost database, Google Scholar, and Google. After an evaluation of many studies, GM crops could be a solution under several conditions: companies like Monsanto are willing either to allow farmers to save and exchange seeds without penalty or, as the WEMA project claims, to continuously supply these seed varieties as farmers request them; scientists perform studies that are transferable from one area to another, in terms of the different agronomic and environmental choices necessary to implement either an increase in fertilizer use or legume biological nitrogen fixation; farmers are educated about and receptive to GM technology, nitrogen fertilizer, and legume biological nitrogen fixation, including the effectiveness and efficiency of all three systems; and commercial banks, the government, and donors are willing to sponsor the increase in fertilizer use or subsidize the costs.


Agriculture is the mainstay of Tanzania's economy. The sector accounts for 26.8% of GDP and about 80% of the workforce. However, only a quarter of the 44 million hectares of land in Tanzania is used for agriculture. Even of that quarter, much is damaged by soil erosion, low soil productivity, and land degradation. This is the result of several agricultural and economic problems, including poor access to improved seeds, limited modern technologies, dependence on rain-fed agriculture, lack of education on updated farming techniques, limited government funding, and limited availability of fertilizers. Tanzanian agriculture is characterized primarily by small-scale subsistence farming: approximately 85 percent of the arable land is used by smallholders cultivating between 0.2 and 2.0 ha. Tanzania devotes about 87% of its farmed land to food crops, mainly banana, cassava, cereals, pulses, and sweet potatoes. The other 13% is used for cash crops, including cashew, coffee, pyrethrum, sugar, tea, and tobacco. Tanzania's food crop production yields are estimated to be only 20-30% of potential yields. Average food crop productivity in Tanzania stands at about 1.7 tons/ha, far below the potential productivity of about 3.5 to 4 tons/ha.

The biggest contributors to Tanzania's low agricultural productivity are the dependence on rain-fed agriculture, the lack of a consistent farming system, and the lack of awareness of different farming systems. Because of this, many studies have been done to promote either the more traditional approach of chemical fertilizer use, the genetic approach of GM crops, or a more sustainable approach of using legumes for nitrogen fixation. I will evaluate these three methods in this study. Both chemical fertilizers and legumes are currently used by mostly uneducated Tanzanian farmers, but at a very low level.

This study focuses on each farming system in relation to maize in particular, because maize is the most preferred staple food and cash crop in Tanzania. Maize is grown in all agro-ecological zones in the country. Over two million hectares of maize are planted per year, with average yields of 1.2–1.6 tonnes per hectare. Maize accounts for 31 percent of the total food production and constitutes more than 75 percent of the cereal consumption in the country. About 85 percent of Tanzania’s population depends on it as an income-generating commodity. It is estimated that the annual per capita consumption of maize in Tanzania is over 115 kg; national consumption is projected to be three to four million tonnes per year.

A GM trial officially started last October in the Dodoma region, a semi-arid area in the central part of the country. Tanzania took a long time to approve this trial because of the strict liability clause in its Environment Management Biosafety Regulations, which stated that scientists, donors and partners funding research would be held accountable in the event of any damage that might occur during or after research on GMO crops. However, the clause was revised, and the trial began. It sets out to demonstrate whether or not a drought-tolerant GM white maize hybrid developed by the Water Efficient Maize for Africa (WEMA) project can be grown effectively in the country. Because of Tanzania’s dependence on rain-fed agriculture, this initiative could provide hope for increasing the agricultural productivity of not only maize, but other food and cash crops. The project is funded by the U.S. Agency for International Development, the Bill and Melinda Gates Foundation and the Howard G. Buffett Foundation. The gene comes from a common soil bacterium and was engineered under the WEMA project by Monsanto, an agricultural biotechnology company that develops seeds and farming systems. The GM seeds are intended to be affordable to farmers who work on relatively small plots of land. The corn is expected to increase yields by 25% during moderate drought.

Nitrogen fertilizers (NFs) are the conventional method, and therefore have the most recognition, but also the most controversy. NFs have boosted the amount of food that farms can produce, and the number of people that farmers can feed, by meeting crop demands for nitrogen and increasing yields. The annual growth rate of nitrogen fertilizer use in the world is 1.3%. Of the overall increase in demand of 6 million tons of nitrogen between 2012 and 2016, 60 percent would be in Asia, 19 percent in America, 13 percent in Europe, 7 percent in Africa and 1 percent in Oceania. However, NFs have been linked to numerous environmental hazards including marine eutrophication, global warming, groundwater contamination, soil imbalance and stratospheric ozone destruction. In particular, in Sub-Saharan Africa, including Tanzania, nitrate runoff and leaching, mainly from commercial farms, have led to excessive eutrophication of fresh waters and threatened various fish species. However, this stems from farmers’ lack of understanding of how much fertilizer to apply to a plot, rather than from Tanzanian farmers having too much access to fertilizers. There are also health effects: infants who ingest water with high nitrate levels can get sick with gastrointestinal swelling and irritation, diarrhea, and protein digestion problems. Nitrogen leaches into groundwater as nitrate, which has been linked with blue-baby syndrome in infants, adverse birth outcomes and various cancers. Economically speaking, nitrogen fertilizers have become a huge cost in agriculture.
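The demand figures above imply absolute tonnages per region. A quick sketch (using only the 6-million-ton total and the percentage shares quoted above) confirms that the shares account for the whole increase:

```python
# Regional shares of the 6-million-ton increase in nitrogen demand, 2012-2016,
# using only the percentages quoted above.
total_increase = 6_000_000  # tons of nitrogen
shares = {"Asia": 0.60, "America": 0.19, "Europe": 0.13,
          "Africa": 0.07, "Oceania": 0.01}

assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares cover the whole increase
for region, share in shares.items():
    print(f"{region}: {share * total_increase:,.0f} tons")
```

So Asia alone accounts for roughly 3.6 million of the 6 million additional tons.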

Legume nitrogen fixation provides a sustainable alternative to costly and environmentally unfriendly nitrogen fertilizer for small-scale farms. Biological nitrogen fixation is the process by which inert N2 is converted into biologically useful NH3 in plants, via symbiotic bacteria. Perennial and forage legumes, such as alfalfa, sweet clover, true clovers, and vetches, may fix 250–500 pounds of nitrogen per acre. In a study that compared the environmental, energetic and economic factors of organic and conventional farming systems, the crop yields and economics of legume-based organic systems compared with conventional systems varied with the type of crop, region and growing conditions; however, the environmental benefits attributable to reduced chemical inputs, less soil erosion, water conservation and improved soil organic matter were consistently greater in organic systems using legumes. There are, however, many factors that need to be in place for legumes to be the best option, including choosing the best growing system, growing conditions, and non-fixing crops to grow alongside them.

The reason I wanted to study agriculture in Tanzania in particular is my love for the country after spending two weeks there learning about sustainable development and sustainable agriculture. Understanding the impact that agriculture has on the people and the economy is very inspirational to me, and connects to my passion for easing global hunger.

The purpose of this study is to provide a solution to Tanzania’s long-standing struggle to improve agricultural yields, with a heavy consideration of drought tolerance, by evaluating GMO crops with the assistance of either increased nitrogen fertilizer use or legumes for biological nitrogen fixation. Each approach comes with many obstacles and challenges, but also rewards if done properly. I believe the reason none of these methods has taken dominance is the lack of proper implementation, maintenance and funding. Therefore, the study will also address those concerns for each method, and discuss a plan to follow if GM crops are used with legumes, nitrogen fertilizers or both.

Methods and Materials

Original studies for inclusion in this meta-analysis were identified through keyword searches in relevant literature databanks such as Google, Google Scholar and Deerfield Academy’s Ebscohost Database. I searched combinations of keywords related to agriculture in Tanzania, GM technology, chemical fertilizer use in Tanzania and legume nitrogen fixation. Concrete keywords related to agriculture in Tanzania were “agriculture in Tanzania,” “problems affecting agriculture in Tanzania,” and “farm yields in Tanzania.” Concrete keywords related to GM technology were “GM crops,” “GM trial in Tanzania,” “impact of GM crops,” “drought tolerant maize,” “herbicide tolerant,” and “insect resistant.” Concrete keywords related to chemical fertilizer use in Tanzania included “fertilizer assessment in Tanzania,” “fertilizers costly in Tanzania,” “environmental impacts of fertilizer use in Tanzania,” and “economic impacts of fertilizers in Tanzania.” Concrete keywords I searched for legume nitrogen fixation were “legume nitrogen fixation,” “improving yields with legumes,” “best legumes for nitrogen fixation,” and “economic impact of legume nitrogen fixation.” The search was completed by February 2017.
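The search strategy above pairs topic terms with qualifier terms. As an illustration only, such query combinations can be enumerated mechanically; the term lists below are a hypothetical subset of the keywords actually used:

```python
from itertools import product

# Hypothetical subset of the topic and qualifier terms listed above --
# illustrative only, not the exact queries used in the search.
topics = ["GM crops", "legume nitrogen fixation", "fertilizer use"]
qualifiers = ["Tanzania", "yields", "economic impact"]

queries = [f'"{topic}" "{qualifier}"' for topic, qualifier in product(topics, qualifiers)]
print(len(queries))  # 9 combined query strings
```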

Most of the publications found through Google were news articles, articles in academic journals and website pages, while Google Scholar and Deerfield Academy’s Ebscohost Database comprised book chapters, conference papers, working papers, academic journals and reports in institutional series. Articles published in academic journals all passed through a peer-review process. Some of the working papers and reports are published by research institutes or government organizations, while others are NGO publications.

Each published work had to meet certain criteria to be included.

If it is a news article, it had to be from a credible news source such as the Guardian, the New York Times or the Washington Post.
If it is from an academic journal, it had to be from a credible organization, institution or university such as the World Bank, the UN or Wellesley College.

The study is an empirical investigation of the economic, health, or environmental impacts of GM crops (in particular GM maize), legume nitrogen fixation, or chemical fertilizers, with a focus on Tanzania.

The study reports the impacts of GM crops, legume nitrogen fixation, or chemical fertilizers, with a focus on Tanzania, in terms of one or more of the following outcome variables: yield, farmer profits, and environmental, economic and health advantages and disadvantages.

Results and Discussion

Problems with maize production

According to the African Agricultural Technology Foundation, in a policy brief detailing the WEMA project, despite the importance of maize as the main staple crop, average yields in farmers’ fields remain low compared to the estimated potential yields of 4–5 metric tonnes per hectare. While farmers are keen on increasing maize productivity, their efforts are hampered by a wide range of constraints. The Foundation has identified three reasons for the low productivity of maize, which can be applied to any crop in Tanzania grown in a semi-arid region:

Inadequate use of inputs such as fertiliser, improved maize seed and crop protection chemicals. The inputs are either not available or too expensive for the farmers to afford.

Inadequate access to information and extension services. Many farmers continue to grow unsuitable varieties because they have no access to information about improved maize technologies, due to low levels of interaction with extension services.

Drought is a major threat to maize production in many parts of Tanzania. Maize production can be a risky and unreliable business because of erratic rainfall and the high susceptibility of maize to drought. The performance of local drought-tolerant cultivars is poor, and maize losses can go as high as 50 percent due to drought-related stress.

These constraints highlight exactly what the problem with increasing productivity is. Without addressing these three constraints for all crops and farmers, Tanzania’s agricultural productivity cannot increase.

GM Crop Evaluation

Transgenic plants are plants that have been genetically modified using recombinant DNA technology. Scientists have turned to this method for many reasons, including engineering resistance to abiotic stresses, such as drought, extreme temperatures or salinity, or biotic stresses, such as insects and pathogens, that would normally be detrimental to plant growth or survival. In 2007, for the twelfth consecutive year, the global area of biotech crops planted continued to increase, with a growth rate of 12% across 23 countries. As of 2010, 14 million farmers from 25 countries, including 16 developing countries, grow GM crops.

Right now, South Africa is the only African country that has fully implemented GMO crops, including HT/Bt/HT-Bt cotton, HT/Bt/HT-Bt maize and HT soybean, which are some of Tanzania’s major food and cash crops. South Africa gained an income of US$156 million from 1998-2006, after the country switched mostly to biotech crops. A study published in 2005 by Marnus Gouse, a researcher in the Department of Agricultural Economics, Extension and Rural Development at the University of Pretoria, South Africa, involved 368 small and resource-poor farmers and 33 commercial farmers, the latter divided into irrigated and dry-land maize production systems. The data indicated that under irrigated conditions Bt maize resulted in an 11% higher yield, a cost saving on insecticides of US$18/ha, equivalent to a 60% cost reduction, and an increased income of US$117/hectare. Under rain-fed conditions Bt maize resulted in an 11% higher yield, a cost saving on insecticides of US$7/ha, equivalent to a 60% cost reduction, and an increased income of US$35/hectare.

Richard Sitole, chairperson of the Hlabisa District Farmers’ Union, KZN, in South Africa, said 250 emergent subsistence farmers of his Union planted Bt maize on their smallholdings, averaging 2.5 hectares, for the first time in 2002. His own yield increased by 25%, from 80 bags for conventional maize to 100 bags, earning him an additional income of US$300 as of November 2007. He said, “I challenge those who oppose GM crops for emergent farmers to stand up and deny my fellow farmers and me the benefit of earning this extra income and more than sufficient food for our families.”

Because South Africa has the necessary resources, funding and experience in biotech crops, it can thrive in both the international public and private sectors, and improve its technology just as the other 23 countries can. It is therefore up to South Africa especially to share this knowledge with farmers in other African countries, and in particular Tanzania, so that Tanzanian farmers can advance agriculture just as South Africa has, if this proves to be the best route to take.

NGO Opposition to GM Crops

Genetically modified crops have been opposed for several years by non-governmental organizations. Because they are not for profit, they have gained more social trust, and so people listen to them. Much of the NGO opposition has come from European-based organizations such as Greenpeace International and Friends of the Earth International. Many U.S.- and Canadian-based organizations have also joined the anti-GMO campaign. Notice that these are all rich countries, which have influence over poorer countries. This kind of influence is harmful to countries that do not have the research or experience with GMOs, such as Tanzania. People from Europe and North America would naturally not be attracted to GMOs because their farming is already very productive. In poor countries, by contrast, as many as 60 percent of all people are poor farmers who could benefit from this technology. Farmers in poor countries rely almost entirely on food crops, not on crops for animal feed or industrial use as in the U.S., so today’s ban on GMO foods is specifically damaging to those poor farmers. It becomes more shameful still when anti-GMO campaigners from rich countries intentionally hide from developing-country citizens the published conclusions of their own national science academies back home, which continue to show that no convincing evidence has yet been found of new risks to human health or the environment from this technology.

Therefore, if GMOs were to be implemented in Tanzania, farmers would have to be trained and taught about the many benefits of GMOs. This training should be provided by the organizations supplying the GMO seeds, such as Monsanto. Without this training, GMO crops could fail just like other methods, because of lack of knowledge and maintenance.

Importance of Seed-Saving

More than 90% of seeds sown by farmers are saved on their own farms. Saving and exchanging seeds is important to Tanzanian farmers, and to farmers in general, for several reasons. According to the Permaculture Research Institute, saving seeds is important because the big corporations that farmers buy from are only interested in the most profitable hybrids and ‘species’ of plants, which decreases biodiversity by condensing the market and discontinuing many crop varieties. When farmers save seeds with good genes and strong traits, the likelihood of better quality increases, as does the crops’ ability to adapt to their environment. Over generations, the crops will also develop stronger resistance to pests. However, if GMO seeds provided by Monsanto were the sole practice in Tanzania, farmers could not save or exchange their seeds. As explained on the Monsanto website, “When farmers purchase a patented seed variety, they sign an agreement that they will not save and replant seeds produced from the seed they buy from us.” Therefore, unless USAID, the Bill and Melinda Gates Foundation, and other organizations plan to support the costs of buying seeds on a regular basis, farmers will not be able to maintain their farms if they cannot afford to buy GMO seeds. Tanzanian farmers would be put at risk if this system were implemented without any financial support, and if they were to save or replant seeds, they could face trial. Seeds, however, are deeply important to Tanzanians. Joseph Hella, a professor at Sokoine University of Agriculture in Morogoro, Tanzania, insisted in a documentary called Seeds of Freedom in Tanzania that “any effort to improve farming in Tanzania depends primarily on how we can improve farmers’ own indigenous seeds.” The practice of GMO crops does not take this into account. Janet Maro, director of Sustainable Agriculture Tanzania, said, “These seeds are our inheritance, and we will pass them on to our children and grandchildren. These too are quality seed and a pride for Tanzania. But the law does not protect these seeds.”

However, if the drought-tolerant white maize trial works, WEMA claims that farmers can choose to save the seeds for replanting, though, as with all hybrid maize seed, maize production is heavily reduced when harvested grain is replanted. Also, in order to make the improved seeds affordable, the new varieties will be licensed to the African Agricultural Technology Foundation (AATF) and distributed through local seed suppliers on a royalty-free basis. According to Oliver Balch, a freelance writer specialising in the role of business in society, if companies like Monsanto end up monopolizing the seed industry, African farmers fear becoming locked into cycles of financial obligation and losing control over local systems of food production, because unlike traditional seeds, new drought-tolerant seeds have to be purchased annually.

Lack of accessibility

The biggest problems Tanzania faces in adopting drought-tolerant GM seeds are unavailability and unaffordability. A study, Drought tolerant maize for farmer adaptation to drought in sub-Saharan Africa: Determinants of adoption in eastern and southern Africa, examined six African countries to identify the different setbacks to using drought-tolerant seeds. In a figure representing these setbacks, seed availability and seed price were the biggest concerns for Tanzanian smallholder farmers. High seed price was a commonly mentioned constraint in Malawi, Tanzania, and Uganda. Because many Tanzanian and Malawian farmers grow local maize, the switch to DT maize would entail a substantial increase in seed cost. Another observation in the study was that, compared to younger households, older households were more likely to grow local maize, which could reflect the unwillingness of older farmers to give up familiar production practices. Households with more educated people were more likely to grow DT maize and less likely to grow local maize, which supports the point that general education and education on GM crops should be the primary goal before implementing any method in Tanzania. For example, some Tanzanian farmers were unwilling to try DT maize varieties because they were perceived as low yielding, late maturing and labor increasing. Educated people are more likely to process information about new technologies quickly and effectively.

According to the study, a few things need to be in place if DT maize is to thrive. First, the seed supply to local markets must be adequate to allow farmers to buy, experiment with, and learn about DT maize. Second, to make seed more accessible to farmers with limited cash or credit (another major barrier), seed companies and agro-dealers should consider selling DT maize seed in affordable micro-packs. Finally, enhanced adoption depends on enhanced awareness, which could be achieved through demonstration plots, field days, and distribution of print and electronic promotional materials.

According to the Third World Network and the African Centre for Biodiversity (ACB), the WEMA project sets out to shift the focus and ownership of maize breeding, seed production and marketing almost exclusively into the private sector, in the process forcing small-scale farmers in Sub-Saharan Africa into the adoption of hybrid maize varieties and their accompanying synthetic fertilizers. Gareth Jones, ACB’s senior researcher, says that Monsanto and the rest of the biotechnology industry are using this largely unproven technology to weaken biosafety legislation on the continent and expose Africa to GM crops generally. With Tanzania’s unpredictable weather, and seeds incapable of growing without certain inputs like fertilizers, purchasing seeds annually becomes more of a burden and reduces farmers’ flexibility in their farming decisions. Jones also says the costly inputs and the very diverse agro-ecological systems in Sub-Saharan Africa mean that the WEMA project will only benefit a select number of small-scale farmers, with evidently no consideration for the majority, who will be abandoned. The argument about seed costs and the monopoly of big seed companies comes up again as Jones notes that the costs and technical requirements of hybrid seed production are presently beyond the reach of most African seed companies, and a focus on this market will inevitably lead to industry concentration, as has happened elsewhere, enabling the big multinational agrochemical seed companies to dominate.

Lack of progress in drought-tolerance

The United States is an example to take into consideration when evaluating GM crops: after more than 17 years of field trials, only one GM drought-tolerant maize has been released. In fact, according to Gareth Jones, independent analysis has shown that, under moderate drought conditions, the particular maize variety that has been released only increased maize productivity by 1% annually, which is equivalent to the improvements gained in conventional maize breeding.

Monsanto’s petition to the USDA cites results from two growing seasons of field trials in several locations in the United States and Chile that faced varying levels of water availability. Company scientists measured drought through the amount of moisture in soil, and compared the crop’s growth response with that of conventional commercial varieties of corn grown in the regions where the tests were performed. Monsanto reported a reduction in losses expected under moderate drought of about 6 percent compared with non-GE commercial corn varieties, although there was considerable variability in these results. That means that farmers using Monsanto’s cspB corn could see a 10 percent loss of yield rather than a typical 15 percent loss under moderate drought, or an increase of about 8 bushels per acre, based on a typical 160-bushel non-drought yield. However, the USDA asserts that cspB corn is effective primarily under moderate, not severe, drought conditions, so there is no real benefit under extreme drought. Because cspB corn is not beneficial under severe drought conditions, it would not be effective in semi-arid regions of Tanzania, like the drought-stricken Dodoma region.
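The loss figures above can be checked with a little arithmetic. This sketch uses only the numbers quoted in the paragraph (a 160-bushel baseline, a 15% typical drought loss, and a 10% loss with cspB corn):

```python
# Check the cspB corn arithmetic quoted above: a 15% typical drought loss
# versus a 10% loss with cspB corn, on a 160-bushel/acre non-drought yield.
baseline = 160              # bushels/acre, non-drought yield
conventional_loss_pct = 15  # typical loss under moderate drought
cspb_loss_pct = 10          # reported loss with cspB corn

conventional = baseline * (100 - conventional_loss_pct) / 100  # 136.0 bushels/acre
cspb = baseline * (100 - cspb_loss_pct) / 100                  # 144.0 bushels/acre
print(cspb - conventional)  # 8.0 bushels/acre, matching the quoted figure
```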

Former Environment Secretary Owen Paterson accused the EU and Greenpeace of condemning millions of people in developing countries to starvation and death by their stubborn refusal to accept the benefits of genetically modified crops. In response, Esther Bett, a farmer from Eldoret in Kenya, said: “It seems that farmers in America can only make a living from GM crops if they have big farms, covering hundreds of hectares, and lots of machinery. But we can feed hundreds of families off the same area of land using our own seed and techniques, and many different crops. Our model is clearly more efficient and productive. Mr Paterson is wrong to pretend that these GM crops will help us at all.” Million Belay, coordinator of the Alliance for Food Sovereignty in Africa, highlights that “Paterson refers to the use of GM cotton in India. But he fails to mention that GM cotton has been widely blamed for an epidemic of suicides among Indian farmers, plunged into debt from high seed and pesticide costs, and failing crops.”

He also declared that,

“The only way to ensure real food security is to support farmers to revive their seed diversity and healthy soil ecology.”

Legume Biological Nitrogen Fixation vs. Nitrogen Fertilizers

The sustainable practice of intercropping nitrogen-fixing legumes with cash and food crops comes with both pros and cons. For farmers who cannot afford nitrogen fertilizer, biological nitrogen fixation (BNF) could be a key solution for sourcing nitrogen for crops. BNF can be a major source of nitrogen in agriculture when symbiotic N2-fixing systems are used, but the nitrogen contributions from nonsymbiotic microorganisms are relatively minor and therefore require nitrogen fertilizer supplementation. The amount of nitrogen input is reported to be as high as 360 kg N ha-1. Legumes serve many purposes, including being primary sources of food, fuel and fertilizer, and enriching soil, preserving moisture and preventing soil erosion. According to a study, Biological nitrogen fixation and socioeconomic factors for legume production in sub-Saharan Africa: a review, which reviews past and ongoing interventions in Rhizobium inoculation in the farming systems of Sub-Saharan Africa, because of the high cost of fertilizers in Africa and the limited market infrastructure for farm inputs, current research and extension efforts have been directed to integrated nutrient management, in which legumes play a crucial role. Research on the use of Rhizobium inoculants for the production of grain legumes showed it is a cheaper and usually more effective agronomic practice for ensuring adequate N nutrition of legumes, compared with the application of N fertilizer.
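For comparison with the kg N ha-1 figures used elsewhere in this paper, the 250–500 pounds of nitrogen per acre quoted above can be converted to metric units (a standard unit conversion, not data from the cited study):

```python
# Convert the quoted 250-500 lb N/acre fixation range to metric units.
KG_PER_LB = 0.453592     # kilograms per pound
HA_PER_ACRE = 0.404686   # hectares per acre
LB_ACRE_TO_KG_HA = KG_PER_LB / HA_PER_ACRE  # ~1.12 kg/ha per lb/acre

for lb_per_acre in (250, 500):
    kg_per_ha = lb_per_acre * LB_ACRE_TO_KG_HA
    print(f"{lb_per_acre} lb N/acre ~ {kg_per_ha:.0f} kg N/ha")
```

So the quoted range corresponds to roughly 280–560 kg N/ha, consistent in magnitude with the 360 kg N ha-1 figure above.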

Tanzania’s total fertilizer consumption was less than 9 kilograms (kg) of fertilizer nutrient per hectare of arable land in 2009/10, compared with 27 kg in Malawi and 53 kg in South Africa; this represented a substantial increase from the average 5.5 kg/ha used four years earlier. 82 percent of Tanzanian farmers do not use fertilizer, mainly because of lack of knowledge of its benefits, the rising cost of fertilizer, and not knowing how to access credit facilities. Although commercial banks in the country claim that they support agriculture, many farmers continue to face hurdles in readily accessing financing for agricultural activities, including purchasing fertilizer. The lack of high-yield seed varieties and the low level of fertilizer use with either traditional or improved seeds are major contributors to low productivity in Tanzania, and thus to the wide gap between potential yields and observed yields.

Many believe that nitrogen fertilizers are mostly responsible for eutrophication and the threat to fish species. However, Robert Howarth, a biogeochemist, ecosystem scientist and professor at Cornell University, says that the real culprits in countries like Tanzania are insufficient treatment of water from industries, erosion from infrastructure construction, runoff of feed and food waste from both municipal and industrial areas, atmospheric nitrogen deposition and nutrient leaching. In fact, the average nitrogen balance in Tanzania in 2000 was as low as -32 kg N ha-1 yr-1, similar to many other Sub-Saharan countries.

However, if Tanzania is to continue using nitrogen fertilizers, nitrogen agronomic use efficiency needs to be improved. Nitrogen agronomic use efficiency is defined as the yield gain per unit amount of nitrogen applied, when plots with and without nitrogen are compared. Right now, nitrogen agronomic use efficiencies in smallholder farmers’ fields are still low because of poor agronomic practices, including blanket fertilizer recommendations, fertilizer application rates too low to produce a significant effect, and unbalanced fertilization. Recent interventions in Sub-Saharan Africa, including fertility management, showed that nitrogen agronomic use efficiency could be doubled when good agronomic practices are adopted. The dilemma is that in SSA, including Tanzania, farming is mainly practiced by resource-disadvantaged smallholder farmers who cannot afford most of the inputs at actual market prices.
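The definition above amounts to a simple formula: NAUE = (yield with N − yield without N) / N applied, in kg grain per kg N. A minimal sketch with illustrative numbers (not measurements from any cited study):

```python
def n_agronomic_use_efficiency(yield_with_n, yield_without_n, n_applied):
    """Yield gain per unit amount of nitrogen applied (kg grain per kg N)."""
    return (yield_with_n - yield_without_n) / n_applied

# Illustrative plot-level figures, not data from any cited study:
# 2,400 kg/ha grain with 60 kg N/ha applied vs 1,500 kg/ha unfertilized.
naue = n_agronomic_use_efficiency(2400, 1500, 60)
print(naue)  # 15.0 kg grain per kg N
```

Doubling this efficiency, as the interventions above suggest is possible, would mean the same 60 kg N/ha buys twice the yield gain.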

In a study called Narrowing Maize Yield Gaps Under Rain-fed Conditions in Tanzania: Effect of Small Nitrogen Dose, the authors evaluated the potential of using a small amount of nitrogen fertilizer as a measure to reduce the maize yield gap under rain-fed conditions. From the experiment, it was observed that grain yields were similar in all water-stressed treatments regardless of nitrogen dose, suggesting that water stress imposed after the critical growth stage has no significant effect on final grain yield. The explanation the authors offer is that within 45-50 days after sowing, the plant should have accumulated the biomass required for grain formation and filling, so water stress occurring afterwards has no effect on yield. For resource-poor farmers, low doses of nitrogen fertilizer applied after crop establishment may make a substantial contribution to food security compared with non-fertilized crop production. This approach can work well in environments with low seasonal rains, because the yield gain is higher than when high nitrogen quantities are applied in a water-scarce environment. In its conclusion, the study highlights a limitation: the yield-gap-narrowing strategy was evaluated at plot scale. Further study is needed to investigate the response to small nitrogen doses as a strategy for bridging maize yield gaps across multiple fields and many seasons, especially under farmers’ management.


Conclusion

To increase agricultural productivity, there are many factors to consider; drought tolerance is just one of them. Semi-arid regions in Tanzania pose a serious problem for agriculture that depends on rainfall. Drought-tolerant GM crops could be a possibility, but a lot of work still needs to be done. Implementing these drought-tolerant seed varieties can only be a solution if:

The WEMA project for GM white maize is successful

Companies like Monsanto are willing either to allow farmers to save and exchange seeds without penalty OR, as the WEMA project claims, to continuously supply these seed varieties as requested by farmers. This ensures that farmers are given the flexibility to control their crop production.

Scientists perform a study that is transferable from one area to another, in terms of the different agronomic and environmental choices that are necessary to implement either an increase in fertilizer use or legume biological nitrogen fixation.

Farmers are educated about and receptive to GM technology, nitrogen fertilizer and legume biological nitrogen fixation. This includes the effectiveness and efficiency of all three systems.

Commercial banks, the government, and donors are willing to sponsor the increase in fertilizer use or subsidize the costs.


Working with hazard group 2 organisms within a containment level 2 laboratory

There are many aspects that must be reviewed when entering the laboratory, and many regulations that need to be followed to ensure not just your own safety but the safety of the workers around you. Inhalation is one issue that could occur within the laboratory. Many laboratory procedures involve the breaking up of fluids containing organisms and the scattering of tiny droplets named aerosols. Some of these droplets fall, contaminating hands and benches, while others are very small and dry out immediately. The organisms contained within these aerosols are named droplet nuclei; they are airborne and move about in small air currents. If inhaled, they pose a potential risk of infection, so it is important that aerosols are not inhaled within the laboratory.

Ingestion of organisms is another problem within the laboratory. There are many ways in which organisms may be introduced into the mouth, such as through mouth pipetting, leading to direct ingestion, or through fingers contaminated by handling spilled cultures or by aerosols, which can transfer micro-organisms to the mouth directly or indirectly through eating, nail biting, licking labels and so on. Injection is another hazard: infectious materials may be injected via broken culture containers, glass Pasteur pipettes, or other broken glass or sharp objects. Through the skin and eyes, small abrasions or cuts that may not be visible to the naked eye may allow microbes to enter the body, and splashes of bacterial culture into the eye could result in infection.

This laboratory involved working with hazard group 2 organisms within a containment level 2 laboratory. The hazard group is the level assigned to an organism to indicate how dangerous it could be. Hazard group 2 organisms can cause human disease and may be a hazard to employees, although they are unlikely to spread to the community and there is usually effective prophylaxis or treatment available; examples include Salmonella typhimurium, Clostridium tetani and Escherichia coli.

Within containment level 2 laboratories there are many health and safety procedures to follow; below are examples of those set for a containment level 2 laboratory:

• Protective eye equipment is necessary within the laboratory, except when using microscopes

• There must be specified disinfection procedures in place

• Bench surfaces must be impervious to water, easy to clean and resistant to acids, alkalis, solvents and disinfectants.

• Laboratory procedures that give rise to infectious aerosols must be conducted in a microbiological safety cabinet, isolator or be otherwise suitably contained.

• When contamination is suspected, hands should immediately be decontaminated after handling infective materials and before leaving the laboratory.

• Laboratory coats, which should be side or back fastening, should be worn and removed when leaving the laboratory.

Within this laboratory, Glitter Bug was applied to the hands and analysed under the light box. Glitter Bug is a hand lotion with a UV fluorescent glow. When placed under UV light, it glows in the places where germs, invisible to the human eye, are located.

Loffler’s Methylene blue is a simple stain that was used to stain Saccharomyces cerevisiae.

This is a simple stain which is used for the analysis and understanding of bacterial morphology. It is a cationic dye which stains the cell blue in colour and can be used for the staining of gram-negative bacteria.


Below are the results gathered from the Glitter Bug test before washing our hands. The blue areas indicate where the lotion was most fluorescent under the light.


Gram Stain

In microbiology, one of the most common stains to carry out is the gram stain, used to observe and differentiate between microbiological organisms. It is a differential stain which can distinguish gram-positive bacteria from gram-negative bacteria: gram-positive bacteria stain purple/blue in colour and gram-negative bacteria stain red/pink. This differentiation arises from variation in cell wall structure, and the stain also reveals cell shape and arrangement.

The gram stain has many advantages: it is very straightforward to perform, it is cost effective, and it is one of the quickest methods used to determine and classify bacteria.

The gram stain is used to provide essential information regarding the type of organisms present directly from growth on culture plates or from clinical specimens. The stain is also used within the screening of sputum specimens to investigate acceptability for bacterial culture and could reveal the causative organisms within bacterial pneumonia. Alternatively, the gram stain can be used for the identification of the existence of microorganisms in sterile bodily fluids such as synovial fluid, cerebrospinal fluid and pleural fluid.

Spore stain

An endospore stain is also a differential stain, used in visualizing bacterial endospores. The production of endospores is an essential characteristic for some bacteria, enabling them to resist many detrimental conditions such as extreme heat, radiation and chemical exposure. Spores contain storage materials and possess a relatively thick wall. This thick wall cannot be penetrated by normal stains: either heat must be applied to allow the stain to penetrate the spore, or the stain must be left for a longer period to allow penetration. The identification of endospores is very important within clinical microbiology when analysing a patient’s body fluid or tissue, as there are very few spore-forming genera. There are two extensive pathogenic spore-forming groups, Bacillus and Clostridium, which together cause a variety of lethal diseases such as tetanus, anthrax and botulism.

The Bacillus species, Geobacillus species and Clostridium species all form endospores, which develop within the vegetative cell. These spores are resistant to drying and exist to survive. They develop in unfavourable conditions and remain metabolically dormant and inactive until conditions become favourable for germination, when they return to their vegetative state.

The Schaeffer-Fulton method is a technique designed to reveal endospores through staining. The malachite green stain is soluble in water and has a low affinity for cellular material, so the vegetative cells can be decolourised with water. Safranin is then applied to counterstain any cells which have been decolourised. The result is vegetative cells that are pink in colour and endospores that are green.

1. The bacteria used in this laboratory were Salmonella poona and Bacillus cereus; both are rod-shaped cells (Salmonella poona is gram negative, while Bacillus cereus is gram positive). Another bacterium with the same rod-shaped cells is Klebsiella pneumoniae, which belongs to the genus Klebsiella and the species K. pneumoniae. A further example is Acinetobacter baumannii, which belongs to the genus Acinetobacter and the species A. baumannii.

2. The loop is sterilised within the Bunsen burner flame by placing the circular portion of the loop into the cooler part of the flame and moving it up into the hottest part until it glows cherry red. If the loop is placed into the hottest part of the flame first, the material on the loop (including bacteria) might spatter out as an aerosol and some bacteria may not be destroyed. Once the loop is cherry red, it has been sterilised by incineration through dry heat and is ready for immediate use. If the loop is then laid down or touched against anything it will need to be sterilised again; loops should, however, never be laid on benches.

3. There are many possible problems that could affect a slide smear. For example, excessive heat during fixation can alter cell morphology and make the cells much easier to decolourise. Another problem is a low concentration of crystal violet, which results in stained cells that are easily decolourised. A third possibility is excessive washing between steps, as crystal violet can wash out with water when exposed for too long. Finally, excessive counterstaining can affect results: because the counterstain is a basic dye, over-exposure can replace the crystal violet-iodine complex within gram-positive cells.

4. Hand hygiene is a necessity within laboratories. It is the first line of defence and is considered the most crucial procedure for preventing the spread of hospital-acquired infection.

The following steps describe the appropriate hand-washing technique:

• Wet hands with warm running water

• Enough soap must be applied to cover all surfaces

• Thoroughly wash all parts of the hands and fingers up to the wrist, rubbing hands together for at least 15 seconds

• Hands should then be rinsed under running water and dried thoroughly with paper towels

• Paper towels should be used to turn off taps before discarding the towels in the waste bin.

1. An example of a gram-positive bacterium is Propionibacterium propionicus, which belongs to the genus Propionibacterium and the species P. propionicus.

An example of a gram-negative bacterium is Yersinia enterocolitica, which belongs to the genus Yersinia and the species Y. enterocolitica.

2. The gram stain has the ability to differentiate between gram-positive and gram-negative bacteria. Gram-positive bacteria possess a thick layer of peptidoglycan within their cell walls, but the lipid content of the wall is low, resulting in small pores; these close when the cell wall proteins are dehydrated by the alcohol, so the CV-I complex is retained within the cells, which remain blue/purple. Gram-negative bacteria, however, possess a thinner peptidoglycan wall and a high volume of lipid within their cell walls, resulting in large pores that remain open when acetone-alcohol is added. The CV-I complex is then lost through these large pores and the gram-negative bacteria appear colourless. Once the counterstain is applied, the cells turn pink, because the counterstain enters the cells through the large pores in the wall.

3. There are many problems which could arise during the production of a bacterial smear. These include having a dirty slide, greasy or coated with dirt and dust; this gives unreliable results, because the smear containing the desired microbes washes off the slide during the staining process, or the bacterial suspension fails to spread out evenly when placed on the slide. Another possible problem is a smear that is too thick, which puts too many cells on the slide so that penetration of the microscope light through the smear is poor. If the smear is too thin, on the other hand, finding the bacterial cells is time-consuming.

Germination is also a complex process and is normally triggered by the presence of nutrients (although high temperatures are also sometimes required to break the dormancy of the spore). The events during germination include:

♣ Swelling of the spore

♣ Rupture or absorption of spore coat(s)

♣ Loss of resistance to environmental stresses

♣ Release of the spore components

♣ Return to metabolically active state

Outgrowth of the spore occurs when the protoplast emerges from the remains of the spore coats and develops into a vegetative bacterial cell.


The human body and the environment both contain a vast number and variety of bacteria living in mixed populations, such as in the gut and the soil. Bacteria mixed in such varied populations must be separated into pure culture to investigate and diagnose the identity of each bacterium. Obtaining a pure culture requires that the number of organisms present is decreased until single, isolated colonies are obtained. This can be accomplished through the streak plate technique or by plating liquid culture dilutions on a spread plate.

The streak plate technique is used to verify the purity of cultures that must be maintained over long periods of time. Contamination by other microbes can be detected by regularly sampling and streaking. The technique is used in several ways; for example, an expert practitioner can begin a new maintained culture by selecting an appropriate isolated colony of an identifiable species with a sterile loop and then growing those cells in a nutrient broth.

When bacteria in a mixed population are streaked onto a general-purpose medium, for example nutrient agar, single, isolated colonies are produced; however, colony morphology does not provide an immediate, reliable means of identification. In practice, microbiologists use differential and selective media in the early stages of separation and provisional identification of bacteria before sub-culturing the organisms to a fitting general-purpose medium. The identity of the sub-cultured organisms can then be confirmed using a range of suitable tests.

Selective and differential media are used for the isolation and identification of particular organisms. A variety of selective and differential media are used within medical diagnostics, water pollution laboratories, and food and dairy laboratories.

Differential media normally contain a substrate that can be broken down (metabolised) by bacterial enzymes. The effects of the enzyme can then be observed visually in the medium. Differential media may contain a carbohydrate, for example glucose or lactose, as the substrate.

Selective media are media that contain one or more antimicrobial chemicals; these could be salts, dyes or antibiotics. The antimicrobial chemicals select for specific bacteria while inhibiting the growth and development of other, unwanted organisms.

Cysteine lactose electrolyte deficient (CLED) agar is a differential culture medium used in the isolation of gut and urinary pathogens, including Salmonella, Escherichia coli and Proteus species. CLED agar also sustains the growth and development of a variety of contaminants such as diphtheroids, lactobacilli and micrococci.

CLED can be used to differentiate between naturally occurring gut organisms, e.g. E. coli, and gut pathogens, e.g. Salmonella poona, in a sample of faeces. There are many advantages of using CLED agar for urine culture: it provides good discrimination of gram-negative bacteria through lactose fermentation and colony appearance, it is very cost effective, and it inhibits the swarming of Proteus spp., which are frequently involved in urinary tract infections.

CLED contains lactose as a substrate and a dye named bromothymol blue which demonstrates changes in pH. The pH of CLED plates is neutral, so the plates are pale green in colour. Bacteria such as E. coli that produce the enzyme β-galactosidase break down lactose by fermentation to produce a mixture of lactic and formic acid, making the pH acidic. The colonies and medium then turn yellow, indicating lactose positive. Lactose-negative bacteria cannot ferment lactose, because they cannot produce β-galactosidase, and so form pale colonies on CLED.

MacConkey agar (MAC) is a selective and differential medium. The bile salts and crystal violet it contains inhibit most gram-positive organisms, while lactose provides a source of fermentable carbohydrate, so the medium isolates and differentiates enterics based on their ability to ferment lactose. Neutral red is a pH indicator that turns red at a pH below 6.8 and is colourless at any pH greater than 6.8.

Organisms that ferment lactose, and thereby produce an acidic environment, will appear pink because the neutral red turns red. Bile salts may also precipitate out of the medium surrounding the growth of fermenters because of the change in pH. Non-fermenters will produce colourless or normally coloured colonies. In MacConkey agar, the substrate is lactose, which is fermented by lactose-positive bacteria, e.g. E. coli, to lactic acid and formic acid, making the medium acidic. The dye neutral red then changes colour and colonies of E. coli appear violet red. Lactose-negative bacteria produce pale colonies, and MAC can therefore be used to select out and differentiate between naturally occurring gut organisms and gut pathogens.

1. Figure 13 shows that the majority of the colonies used in laboratory 3 had an entire colony edge and were flat in elevation. The streak plate method obtains single colonies by first streaking a portion of the agar plate with an inoculum and then streaking successive areas of the plate to dilute the original inoculum, so that single colony forming units (CFUs) give rise to isolated colonies.

2. Potential problems that could lead to unsuccessful plates or slants include sterilising the loop in the flame and then placing it directly onto the plate without giving it time to cool, killing the bacteria on the plate. Another problem is insufficient flaming between the quadrants, leaving the loop unsterile and leading to contamination by other organisms.

3. A bacterial cell is a microscopic single-celled organism; bacteria thrive in diverse environments. A bacterial colony is a discrete accumulation of a significantly large number of bacteria, usually occurring as a clone of a single organism or of a small number.

4. Refer to Figures 15 and 16

5. CLED is a solid medium used in the isolation of gut and urinary pathogens, including Salmonella, Escherichia coli and Proteus species. CLED contains lactose as a substrate and a dye called bromothymol blue which indicates changes in pH. Prior to inoculation, plates of CLED are pale green in colour because their pH is neutral. Bacteria such as E. coli that produce the enzyme β-galactosidase break down lactose through fermentation to produce a mixture of lactic and formic acid, so that the pH becomes acidic; the colonies and medium turn yellow (lactose positive). Lactose-negative bacteria, e.g. Salmonella poona, are unable to ferment lactose because they cannot produce β-galactosidase, and usually produce pale colonies on CLED. CLED can therefore be used to differentiate between naturally occurring gut organisms and gut pathogens.

6. From the results, we can conclude that the bacterium that fermented lactose was Escherichia coli and the non-fermenting bacterium was Salmonella poona.

7. Mannitol salt agar (MSA) is utilised as a selective and differential medium for isolating and identifying Staphylococcus aureus from clinical and non-clinical specimens. Mannitol salt agar contains the carbohydrate mannitol, 7.5% sodium chloride and the pH indicator phenol red. Phenol red is yellow below pH 6.8, red at pH 7.4 to 8.4 and pink above pH 8.4. The sodium chloride makes this medium selective for staphylococci, since most bacteria cannot tolerate such levels of salinity.

The pathogenic species of Staphylococcus ferment mannitol and thus produce acid. This acid turns the pH indicator yellow. Non-pathogenic staphylococcal species grow, but no colour change is produced.

The formation of yellow halos surrounding the bacterial growth is the expected evidence that the organism is a pathogenic Staphylococcus. Significant growth that produces no colour change is the presumptive evidence for a non-pathogenic Staphylococcus. Those staphylococci that do not ferment mannitol produce a purple or red halo around the colonies.
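The phenol red colour ranges above can be sketched as a small helper function (the function name is hypothetical, and since the text leaves the 6.8 to 7.4 transition range unspecified, it is treated as red here for simplicity):

```python
def phenol_red_colour(ph: float) -> str:
    """Approximate colour of the phenol red indicator at a given pH.

    Thresholds follow the text: yellow below pH 6.8, red up to pH 8.4,
    pink above pH 8.4. The transitional 6.8-7.4 range is treated as red.
    """
    if ph < 6.8:
        return "yellow"   # acidic, e.g. around mannitol fermenters
    if ph <= 8.4:
        return "red"      # neutral to mildly alkaline
    return "pink"         # strongly alkaline
```

A fermenting colony that acidifies the medium to pH 6.0, for example, would sit in the yellow range.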

A viable count

A viable count is a method for estimating the number of bacterial cells in a specific volume of a culture. The method relies on the bacteria growing into colonies on a nutrient medium; the colonies become visible to the naked eye and can then be counted. For accurate results the total number of colonies should be between 30 and 300. Fewer than 30 colonies indicates that the results are not statistically valid and are unreliable; more than 300 colonies often indicates overlapping colonies and imprecision in the count. To ensure an appropriate final figure for the total colony count, several dilutions are normally cultured. The viable count method is used by microbiologists when examining bacterial contamination of food and water, to ensure that they are suitable for human consumption.
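The 30 to 300 counting rule can be expressed as a short check (the function name is hypothetical):

```python
def plate_count_status(colonies: int) -> str:
    """Classify a plate count using the 30-300 rule for viable counts."""
    if colonies < 30:
        return "too few: not statistically valid"  # unreliable estimate
    if colonies > 300:
        return "too many: colonies may overlap"    # imprecise count
    return "countable"                             # usable for the estimate
```

Only plates reported as "countable" would be carried forward to the cfu calculation.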

Serial Dilution

A serial dilution is a sequence of consecutive dilutions used to reduce a dense culture of cells to a more workable concentration. Within each dilution the concentration of bacteria is reduced by a known amount; by calculating the total dilution over the entire series, the number of bacteria in the initial sample can be calculated. After dilution of the sample, an estimate of the number of viable bacteria is made using a surface plate count, either the spread plate technique or the pour plate technique. Once incubated, the colonies are counted and an average is calculated. The number of viable bacteria per ml or per gram of the original sample is then calculated, on the assumption that one visible colony is the direct result of the growth of one single organism. Nonetheless, bacteria are capable of clumping together, and a colony could be produced from a clump rather than a single cell. For that reason counts are expressed as colony forming units (cfu) per ml or per gram, which explains why counts are estimations.
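The back-calculation described above can be sketched as follows (the function name and example numbers are illustrative, not from the text):

```python
def cfu_per_ml(colonies: int, total_dilution: int, volume_plated_ml: float) -> float:
    """Estimate colony forming units per ml of the original sample.

    colonies:         colonies counted on the plate (ideally 30-300)
    total_dilution:   total dilution factor over the series, e.g. 10**6
    volume_plated_ml: volume of diluted sample plated, in ml
    """
    return colonies * total_dilution / volume_plated_ml

# Example: 150 colonies on a 10^-6 dilution plate with 0.1 ml plated
estimate = cfu_per_ml(150, 10**6, 0.1)  # 1.5e9 cfu per ml
```

Note that the result is an estimate in cfu, for the clumping reason given above.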

Spread plate technique

The spread plate technique is used for viable plate counts, in which the total number of colony forming units on a single plate is counted. The technique is useful within microbiology for many reasons; for example, it can be used to calculate the concentration of cells in the tube from which the sample was initially plated. The spread plate technique is also routinely used in enrichment, selection and screening experiments. However, there are some disadvantages to this technique: for example, crowding of the bacterial colonies can make enumeration much more challenging.

Pour plate technique

The pour plate method is used to count the colony-forming bacteria present in a liquid sample. The pour plate has many advantages: for example, it allows the growth and quantification of microaerophiles, as there is little oxygen below the surface of the agar, and identification of anaerobes, aerobes or facultative anaerobes is much easier, as they are able to grow within the medium. However, there are a few disadvantages to the pour plate technique: for example, the temperature of the medium needs to be tightly regulated. If the temperature is too warm the microorganisms will die, and if it is too cold the agar will clump, which can sometimes be mistaken for colonies.


In microbiology, understanding the characteristics that bacteria possess is critical. To enable a full understanding of these characteristics, bacteria undergo simple tests named primary tests, which can be used to establish whether the cells are gram positive or gram negative, whether the cells are rod- or coccus-shaped, and whether the bacterium is catalase positive or catalase negative.

The catalase test is a primary test which is used in the detection of catalase enzymes through the decomposition of hydrogen peroxide resulting in the release of oxygen and water as demonstrated by the equation below:

2 H2O2→2 H2O + O2

Hydrogen peroxide is produced by various bacteria as an oxidative product of the aerobic breakdown of sugars. However, it is highly toxic to bacteria and can lead to cell death. The catalase test serves many purposes, such as differentiating between the morphologically similar Enterococcus or Streptococcus, which are catalase negative, and Staphylococcus, which is catalase positive. The test is also valuable in differentiating between aerobic and obligate anaerobic bacteria, and can be used as an aid in the identification of Enterobacteriaceae.
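The stoichiometry of the catalase reaction shown above can be confirmed with a quick atom count:

```python
# Atom balance for 2 H2O2 -> 2 H2O + O2
left = {"H": 2 * 2, "O": 2 * 2}        # two H2O2 molecules
right = {"H": 2 * 2, "O": 2 * 1 + 2}   # two H2O molecules plus one O2
assert left == right  # 4 H and 4 O on each side: the equation balances
```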

The oxidase test is another biochemical primary test, used to identify whether bacteria produce cytochrome c oxidase, an enzyme of the bacterial electron transport chain.

Oxidase-positive bacteria possess cytochrome oxidase or indophenol oxidase, which both catalyse the transport of electrons from donor compounds such as NADH to electron acceptors, usually oxygen. If present, the cytochrome c oxidase oxidizes the reagent (tetramethyl-p-phenylenediamine) to indophenols, producing a purple colour as the end product. When the enzyme is not present, the reagent remains reduced and is colourless.

Organisms which are oxidase positive include Pseudomonas, Vibrio, Brucella, Pasteurella and Kingella. Organisms which are oxidase negative include Acinetobacter, staphylococci, streptococci and all Enterobacteriaceae.

Primary tests are helpful for understanding the initial characteristics bacteria possess. However, more advanced methods may be used to finalise identification to the level of genus and species, to enable treatment for patients and appropriate action to prevent further transmission of infection. Laboratories today rely on rapid ID kits which analyse the biochemical characteristics of bacteria; this is known as biotyping.

Rapid identification kits are used for the identification and differentiation of different bacteria. The ID32E kit is commonly used in the identification of members of the Enterobacteriaceae. There are two further kits: the IDSTAPH strip, used in the identification of members of the Staphylococcaceae, and the IDSTREP strip, used in the identification of the Streptococcaceae. The kits consist of wells which contain dried substrates such as sugars or amino acids. These dried substrates are reconstituted through the addition of a saline suspension of bacteria. The results are then read into a computer profile linked to identification software, from which the genus and species can be determined.


The genocide of Darfur

How would you feel to be without a home, family, and basic needs? What about having to struggle every day just to live your life? If that is not bad enough, imagine being in a constant state of danger. The genocide of Darfur is rooted in decades of conflict and has lasting effects on the community that have resulted in an unstable environment. “The Sudanese armed forces and Sudanese government-backed militia known as Janjaweed have been fighting two rebel groups in Darfur, the Sudanese Liberation Army/Movement (SLA/SLM) and the Justice and Equality Movement (JEM).”

The first civil war ended in 1972, but fighting broke out again in 1983; this is what really initiated the genocide. However, the violence escalated, and the genocide is credited with starting in February of 2003. It is considered to be the first genocide of the 21st century. “The terrible genocide began after rebels, led mainly by non-Arab Muslim sedentary tribes, including the Fur and Zaghawa, from the region, rose against the government.” (www.jww). “This genocide is the current mass slaughter and rape of Darfuri men, women, and children in western Sudan.” (www.worldwithoutgenocide). Unrest and violence continue today. The group that is carrying out the genocide is the Janjaweed. They have destroyed those in Darfur by “burning villages, looting economic resources, polluting water sources, and murdering, raping, and torturing civilians.” (www.worldwithoutgenocide).

Believe it or not, this genocide is still going on today. As a result, Darfur is now facing very great long-term challenges and will never be the same. There are millions of displaced people who depend on refugee camps. However, at this point these camps are not so much a source of refuge as a danger themselves, because of severe overcrowding (3). It is often unsafe for anyone to leave the camps. Women who would normally go in search of firewood cannot anymore, because they may end up being attacked and raped by the Janjaweed militias (www.hmd). The statistics of this genocide show how bad it really is. Since it began in 2003, it has produced over 360,000 Darfuri refugees in Chad, caused the deaths of over 400,000 people, and affected 3 million people in some way (www.jww). On top of that, more than 2.8 million people have been displaced (www.worldwithoutgenocide). An interview study suggests that 61% of respondents had witnessed the killing of one of their family members. In addition, 400 of Darfur’s villages have been wiped out and completely destroyed (www.borgenproject). To show that this is a real problem, here is a personal experience. “Agnes Oswaha grew up as part of the ethnic Christian minority in Sudan’s volatile capital of Khartoum. In 1998, Agnes immigrated to the United States, specifically to Seattle. She has now become an outspoken advocate for action against the atrocities occurring in Darfur” (www.holocaustcenterseattle). Agnes has used her struggles to inspire others. She is a prime example that you can make something good out of something so devastating and wrong.

There are many help groups that are working to inform people about this problem. The two that I am going to highlight are the Darfur Women Action Group (DWAG) and the Save Darfur Coalition. The first, the Darfur Women Action Group (DWAG), is an anti-atrocities nonprofit organization led by women. They envision a world with justice for all, equal rights, and respect for human dignity. They provide the people of Darfur with access to tools that will allow them to oppose violence. This group also addresses massive human rights abuses in their societies and works with others to prevent future atrocities, all while promoting global peace. Along with that, they ask us to speak out and spread the word. Their ultimate goal is to bring this horrific situation to the attention of the world to end it for good (www.darfurwomenaction). The next help group is the Save Darfur Coalition. They have helped develop strategies and advocated for diplomacy to encourage peace. They have also helped secure the deployment of peacekeeping forces in Darfur. Because of them, there have been billions of dollars in U.S. funding for humanitarian support. Violence against women has been used as a weapon of genocide, and because of them, awareness in Congress of this issue has grown (www.pbs).

As Americans, we can do many things to stop this issue. First, we must put aside domestic politics and help those in need even if they are not part of our country. The growing genocide in Darfur is not a partisan issue but one that stretches across a wide variety of constituencies, or bodies of voters and supporters. Some of these include religious, human rights, humanitarian, medical, and legal communities. All of these, and others, are advocating a forceful worldwide response to the crisis (www.wagingpeace).

The genocide of Darfur is atrocious. It is rooted in decades of conflict and has lasting effects on the community of Darfur. This conflict has resulted in an unstable environment for all those who belong to the country of Sudan. It has made ordinary people live in fear every day. Millions of people are affected, 2.8 million people have been displaced, and 400,000 innocent people have been killed. This is all because of the actions of the Janjaweed militia. This genocide is an overall horrendous thing that is actually going on in the world around us. There is much that can be done to help, but can we, being in the good situations that we are in, take time out of our own lives to think about those who really need our help? Do we care enough to spend time and money on people we don’t even know? If we choose to do so, we could be making a huge difference in the lives of people. Even though they might live across the world from us and live very different lives than ours, they are very similar to us in many ways.


Status of income groups and housing indicators

1. Introduction

Buying a house is often the biggest transaction a family makes in its lifetime. Furthermore, the economic, social, and physical properties of neighborhoods have short-term and long-term impacts on residents’ physical and psychological status (Ellen et al., 1997). Accordingly, inadequate housing brings about many health risks, afflicting adults as well as children with a variety of mental and physical disorders (Bratt, 2000; Kreiger & Higgens, 2002). Unstable housing conditions, moreover, lead to stress and thus have manifold negative impacts on people’s education and careers (Rothstein, 2000). Despite the importance of housing in human life, the provision of adequate and affordable housing for all is one of the persistent problems of human society, since almost half of the world’s population lives in poverty and about 600 to 800 million people reside in sub-standard houses (Datta & Jones, 2001). Despite poor housing in developing countries, there are no organizations and institutions to supply services and organize institutional development so as to strengthen different classes of society (Anzorena, 1993; Arrossi et al., 1994). For example, 15% of people in Lagos, 51% in Delhi, 75% in Nairobi, and 85% in Lahore live in substandard housing. It has been estimated that thousands of low-income residents do not use safe piped water and are thus pushed to use infected or substandard water (Hardoy, Mitlin & Satterthwaite, 2001). For instance, 33% of people in Bangkok and 5 million in Kolkata do not have access to safe water, and 95% of people in Khartoum live without a sewage system. According to a report by the World Health Organization, the probability of death among children who live in substandard settlements is 40 to 50 percent higher than among children in Europe and North America (Benton-Short and Short, 2008).
That is because where they live lacks security and essential infrastructure and facilities such as water, electricity, and sewage; in addition, they are vulnerable to numerous hazards (Brunn, Williams and Ziegler, 2003). In 2005, about 30 environmental disasters led to a death toll of almost 90 thousand people, a majority of whom were from poor countries and low-income groups (Chafe, 2006).

Planning in Iran's housing sector lacks an efficient statistical system. Given the paradoxes, gaps, and inconsistencies in the data and statistics of the housing sector, reaching a comprehensive and clear plan to address the sector's problems is almost impossible. The lack of integration among the organizations responsible for collecting and arranging housing-index information (the Statistical Center of Iran, the Central Bank, the Ministry of Housing and Urban Development, municipalities, etc.) must be regarded as a serious problem. Aiming to evaluate the status of income groups and housing indicators (such as the average floor area and the average income) across the existing deciles, the present study therefore estimates housing demand and evaluates the financial power of low-income groups in the city of Isfahan, so that the results can be applied to accurate housing planning for the low-income groups of the city.

2. Theoretical framework

Housing is the smallest component of human settlement and a concrete representation of development. According to Williams (2000), socially just cities are those with a greater share of high-density housing that also provide services and facilities. Rappaport (1969) maintains that culture and the human understanding of the universe and of life have played a crucial role in housing and its spatial divisions. In Le Corbusier's view, a house must respond to both the physical and the spiritual needs of people (Yagi, 1987). Housing is the basic environment of the family: a private space and a safe place to rest away from the routines of work and school. According to Fletcher, home is a paradoxical ground of both tenderness and violence. Gaston Bachelard, in The Poetics of Space, called home an "atmosphere of happiness", wherein rest, self-discovery, relaxation, and maternity become important. According to Short (2006), housing is the nodal point of all dualities and paradoxes. Housing and housing planning have been analyzed from different perspectives. Development and growth-pole theory treats acute housing problems as transitional and as part of development programs (Shefa'at, 2006). On the contrary, dependency and counter-urbanization theories recognize inequality and the one-sided distribution of products from the margin to the center as the main reason for housing deterioration (Athari, 2003). From the viewpoint of market economics, housing issues should be left to the market mechanism (Dojkam, 1994), housing needs should be met by the private sector (Seifaldini, 1994), and the government should avoid spending funds on low-income housing (Chadwick, 1987).
The urban management approach, a very important orientation from the point of view of political economy, holds that wider social and economic contexts play a role in the formation of urban residential patterns. One of the most important parts of urban planning is planning for housing development; economic factors such as the cost of living, employment bases, and income instability play a very important role in housing planning. Besides the economic factor, architectural style is a major determinant: regional vernacular idioms, stylistic trends, weather, geography, local customs, and other factors influence housing planning and housing design. The five characteristics of housing are the type of building, style, density, project size, and location (Sendich, 2006). Housing plans should be designed so that, in addition to adequate housing, basic ecological variables are also included (Inanloo, 2001). Governments often plan housing at the national, regional, and municipal levels so as to employ it as a technique for solving the housing problems of their citizens (Ziari & Dehghan, 2003). The fundamental goal of housing planning at the national level is to balance housing supply and demand with regard to housing's position in the macroeconomy (Sadeghi, 2003). In regional housing planning, supply and demand are evaluated and balanced at the regional level; the difference from national-level planning is that the relation between housing and the macroeconomy is not considered, while the economic potentials within the regions are emphasized (Zebardast, 2003). Local housing planning is conducted at three scales: town, city, and urban area. Housing planning can be approached in two different ways.
The first approach distributes the goals and credits of national and regional plans among smaller geographic units (region, city, or town). The second approach investigates the housing status at the local level, estimates the land needed for future housing development, and differentiates the suitable lands (Tofigh, 2003). Another approach is concerned with low-income housing and comprises three kinds of programs: 1. programs that subsidize rental housing, whether individual units or complexes; 2. tax credits that result in the production of low-rent housing units; 3. supportive programs for affordable housing for the lower classes (Mills et al., 2006). Such policies are accompanied by tools such as tax deductibility, long-term loans, insurance, and so forth. The UN addresses the housing of those in need through the Commission on Human Settlements in the form of the Habitat Program. In 1986, the UN codified the Global Shelter Strategy for the Homeless to the year 2000, with an empowerment approach. In 1992, Habitat II addressed the security of the right to housing, particularly for low-income groups. In 2001, a special session of the UN General Assembly in New York gave serious consideration to the need to address urban poverty and homelessness.

3. Methodology

This study is a basic-applied piece of research adopting a descriptive-analytical methodology. The geographic area of the research is the political-administrative area of the city of Isfahan in 2014. The variables of the research are the income deciles, quantitative housing developments, land and housing prices, the housing finance system, the position of housing in the expenditure basket of low-income households, the Gini coefficient of housing costs, the effective demand for housing across the income deciles in terms of floor area, and the housing accessibility index. The statistics were provided by the Statistical Centre of Iran, and the household cost/income survey by the city of Isfahan. The methods used include the statistical technique of population deciles; the financial power of the groups was estimated with the indirect method.

4. Results

4.1. Changes in the quality of housing in Isfahan

According to the statistics, population and urban growth were considerable in the decade from 1996 to 2006. The average population growth rate over 1996-2006 was 1.37%, the growth of households was 52.3%, and the growth of housing was 7.3% (Table 1).

According to the 2011 census, the population was about 1,908,968 people residing in 602,198 households, giving a household size of 3.17 persons. However, the number of housing units available in that year for the 602,198 households of the city of Isfahan fell short of one unit per household, leaving a housing shortage of 0.193 percent of the existing stock. Given the growth in the number of housing units by 2011, it can be stated that the 215,000-unit policy of the Mehr Housing Project had a large impact on the housing stock of the city of Isfahan (see Table 1).
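The household-size and shortage figures quoted above follow from simple division; a minimal sketch using the census numbers as quoted in the text:

```python
# Household size and housing shortage from the 2011 census figures
# quoted in the text (see Table 1).
population = 1_908_968
households = 602_198
housing_units = 601_035  # typical residential units

household_size = population / households          # persons per household
shortage = households - housing_units             # units short of one per household
shortage_pct = shortage / housing_units * 100     # shortage as % of existing stock

print(round(household_size, 2))  # 3.17
print(shortage)                  # 1163
print(round(shortage_pct, 3))    # 0.193
```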

Table (1): Changes in the quantity of housing in the city of Isfahan compared to its population changes (period 1996-2011)


Year | Population | Number of households | Household size | Typical residential units | Proportion of households to residential units | Population growth rate (%) | Urbanization rate (%) | Housing shortage (units) | Shortage as % of existing stock
-----|------------|----------------------|----------------|---------------------------|-----------------------------------------------|----------------------------|------------------------|--------------------------|--------------------------------
1996 | 1,310,659 | 326,581 | 4.03 | 325,225 | 1.239 | 1.51 | 74 | 1,356 | 0.416
2006 | 1,642,996 | 466,760 | 3.52 | 458,852 | 7.541 | 1.37 | 83 | 7,908 | 1.723
2011 | 1,908,968 | 602,198 | 3.17 | 601,035 | 5.264 | 1.30 | 85 | 1,163 | 0.193

Source: the researcher’s calculations, 2014

4.2. Evaluation of the price changes of land and housing in the city of Isfahan

The value of property (land, housing, and rents) is one of the main factors determining the quality and quantity of people's housing. When housing and land become a playground for capital (as in today's Iran), the tendency toward private home ownership increases, and this leads to an increase in demand. Considering the instability and risks of certain other investment areas (such as manufacturing and agriculture), investing in the housing sector has always been seen as safer; this has driven prices up and widened the gap between effective demand and potential demand.

Over the period evaluated, we encounter huge fluctuations in the value of residential land, both dilapidated and new, and an increase in rents, as presented in detail in Table 2.

Table (2): changes in prices of land, housing, and rents, 2003-2013

Year | Dilapidated building: price (1,000 rials/m²) | Growth (%) | Residential unit: price (1,000 rials/m²) | Growth (%) | Rent: price (1,000 rials/m²) | Growth (%)
-----|----------------------------------------------|------------|-------------------------------------------|------------|-------------------------------|------------
2003 | 2632 | – | 3007 | – | 13034 | –
2004 | 3562 | 35.3 | 3373 | 10.8 | 13177 | 1.08
2005 | 4035 | 11.7 | 4251 | 20.6 | 14582 | 9.6
2006 | 3839 | -5.1 | 4702 | 9.5 | 16313 | 10.6
2007 | 5706 | 32.7 | 8181 | 42.5 | 20975 | 22.2
2008 | 7278 | 21.5 | 8485 | 3.5 | 24600 | 17.5
2009 | 4929 | -47.6 | 8211 | -3.3 | 25195 | 18.2
2010 | 4612 | -6.8 | 8676 | 5.3 | 28333 | 11.07
2011 | 4978 | 7.3 | 9549 | 9.1 | 30075 | 5.7
2012 | 6332 | 10.2 | 12385 | 22.8 | 35809 | 16.01
2013 | 8571 | 26.1 | 16624 | 25.4 | 45261 | 20.8

Source: Statistical Center of Iran, Statistical Yearbook of Isfahan Province, 2003-2004; and author’s calculations, 2014
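The growth columns in Table 2 read as year-over-year percentage changes; a minimal sketch of that arithmetic (the 2003 to 2004 dilapidated-building figure reproduces the printed 35.3%, while some later printed rates may rest on adjusted series):

```python
# Year-over-year growth rate, as used to read the growth columns of Table 2.
def annual_growth(prev, curr):
    """Percentage change from prev to curr."""
    return (curr - prev) / prev * 100

# dilapidated residential buildings, 2003 -> 2004 (1,000 rials per m2)
print(round(annual_growth(2632, 3562), 1))  # 35.3
```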

Considering the price per square meter of housing units in the city of Isfahan, the market fluctuated greatly between 2004 and 2006 and then rose sharply in 2007. But the greatest fluctuation came in 2009, when the price of dilapidated units fell by 47.6%; the decline continued in the following year before prices began to rise again. The outset of the Mehr Housing Project, the economic downturn in the investing countries (foreign participation sharply declined), and political issues can be assumed to be the main reasons for the price drop of 2009-2010.

During the period, however, the price of housing units per square meter showed greater stability: except for 2009, when a negative growth of 3.3% was experienced, prices never declined. One characteristic of the housing market during the research period was the steady increase in prices. Evaluation of the inflation indexes, the prices of production factors, and housing prices shows that the housing market in Isfahan was largely free of speculative intervention, and the bulk of the increase in housing prices resulted from the rising prices of production factors and overall inflation.

An important point here is that the gap between rents and housing prices widened continuously over the years under study.

Figure 1: changes in the prices of housing, land and rent during the period 2003-2013

Source: Statistical Center of Iran and author’s calculations, 2014

4.3. Study of the changes in the production of housing in the city of Isfahan

The increase in land prices, land being one of the most important components of housing production, has on the one hand laid the ground for a decrease in housing production, and on the other hand for an increase in the building density factor. The construction cost per square meter in the city of Isfahan is another indicator that can be applied objectively in analyzing access to adequate housing. Construction costs naturally rise over time, but the pace of the increase matters. At the beginning of the period under study the cost of building one square meter of housing in the city of Isfahan was 430,000 IRR, whereas at the end of the period it was 3,102,000 IRR. Although short-term changes in housing investment are affected by demand-side factors such as housing prices and the granting of loans, long-term factors such as land prices, construction costs, and inflation also affect these changes. The cost of housing production had an upward trend throughout the period, but the pace of this growth varied. Much of the increase in construction cost was due to the rising prices of land, materials, and labor; some was due to declining productivity.

Table 3: Changes in the cost of housing construction per square meter, 2001-2012 (one thousand IRR)


Index | 2001 | 2006 | 2008 | 2009 | 2010 | 2011 | 2012
------|------|------|------|------|------|------|------
Cost of one m² of construction (1,000 IRR) | 430 | 1308 | 1542 | 2030 | 2441 | 2582 | 3102
Growth vs previous year (%) | – | 204 | 17.8 | 31.6 | 20.2 | 5.7 | 20.1

Source: Statistical Center of Iran, the Central Bank of the Islamic Republic of Iran, 2001-2012 and author’s calculations

Chart 2: Changes in the cost of housing construction per square meter 2001-2012 (percent)

4.4. Evaluation of housing finance system in the city of Isfahan

4.4.1. Household savings

In the area of private housing, the greatest and most reliable source of housing finance is a household's savings. A household's savings are the part of its disposable income that is not used for the family's consumption and is set aside for income purposes. In this study, savings were calculated as the difference between household income and expenditure (Table 4).

Table (4): the average of income, cost, and savings of an urban household over the years 2006-2012 (IRR)

Year | Expenditure (IRR) | Income (IRR) | Savings (IRR)
-----|-------------------|--------------|---------------
2006 | 69059825 | 57289929 | -11769896
2009 | 88508931 | 74529939 | -13978992
2009 | 101319582 | 87730581 | -13589001
2010 | 114495202 | 93390015 | -21105187
2011 | 137279114 | 109217181 | -28061933
2012 | 157761405 | 145924872 | -11836533

Source: Statistical Yearbook of Isfahan Province and author’s calculations, 2014

During the study period, household savings were negative for most years. Accordingly, assuming the consumption pattern of households to be constant, household savings cannot be a source of funding for housing. However, by changing consumption patterns and lifestyles and thereby increasing savings, they could still be relied on as a source of housing finance.
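The savings definition used here amounts to a one-line subtraction; a sketch using the 2006 row of Table 4:

```python
# Savings = disposable income - expenditure, per the definition in the text.
# Figures are the 2006 row of Table 4 (IRR).
income = 57_289_929
expenditure = 69_059_825
savings = income - expenditure
print(savings)  # -11769896 (negative: the average household dissaves)
```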

4.4.2. Bank credits

Credit and banking facilities finance, or guarantee the obligations of, applicants on the basis of the interest-free banking law. The facilities include loans, civil partnership, legal partnership, direct investment, forward transactions, installment sales, hire-purchase, and farm-letting contracts.

Table (5): the amount of the grants by the banks of Isfahan Province to private sector based upon major economic sectors, 2001-2012

Index | 2001 | 2006 | 2008 | 2009 | 2010 | 2011 | 2012
------|------|------|------|------|------|------|------
Bank facilities and credits | 170128 | 384418 | 262110 | 160060 | 308439 | 567426 | 328695
Share allocated to housing (%) | 28.67 | 35.21 | 16.5 | 9.49 | 12.18 | 21.45 | 12

Source: Statistical Yearbook of Isfahan Province and author’s calculations, 2014

In total, an annual average of about 20 percent of bank credits was allocated to the housing sector, which is a substantial amount. Therefore, considering the financial resources and potentials of the province's banks and their rate of deposit absorption, about 20% of bank credits can be considered a source of investment in the housing sector.
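The "annual average of about 20 percent" is simply the mean of the yearly housing shares in Table 5; a minimal check:

```python
# Mean of the yearly shares of bank credit allocated to housing (Table 5).
shares = [28.67, 35.21, 16.5, 9.49, 12.18, 21.45, 12]
mean_share = sum(shares) / len(shares)
print(round(mean_share, 1))  # 19.4, i.e. roughly 20 percent
```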

4.4.3. Bank development credits

The government's development credits are the budget allotted annually, under the annual budget rules, for implementing development plans and for the current expenditure of the government's economic and social plans, nationally and provincially. This budget is divided into three categories: general, economic, and social.

Table (6): Government credits as divided by budget seasons (2006-2012)

Year | Total | General affairs: credit | % | Social affairs: credit | % | Economic affairs: credit | %
-----|-------|-------------------------|---|------------------------|---|--------------------------|---
2006 | 8928397 | 6947286 | 77.8 | 556603 | 6.2 | 1424508 | 15.9
2008 | 5461355 | 1175093 | 21.5 | 1250856 | 22.9 | 3035406 | 55.5
2009 | 3177898 | 1866020 | 58.7 | 544190 | 17.1 | 767688 | 24.1
2010 | 3996923 | 2074964 | 51.9 | 700944 | 17.5 | 1221015 | 30.5
2011 | 4031126 | 2338934 | 58.01 | 605371 | 15.01 | 1086821 | 26.9
2012 | 2888659 | 2304195 | 79.7 | 161633 | 5.5 | 422831 | 14.6

Source: Statistical Yearbook of Isfahan Province and the author’s calculations, 2014

Table (7): Share of credits of the housing sector in social affairs program 2006-2012 (million rials)

Year | Total credits | Economic sector credit | % of total | Housing sector credit | % of economic credits | Housing as % of total
-----|---------------|------------------------|------------|------------------------|------------------------|----------------------
2006 | 8928397 | 1424508 | 15.9 | 513921 | 36.07 | 5.7
2008 | 5461355 | 3035406 | 55.5 | 675548 | 22.2 | 12.3
2009 | 3177898 | 767688 | 24.1 | 207621 | 27.04 | 6.5
2010 | 3996923 | 1221015 | 30.5 | 419433 | 10.4 | 10.4
2011 | 4031126 | 1086821 | 26.9 | 419154 | 38.5 | 10.3
2012 | 2888659 | 422831 | 14.6 | 126148 | 29.8 | 3.4

Source: Statistical Yearbook of Isfahan Province and the author’s calculations, 2014

Considering the results, it can be argued that the housing sector's credit has varied over the years, ranging from 5 to 12 percent of the entire budget, which is a significant figure in its own right.

This may reflect the respective roles of the government and the private sector in housing investment, although in recent years the government's role has become more serious with the emergence of plans such as Mehr Housing, the retrofitting plan, and the renovation of distressed areas. (Note that since 2006 the economic sector has always received the highest credits, and within the economic sector the housing and urban and rural development sectors received the greatest amounts.)

4.4.4. Determining the position of housing in the expenditure basket of low-income households of Isfahan city

To investigate housing costs in each of the income groups, the surveyed households were first ordered by income for the years 2005-2011. The households were then divided into ten equal groups (deciles). In the next step, based on the data on housing costs and total food and non-food expenditure of each household, the housing costs and the total food and non-food expenditure of each income decile were calculated for the different years. Finally, the mean income, housing costs, and total food and non-food costs of the households in each decile were examined. In order to present authentic analyses, all variables were deflated by the price index of the city of Isfahan to the fixed prices of the year 2005.
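The decile construction described above can be sketched as follows; this is a minimal illustration using hypothetical, randomly generated household records in place of the actual survey microdata:

```python
# Sketch of the decile construction: order households by income,
# split into ten equal groups, then average housing cost per group.
# The records below are hypothetical, not the study's survey data.
import random

random.seed(0)
households = [{"income": random.randint(50, 500),
               "housing_cost": random.randint(20, 200)}
              for _ in range(1000)]

# order households by income and split into ten equal groups (deciles)
households.sort(key=lambda h: h["income"])
size = len(households) // 10
deciles = [households[i * size:(i + 1) * size] for i in range(10)]

# mean housing cost within each decile
mean_costs = [sum(h["housing_cost"] for h in d) / len(d) for d in deciles]
print(len(deciles), len(deciles[0]))  # 10 100
```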

Table (8): Mean housing cost of low-income household of Isfahan City, 2005-2011.

Year | Mean cost (IRR) | Growth vs previous year (%)
-----|-----------------|-----------------------------
2005 | 14124382 | –
2006 | 17433765 | 18.9
2007 | 21248483 | 17.9
2008 | 26704381 | 20.4
2009 | 26390670 | -1.18
2010 | 28886146 | 8.6
2011 | 33238345 | 13.09

Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011.

As displayed in Table 8, housing costs in Isfahan city followed an increasing trend (except for the year 2009). The highest increase in housing costs took place in 2008 (20.4%) and the lowest in 2010 (8.6%).

In this regard, it should be noted that the government's policies to establish stability, regulate market prices, and prevent undue increases have been very effective, such that the rate of price increase relative to the previous year usually stayed within the same range. However, to present precise estimates of the housing costs of low-income groups in Isfahan city, the results were analyzed by income decile (Table 9).

Table (9): Variation in the mean housing cost of urban households of Isfahan city, 2007-2011

Period | Decile 1 | Decile 2 | Decile 3 | Decile 4 | Decile 5 | Decile 6 | Decile 7 | Decile 8 | Decile 9 | Decile 10
-------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------
2007-2011 | 25.14 | 28.2 | 28.9 | 25.7 | 32.5 | 33.68 | 33.3 | 35.58 | 34.33 | 36.1

Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011

As can be seen, the enforcement of various economic policies in the housing sector and the balance of housing supply and demand in Isfahan city during the study period were such that the average housing costs of high-income households were higher than those of low-income households. The highest increase belonged to the 10th decile (36.1%) and the lowest to the 4th decile (25.7%).

In the following section, to present precise results, the share of housing costs in the total costs of urban households is analyzed by income decile (Table 10).

Table (10): Share of housing costs in the total costs of urban households of Isfahan, 2007-2011.

Period | Decile 1 | Decile 2 | Decile 3 | Decile 4 | Decile 5 | Decile 6 | Decile 7 | Decile 8 | Decile 9 | Decile 10
-------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------
2007-2011 | 47.4 | 46.49 | 44.15 | 42.2 | 40.38 | 38.6 | 35.56 | 34.33 | 33.9 | 31.5

Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011

According to the results, the share of housing costs in total expenditure is generally lower for high-income households (31.5%) than for low-income households (47.4%). Put otherwise, Isfahan urban households belonging to the lower deciles spend a large proportion of their total (food and non-food) expenditure on housing, whereas households of the upper deciles devote a smaller proportion of their expenditure to housing costs.

Diagram (3): Share of housing costs in the total costs of urban households in Isfahan city within the framework of income Deciles 2007-2011

4.4.5. Estimation of Gini coefficient of housing costs of households of Isfahan city

This index is usually used to investigate class difference and income distribution among the society's deciles. The closer the value is to zero, the more equal the distribution; the closer it is to 1, the more unequal.

Here, the Abunoori equation is used to calculate Gini coefficient.

Here, y stands for the upper limit of the expenditure groups, f(y) is the relative cumulative frequency of households with expenditure up to y, and u stands for the regression error. Table 11 presents the values of the Gini coefficient for the housing costs of households of Isfahan city.

Table (11): Gini coefficient of housing costs of urban households in Isfahan city

Year | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011
-----|------|------|------|------|------|------|------|------|------
Gini coefficient | 0.323 | 0.306 | 0.294 | 0.309 | 0.317 | 0.343 | 0.370 | 0.400 | 0.405

Source: Author’s calculations based on Plan for costs and income of urban households of the Province 2003-2011.

During the study period, the Gini coefficient of housing costs for urban households fell until 2005 and then rose steadily, indicating growing inequality in housing costs among urban households; the gap has been widening since 2006. The early decrease is in itself no cause for optimism about the housing costs of lower-income groups: unless it is accompanied by a more equal distribution of household income, it implies an increase in the share of housing in household budgets and hence greater pressure on them. The Lorenz curve, obtained here by plotting the cumulative frequency of households against the cumulative percentage of housing costs, was used to further illustrate the degree of inequality: the farther the Lorenz curve lies from the line of equal distribution, the greater the inequality.
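The Lorenz-curve reading of the Gini coefficient can be sketched generically as follows; note this is the standard trapezoidal computation from individual costs, not the Abunoori regression method the study actually applies:

```python
# Generic Gini coefficient via the Lorenz curve (trapezoidal rule).
# A sketch only; the study itself uses the Abunoori regression method.
def gini(values):
    """Gini coefficient of a list of non-negative values."""
    values = sorted(values)
    n = len(values)
    total = sum(values)
    cum = 0.0
    area = 0.0          # area under the Lorenz curve
    prev_share = 0.0
    for v in values:
        cum += v
        share = cum / total
        area += (prev_share + share) / (2 * n)  # trapezoid for this step
        prev_share = share
    return 1 - 2 * area

print(round(gini([1, 1, 1, 1]), 3))  # 0.0  (perfect equality)
print(gini([0, 0, 0, 1]))            # 0.75 (highly unequal)
```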

Diagram (4): Lorenz curve of the gap between housing costs in income Deciles of urban households in 2011.

4.4.6. Effective housing demand in income Deciles in terms of substructure area in Isfahan city

The following equation was used to estimate the effective demand of housing units in income Deciles of Isfahan urban households.

In this equation, Q stands for the effective demand in square meters, CH represents the household's housing costs, and Bu stands for the floor area of the household's housing unit.
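The equation itself did not survive extraction. A plausible reading, stated here as an assumption rather than the study's formula, is that effective demand is the floor area a household's housing budget can buy at the going price per square meter, Q = CH / P, where P is the price per square meter (e.g. the residential-unit prices of Table 2):

```python
# Assumed reading of effective demand: floor area affordable at the
# going price per square meter. This is an illustration, not the
# study's own (missing) equation.
def effective_demand(housing_cost, price_per_m2):
    """Square meters a housing budget buys at a given price per m2."""
    return housing_cost / price_per_m2

# illustrative figures: 2011 mean housing cost (Table 8, IRR) against
# the 2011 residential-unit price of 9,549 thousand rials per m2
print(round(effective_demand(33_238_345, 9_549_000), 1))  # 3.5 m2
```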

Table 12 presents in detail the average amount of effective demand in different years in each of the income groups in Isfahan city.

Table 12. Amount of effective demand in different years in each income Decile of Isfahan city (square meter)


Decile | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011
-------|------|------|------|------|------|------|------|------|------
Decile 1 | 4.6 | 7 | 4.5 | 6 | 5 | 4 | 2 | 1.3 | 1.9
Decile 2 | 7.5 | 12 | 8 | 7.8 | 5.5 | 4.5 | 3.7 | 2.4 | 2.5
Decile 3 | 9.4 | 14 | 8.5 | 9 | 5.6 | 5.6 | 5.4 | 3 | 3.8
Decile 4 | 11 | 15.5 | 10 | 10.1 | 6.6 | 7 | 6 | 5 | 4.8
Decile 5 | 13 | 17 | 12.4 | 12 | 7.8 | 8 | 8 | 4.9 | 5.5
Decile 6 | 15 | 21 | 13 | 14 | 10.5 | 9 | 9 | 5.6 | 6.7
Decile 7 | 17 | 26.2 | 14 | 17 | 12 | 10.1 | 11 | 1.7 | 6.4
Decile 8 | 21 | 28 | 9 | 33 | 14.8 | 10.6 | 17 | 8 | 7.4
Decile 9 | 26 | 31 | 23 | 39 | 26.2 | 13 | 19 | 12 | 9
Decile 10 | 49 | 45 | 41 | 45 | 31 | 23 | 25.6 | 24 | 17

Source: Author’s measurements based on the Plan for Costs and Income of Urban Households of the province, 2003-2011

By combining the variation in the income of Isfahan households with the variation in urban housing prices, the resulting changes in effective demand across the income deciles of Isfahan city were obtained. As displayed in Table 12, the effective demand across the income deciles indicates a wide gap in housing affordability between the high-income and low-income households of Isfahan city.

More importantly, the results show that effective demand for housing decreased in all income deciles in the final years of the study period. Moreover, regarding the effective demand of low-income groups, while deciles 1 to 4 could afford 4 to 11 square meters of housing in 2003, these figures fell to 1.9 to 5 square meters in 2011.

Diagram (5): Amount of effective demand in the 4 lowest-income Deciles of Isfahan city

Diagram (6): Sum of effective housing demand of the income deciles of Isfahan city, 2003-2011

4.4.7. Housing accessibility index in different income groups of Isfahan

The housing accessibility index is obtained by dividing the price of one unit of the good by the consumer's income per unit of time; it shows how many time periods the consumer must work to obtain one unit of the intended commodity. Since annual income is considered here, and assuming income is distributed equally over all days of the year, the accessibility index shows how many days of a household's income buy one square meter of a housing unit.

Accordingly, households in the upper income deciles can own housing by saving their income for fewer days than households in the lower deciles. Of course, given the annual increase in the price of each square meter of housing, the number of days of income that must be saved to buy one square meter increased in all income deciles. The results show that in 2003 a household in the lowest income decile could afford a square meter of housing by saving its complete income for 75 days, while by the end of 2011 this figure had risen to 206 days.
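The index definition above reduces to dividing the price per square meter by daily income; a sketch with an assumed annual income (the price is the 2011 residential-unit figure of 9,549 thousand rials per square meter from Table 2, the income of 46,000,000 rials is hypothetical):

```python
# Housing accessibility index: days of income needed to buy one m2.
def days_per_m2(price_per_m2, annual_income):
    """Days of (evenly spread) annual income that buy one square meter."""
    daily_income = annual_income / 365
    return price_per_m2 / daily_income

# 2011 price per m2 (rials) against a hypothetical annual income
print(round(days_per_m2(9_549_000, 46_000_000)))  # 76 days
```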

Table (13): Number of household days whose income is set aside in order to purchase one square meter of housing unit in Isfahan city for income Deciles in different years.


Decile | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | Period mean
-------|------|------|------|------|------|------|------|------|------|------------
Decile 1 | 75 | 65 | 81 | 94 | 119 | 122 | 135 | 175 | 206 | 107.2
Decile 2 | 68 | 58 | 52 | 77 | 80 | 93 | 103 | 143 | 129 | 80.3
Decile 3 | 63 | 53 | 42 | 56 | 63 | 81 | 96 | 121 | 104 | 67.9
Decile 4 | 58 | 47 | 37 | 43 | 55 | 67 | 82 | 101 | 88 | 57.7
Decile 5 | 50 | 41 | 32 | 40 | 48 | 52 | 63 | 80 | 76 | 50.4
Decile 6 | 49 | 36 | 28 | 34 | 41 | 45 | 50 | 73 | 66 | 42.2
Decile 7 | 43 | 30 | 25 | 30 | 34 | 40 | 43 | 65 | 57 | 39.9
Decile 8 | 34 | 25 | 19 | 27 | 31 | 34 | 33 | 55 | 48 | 30.6
Decile 9 | 29 | 19 | 16 | 22 | 22 | 28 | 28 | 34 | 39 | 23.7
Decile 10 | 10 | 13 | 13 | 14 | 15 | 17 | 17 | 19 | 23 | 13.7

Source: Author’s calculations based on Plan for Costs and Income of Urban Households of Isfahan, 2003-2011.

Besides, while in 2003 the highest-income decile of Isfahan's urban society needed to save 10 days' income to afford one square meter, this number rose to 23 days at the end of the period. The important point is the wide gap between the high-income and low-income deciles in the days of saving needed to obtain one square meter, which is 10.5 times between the 1st and 10th deciles. This trend indicates increasing hardship and inability in the provision of housing.

Diagram (7): Number of days whose income is set aside to buy one square meter of housing unit.

As can be seen, on average, the households of the three upper income deciles of Isfahan can buy a square meter of housing with less than one month's savings of their income, while the households of the three lower deciles must spend the savings of about 65 days' income to purchase a square meter.

Besides, the housing-unit accessibility index has also been determined by income group per year; the table shows the trend continuing from 2003 to 2011.

In this regard, assuming that one-third of the income of households in each income group is saved for obtaining housing, the number of years required to obtain an average housing unit (75 square meters) in Isfahan city is, for each income group, as follows:

Table 14. Number of years needed for household income (waiting period) to buy a housing unit (75 square meters in average) as divided by income Deciles 2003-2011.


Decile | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011
-------|------|------|------|------|------|------|------|------|------
Decile 1 | 48.4 | 33 | 36 | 58 | 70 | 75 | 84 | 91 | 103
Decile 2 | 30 | 21 | 30 | 38 | 48 | 50 | 71 | 85 | 97
Decile 3 | 24 | 18.6 | 26.2 | 29 | 43 | 42.3 | 63.6 | 64 | 87
Decile 4 | 20.2 | 16.5 | 21.8 | 23.4 | 36 | 35.5 | 50.5 | 58.6 | 68.6
Decile 5 | 17.2 | 15 | 17.1 | 20 | 29.6 | 30.7 | 35.4 | 48 | 53
Decile 6 | 15 | 12 | 16.5 | 18.5 | 26 | 27.8 | 31 | 40.7 | 47.4
Decile 7 | 14 | 10.7 | 15 | 16 | 22.3 | 23.6 | 25.6 | 34 | 35
Decile 8 | 11.1 | 10 | 14 | 13.3 | 18 | 21 | 22.2 | 27.8 | 29.6
Decile 9 | 9.8 | 8.6 | 10.1 | 9.3 | 15 | 16.1 | 17.5 | 21 | 24
Decile 10 | 5.5 | 5 | 6 | 5 | 7.5 | 9.5 | 9.2 | 9.2 | 13

Source: Author’s calculations based on Plan for Costs and Income of Urban Households of Isfahan, 2003-2011.

In sum, the housing accessibility index for Isfahan city shows a substantial difference among the city's income groups in affording housing. Not only do the low-income groups have to wait longer than other income groups to obtain housing, but the accessibility index has even worsened for them over time. For instance, whereas in 2003 the households of the first decile could afford housing by working about 48 years, this number rose to 103 years in 2011, which means this group can never afford housing under current circumstances. Moreover, comparing the beginning and end of the period shows that the waiting period for obtaining housing roughly doubled or tripled for all deciles, albeit with fluctuations along the way.
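The waiting-period rule stated above (save one-third of income toward a 75 m² unit) can be sketched directly; the price per square meter is the 2011 residential-unit figure from Table 2, while the annual income of 46,000,000 rials is a hypothetical illustration:

```python
# Waiting period: years of saving one-third of income toward an
# average 75 m2 housing unit, per the rule described in the text.
def waiting_years(price_per_m2, annual_income, area=75, saving_share=1/3):
    unit_price = price_per_m2 * area          # total price of the unit
    annual_saving = annual_income * saving_share
    return unit_price / annual_saving

# 2011 price per m2 (rials) against a hypothetical annual income
print(round(waiting_years(9_549_000, 46_000_000), 1))  # 46.7 years
```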

4.5. Conclusion

Generally, the formation of economic and spatial duality and diversity in cities and regions commenced after the industrial revolution in Europe and was gradually consolidated with the spread of modernity to peripheral countries. In Iran, this trend began in the early 20th century; as a consequence, the increased urbanization resulting from this growth led to the accumulation of problems such as poverty, homelessness and poor housing in Iran's cities. Faced with restrictions on horizontal development, Isfahan city, as one of the provincial capitals, concentrated a large population of low-income groups. The geographical distribution of low-income groups in Isfahan is such that most of them reside in dilapidated areas, including old quarters with or without historic monuments as well as neighborhoods of informal settlement.

The findings demonstrate that at the beginning of the study period the cost of constructing one square meter of housing in Isfahan city was 430,000 rials, whereas by the end of the period this amount had risen to 3,102,000 rials. The greatest annual increase in housing costs occurred in 2008 and the lowest in 2010 (8.6%). Mean housing costs for households of the upper deciles were higher than those of the lower deciles, with the greatest increase belonging to the 10th decile and the lowest to the 4th decile. Generally, the Gini coefficient of housing costs for urban households followed an increasing trend up to 2005, and the gap has widened since 2006. Investigation of effective demand among the city's income deciles demonstrated a wide gap in the ability to obtain housing between the upper and lower income groups of Isfahan city. With respect to effective demand among low-income groups, whereas in 2003 deciles 1 to 4 could afford 4 to 11 square meters of housing, these figures decreased to 1.9 to 5 square meters in 2011. Based on the housing accessibility index, the lowest-income decile could afford one square meter of housing by saving 75 days of income at the start of the period; by the end of the study period (2011), this figure had risen to 206 days. Likewise, while in 2003 households of the highest-income decile needed to save 10 days of income to afford one square meter, this number reached 23 days. The important point is the gap between high-income and low-income deciles in the days of saving needed to obtain one square meter, which reached 10.5 times between the 1st and 10th deciles. Furthermore, while in 2003 households of the 1st decile could afford their required housing after about 48 years of work, this figure rose to 103 years in 2011, meaning that it is practically impossible for these households to afford housing.


Fracture union

Management of trauma has always been one of the surgical fields in which oral and maxillofacial surgeons have been involved over the years. The mandibular body is a parabola-shaped curved bone composed of external and internal cortical layers surrounding a central core of cancellous bone. The goals of treatment are to restore proper function by ensuring union of the fractured segments and re-establishing pre-injury strength; to restore any contour defect that might arise as a result of the injury; and to prevent infection at the fracture site. Since the time of Hippocrates, it has been advocated that immobilisation of fractures, to some degree or another, is advantageous to their eventual union. The type and extent of immobility vary with the form of treatment and may play an essential part in the overall result. In common fractures, a certain amount of time is required before bone healing can be expected to occur. This reasonable time may vary according to age, species, breed, bone involved, level of the fracture, and associated soft tissue injury.

Delayed union, by definition, is present when an adequate period has elapsed since the initial injury without achieving bone union, taking the above variables into account. The fact that a bone is delayed in its union does not mean that it will progress to nonunion. Classically, the stated reasons for delayed union are problems such as poor reduction, inadequate immobilisation, distraction, loss of blood supply, and infection. Inadequate reduction of a fracture, regardless of its cause, may be a prime reason for delayed union or nonunion, as it usually leads to instability or poor immobilisation. A poor reduction may also be caused by interposition of soft tissue in the fracture area, which may delay healing.

Nonunion is defined as the cessation of all reparative processes of healing without bony union. Since all of the factors discussed under delayed union usually occur to a more severe degree in nonunion, the differentiation between delayed union and nonunion is often based on radiographic criteria and time. In humans, failure to show any progressive change in the radiographic appearance for at least three months after the period during which regular fracture union would be expected to have occurred is evidence of nonunion. Malunion is defined as healing of the bones in an abnormal position; malunions can be classified as functional or nonfunctional. Functional malunions are usually those with small deviations from the normal axes that do not incapacitate the patient. For the diagnosis of fracture nonunion, a minimum of nine months has to have elapsed since the initial injury, with no signs of healing for the final three months. There are several classification systems for nonunions, but they are most commonly divided into two categories: hypervascular nonunion and avascular nonunion. In hypervascular nonunions, also known as hypertrophic nonunions, the fracture ends are vascular and capable of biological activity; there is evidence of callus formation around the fracture site, thought to be a response to excessive micromotion at the fracture site. Avascular nonunions, also known as atrophic nonunions, are caused by avascularity, or inadequate blood supply to the fracture ends. There is minimal or no callus formation, and the fracture line remains visible. This type of nonunion requires biological enhancement in addition to adequate immobilisation to heal.

Treatment of mandibular fractures aims to achieve bony union and correct occlusion, preserve inferior alveolar (IAN) and mental nerve function, prevent malunion, and attain optimal cosmesis. Rigid plate and screw fixation has the advantage of allowing the patient to return to function without the need for 4-6 weeks of intermaxillary fixation (IMF), but the success of rigid fixation depends upon accurate reduction. When adapting a plate along Champy's line of osteosynthesis in the symphysis region, even with an arch bar applied to the teeth to secure proper occlusion, the bone fragments may still overlap at bony prominences and gaps will be present. To achieve the bone contact needed for healing, various devices and methods have been used to hold the fracture segments together: towel clamps, modified towel clamps, Synthes reduction forceps, orthodontic brackets, Allis forceps, manual reduction, elastic internal traction reduction, bone-holding forceps, the tension wire method, and vacuum splints; without such aids there is often a residual gap and an inability to fix the fracture with a miniplate intraoperatively. Proper alignment and reduction are essential for mastication, speech, and a normal range of oral motion.

Compression during plate fixation has been shown to aid the stability and healing of a fracture site; the primary mechanism is thought to be increased contact of the bony surfaces. Reduction forceps can hold large segments of bone together to increase surface contact while plate fixation is performed. An additional benefit of using reduction forceps is that a single operating surgeon can plate body fractures, because the forceps hold the fracture in reduction while the plates and screws are placed. Reduction gaps of more than 1 mm between fracture segments result in secondary healing, which proceeds via callus formation and increases the risk of nonunion irrespective of the fixation method. Direct bone contact between the fracture segments promotes primary bone healing, which leads to earlier bone regrowth and stability across the fracture site. Gap healing takes place in stable or "quiet" gaps wider than the 200-μm osteonal diameter. Ingrowth of vessels and mesenchymal cells starts after surgery; osteoblasts deposit osteoid on the fragment ends without osteoclastic resorption, and the gaps are filled exclusively with primarily formed, transversely oriented lamellar bone. Replacement is usually completed within 4 to 6 weeks. In the second stage, the transversely oriented bone lamellae are replaced by axially oriented osteons, a process referred to as Haversian remodelling. Clinical experience shows that fractures that are not adequately reduced are at higher risk for malunion, delayed union, nonunion and infection, leading to further patient morbidity.

Studies by Choi et al. using silicone mandibular models have established the optimum position of modified towel clamp-type reduction forceps relative to symphyseal and parasymphyseal fractures. Fractured models were reduced at three different horizontal levels: midway bisecting the mandible, 5 mm above midway, and 5 mm below midway. In addition, engagement holes were tested at distances of 10, 12, 14, and 16 mm from the fracture line. The models were heated to 130°C for 100 minutes and then cooled to room temperature, and the stress patterns were evaluated with a polariscope. Optimal stress patterns (defined as those distributed over the entire fracture site) were noted when the reduction forceps were placed at midway or 5 mm below midway, at least 12 mm from the fracture line for symphyseal or parasymphyseal fractures and at least 16 mm for mandibular body fractures.

Shinohara et al. in 2006 used two modified reduction forceps for symphyseal and parasymphyseal fractures: one was applied at the inferior border and the other in the subapical zone of the anterior mandible, to reduce the lingual cortical bone sufficiently. In other clinical studies, reduction was achieved using a single clamp or forceps in the anterior or posterior region of the mandible.

One study describes two monocortical holes drilled 10 mm on each side of the fracture line (Žerdoner and Žajdela, 1998). A second describes monocortical holes at approximately 12 mm from the fracture line (Kluszynski et al., 2007), midway down the vertical height of the mandible. A third describes either monocortical or bicortical holes depending on difficulties, which are not described in detail; in that study a distance of 5-8 mm from the fracture was chosen (Rogers and Sargent, 2000), at the inferior margin of the mandible.

Taglialatela Scafati et al. (2004) used elastic rubber bands stretched between screws placed on both sides of the fractured parts to reduce mandibular and orbito-maxillary fractures. Orthodontic rubber bands and two self-tapping monocortical titanium screws, 2 mm in diameter and 9-13 mm in length, were used. The heads of the screws protruded about 5 mm, and their axis had to be perpendicular to the fracture line. The technique is similar in concept to other intraoperative methods of reduction used in orthopaedic or maxillofacial surgery, such as the tension band technique or the tension wire method (TWM), in that elastic internal traction (EIT) utilises rubber bands tightened between monocortical screws placed on the fracture fragments.

Vikas and Terrence Lowe, in a 2009 technical note on modification of the elastic internal traction method for temporary inter-fragment reduction prior to internal fixation, described a simple and effective modification of the elastic internal traction method previously described by Scafati et al. The modification utilises 2 mm AO monocortical screws and elastomeric orthodontic chain (EOC) instead of elastic bands: monocortical screws 9-12 mm in length are strategically placed to a depth of 4-5 mm, approximately 7 mm on either side of the fracture.

Based on studies by Smith et al. in 1933, a series of 10 × 1 cm 'turns' of the elastic should resist a displacing force of approximately 30-40 newtons.

Degala and Gupta (2010) used comparable techniques for symphyseal, parasymphyseal and body fractures. Titanium screws 2 mm in diameter and 8 mm in length were tightened at a distance of 10-20 mm from the fracture line, with around 2 mm of screw length left above the bone to engage a 24 G wire loop. However, before applying this technique, they used IMF.

Rogers and Sargent in 2000 modified a standard towel clamp by bending its two ends approximately 10 degrees outward to prevent disengagement from the bone. Kallela et al. in 1996 modified a standard AO reduction forceps by shortening the teeth and making notches at the ends to grasp tightly in the drill holes. Shinohara et al. in 2006 used two modified reduction forceps: one positioned at the inferior border and the other in the neutral subapical zone.

Choi et al. in 2005 included two treatment groups (reduction forceps and IMF) and used a scale of 1 to 3 to assess the accuracy of anatomic reduction on the radiographic image. A score of 1 indicated a poorly reduced fracture requiring a second operation; a score of 2 indicated slight displacement but an acceptable occlusion; and a score of 3 indicated a precise reduction. The reduction forceps group had a higher number of accurate anatomic alignments of the fractures than the IMF group.

New reduction forceps were developed by Choi et al. (2001, 2005) for mandibular angle fractures based on the unique anatomy of the oblique line and body; one end of the forceps was designed for positioning in the fragment medial to the oblique line, and the other end was placed in the distal fragment below the oblique line. The reduction-compression forceps of Scolozzi and Jaques (2008) was designed along the lines of standard orthopaedic atraumatic grasping forceps.

Žerdoner and Žajdela in 1998 used a combination of self-cutting screws and a repositioning forceps with butterfly-shaped prongs: first, two screws are fastened on each side of the fracture line, and then the repositioning forceps is placed over the heads of the screws.

The use of reduction forceps has been known for many years in general trauma surgery, orthopaedic surgery and plastic surgery. In oral and maxillofacial surgery, the dental occlusion was traditionally used to perform and check reduction of mandibular fractures. Notwithstanding this historical background, reduction forceps can be used in mandibular fractures as in any other fracture, as long as there is sufficient space and the fracture surface permits stable placement and withstands the forces created by the forceps.

George concluded by saying that the use of IMF for the management of angle fractures of the mandible is unnecessary provided there is a skilled assistant present to help manually reduce the fracture site for plating.

Other fracture reduction methods, such as traction wire or elastic tension on screws, are simple to use in the area of anterior mandibular fractures. These methods may cause a gap at the lingual side of the fracture as a resultant of the force exerted on the protruding screws (Ellis & Tharanon 1992, Cillo & Ellis 2007). This lingual gap can occur with reduction forceps as well, but because they grab inside the bone, positioning them at least 8-10 mm from the fracture site should prevent it (Žerdoner & Žajdela 1998, Rogers & Sargent 2000, Kluszynski et al. 2015). Choi et al. (2003) even suggested that the tips of repositioning forceps should be placed at least 12 mm from each side of the fracture line for symphyseal and parasymphyseal fractures; in mandibular body fractures, an adequate stress pattern at the lingual side was found at least 16 mm from the fracture line.

Traditional wiring is a potential source of 'needlestick' injury in the contaminated environment of the oral cavity and represents a health risk to surgeons and assistants. Conventional elastic or rubber rings may be difficult to place, and large numbers often need to be applied to prevent displacement of the fragments from the wafer. Such elastic exerts a pull of approximately 250-500 g per 'turn' depending on its specification (De Genova et al., 1985), and multiple 'turns' around anchorage points increase the firmness of retention. It is resilient, and even if displaced by stretching, it tends to return the segments to their correct location in the splint or wafer, whereas wire ties, once pulled or inadequately tightened, become passive and allow free movement. Elastomeric chain is relatively expensive, but the ease, rapidity and flexibility with which it can be applied and retrieved save valuable operating time. It can be cold-sterilised if desired and is designed to retain its physical properties within the oral environment. On removal, unlike wires and elastic rings, which easily break or tear and may be difficult to retrieve from the mouth or wound, it can be recovered in one strip. The force exerted by elastic modules is known to decrease over time (Wong, 1976), and the strength decays by 17-70% (Hershey & Reynolds, 1975; Brooks & Hershey, 1976) over the first 24 h, depending on the precise material and format of the chain and whether it has been pre-stretched (Young & Sandrik, 1979; Brantley et al., 1979).
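Taken together, the per-turn pull and the 24-hour decay figures quoted above bracket the force that remains a day after placement. A minimal arithmetic sketch (the endpoints are simply the quoted extremes, not measured values):

```python
# Rough bracket on elastomeric-chain force after 24 h, using the figures
# quoted above: ~250-500 g of pull per 'turn' and 17-70% strength decay
# over the first day. The endpoints are quoted extremes, not measurements.

def residual_force_g(initial_pull_g: float, decay_fraction: float) -> float:
    """Force left after a given fractional decay."""
    return initial_pull_g * (1 - decay_fraction)

best_case = residual_force_g(500, 0.17)   # strongest chain, least decay
worst_case = residual_force_g(250, 0.70)  # weakest chain, most decay
print(f"{best_case:.0f} g .. {worst_case:.0f} g")
```

The roughly five-fold spread between the two endpoints is one reason the chain's material, format and pre-stretching matter clinically.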

The symphysis, parasymphysis, and mandibular body are differentiated from other regions of the mandible by a ridge of compact cortical bone (the alveolar ridge) located on their cranial aspect that allows for tooth-bearing. This horizontally oriented tooth-bearing portion then becomes vertically oriented to form its articulation with the cranium; the change in orientation occurs at the mandibular angle, beyond which the mandible continues as the ramus and condyle. Along the entire course of the mandible are muscle attachments that place dynamic internal forces on the bone. These muscles can be divided into two primary groups: the muscles of mastication and the suprahyoid muscles. The muscles of mastication include the medial and lateral pterygoids and the temporalis and masseter muscles; together they aid chewing by generating forces along the posterior aspects of the mandible (angle, ramus, coronoid process).

Furthermore, two of the muscles of mastication, the medial pterygoid and masseter, combine to form the pterygomasseteric sling, which attaches at the mandibular angle. Conversely, the suprahyoid group (digastric, stylohyoid, mylohyoid, and geniohyoid) functions in part to depress the anterior mandible by applying forces to the mandibular symphysis, parasymphysis, and a portion of the body. Together, these muscle attachments place dynamic vectors of force on the mandible that, when it is in continuity, allow proper mandibular function but, when it is in discontinuity, as occurs with mandible fractures, can potentially disrupt adequate fracture healing. Studies of the relationship between the timing of surgery and subsequent outcomes have demonstrated no difference in infectious or nonunion complications between treatment within three days of injury and treatment after that point, but did find that complications due to technical errors increased after this time. As a result, the authors commented that if surgery is to commence three or more days after the injury, technically accurate surgery is necessary to overcome factors such as tissue oedema and inflammation. Where a delay in treatment is necessary, consideration should be given to temporary closed fixation to reduce fracture mobility and patient pain.

Treating mandibular fractures involves providing the optimal environment for bony healing to occur: adequate blood supply, immobilisation, and proper alignment of the fracture segments. Plate length is generally chosen to allow the placement of more than one screw on either side of the fracture, to nullify the dynamic forces acting on the mandible. In ideal conditions, three screws are placed on either side of the fracture segments as assurance against inadequate stabilisation, with screws placed at least several millimetres from the fracture site. Proper plate thickness is determined by the forces required to stabilise the fractured bone segments. Options for stabilisation can be divided into load-sharing fixation and load-bearing fixation. Champy identified the regions of the mandible, along the symphysis, parasymphysis, and angle, where monocortical plates alone allow stable fixation; these regions have subsequently been called Champy's lines of tension, with the superior portion of the lines also referred to as the tension band of the mandible.

In 2002 George Dimitroulis proposed postreduction orthopantomograph scoring criteria, with radiographs assessed on a score of 1 to 3. A score of 3 was given to radiologic evidence of an accurate anatomic reduction of the fracture site; a score of 2 was assigned to reduced fractures that were slightly displaced but had a satisfactory occlusion; and the lowest score of 1 was for poorly reduced fractures that required a second operation to correct the poor alignment and unacceptable occlusion.

The assessment of fracture healing is becoming more and more critical because of the new approaches used in traumatology. Fracture healing is a complex biological process that follows specific regenerative patterns and involves changes in the expression of several thousand genes. Although there is still much to be learned before the pathways of bone regeneration are fully understood, the overall course of both the anatomical and the biochemical events has been thoroughly investigated, providing a general understanding of how fracture healing occurs. Following the initial trauma, bone heals by either direct intramembranous healing or indirect fracture healing, which consists of both intramembranous and endochondral bone formation. The most common pathway is indirect healing, since direct bone healing requires an anatomical reduction and rigidly stable conditions, commonly obtained only by open reduction and internal fixation. However, when such conditions are achieved, the direct healing cascade allows the bone structure to regenerate anatomical lamellar bone and the Haversian systems immediately, without any remodelling steps. It is helpful to think of the bone healing process in a stepwise fashion, even though in reality there is considerable overlap among the stages. In general, the process can be divided into an initial hematoma formation step, followed by inflammation, proliferation and differentiation, and eventually ossification and remodelling. Shortly after a fracture occurs, vascular injury to the periosteum, endosteum, and surrounding soft tissue causes hypoperfusion in the adjacent area. The coagulation cascade is activated, leading to the formation of a hematoma rich in platelets and macrophages. Cytokines from these macrophages initiate an inflammatory response, including increased blood flow and vascular permeability at the fracture site. Mechanical and molecular signals dictate what happens subsequently.
Fracture healing can occur either through direct intramembranous healing or, more commonly, through indirect or secondary healing. The significant difference between these two pathways is that direct healing requires absolute stability and a lack of interfragmentary motion, whereas in secondary healing the presence of interfragmentary motion at the fracture site creates relative stability. In secondary healing, this mechanical stimulation, in addition to the activity of the inflammatory molecules, leads to the formation of fracture callus followed by woven bone, which is eventually remodelled to lamellar bone. At a molecular level, the secretion of numerous cytokines and proinflammatory factors coordinates these complex pathways. Tumour necrosis factor-α (TNF-α), interleukin-1 (IL-1), IL-6, IL-11, and IL-18 are responsible for the initial inflammatory response. Revascularisation, an essential component of bone healing, is achieved through different molecular pathways requiring either angiopoietin or vascular endothelial growth factor (VEGF). VEGF's importance in the process of bone repair has been shown in many studies involving animal models. As the collagen matrix is invaded by blood vessels, mineralisation of the soft callus occurs through the activity of osteoblasts, resulting in hard callus, which is remodelled into lamellar bone. Inhibition of angiogenesis in rats with closed femoral fractures completely prevented healing and resulted in atrophic non-unions.

If the gap between the bone ends is less than 0.01 mm and interfragmentary strain is less than 2%, the fracture unites by so-called contact healing. Under these conditions, cutting cones are formed at the ends of the osteons closest to the fracture site. The tips of the cutting cones consist of osteoclasts which cross the fracture line, generating longitudinal cavities at a rate of 50-100 μm/day. The primary bone structure is then gradually replaced by longitudinal revascularised osteons carrying osteoprogenitor cells, which differentiate into osteoblasts and produce lamellar bone on each surface of the gap. This lamellar bone, however, is laid down perpendicular to the long axis and is mechanically weak. The initial process takes approximately three to eight weeks, after which a secondary remodelling resembling the contact healing cascade, with cutting cones, takes place. Although not as extensive as endochondral remodelling, this phase is necessary to fully restore the anatomical and biomechanical properties of the bone. Direct bone healing was first described on radiographs after complete anatomical repositioning and stable fixation; its features are a lack of callus formation and the disappearance of the fracture lines. Danis (1949) described this as soudure autogène (autogenous welding). Callus-free, direct bone healing requires what is often called "stability by interfragmentary compression" (Steinemann, 1983).
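The quantitative thresholds quoted in this passage can be collected into a small decision sketch; the function, its category labels, and the treatment of intermediate gaps are illustrative assumptions, not clinical criteria:

```python
# Sketch of the direct-healing thresholds quoted in the text:
# contact healing needs a gap < 0.01 mm with interfragmentary strain < 2%;
# gap healing occurs in stable gaps wider than the ~200-um osteonal diameter.
# Labels and the handling of intermediate cases are illustrative assumptions.

CONTACT_GAP_MM = 0.01       # upper gap limit for contact healing
STRAIN_LIMIT = 0.02         # 2% interfragmentary strain
OSTEONAL_DIAMETER_MM = 0.2  # ~200-um osteonal diameter

def direct_healing_mode(gap_mm: float, strain: float) -> str:
    if strain >= STRAIN_LIMIT:
        return "secondary (callus) healing likely"  # stability insufficient
    if gap_mm < CONTACT_GAP_MM:
        return "contact healing"
    if gap_mm > OSTEONAL_DIAMETER_MM:
        return "gap healing"
    return "intermediate gap"  # between the two quoted thresholds

print(direct_healing_mode(0.005, 0.01))  # contact healing
print(direct_healing_mode(0.3, 0.01))    # gap healing
```

The point of the sketch is simply that stability (strain) gates the direct pathways, and gap width then selects between contact and gap healing.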

Contact healing of the bone means healing of the fracture line after stable anatomical repositioning, with perfect interfragmentary contact and without the possibility of any cellular or vascular ingrowth. Cutting cones can cross this interface from one fragment to the other by remodelling the Haversian canals; Haversian canal remodelling is the primary mechanism for restoration of the internal architecture of compact bone. Contact healing takes place over the whole fracture line after perfect anatomical reduction, osteosynthesis, and mechanical rest, and is only seen directly beneath the miniplate. Gap healing takes place in stable or "quiet" gaps wider than the 200-μm osteonal diameter. Ingrowth of vessels and mesenchymal cells starts after surgery; osteoblasts deposit osteoid on the fragment ends without osteoclastic resorption, and the gaps are filled exclusively with primarily formed, transversely oriented lamellar bone. Replacement is usually completed within 4 to 6 weeks. In the second stage, the transversely oriented bone lamellae are replaced by axially oriented osteons, a process referred to as Haversian remodelling. After ten weeks the fracture is replaced by newly reconstructed cortical bone. Gap healing is seen, for example, on the inner side of the mandible after miniplate osteosynthesis, and it plays a vital role in direct bone healing: gaps are far more extensive than contact areas, while contact areas are essential for stabilisation by interfragmentary friction and protect the gaps against deformation. Gap healing is seen far from the plate.

Ultrasound is unable to penetrate cortical bone, but there is evidence that it can detect callus formation before radiographic changes are visible. Moed conducted a larger prospective study which showed that ultrasound findings at 6 and 9 weeks have a 97% positive predictive value (95% CI: 0.9-1) and 100% sensitivity in determining fracture healing in patients with acute tibial fractures treated with locked intramedullary nailing [52]. Time to determination of healing was also shorter with ultrasound (6.5 weeks) compared to a nineteen-week average for radiographic data (P < 0.001). Ultrasound has additional advantages over other imaging modalities, including lower cost, no ionising radiation exposure, and being noninvasive. However, its use and the interpretation of its findings are thought to be highly dependent on operator expertise, and thick layers of soft tissue can obscure an adequate view of bones. CT scans have shown some advantages over radiographs in the early detection of fracture healing in radius fractures; a limitation of CT is beam-hardening artefact from internal and external fixation. One author concluded that, when used to evaluate hindfoot arthrodeses, plain radiographs may be misleading, that CT provides a more accurate assessment of healing, and that a new system had been devised to quantitate the fusion mass. In seven cases MDCT led to operative treatment while on X-ray the treatment plan was undecided. Bhattacharyya et al. examined the evaluation of tibial fracture union by CT scan and determined an ICC of 0.89, which indicates excellent agreement. These studies suggest that CT has high inter-observer reliability, better than that of plain radiography.
According to the authors, the interobserver reliability of MDCT is not higher than that of conventional radiographs for determining non-union; however, MDCT did lead to a more invasive approach in equivocal cases. MDCT provides superior diagnostic accuracy to panoramic radiography and has been shown to characterise mandibular fracture locations with greater certainty. Because of its high soft tissue contrast, MDCT may reveal the relation of a bone fragment to adjacent muscle and the presence of foreign bodies in traumatic injury, so in cases of severe soft tissue injury an MDCT is mandatory. A 33% CT fusion ratio threshold could accurately discriminate between clinical stability and instability. By 36 weeks, healing was essentially complete according to both modalities, although there were still small gaps in the callus detectable on computed tomography but not on plain films; the authors concluded that computed tomography may be of value in the evaluation of fractures of long bones in those cases in which clinical examination and plain radiographs fail to give adequate information on the status of healing. A study in 2007 used a PET scan with fluoride ion to assess bone healing in rats with femur fractures. Fluoride ion deposits in regions of bone with high osteoblastic activity and a high rate of turnover, such as endosteal and periosteal surfaces. The authors concluded that fluoride ion PET could potentially play an essential part in the assessment of fracture healing, given its ability to quantitatively monitor metabolic activity and provide an objective evaluation of fracture repair. 18F-fluoride PET imaging, an indicator of osteoblastic activity in vivo, can identify fracture nonunions at an early time point and may have a role in the longitudinal assessment of fracture healing. PET scans using 18F-FDG were not helpful in differentiating metabolic activity between successful and delayed bone healing. Moghaddam et al.
conducted a prospective cohort study to assess changes in serum concentrations of several serologic markers in normal and delayed fracture healing. They were able to show significantly lower levels of tartrate-resistant acid phosphatase 5b (TRACP 5b) and C-terminal cross-linking telopeptide of type I collagen (CTX) in patients who developed non-unions compared to patients with normal healing. TRACP 5b is a direct marker of osteoclastic activity and bone resorption, while CTX is an indirect measure of osteoclastic activity, reflecting collagen degradation. Secretion of many of these cytokines and biologic markers is also influenced by other factors; for example, systemic levels of TGF-β were found to vary based on smoking status, age, gender, diabetes mellitus, and chronic alcohol abuse at different time points. On plain radiography, it is difficult to distinguish between desired callus formation and pseudoarthrosis. CT is therefore an essential objective diagnostic tool for determining healing status. Computed tomography (CT) is superior to plain radiography in assessing union and visualising the fracture in the presence of abundant callus or an overlying cast. There have been studies testing the accuracy and efficacy of computed tomography in the assessment of fracture union in clinical settings. Bhattacharyya et al. showed that computed tomography has 100% sensitivity for detecting nonunion; however, it is limited by a low specificity of 62%. Three of the 35 patients in the study were misdiagnosed as tibial nonunion based on CT scan findings but proved to be healed when the fracture was visualised during surgical intervention. Seventy-seven studies involved the use of clinical criteria to define fracture union. The most common clinical standards were the absence of pain or tenderness (49%), the lack of pain or tenderness on palpation or physical examination (39%), and the ability to bear weight.
The most common radiographic definitions of fracture healing in studies involving plain radiographs were bridging of the fracture site by callus, trabeculae, or bone (53%); bridging of the fracture site at three cortices (27%); and obliteration of the fracture line or cortical continuity (18%). The criteria most commonly reported for radiographic assessment of union varied with the location of the fracture. Two studies did not involve the use of plain radiographs to assess fracture healing. In the study in which computed tomography was used, union was defined as bridging of >25% of the cross-sectional area at the fracture site. In the study in which ultrasound was used, union was defined as the complete disappearance of the intramedullary nail on ultrasound imaging at six weeks, or progressive disappearance of the intramedullary nail with the formation of periosteal callus between six and nine weeks following treatment.
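The CT-based criterion above (union when bridging callus exceeds 25% of the cross-sectional area at the fracture site) amounts to a single ratio test. A hedged sketch, with hypothetical area measurements:

```python
# Sketch of the CT union criterion described above: union is declared when
# bridging callus exceeds 25% of the cross-sectional area at the fracture site.
# The area values below are hypothetical, for illustration only.

def ct_union(bridged_area_mm2, total_area_mm2, threshold=0.25):
    """Return True when the bridged fraction of the cross-section exceeds the threshold."""
    return (bridged_area_mm2 / total_area_mm2) > threshold

print(ct_union(80.0, 300.0))  # ~26.7% bridged -> True
print(ct_union(60.0, 300.0))  # 20% bridged   -> False
```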

Plain radiography is the most common way in which fracture union is assessed, and a substantial number of studies defined fracture union by radiographic parameters alone. Hammer et al. combined cortical continuity, the loss of a visible fracture line, and callus size in a scale for assessing fracture healing radiographically, but found conventional radiographic examination difficult to correlate with fracture stability; it could not conclusively determine the state of union. In animal models, cortical continuity is a good predictor of fracture torsional strength, whereas the callus area is not. Furthermore, clinicians cannot reliably determine the strength of a healing fracture from a single set of radiographs and are unable to rank radiographs of healing fractures in order of strength. Therefore, we rely heavily on a radiographic method without proven validity for predicting bone strength in the assessment of fracture union.

Computed tomography eliminates the problem of overlapping structures, and its axial sections allow bone bridging to be imaged directly. In fractures treated with external fixators, CT can determine the increasing amount of callus formation, which indicates favourable fracture healing. In this study, CT was correlated with fractionmetry in the assessment of fracture healing of tibial shaft fractures. The amount of callus was serially quantified and correlated with fractionmetry. After axial imaging, two equal slices at two points of the fracture were analysed 1, 6, 12, and 18 weeks after stabilisation. The principal fracture line was selected for longitudinal measurement because maximum callus formation was expected at that level. A rectangular region of interest was set within 200-2000 and 700-2000 HU. The callus was measured automatically after marking the area of interest. Multiple measurements after repositioning the limb were performed to evaluate the short-term precision of the method. New formation of callus on CT after 12 weeks indicated stability of the fracture healing. Although the amount of callus is only an indirect indicator of fracture union, CT was able to assess fracture stability. The ROC analysis showed that an increase of >50% in callus formation after 12 weeks indicated stability with a sensitivity of 100% and a specificity of 83%.
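The quantification scheme just described can be sketched as follows. The voxel values, region of interest, and function names are assumptions for illustration; only the 200-2000 HU attenuation window and the >50%-increase stability rule come from the study:

```python
# Minimal sketch of the callus-quantification idea: count voxels inside an
# attenuation window (here 200-2000 HU, one of the windows used in the study)
# within a region of interest, then flag the fracture as stable when callus
# has increased by more than 50% between two follow-up scans.
# All voxel data below are hypothetical.

HU_MIN, HU_MAX = 200, 2000  # attenuation window for callus

def callus_voxels(roi_hu_values):
    """Count voxels in the region of interest that fall in the callus window."""
    return sum(1 for hu in roi_hu_values if HU_MIN <= hu <= HU_MAX)

def stable_by_callus_growth(roi_early, roi_late, threshold=0.50):
    """Apply the >50%-increase criterion derived from the ROC analysis."""
    early, late = callus_voxels(roi_early), callus_voxels(roi_late)
    if early == 0:
        return late > 0  # any newly formed callus counts as growth
    return (late - early) / early > threshold

# Hypothetical ROIs: callus voxel count grows from 40 to 90 (+125%).
week1 = [150] * 60 + [500] * 40    # mostly soft tissue, some callus
week12 = [150] * 10 + [800] * 90   # callus has spread through the ROI
print(stable_by_callus_growth(week1, week12))  # -> True
```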

One hundred and twenty-three studies proved to be eligible. Union was defined by a combination of clinical and radiographic criteria in 62% of the studies, by radiographic criteria only in 37%, and by clinical criteria alone in just 1%. Twelve different approaches were used to define fracture union clinically, and the most common criterion was the absence of pain or tenderness at the fracture site during weight-bearing. In studies involving the use of plain radiographs, eleven different approaches were used to define fracture union, and the most common criterion was bridging of the fracture site.

Several factors predispose a patient to nonunion, including mechanical instability, loss of blood supply, and infection. Bone production has been estimated to occur within 15 weeks after osteotomy; complete bone healing may take 3-6 months or even longer. The reliability of conventional radiographs for the determination of fracture healing has been questioned in previous studies. CT has been used for the monitoring of bone production and fracture healing, and its advantages over conventional radiography in early fracture healing have been reported. To avoid stairstep artefacts in CT, isotropic or near-isotropic resolution is necessary, and this has become attainable with the introduction of MDCT scanners. Experimental studies have shown that MDCT reduces stairstep artefacts on multiplanar reconstruction when compared with single-detector CT. From these data, the authors reconstructed thin axial slices with 50% overlap to yield near-isotropic voxels (almost identical voxel lengths in the x-, y-, and z-axes) for further processing. This allows 2D and 3D reconstructions with a resolution similar to the source images, which form the basis of good-quality multiplanar reconstructions (MPRs). MPRs were reconstructed from contiguous axial slices ranging from 1.5 to 3 mm thick, depending on the anatomic region, orthogonal to the fracture or arthrodesis plane. Fusion of osseous structures was scored with a semiquantitative approach for both techniques (MDCT, digital radiography) as complete (c), partial (p), or no bone bridging (0). Definitions of fusion were as follows: complete, bone bridges with no gap; partial, some bone bridges with gaps between; and no bridging, no osseous bridges. Two musculoskeletal radiologists assessed all MDCT examinations and digital radiographs in a consensus interpretation.

Conventional tomography has been used for many years for the evaluation of the postoperative spine after posterior spinal arthrodesis. Thin-section tomography had good correlation with surgery in the diagnosis of pseudarthrosis after fusion for scoliosis and was superior to anteroposterior, lateral, and oblique radiography. However, conventional tomography also suffers from certain disadvantages. The standard linear movement is mechanically easy to produce but gives rise to rather thick tomographic sections and a short blurring path (the length of the tomographic section). If thinner sections are required, more complex movements are needed. Because conventional tomography does not entirely blur out all distracting structures, the inherent lack of sharpness of the conventional tomographic image can make the assessment of bone bridges problematic. Thinner sections of conventional tomography, in particular, suffer from greater background blur. In dental radiology the technique is called orthopantomography and is still widely used, although for practical reasons other conventional tomographic methods have mostly been replaced by CT, and the commercial availability of conventional tomography scanners has decreased substantially.

CT eliminates the blurring problem of conventional tomography and increases the perceptibility of fracture healing. MDCT has the advantage that the X-ray beam passes through the whole volume of the object in a short time and, when isotropic or near-isotropic resolution is used, volumetric imaging with the reconstruction of arbitrary MPRs is possible. The CT technique also has an essential impact on the severity of artefacts, with high milliampere-second and high peak kilovoltage settings leading to a reduction of artefacts. With MDCT and low pitches, a high tube current is achieved, which is the basis for good-quality MPRs. With 16-MDCT scanners, the trend is first to reconstruct an overlapping secondary raw data set and then to obtain MPRs of axial, coronal, or arbitrarily angulated sections with a predefined section width. Bone bridges are high-contrast objects and are reliably detected on 1.5- to 3-mm-thick MPRs, depending on the anatomic region, with thicker MPRs preferable for the lumbar spine and somewhat thinner MPRs superior for the hand region.

The use of computed tomography (CT) scanning technology improves anatomical visualisation by offering three-dimensional reconstructions of bony architecture and has contributed to the assessment of healing in certain fractures. However, CT scans and plain radiographs detect mineralised bone formation, which is the late manifestation of the fracture healing process.

Moreover, CT scans demonstrate low specificity in the diagnosis of fracture nonunions in long bones.

MRI has not been useful in evaluating delayed fracture healing in the long bones. Scintigraphic studies with 99mTc-labeled compounds have also been used to assess carpal bones; however, multiple studies have demonstrated no significant differences in tracer uptake between tibia fractures that usually heal and those that form nonunions.

In our study, 48 patients were divided equally into two groups, group A (study group) and group B (control group), based on reduction method, to compare the accuracy of reduction and bone healing of mandible fractures using elastic guided reduction versus bone reduction forceps. Both groups were evaluated based on sex; type of mandible fracture (confined or non-confined); intermaxillary fixation method; type of reduction method used; postoperative OPG scores; CT scan assessment scores after 6 weeks for the lingual cortex, buccal cortex, and medullary bone; fusion percentage calculated from the CT scan; and development of any late postoperative complication.
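As an illustration of how a CT fusion percentage might be derived from site-wise scores, here is a hypothetical sketch. The 0/1/2-per-site rubric and the site names are assumptions chosen for illustration, not the study's published protocol:

```python
# Hypothetical illustration (not the study's exact rubric): score each CT site
# -- lingual cortex, buccal cortex, medullary bone -- as 0 (no bridging),
# 1 (partial), or 2 (complete), then express fusion as a percentage of the
# maximum possible score.

SITES = ("lingual", "buccal", "medullary")

def fusion_percentage(scores):
    """scores: dict mapping each site to 0, 1, or 2."""
    total = sum(scores[site] for site in SITES)
    return 100.0 * total / (2 * len(SITES))

# Example patient: complete lingual and medullary bridging, partial buccal.
patient = {"lingual": 2, "buccal": 1, "medullary": 2}
print(round(fusion_percentage(patient), 1))  # -> 83.3
```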

Based on sex, fracture type, intermaxillary fixation method, late postoperative complications, postoperative OPG assessment scores, CT scan assessment scores, and fusion percentage, the results were not significant (P < 0.5). However, based on confined versus non-confined fracture type, the results were significant (P = 0.011), which supports the use of bone reduction forceps for non-confined fractures.


Biological development

Biological Beginnings:

Each human cell has a nucleus which contains chromosomes made up of deoxyribonucleic acid, or DNA. DNA contains the genetic information, or genes, that are used to make a human being. All typical cells in a human body have 46 chromosomes arranged in 23 pairs, with the exception of the egg and sperm. During cell reproduction, or mitosis, the cell's nucleus duplicates itself and the cell divides, forming two new cells. Meiosis is a different type of cell division in which eggs and sperm, or gametes, are formed. During meiosis, a cell duplicates its chromosomes but then divides twice, resulting in cells with 23 unpaired chromosomes. During fertilization, an egg and sperm combine to form a single cell, the zygote, with information from both the mother and the father.

The combination of the unpaired chromosomes leads to variability in the population, because no two people are exactly alike. A person's genetic make-up is called their genotype; this is the basis for who you are on a cellular level. A person's phenotype consists of their observable characteristics. Each genotype can lead to a variety of phenotypes. There are dominant and recessive genes contained in the genetic material that we acquire. For example, brown eyes are dominant over blue eyes, so if the genetic code for both is present, brown eyes will prevail.
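The dominant/recessive eye-color example can be sketched as a simple Punnett square, treating brown (B) as dominant over blue (b). This is a classic single-gene simplification (real eye-color inheritance involves several genes):

```python
# Punnett square for the eye-color example in the text: brown (B) is dominant
# over blue (b), so any genotype containing B produces brown eyes.
from itertools import product

def punnett(parent1, parent2):
    """Return the four offspring genotypes from two 2-allele parent genotypes."""
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

def phenotype(genotype):
    """A single dominant allele B is enough for the brown phenotype."""
    return "brown" if "B" in genotype else "blue"

# Two brown-eyed heterozygous (Bb) parents can still have a blue-eyed child.
offspring = punnett("Bb", "Bb")
print([phenotype(g) for g in offspring])  # -> ['brown', 'brown', 'brown', 'blue']
```

On average one offspring in four from two Bb parents is bb, which is why blue eyes can reappear after skipping a generation.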

Abnormalities can also be linked to the chromosomes and genes that are inherited from your parents; some examples are Down syndrome, cystic fibrosis, and spina bifida. These occur when chromosomes or genes are missing, mutated, or damaged.

Genetically, I received my height and brown eyes from my mother, and my brown hair from both my parents. As far as I know, I don’t have any abnormalities linked to my chromosomes or genes that were passed down during my conception.


The prenatal stage starts at conception, lasts approximately 266 days, and consists of three different periods: germinal, embryonic and fetal. This is an amazingly complex time that allows a single cell composed of information from both the mother and the father to create a new human being.

The first period of the prenatal stage occurs in the first two weeks after conception and is called the germinal period. During this time the zygote (or fertilized egg) begins its cell divisions, through mitosis, from a single cell to a blastocyst, which will eventually develop into the embryo and placenta. The germinal period ends when the blastocyst implants into the uterine wall.

The second period of prenatal development, which occurs in weeks two through eight after conception, is called the embryonic period. During this time, the blastocyst from the first stage develops into the embryo. Within the embryo, three layers of cells form: the endoderm, which will develop into the digestive and respiratory systems; the ectoderm, which will become the nervous system, sensory receptors, and skin parts; and the mesoderm, which will become the circulatory system, bones, muscles, excretory system, and reproductive system. Organs also begin to form in this stage. During this stage, the embryo's development is very susceptible to outside influences from the mother, such as alcohol consumption and cigarette usage.

The fetal period is the final period of the prenatal stage, lasting from two months post conception until birth; it is the longest period of the prenatal stage. During this period, continued growth and development occur. At approximately 26 weeks post conception, the fetus is considered viable, or able to survive outside the mother's womb. If birth occurred at 26 weeks, the baby would most likely need help breathing because the lungs are not fully mature, but all organ systems are developed and can function outside of the mother.

Brain development during the prenatal period is also very complex and, if you think about it, an amazing thing. When a baby is born, it has 100 billion neurons, which handle information processing. There are four phases of brain development during the prenatal period: formation of the neural tube, neurogenesis, neural migration, and neural connectivity.

During the prenatal period, a wide variety of tests can be performed to monitor the development of the fetus. The extent to which testing is used depends on the doctor's recommendations as well as the mother's age, health, and potential genetic risk factors. One common test is the ultrasound, a non-invasive test used to monitor the growth of the fetus, look at structural development, and determine the sex of the baby. Other tests that are available, but are more invasive and riskier for both the fetus and the mother, include chorionic villus sampling, amniocentesis, fetal MRI, and maternal blood screening.

The mother's womb is designed to protect the fetus during development. However, if a mother doesn't take care of herself, it can have a negative impact on the developing fetus. A woman should avoid alcohol, nicotine, caffeine, drugs, and other teratogens, as well as x-rays and certain environmental pollutants during the pregnancy. She should also have good nutrition during the pregnancy, as the fetus relies solely on the mother for its nutrients during development. Along with good nutrition, extra vitamins are also recommended during pregnancy, the main one being folic acid. Emotional health is also very important: higher degrees of anxiety and stress can be harmful to the fetus and have long-term effects on the child.

The birth of a child marks the transition from the prenatal to the post-partum stage, which lasts approximately 6 weeks, or until a mother's body is back to its pre-pregnancy state. During this time a woman may be sleep deprived due to the demands of the baby and trying to take care of any other family members. There are also hormonal changes that a woman experiences, as well as the uterus returning to its normal size. Emotional adjustments occur during this stage too. It is common for women to experience the post-partum blues, in which they feel depressed; these feelings can come and go, and usually disappear within a couple of weeks. If major depression persists beyond this time, it is referred to as postpartum depression, and it is important for a woman to get treatment to protect herself and her baby.

My prenatal development and delivery were fairly uneventful for my mother. The only complication during her pregnancy was low iron levels, which would cause her to pass out; once she started on iron pills, this problem was eliminated. Since her pregnancy was in the early 1970s, it wasn't common for any testing or ultrasounds to occur unless there were major complications. As my mom said, you get pregnant and have a baby. After I was born, my mom said that she had no complications from post-partum depression or the baby blues.


Infancy is the period of time between birth and two years of age. During this time, extraordinary growth and development occur, following a cephalocaudal pattern (top down) and a proximodistal pattern (center of body to extremities). A baby can see before it speaks and move its arms before its fingers. An infant's height increases by approximately 40 percent by the age of 1. By the age of 2, a child is nearly one-fifth of its adult weight and half its adult height. Infants require a great deal of sleep, averaging 12.8 hours a day in this period. The sleep an infant gets can have an impact on their cognitive functions later in life, such as improved executive function (good sleep) or language delays (poor sleep).
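The growth figures above imply a familiar back-of-the-envelope estimate (a rule of thumb, not a clinical tool): since height at age 2 is about half of adult height, doubling it gives a rough adult-height prediction. The sample measurements below are hypothetical:

```python
# Rough growth arithmetic based on the figures in the text:
#   - height increases ~40% in the first year
#   - height at age 2 is ~half of adult height
# This is a rule-of-thumb illustration, not a clinical growth model.

def height_at_age_1(birth_length_cm):
    """Estimate height at age 1 from birth length (+40% in the first year)."""
    return birth_length_cm * 1.40

def estimated_adult_height(height_at_age_2_cm):
    """Estimate adult height by doubling height at age 2."""
    return 2 * height_at_age_2_cm

print(round(height_at_age_1(50.0), 1))       # 50 cm at birth -> 70.0 cm at age 1
print(estimated_adult_height(87.0))          # 87 cm at age 2 -> 174.0 cm as an adult
```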

Proper nutrition during this period is also imperative for infant development. Breast feeding an infant exclusively during the first six months of life provides many benefits to both the infant and the mother including appropriate weight gain for the infant and a reduction in ovarian cancer for the mother. However, both breast feeding and bottle feeding are appropriate options for the baby. As the infant gets older, appropriate amounts of fruits and vegetables are important for development as well as limiting junk food.

Motor skills development is thought to follow the dynamic systems theory, in which the infant assembles skills based on perceptions and actions. For example, if an infant wants a toy, he needs to learn how to reach for that toy to grasp it. An infant is born with reflexes, which are required for adapting to the environment before anything is learned, such as the rooting and sucking reflexes for eating. Some of these reflexes are specific to this age; some are permanent throughout life, such as blinking of the eyes. Gross motor skills are the next major skills that an infant develops. These involve the large muscle groups and include holding their head up, sitting, standing, and pulling themselves up on furniture. In the first year of life, motor skills give the infant independence, while the second year is key to honing the skills they have learned. Fine motor skills develop secondary to gross motor skills. These include activities such as grasping a spoon and picking up food off of their high-chair tray.

Infant senses are not fully developed during the prenatal period. Visual acuity comparable to an adult's occurs by about 6 months of age. A fetus can hear in the womb but is unable to distinguish loudness and pitch, which is developed during infancy. Other senses are present, such as taste and smell, but preferences are developed throughout infancy.

Jean Piaget's theory of cognitive development is one that is widely used. This theory stresses that children construct their own understanding of their surroundings, instead of information just being given to them. The first stage of Piaget's theory is the sensorimotor stage, in which infants use their senses to coordinate with the motor skills they are developing. Some research suggests that Piaget's theories may need to be modified. For example, Elizabeth Spelke endorses a core knowledge approach, in which she believes that infants are born with some innate knowledge systems in order to navigate the world into which they are born.

Language development also begins during this stage, and all infants follow a similar pattern. The first sounds from birth are crying, cooing, and babbling, which are all forms of language. First words are usually spoken by about 13 months, with children usually speaking two-word sentences by about two years. Language skills can be influenced by both biological and environmental factors in the infant.

An infant displays emotion very early in life. In the first six months you can see surprise, joy, anger, sadness, and fear; later in infancy, you will also see jealousy, empathy, embarrassment, pride, shame, and guilt. The later emotions require thought, which is why they don't develop until after the age of 1. Crying can indicate three different things in an infant: the basic cry (typically related to hunger), the anger cry, and the pain cry. A baby's smile can also mean different things, such as a reflexive smile or a social smile. Fear is an emotion that is seen early in a baby's life; one example that is often talked about is "stranger danger," or separation protest.

Chess and Thomas proposed three classifications of child temperament: the easy child, the difficult child, and the slow-to-warm-up child. These temperaments can be influenced by biology, gender, culture, and parenting styles. The remaining personality traits developed in this period include trust, a developing sense of self, and independence. Erik Erikson's first stage of development, trust vs. mistrust, occurs within the first year of life; the concept of trust vs. mistrust is seen throughout the development of a person and is not limited to this age group. The second year of life corresponds to Erikson's stage of autonomy vs. shame and doubt. As infants develop their skills, they need to be able to exercise them independently, or feelings of shame and doubt develop. The development of autonomy during infancy and the toddler years can lead to greater autonomy during the adolescent years.

Social interactions occur with infants as early as 2 months of age, when they learn to recognize facial expressions of their caregivers. They show interest in other infants as early as 6 months of age, but this interest increases greatly as they reach their 2nd birthday. Locomotion plays a big part in this interaction, allowing the child to independently explore their surroundings and others who may be around them. There are several theories of attachment. Freud believed attachment is based on oral fulfillment, typically from the mother who feeds them. Harlow, based on his experiment with wire monkeys, said that attachment is based on the comfort provided. Erikson's theory goes back to trust vs. mistrust, discussed earlier.

As a new baby is brought into a family, the dynamic of the household changes. There is a rebalancing of social, parental, and career responsibilities, and the freedom that was had prior to the baby is no longer there. Parents need to decide whether a parent stays home to take care of the child or the child is placed into a daycare setting. Parental leave allows a parent to stay home with their child for a period of time after the birth, but then requires the child to be placed in some type of child care setting. Unfortunately, the quality of child care varies greatly; typically, the higher the quality, the higher the price tag. A parent needs to be an advocate for their child and monitor the quality of care the child is receiving, no matter the setting. Little effect has been shown on children who are placed in child care instead of being cared for by a full-time parent.

As an infant, I was a bottle-fed baby. My mother was able to be home with me full time, so I was not exposed to outside childcare settings. Unfortunately for my parents, I was very colicky until I was about 6 weeks old. This was very stressful for my parents as they were adjusting to life as a family with a new baby. After the colic ended, I was a very happy, easy baby when I wasn’t sick. I developed febrile seizures about the age of 7 months and they lasted until about 2 years when I was put on phenobarbital to control them. I talked and walked at a very young age (~9 months). I was very trusting of everyone and had no attachment issues. I was happy to play by myself if no one was around, but if company was over, my parents said I always wanted to be in the middle of the action, I was especially fond of adult interactions.

Early childhood:

The next developmental stage is early childhood, which lasts from around the ages of 3-5. During this stage, height and weight gains slow from the infancy stage, but a child still grows about 2 ½ inches and gains 5-7 pounds per year. The brain continues to develop by combining the maturation of the brain with external experiences. The size of the brain doesn't increase dramatically during this or subsequent periods, but the local patterns within the brain do change. The most rapid growth occurs in the prefrontal cortex, which is key in planning and organization as well as paying attention to new tasks. The growth during this phase is caused by the increase in the number and size of dendrites, as well as the myelination that continues during this stage.

Gross motor skills continue to increase, with children being able to walk easily as well as beginning to hop, skip, and climb. Fine motor skills continue to improve as well, with children being able to build towers of blocks, do puzzles, or write their name.

Nutrition is an important aspect of early childhood. Obesity is a growing health problem at this age: children are being fed diets that are high in fat and low in nutritional value, and they are eating out more than they have historically. Parents need to focus on better nutrition and more exercise for their children. Childhood obesity has a strong correlation with obesity later in life.

Piaget's preoperational stage, lasting from age 2 to 7, is the second stage in his theory of development. During this stage, children begin to represent things with words, images, and drawings. They are egocentric and hold magical beliefs. This stage is divided into the symbolic function substage (ages 2-4) and the intuitive thought substage (ages 4-7). In the symbolic function substage, the child is able to scribble designs that represent objects and can engage in pretend play; they are limited in this substage by egocentrism and animism. In the intuitive thought substage, the child begins to use primitive reasoning and is curious. During this time, memory increases, as does the ability to sustain attention.

Language development during this phase is substantial. A child goes from two-word utterances, to multiple-word combinations, to complex sentences. They begin to understand the phonology and morphology of language, and they start to apply the rules of syntax and semantics. The foundation for literacy also begins during this stage; using books with preschoolers provides a solid foundation on which the rest of their life's successes can be built.

There are many early childhood education options available to parents. One option is the child-centered kindergarten, which focuses on the whole child. The Montessori approach allows children more freedom to explore, with the teacher acting as a facilitator rather than an instructor. There are also government-funded programs such as Project Head Start, available to low-income families to give their children the experience they need before starting elementary school.

Erikson's stage of development for early childhood is initiative vs. guilt. In this stage, the child has begun to develop an understanding of who they are, but also begins to discover who they will become. Usually children of this age describe themselves in concrete terms, but some also begin to use logic and emotional descriptors. Children also begin to perceive others in terms of psychological traits. During this stage, children become more aware of their own emotions, understand others' emotions and how they relate to them, and begin to regulate their emotions.

Moral development also begins during this stage. Freud describes the child developing the superego, the moral element of personality, during this stage. Piaget said children go through two distinct stages of moral reasoning: 1) heteronomous morality and 2) autonomous morality. In the first, the child thinks that rules are unchangeable and judges an action by its consequences, not its intention. The autonomous thinker considers the intention as well as the consequence.

Gender identity and roles begin to play a factor during this stage. Social influences on gender roles provide a basis for how children think. This can come through imitation of what they see their parents doing or through observation of what they see around them. Parental and peer influences on modeled behavior are apparent. Group size, age, interaction in same-sex groups, and gender composition are all important aspects of peer relations and influences.

Parenting styles vary widely. Diana Baumrind describes four parenting styles in our book: authoritarian, authoritative, neglectful, and indulgent. She shows a correlation between the different parenting styles and behaviors in children.

Play is important in the child’s cognitive and socioemotional development. Piaget and Vygotsky both considered play to be the child’s work. It allows a child to learn new skills in a relaxed way. Make-believe play is an excellent way for children to increase their cognitive abilities, including creative thought. There are many ways a child can play, including sensorimotor and practice play, pretense/symbolic play, constructive play, and games. Screen time is becoming more of a concern in today’s world. Screens can be good for teaching, but can also be distracting and disruptive if screen time is not limited.

As a young child, I was very curious about things and loved to play pretend. I attended preschool for two years, which aided in my cognitive development. My parents said I was able to read and do age-advanced puzzles by the time I was 3. I was able to regulate my emotions and understand the emotions of others. My parents utilized an authoritarian style of discipline when I was younger; being the first child, they wanted their kids to be perfect. This relaxed as my siblings came along and as we got older.

Middle/late childhood:

During this period, children maintain slow, consistent physical growth. They grow 2-3 inches per year until the age of about 11, and gain about 5-7 pounds per year. The size of the skeletal and muscular systems is the main contributor to the weight gain.

Brain volume stabilizes by the end of this stage, but changes in its structures continue to occur. During this stage there is synaptic pruning, in which areas of the brain that are not used as frequently lose connections, while other areas increase their number of connections. This increase is seen in the prefrontal cortex, which orchestrates the function of many other brain regions.

Both gross and fine motor skills continue to be refined. Children are able to ride a bike, swim, and skip rope; they can tie their shoes, hammer a nail, use a pencil, and reverse numbers less often. Boys usually outperform girls in gross motor skills, while girls outperform boys in fine motor skills. Exercise continues to be an area of concern at this age; children are not getting the exercise they need. Studies have shown that aerobic exercise helps not only with weight, but also with attention, memory, thinking/behavior, and creativity.

Obesity is a continued health concern for this age group which leads to medical concerns such as hypertension, diabetes, and elevated cholesterol levels. Cancer is the second leading cause of death of children in this age group. The most common childhood cancer is leukemia.

Disabilities are often discovered during this time as many don’t show up until a child is in a school setting. There are learning disabilities, such as dyslexia, dysgraphia and dyscalculia; attention deficit hyperactivity disorder (ADHD), and autism spectrum disorders, such as autistic disorder and Asperger syndrome. Schools today are better equipped to handle children with these disabilities to help them receive the education they need.

This stage of development, as described by Piaget’s cognitive development theory, is the concrete operational stage. The child in this stage can reason logically, as long as the reasoning can be applied to concrete examples. In addition, they can utilize conservation, classification, seriation, and transitivity.

Long-term memory increases during this stage, partly in relation to the knowledge a child has of a particular subject. Children are able to think more critically and creatively during this period, and their metacognition also increases. Along with the topics already mentioned, self-control, working memory, and flexibility are all indicators of school readiness and success.

Changes occur during this stage in how a child’s mental vocabulary is organized. Children begin to improve their logical reasoning and analytical abilities. They also develop more metalinguistic awareness, or knowledge about language. Reading foundations are important during this stage. Two approaches currently being explored are the whole-language approach and the phonics approach. The whole-language approach teaches children to recognize whole words or sentences. The phonics approach teaches children to translate written symbols into sounds.

During this stage, the child begins to better understand themselves, is able to describe themselves using psychological characteristics, and can describe themselves in reference to social groups. High self-esteem and a positive self-concept are important for this age group. Low self-esteem has been correlated with obesity, depression, anxiety, and other problems.

Erikson’s fourth stage of development, industry vs. inferiority, appears in this stage. Industry refers to work and to children wanting to know how things are made and how they work. Parents who dismiss this interest can create a sense of inferiority in their children.

Emotional development during this stage involves the child becoming more self-regulated in their reactions. They understand what led up to an emotional reaction, can hide negative reactions, and can demonstrate genuine empathy. They are also learning coping strategies to deal with stress. Moral development also continues during this stage, as proposed by Kohlberg’s six stages of moral development.

Gender stereotypes are prevalent in this development phase. They revolve around physical development, cognitive development and socioemotional development of a child.

During this stage of life, parents are usually less involved with their children, although they continue to remain an important part of their development. They become more of a manager, helping the child learn the rights and wrongs of their behaviors. If there is a secure attachment between the parent and the child, the stress and anxiety involved in this phase are lessened.

Friendships are important during this stage of a child’s life. Friends are typically similar to the child in terms of age, sex, and attitudes toward school. School brings new obligations to children. As with the younger age group, there are different approaches to school at this stage. A constructivist approach focuses on the learner and has individuals construct their own knowledge. A direct instruction approach is more structured and teacher-centered. Accountability in the schools is enforced through the application of standardized testing. Poverty plays a role in the learning ability of children, oftentimes creating a barrier to learning for the student, including parents with low expectations, inability to help with homework, or inability to pay for educational materials.

My parents said that by this age I was able to reason logically with them, and in my day to day life. I remained curious about what things were and how they worked. My mom told me about a test I took for an accelerated learning program (ULE) in my elementary school. I missed one question, I couldn’t answer what a wheelbarrow was. After that, my mom said I was interested in learning what they were and what they were used for. The ULE program helped me satisfy my curiosity above and beyond what was taught in school by providing additional learning opportunities.


Adolescence lasts from about 12 to 18 years of age. The primary physical change during adolescence is the start of puberty. This is a brain-neuroendocrine process that provides stimulation for the rapid physical changes that take place. This is when a child takes on adult physical characteristics, such as voice changes and height/weight growth for males, and breast development and the start of menstruation for females. Females typically enter puberty two years before males. The process is hormonally driven and includes actions of the hypothalamus and pituitary gland. During this time, adolescents are preoccupied with their body image, as their bodies are rapidly changing. Females are typically more dissatisfied with their bodies than males; however, body image perception becomes more positive for both genders as they end the adolescent period.

Brain development during this time includes significant structural changes. The corpus callosum thickens, improving their ability to process information. The prefrontal lobes continue to develop, increasing reasoning, decision making and self-control. The limbic system, specifically the amygdala is completely developed by this stage.

This stage also marks a time of sexual exploration, forming a sense of sexual identity, managing sexual feelings, and developing intimate relationships. Most adolescents are not emotionally prepared for sexual experiences, which can lead to high-risk sexual behavior. Contraceptive use is not prevalent in this age group, even though it can lessen or eliminate the risk of sexually transmitted diseases and unwanted pregnancy. Teen pregnancy, while reduced from years past, is still too high. Sex education continues to be a topic of discussion as to what is most appropriate for the schools: abstinence-only education or education that emphasizes contraceptive knowledge.

Health during this stage of development is a concern, as bad health habits learned here can lead to death in early adult life. Obesity due to poor nutrition and lack of exercise remains a consistent theme. Sleep is also important for this age group, as most report getting less than 8 hours of sleep per night. Substance use is also seen in this age group. Another health concern is eating disorders, including both anorexia and bulimia; these disorders can take over a person’s life due to distorted body images.

Piaget’s final stage of cognitive development occurs during this stage – the formal operational stage. Adolescents are not bound by concrete thoughts or experiences during this stage. They can think abstractly, idealistically, and logically.

Executive function is one of the most important cognitive changes that occurs in this stage. It involves an adolescent’s ability to engage in goal-directed behavior and to exercise self-control.

The transition between elementary school to junior high school during this stage can be very stressful for adolescents. It occurs during a period of time when many other physical changes (puberty) are occurring at the same time. This can create stress and worrying for the child.

Erikson’s fifth developmental stage, corresponding to this period in life, is identity vs. identity confusion. This stage is aided by a psychosocial moratorium, the gap between adolescence and adulthood during which a person is relatively free of responsibility and can determine their true identity. This is the path one takes toward adult maturity. Crisis during this stage is a period in which a person is exploring alternatives. Commitment is a personal investment in an identity. It is believed that while identity is explored during this stage, finalization does not occur until early adulthood, with life review.

Parents take on a managerial role during this stage, monitoring the choices that are made regarding friends, activities, and academic efforts. Higher rates of parental monitoring lead to lower rates of alcohol and drug use. The adolescent’s need for autonomy can be hard for a parent to accept; the parents feel like the child is “slipping away” from them. There are also gender differences in how much autonomy is granted, with males receiving more autonomy than females. Conflict escalates during the early adolescent stage, but then lessens toward the end of the stage.

Friendships during this stage are often fewer, but more intimate, than in younger years and take on an important role in meeting social needs. Positive friendships are associated with positive outcomes, including lower rates of substance abuse, risky sexual behavior, bullying, and victimization. Peer pressure at this stage in life is high, with adolescents conforming more to peer pressure if they are uncertain about their social identity. Cliques and crowds emerge and play a more important role during this stage of development. Dating and romantic relationships begin to evolve. Juvenile delinquency is a problem that emerges, with illegal behaviors being noted. This can be due to several factors, including lower socioeconomic status, sibling relationships, peer relationships, and parental monitoring. Depression and suicide also increase during this stage of life.

During this stage of my life, I was very goal oriented, more so academically than socially. I chose to take higher level classes that weren’t required and continued to work with a program that allowed me to do projects outside of school. During this time, I began to think about what direction my life would take. I decided that I would attend college to major in pharmacy, a decision that would later be reviewed and changed.

Early adulthood:

Becoming an adult involves a lengthy transition. Early adulthood occurs from 18 to 25 years of age. During this time, an individual is still trying to figure out “who” they are, exploring career paths, determining their identity and understanding what kind of lifestyle they want to live. Early adulthood is characterized by 5 key features as explained by Jeffrey Arnett. These include: identity exploration, instability, self-focused, feeling in-between and the age of possibilities – basically they can transform their lives. In the US, entry into adulthood is primarily characterized by holding a permanent, full-time job. Other countries consider marriage the marker for adulthood. Just as going from elementary school to middle school causes stress in adolescents, the transition from high school to college can evoke the same emotions.

Peak physical performance is often reached between the ages of 19 and 26. Along with the decline in physical performance, body fatty tissue increases and hearing begins to decline in the last part of early adulthood. Health during early adulthood is subpar. Although most know what is required to be healthy, many fail to apply this information to themselves. The bad habits started during adolescence increase in early adulthood, including inactivity, poor diet, obesity, sleep deprivation, and substance abuse. These lifestyles, along with poor health, also have an impact on life satisfaction. Obesity continues to be a problem in this developmental stage. Losing weight is best achieved with a combined diet and exercise program rather than relying on diet alone. Exercise can help prevent diseases such as heart disease and diabetes. Exercise can also improve mental health and has been effective in reducing depression. Alcohol use appears to decline by the time individuals reach their mid-twenties, after peaking around 21-22 years of age. Binge drinking and extreme binge drinking are a concern on college campuses. This can lead to missed classes, physical injuries, police interactions, and unprotected sex.

Sexual activity increases in emerging adulthood, with most people having experienced sexual intercourse by the time they are 25. Casual sex is common during this development stage involving “hook-ups” or “friends with benefits”.

Piaget’s stages of development ended with the formal operational thought discussed in the adolescent stage. However, he believed that this stage covers adults as well. Some theorists believe that it is not until adulthood that formal operational thought is achieved. An additional stage has been proposed for young adults: post-formal thought, which is reflective, relativistic, contextual, provisional, realistic, and influenced by emotion.

Careers and work are an important theme in early adulthood. During this time an individual works to determine what career they want to pursue by choosing a college major. By the end of this developmental stage, most people have completed their training and are entering the work force to begin their career. Determining one’s purpose can help ensure that the correct field of study and career choice is made. Work defines a person: their financial standing, housing, how they spend their time, friendships, and health. Early jobs can sometimes be considered “survival jobs” that are in place just until the “career job” is obtained.

Erikson’s sixth stage of development, which occurs during early adulthood, is intimacy vs. isolation. Intimacy, as described by Erikson, is finding oneself while losing oneself in another person, and it requires a commitment to another person. Balancing intimacy and independence is challenging. Love can take on multiple forms in adulthood. Romantic love, or passionate love, is the type of love seen early in a relationship; sexual desire is its most important ingredient. Affectionate love, or compassionate love, is when someone desires to have the other person near and has a deep, caring affection for that person; it is typically a more mature love. Consummate love involves passion, intimacy, and commitment, and is the strongest of all types of love.

Adults’ lifestyles today are anything but conventional. Many adults choose to live alone, cohabitate, or live with a partner of the same sex in addition to the conventional married lifestyle. Divorce rates continue to remain high in the US, with most failed marriages ending early in their course. Divorced adults have higher rates of depression, anxiety, suicide, alcoholism, and mortality. Adults who remarry usually do so within three years of their divorce, with men remarrying sooner than women. Making a marriage work takes a great deal of commitment from both parties. John Gottman identified some principles that help make a marriage successful. These include: establishing love maps, nurturing fondness and admiration, turning toward each other instead of away, letting the partner influence you, and creating shared meaning. In addition to these, a deep friendship, respect for each other, and embracing the commitment that has been made will help to make a marriage last.

During early adulthood, many become parents for the first time. Sometimes this is well planned out; other times it is a complete surprise. Parenting is often a hybrid of the techniques that their parents used on them and their own interpretation of what is useful. The average age at which an individual has their first child is increasing, and the number of children they choose to have is declining. This is due in part to women wanting to establish their careers before becoming mothers. The result is that parents are often more mature, able to handle situations more appropriately, and may have more income, and fathers are more involved in child rearing; however, children spend more time in supplemental care than when mothers stayed home to provide child care.

During early adulthood, I went to college, decided that a pharmacy major wasn’t for me and ended up obtaining a degree in microbiology and a minor in chemistry. I met my first husband during college and we ended up marrying a couple months before I graduated. After graduation, we had a child and eventually ended up getting a divorce. I think the stress of going right from college to marriage to having a family took a toll on us. We were able to maintain civility to co-parent our son even though we were not able to make our marriage work. The first few years after our divorce were very hard, being a single-mom, trying to get a career established and make sure I was providing for our child. Thankfully I had a huge support system with my parents and siblings that were able to get us through the tough times. About 10 years later, I met my now husband and was able to find the intimacy again that was needed in my life. We both have brought children from previous relationships into our marriage and have also had two children together. This has created some conflict of its own, but we work through it all together. I feel that we are much more equipped and mature to be parents of our younger children than we were when our older ones were little.


Human identification using palm print images



1.1. Background

The term “identification” means the act or process of establishing or recognizing the identity of someone; the treating of a thing as identical with another; the act or process of recognizing or establishing someone as a particular person; and also the act or process of making, representing to be, or regarding or treating as the same or identical.

Computerized human identification is one of the most essential and challenging tasks in meeting the growing demand for stringent security. The use of physiological and/or behavioral characteristics of people, i.e., biometrics, has been extensively employed in the identification of criminals and has matured into an essential tool for law enforcement departments. Biometrics-based automated human identification is now highly popular in a wide range of civilian applications and has become an effective alternative to traditional (password or token) identification systems. Human hands are easy to present for imaging and can reveal a variety of information. Therefore, palmprint research has attracted a great deal of attention for civilian and forensic use. But, like several of the popular biometrics (e.g., fingerprint, iris, face), the palmprint biometric is also vulnerable to sensor-level spoof attacks. Remote imaging using a high-resolution camera can be employed to capture critical palmprint information for possible spoof attacks and impersonation. Consequently, extrinsic biometric features are expected to be more vulnerable to spoofing with modest effort. In summary, the advantage of easy accessibility of these extrinsic biometric traits also generates some concerns about privacy and security. On the other hand, intrinsic biometric characteristics (e.g., DNA, vessel structures) require considerably more effort to acquire without the knowledge of the individual and are therefore more difficult to forge. However, in civilian applications it is also crucial for a biometric trait to ensure high collectability while the user interacts with the biometric device. In this context, palm-vein recognition has emerged as a promising alternative for personal identification, as it offers the benefit of high collectability.

Biometrics is authentication using biological data, and it is a powerful method for authentication. The general purpose of biometry is to distinguish people from each other by using features that cannot be copied or imitated. There is less risk than with other methods because it is not possible for people to change, lose, or forget their physical properties. The use of these features, defined as biometric measures, in authentication is based on an international standard established by INCITS (the InterNational Committee for Information Technology Standards).

In recent years, a considerable amount of work has been done on distinguishing people by means of pattern recognition. Some of the patterns studied are characters, symbols, pictures, sound waves, and electrocardiograms. Computerized identification is usually applied to problems whose calculations are too complex to interpret or whose volume would overload human evaluators. A common approach maps each pattern to a template: one template is stored for each pattern class, and the set of templates is kept in memory in the form of a database. An unknown pattern is compared against each class template, and classification is based on a previously determined mapping criterion or similarity criterion. Rather than comparing the complete pattern with each template, it is faster, and most of the time more accurate, to compare a set of extracted features. For this reason, the pattern recognition process is examined in two separate phases: feature extraction and classification.

In Picture 1.2, feature extraction makes some measurements on the pattern and turns the results into a feature vector. These features may vary considerably depending on the nature of the problem. Also, the importance ratings and costs of the features may differ. For this reason, features should be selected so as to distinguish the classes from each other and to achieve lower costs.
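As an illustrative sketch of this step (the specific measurements chosen here are assumptions, not taken from the text), a 2-D binary pattern can be reduced to a small feature vector:

```python
import numpy as np

def extract_features(pattern):
    """Turn a 2-D binary pattern into a small feature vector.

    The measurements here (ink density and the normalized centroid
    of the "on" pixels) are illustrative choices; a real system
    selects features that best separate its particular classes.
    """
    pattern = np.asarray(pattern, dtype=float)
    density = pattern.sum() / pattern.size       # fraction of "on" pixels
    rows, cols = np.nonzero(pattern)             # locations of "on" pixels
    row_centroid = rows.mean() / pattern.shape[0]
    col_centroid = cols.mean() / pattern.shape[1]
    return np.array([density, row_centroid, col_centroid])

# A 3x3 pattern with a single bright pixel in the top-left corner
vec = extract_features([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
```

The point of the sketch is only that the classifier then works with the short vector `vec` instead of the full pixel array, which is what makes feature-based comparison cheaper than whole-pattern comparison.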

Features are different for every pattern recognition problem.

Based on the properties extracted, the classification stage decides which class the given object belongs to. Although feature extraction differs according to the pattern recognition problem, the classifiers themselves fall into specific categories[6].

Template matching is the most common classification method. In this method, each pixel of the image is used as a feature. Classification is done by comparing the input image to all the class templates. The comparison yields a similarity measure between the input and the template: pixels of the input image that match the template increase the degree of similarity, while mismatched pixels reduce it. After all the templates have been compared, the class of the template giving the highest similarity score is selected. Structural classification techniques use structural features and decision rules to classify patterns. For example, the line types, holes, and slopes in characters are structural properties, and rule-based classification is performed using these extracted features. Many pattern recognition systems are based on mathematical foundations to reduce misclassification. These systems are pixel-based and use structural features; examples include Gabor features, contour properties, gradient properties, and histograms. As classifiers, discriminant function classifiers, Bayesian classifiers, and artificial neural networks can be used[1].

In its simplest terms, image processing requires tools to acquire and manipulate images, and two important classes of input/output devices: image digitizers and image display devices. Due to the inherent nature of these devices, images do not form a direct source for computer analysis. Since computers work with numeric values rather than with image data, the image is transformed into a numeric format before processing begins. Picture-1.1 shows how an array of numbers can represent a physical image. The physical image is divided into small regions called “picture elements,” or “pixels.” The rectangular sampling grid, which is the most common subdivision scheme, is also shown in Picture-1.1. In the digital image, the value stored at each pixel gives the brightness of the image at that point.

This conversion process is called digitization, and it is illustrated in Picture 1.2. The brightness of each pixel is sampled and quantized into a number representing the brightness or darkness of the image at that point. When this process is applied to all pixels, the image is represented as a rectangular array of numbers. Each pixel has an integer location, or address (row and column numbers), and an integer value called its gray level. This array of numeric data is then available for processing on a computer. Picture 1.3 shows the digitized form of a continuous image.
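The sampling-and-quantization step can be sketched as follows; the 0.0-1.0 brightness range and the 4-level example are assumptions chosen for illustration:

```python
import numpy as np

def digitize(brightness, levels=256):
    """Quantize continuous brightness samples (0.0-1.0) into
    integer gray levels, as in the digitization step above.

    Each pixel keeps its (row, column) location and receives an
    integer gray level in [0, levels - 1].
    """
    samples = np.asarray(brightness, dtype=float)
    grays = np.clip((samples * levels).astype(int), 0, levels - 1)
    return grays

# A 2x2 continuous image quantized to 4 gray levels (0..3)
img = digitize([[0.0, 0.5], [0.25, 1.0]], levels=4)
```

With 4 levels the brightness values 0.0, 0.25, 0.5, and 1.0 map to the gray levels 0, 1, 2, and 3; raising `levels` to 256 gives the familiar 8-bit gray scale.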

1.2. Human Identity

Human beings cannot live without systems of meaning. Our primary impulse is the impulse to find and create meaning. But just as important, human beings cannot exist without an identity and often human identity is tied closely to the systems of meaning that people create. The systems of meaning are how people express their identity.

There are many elements that shape identity: family, community, ethnicity, nationality, religion, philosophy, science, and occupation. For much of history, human identity has been oriented to small bands of extended families with belief systems that validated that lifestyle. With the movement toward domestication and state formation, along with the larger communities of such states, the boundaries of human identity were widened. But the small band mentality has persisted over subsequent millennia and is still evident even within modern states in the form of ethnic divisions, religious differences, occupation, social status, and even organizational membership[1].

The presence and origin of the small band mentality can be explained in terms of the inherited animal brain and its primitive drives. Animal life from the earliest time developed an existence of small groups of extended family members. This existence was shaped by the base drives to separate from others, to exclude outsiders, and to dominate or destroy them as competitors for resources. This in-group thinking and response was hardwired into the animal brain which has continued to influence the human brain. Unfortunately, small band mentality has long had a powerful influence on the creation of systems of meaning and the creation of human identity. People have long identified themselves in terms of some localized ethnic group, religion or nation in opposition to others who are not members of their group. This has led to the exclusion of outsiders and crusades to dominate or destroy them as enemies.

More recent discoveries about human origins and development confirm the early Greek intuition. We now know that we are all descendants of a common hominid ancestor (the East Africa origin hypothesis). Race is now viewed as a human construct with little if any basis in real biology. So-called racial differences amount to nothing of any real distinction in biology. One scientist has even said that, genetically, racial features are of no more importance than sunburn.

This information points to the fact that we are all descendants of Africans. In the great migrations out of Africa, some early hominids moved to Europe and endured millennia of reduced sunlight, which led to a redistribution of melanin in their skin. They still possessed the same amount of melanin as darker-skinned people, but it was not as visible.

All this is to say that the human race is indeed one family. And modern human identity and meaning must be widened to include this fact. The small band mentality of our past which focuses human identity on some limited subset of the human race has always led to the creation of division, barriers, opposition and conflict between people. It is an animal view of human identity.

But we are no longer animals. We are now human and we need to overcome the animal tendency to separate from others, to exclude them, and to view them as outsiders or enemies to be dominated or destroyed.

It is also useful to note here how tightly many people tie their identity to the system of meaning that they adopt (their belief system or viewpoint). Consequently, any challenge to their system of meaning will produce an aggressive defensive reaction. The system may contain outdated ideas that ought to be challenged and discarded but because it comprises the identity of those who hold it, they will view any challenge as an attack on their very selves and this produces the survival response or reaction. Attacks on the self (self-identity) are viewed as attacks on personal survival and will evoke the aggressive animal defense. In this reaction we see the amygdala overruling the cortex.

This defensive reaction as an attempt to protect the self helps explain in part why people continue to hold on to outdated ideas and systems of belief/meaning. The ideas may not make rational sense to more objective outside viewers but to those who hold them, they make sense in terms of the dominant themes of their overall system.

It is true that we can’t live without meaning or identity. And our identity is often defined by our systems of meaning. This tendency to tie our identity too tightly to our systems of meaning calls for a caution: human meaning and identity should not be placed in an object, whether a system of meaning, an ideology, an occupation, a state, a movement, an ethnicity, or some organization. Our identity and our search for meaning should be focused on the process of becoming human. This orients us to ongoing development and advance. We then remain open to making changes as new information comes along. It is about the human self as a dynamic process, not a rigid and unchanging object.

So from our point of view, identity is used to mean the condition of being a specified person or the condition of being oneself and not another. It clusters with the terms personality and individualism, and less fashionably, “soul”.

Figure 1.1: Human Identity by face

1.3. Palm Print

Palm print recognition is similar in nature to fingerprint matching: both biometric systems are based on personal information represented by the pattern of friction ridges on the skin. Statistical analyses by FBI officials indicate that palm print identification is complementary to the more popular fingerprint recognition systems: these studies found that roughly 70% of the traces left behind by criminals at crime scenes come from fingerprints and 30% from palms. Because of limited processing capability and the late arrival of live-scan technologies, automated palm print recognition algorithms run more slowly than fingerprint recognition algorithms. Since 1994 there has been growing interest in systems that use fingerprint and palm print identification together. Palm print identification, like fingerprint identification, is based on the massive amount of information found in the friction ridges. A palm print, like a fingerprint, consists of dark lines representing the high, ridged portions of the skin and white lines representing the valleys between these ridges. Palm print recognition technology exploits some of these characteristics.

Algorithms used for palm print detection and verification are similar to those used in fingerprint recognition. They are essentially correlation-based, feature-point-based (minutiae-based), or ridge-based. Correlation-based matching globally aligns two palm images to find corresponding lines in both; minutiae-based matching determines the location, orientation and type of specific feature points in the palm image and compares this information. The ridge-based (line-based) technique uses the geometric characteristics of the lines, together with texture analysis, in addition to feature-point analysis when classifying the palm print.
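As a concrete illustration of the correlation-based approach, the sketch below scores the similarity of two equal-size palm images with zero-mean normalized cross-correlation, searching a small shift range to absorb minor misalignment. This is a minimal illustration of the idea, not the algorithm of any particular system; the function names and the shift range are our own choices.

```python
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized correlation between two equal-size images.

    Returns a similarity in [-1, 1]; identical line structure scores near 1.
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_correlation(template: np.ndarray, query: np.ndarray,
                     max_shift: int = 2) -> float:
    """Slide the query over a small shift range to tolerate misalignment
    and keep the best correlation score found."""
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(query, dy, axis=0), dx, axis=1)
            best = max(best, normalized_correlation(template, shifted))
    return best
```

The shift search is exactly why correlation methods tolerate only small translations: larger distortions or rotations fall outside the searched range, matching the limitation discussed below.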

Correlation-based algorithms work faster than the other techniques, but they are less tolerant of distortions and rotation variance in the image. Minutiae-based algorithms require high-quality images and do not exploit the textural or visual qualities of the palm. Finally, ridge-based algorithms require a high-resolution sensor to produce good-quality images, and the distinctive line characteristics they rely on are significantly fewer than feature points. The positive and negative aspects of these techniques also apply to fingerprinting.

William James Herschel was the son of the astronomer John Herschel. His father asked him to choose a career other than astronomy, so he joined the East India Company, and in 1853 was posted to Bengal. Following the Indian Mutiny of 1857, Herschel became a member of the Indian Civil Service and was posted to Jungipoor.

In 1858 he made a contract with Mr. Konai, a local man, for the supply of road-building materials. To prevent Konai from repudiating his signature later, Herschel impressed Konai’s handprint on the document; Figure 1.2 shows such a palm print. Herschel continued to experiment with hand prints, and soon realized that it was more practical to use the fingers alone. He collected prints from his friends and family, and found that a person’s fingerprints do not change over time. He suggested to the Governor of Bengal that fingerprints should be used on legal documents to prevent impersonation and the repudiation of contracts, but this proposal was not taken up. [1]

Today palm prints and fingerprints are used to investigate criminal cases: prints found on an object at a crime scene are collected and then compared with the prints of people with previous criminal records. Palm prints are also used in government documents as a personal signature, and in health applications, so there are many areas in which palm print images are used.

Figure 1.2: Palm Print

1.3.1. Palm Print Features

Palm print has stable and rich line features; three types of line patterns are visible on the palm: principal lines, wrinkles, and ridges. Principal lines are the longest and widest lines on the palm and indicate its most distinguishing directional features. Most people have three principal lines, named the heart line, head line, and life line. Wrinkles are the thinner and more irregular line patterns. The wrinkles, especially the pronounced wrinkles around the principal lines, can also contribute to the discriminability of the palm print. Ridges, on the other hand, are the fine line texture distributed throughout the palmar surface. The ridge feature is less useful for discriminating individuals because it cannot be perceived under a poor imaging source. Figure 4 shows the palm lines.

1.3.2. The importance of palm print identification

Every person’s palm print is unique, so palm print identification is a highly reliable form of authentication.
The palm print recognition system offers a high level of security because a palm print is extremely difficult to steal.
Palm print recognition is used in many industries such as healthcare, aviation, education, construction and banking, and it is a user-friendly system.
The palm print recognition system is small in size and portable.
The palm print recognition system is hygienic owing to contactless use.

1.4. Biometric Features

Physiological features include DNA, iris, fingerprints, palm prints, and facial features, while behavioral features include mimics, signature, and voice. When measuring physiological or behavioral characteristics, factors such as the age, health, or mental status of the person should be eliminated from the measurement. Existing identification systems are not sufficient: the conventional methods, based on a personal identification number (PIN) used together with a user name or a plastic card, are both inconvenient and unsafe. An ideal biometric person recognition system should identify or verify an individual within the database uniquely, accurately, reliably, and efficiently. For this reason, the system should be able to cope with problems such as input degradation, environmental factors, and signal mixtures; it should also be stable over time and easily applicable. The most commonly used biometric feature is the fingerprint, while the iris scan is regarded as the most reliable.

In this project, we will work on a palm print recognition system, which uses one of the physiological features. Palm print recognition has advantages over other biometric features: the required images are collected with low-cost equipment, the image does not suffer from deterioration, and the False Accept Rate and False Reject Rate take reasonable values. The false acceptance and false rejection rates of a system are expressed as fractions of the total number of identification attempts.
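The False Accept Rate and False Reject Rate mentioned above can be computed directly from labelled matching attempts. A minimal sketch, assuming distance scores where smaller means more similar and a single decision threshold:

```python
def far_frr(genuine: list, imposter: list, threshold: float):
    """False Accept Rate / False Reject Rate at one decision threshold.

    Scores are distances: a claim is accepted when distance <= threshold.
    FAR = accepted imposter attempts / total imposter attempts
    FRR = rejected genuine attempts / total genuine attempts
    """
    far = sum(s <= threshold for s in imposter) / len(imposter)
    frr = sum(s > threshold for s in genuine) / len(genuine)
    return far, frr
```

Sweeping the threshold trades FAR against FRR; the operating point where the two curves cross is the equal error rate often quoted for biometric systems.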




2.1. Background

In this chapter, I will discuss approaches to palm print recognition. Personal authentication using palmprint images has received considerable attention in the last five years, and numerous approaches have been proposed in the literature. The available approaches for palmprint authentication can be divided into three categories, essentially on the basis of the extracted features: (i) texture-based approaches, (ii) line-based approaches, and (iii) appearance-based approaches. A full description of these approaches is beyond the scope of this work; however, a summary with typical references can be found in Table 1. Researchers have shown promising results on inked images, on images acquired directly from a scanner, and on images acquired from a digital camera using a constrained, pegged setup. However, effort is still required to improve the performance on unconstrained images acquired with a peg-free setup. Accordingly, this work uses such images to investigate the achievable improvement. The summary of prior work in Table 1 shows that there has not been any attempt to investigate palmprint verification using its multiple representations [2].

A few matching-score-level fusion strategies for combining multiple biometric modalities have been presented in the literature, and it has been shown that their performance differs. However, there has not been any attempt to combine the decisions of multiple score-level fusion schemes to achieve a performance improvement. The rest of this chapter is organized as follows: Section 2 describes the block diagram of the proposed system and details the feature extraction methods used in the experiments. Section 3 details the matching criterion and the proposed fusion strategy. Experimental results and their discussion appear in Section 4. Finally, the conclusions of this work are summarized in the closing section.

2.2. Proposed Systems

Unlike previous work, we propose an alternative approach to palmprint authentication by the simultaneous use of different palmprint representations with the best pair of fixed combination rules. The block diagram of the proposed method for palmprint authentication using the combination of multiple features is shown in Fig. 1. The hand image of every user is acquired with a digital camera. These images are used to extract the region of interest, i.e. the palmprint, using the method detailed in Ref. [5]. Each of these images is further used to extract texture-, line- and appearance-based features using Gabor filters, line detectors, and principal component analysis (PCA), respectively. These features are matched against the respective template features stored during the training stage. The three matching scores from these three classifiers are combined using a fusion mechanism, and a combined matching score is obtained, which is used to generate a class label, i.e. genuine or imposter, for each user. Experiments were also performed to investigate the performance of decision-level fusion using the individual decisions of the three classifiers. However, the best experimental results were obtained with the proposed fusion strategy, which is detailed in Section 4.

Figure 2.1: Block diagram for personal authentication using palmprint

2.2.1. Gabor Features

The texture features extracted using Gabor filters have been successfully used in fingerprint classification and handwriting recognition, and recently in palmprint recognition. In the spatial domain, an even-symmetric Gabor filter is a Gaussian function modulated by an oriented cosine function [3]. The impulse response of an even-symmetric Gabor filter in the 2-D plane has the following general form (one standard expression, with orientation θ, frequency f, and scale σ):

G(x, y; θ, f) = exp(−(x′² + y′²)/(2σ²)) cos(2πf x′),  where x′ = x cos θ + y sin θ and y′ = −x sin θ + y cos θ

In this work, the parameters of the Gabor filters were empirically determined for the acquired palmprint images. Filtering the image with a Gabor filter gives:

where ‘∗’ indicates discrete convolution and the Gabor filter mask is of size W×W. Accordingly, every palmprint image is filtered with a bank of six Gabor filters to generate six filtered images. Each filtered image emphasizes the distinct palmprint lines and wrinkles in the corresponding direction while attenuating background noise and structures in the other directions. The components of palmprint wrinkles and lines in six different directions are thus captured by these filters. Each filtered image is divided into several overlapping blocks of the same size. The feature vector is formed from all six filtered images by computing the standard deviation in each of these overlapping blocks. This feature vector is used to uniquely represent the palmprint image and to evaluate the performance [3].
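The procedure above (a bank of six oriented even-symmetric Gabor filters at 0° to 150° in 30° steps, followed by the standard deviation of overlapping blocks) can be sketched as follows. All parameter values here (mask size, frequency, σ, block size, step) are illustrative assumptions, not the empirically tuned values of the original work:

```python
import numpy as np
from scipy.ndimage import convolve

def even_gabor(size: int, theta: float, f: float, sigma: float) -> np.ndarray:
    """Even-symmetric Gabor mask: a Gaussian modulated by an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * xr)
    return g - g.mean()   # remove DC so flat regions give zero response

def gabor_block_features(img: np.ndarray, block: int = 24, step: int = 12,
                         size: int = 9, f: float = 0.1, sigma: float = 3.0):
    """Filter with six oriented Gabor masks and collect the standard
    deviation of every overlapping block of each filtered image."""
    feats = []
    for k in range(6):                     # six orientations, 30 degrees apart
        theta = k * np.pi / 6
        filtered = convolve(img.astype(float), even_gabor(size, theta, f, sigma))
        for r in range(0, img.shape[0] - block + 1, step):
            for c in range(0, img.shape[1] - block + 1, step):
                feats.append(filtered[r:r + block, c:c + block].std())
    return np.array(feats)
```

On a 300 × 300 palmprint the same code would simply produce a longer feature vector; the block standard deviation captures how strongly each region responds to lines in each orientation.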

Figure 2.2: Partial-domain representation

2.2.2. Extraction of line features

Palmprint identification using line features has been reported to be effective and offers high accuracy. The extraction of line features used in our experiments is the same as that detailed in [4]. Four directional line detectors are used to probe the palmprint wrinkles and lines oriented in each of four directions, i.e. 0◦, 45◦, 90◦ and 135◦. The spatial extent of these masks was empirically fixed at 9 × 9. The resulting four images are combined by voting on the gray-level magnitude at each corresponding pixel position. The combined image represents the combined directional map of palm lines and wrinkles in the palmprint image. This image is further divided into several overlapping square blocks. The standard deviation of the gray levels in each of the overlapping blocks is used to form the feature vector for each palmprint image [2].
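A minimal sketch of the directional-line step: four zero-sum masks probe the image at 0°, 45°, 90° and 135°, and the per-pixel maximum acts as the gray-level vote that combines the four responses. The mask design here is a simple illustrative choice, not the exact detector of Ref. [4]:

```python
import numpy as np
from scipy.ndimage import convolve

def line_mask(size: int, direction: int) -> np.ndarray:
    """Directional line detector: positive weight along one line through the
    mask center, negative elsewhere; mean-subtracted so the mask is zero-sum."""
    m = np.full((size, size), -1.0)
    half = size // 2
    idx = np.arange(size)
    if direction == 0:          # horizontal line
        m[half, :] = 1.0
    elif direction == 90:       # vertical line
        m[:, half] = 1.0
    elif direction == 45:       # rising diagonal
        m[idx, size - 1 - idx] = 1.0
    else:                       # falling diagonal (135 degrees)
        m[idx, idx] = 1.0
    return m - m.mean()

def directional_line_map(img: np.ndarray, size: int = 9) -> np.ndarray:
    """Probe the image in all four directions and keep, per pixel, the
    maximum (winner-takes-all vote) of the four directional responses."""
    responses = [convolve(img.astype(float), line_mask(size, d))
                 for d in (0, 45, 90, 135)]
    return np.max(responses, axis=0)
```

The combined map can then be cut into overlapping blocks and summarized by block standard deviations exactly as in the Gabor case.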

The proposed palmprint verification method was evaluated on a dataset of 100 users. This dataset consists of 1000 images, 10 images per user, which were acquired with a digital camera using an unconstrained peg-free setup in an indoor environment. Fig. 5 shows a typical acquisition of a hand image using the digital camera with live feedback. The hand images were collected over a period of 3 months from users in the age group of 16–50 years. The hand images were collected in two sessions from volunteers who were not very cooperative. During image acquisition, the users were only requested to make sure that (i) their fingers do not touch each other and (ii) most of the back of their hand touches the imaging table. The automated segmentation of the region of interest, i.e. the palmprint, was achieved by the method detailed in Ref. [5]. Thus palmprint images of 300 × 300 pixels were obtained and used in our experiments. Each of the acquired images was further histogram equalized.

2.2.3. Extraction of PCA features

The information content of a palmprint image also comprises certain local and global features that can be used for identification. This information can be extracted by registering the variations in an ensemble of palmprint images, independent of any judgment about palmprint lines or wrinkles. Each N × N pixel palmprint image is represented by a vector of 1 × N² dimension using row ordering. The available set of K training vectors is subjected to PCA, which generates a set of orthonormal vectors that can optimally represent the information in the training dataset. The covariance matrix of the normalized (mean-subtracted) vectors Φ_j can be obtained in the standard way [2]:

C = (1/K) Σ_{j=1..K} Φ_j Φ_jᵀ
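The PCA step can be sketched as follows: training images are row-ordered into vectors, mean-subtracted, and the leading eigenvectors of the covariance matrix are obtained (here via SVD, which yields the same basis and is numerically more stable than forming the covariance matrix explicitly). Function names are our own:

```python
import numpy as np

def pca_features(train: np.ndarray, n_components: int):
    """PCA on row-ordered image vectors (one row per training image).

    Returns the mean vector and the top eigenvectors of the covariance
    matrix C = (1/K) * sum_j phi_j phi_j^T, computed via SVD.
    """
    mean = train.mean(axis=0)
    phi = train - mean                       # normalized (mean-subtracted) vectors
    _, _, vt = np.linalg.svd(phi, full_matrices=False)
    return mean, vt[:n_components]

def project(x: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Appearance-based feature vector: coordinates in the eigen-palm basis."""
    return basis @ (x - mean)
```

The projected coordinates serve as the appearance-based feature vector that the nearest-neighbor matcher of the next section compares against stored templates.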

2.3. Matching criterion

The classification of the feature vectors extracted by each of the three methods is achieved by a nearest-neighbor (NN) classifier. The NN classifier is a simple nonparametric classifier which computes the minimum distance between the feature vector of an unknown sample g and that of g_m in the m-th class [5]:

where g(n) and g_m(n) respectively represent the n-th component of the feature vector of the unknown sample and that of the m-th class. Each of the three feature sets obtained from the three different palmprint representations was tested with each of the three distance measures (8)–(10). The distance measure that achieved the best performance was finally selected for the classification of feature sets from the corresponding palmprint representation.
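A minimal nearest-neighbor matcher over stored class templates, with a few interchangeable distance measures standing in for the measures (8)–(10) referred to above (the exact measures of the source are not reproduced here, so these three are illustrative):

```python
import numpy as np

def l1(a, b):
    return np.abs(a - b).sum()                      # city-block distance

def l2(a, b):
    return np.sqrt(((a - b) ** 2).sum())            # Euclidean distance

def cosine(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest_neighbor(query: np.ndarray, templates: dict, dist=l2):
    """Return (best class label, its matching distance): the class whose
    stored template is closest to the unknown sample under `dist`."""
    scores = {label: dist(query, t) for label, t in templates.items()}
    best = min(scores, key=scores.get)
    return best, scores[best]
```

Running the same matcher with each candidate distance on a validation set and keeping the best-performing one mirrors the selection procedure described above.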

The fusion strategy aims at improving the combined classification performance over that obtained from any single palmprint representation alone. There are three general strategies for combining classifiers: at the feature level, at the score level, and at the decision level. Due to the large and varying dimensions of the feature vectors, fusion at the feature level has not been considered in this work. A survey [20] of approaches used for multimodal fusion suggests that score-level fusion of feature sets has been the most common approach and has been shown to offer significant performance improvement. The goal of evaluating various score-level fusion strategies is to produce the best possible performance in palmprint verification using a given set of images. Let L_Gabor(g, g_m), L_Line(g, g_m) and L_PCA(g, g_m) denote the matching distances produced by the Gabor, Line and PCA classifiers respectively. The combined matching score L_C(g, g_m) using the well-known fixed rules can be obtained as follows:

Figure 2.3: Combination of Gabor, Line, and PCA

where I is the chosen combination rule, i.e. I represents the maximum, sum, product or minimum rule (abbreviated MAX, SUM, PROD and MIN respectively), as evaluated in this work. One of the shortcomings of fixed rules is the assumption that the individual classifiers are independent. This assumption may be poor, especially for the Gabor- and Line-based features. Thus the SUM rule can be a better alternative for merging matching scores when combining the Gabor and Line features. These merged matching scores can be further combined with the PCA matching scores using the PROD rule (Fig. 3), as the PROD rule is expected to perform better under the assumption of independent data representations [17]. The individual decisions from the three palmprint representations were also combined (by majority voting) to examine the performance improvement. The performances of different score-level fusion strategies differ. Therefore the performance of a simple hybrid fusion strategy that combines the decisions of different fixed score-level fusion schemes, as shown in Fig. 4, was also investigated in this work. Instead of using fixed combination rules, the matching scores from the training set can also be used to train a classifier for two-class, i.e. genuine and imposter, classification. Therefore the combined classification of the three matching scores using a feed-forward neural network (FFN) and a support vector machine (SVM) classifier has also been investigated [5].
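The fixed combination rules and the SUM-then-PROD arrangement described above can be sketched as follows; the threshold and the scores are illustrative, and a real system would tune the threshold on training data:

```python
import numpy as np

def fuse(scores, rule: str) -> float:
    """Combine matching distances from several classifiers with a fixed rule."""
    s = np.asarray(scores, dtype=float)
    return {"MAX": s.max(), "MIN": s.min(),
            "SUM": s.sum(), "PROD": s.prod()}[rule]

def hybrid_decision(gabor: float, line: float, pca: float,
                    threshold: float) -> bool:
    """SUM rule for the correlated Gabor/Line scores, then PROD with the
    (assumed independent) PCA score; accept when the fused distance falls
    below the decision threshold."""
    fused = fuse([fuse([gabor, line], "SUM"), pca], "PROD")
    return bool(fused <= threshold)
```

Grouping the correlated classifiers under SUM before applying PROD is exactly the rationale given above: PROD rewards independence, SUM is forgiving of correlation.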

Figure 2.4: Hybrid fusion scheme

2.4. Image Acquisition & Alignment

Our image acquisition setup is inherently simple and does not employ any special illumination (as in [3]), nor does it use any pegs that might inconvenience users (as in [20]). An Olympus C-3020 digital camera (1280 × 960 pixels) was used to acquire the hand images, as shown in Figure 2. The users were only requested to make sure that (i) their fingers do not touch each other and (ii) most of the back of their hand touches the imaging table.

2.4.1. Extraction of hand geometry images

Each of the acquired images needs to be aligned in a preferred direction so as to capture the same features for matching. An image thresholding operation is used to obtain a binary hand-shape image. The threshold value is automatically computed using Otsu’s method [25]. Since the image background is constant (black), the threshold value can be computed once and used subsequently for the other images. The binarized shape of the hand can be approximated by an ellipse. The parameters of the best-fitting ellipse for a given binary hand shape are computed using image moments [26]. The orientation of the binarized hand image is approximated by the major axis of the ellipse, and the required angle of rotation is the difference between the normal and the orientation of the image [6]. As shown in Figure 3, the binarized image is rotated and used for computing the hand geometry features. The estimated orientation of the binarized image is also used to rotate the gray-level hand image, from which the palmprint image is extracted as detailed in the following subsection.
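The alignment step can be sketched as: Otsu's threshold separates the hand from the dark background, and the major-axis angle of the best-fitting ellipse follows from the central moments of the binary shape. Both are standard techniques, implemented here from scratch for illustration:

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> float:
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, 256):
        w0, w1 = p[:k].sum(), p[k:].sum()    # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def orientation(binary: np.ndarray) -> float:
    """Major-axis angle of the best-fitting ellipse from central moments."""
    ys, xs = np.nonzero(binary)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu11, mu20, mu02 = (x * y).mean(), (x ** 2).mean(), (y ** 2).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

Rotating the gray-level image by the negative of this angle brings every hand into the same canonical pose before feature extraction.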

Figure 2.5: Extraction of two biometric modalities from the hand image

2.4.2. Extraction of palmprint images

Each binarized hand-shape image is subjected to morphological erosion, with a known binary structuring element, to compute the region of interest, i.e., the palmprint. Let R be the set of non-zero pixels in a given binary image and SE be the set of non-zero pixels of the structuring element. The morphological erosion is defined in the standard way as

R ⊖ SE = {g : SE_g ⊆ R}

where SE_g denotes the structuring element with its reference point shifted by g pixels. A square structuring element (SE) is used to probe the composite binarized image. The center of the binary hand image after erosion, i.e., the center of the rectangle that can enclose the residue, is determined. These center coordinates are used to extract a square palmprint region of fixed size, as shown in Figure 3.
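A sketch of this ROI step: erode the binary hand shape with a square structuring element, take the center of the surviving residue as the palm center, and crop a fixed-size square region there. The structuring-element and ROI sizes below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def palmprint_roi(binary_hand: np.ndarray, se_size: int, roi: int) -> np.ndarray:
    """Erode the binary hand shape with a square structuring element,
    take the center of the surviving residue as the palm center, and
    crop a square palmprint region of fixed size around it."""
    residue = binary_erosion(binary_hand,
                             structure=np.ones((se_size, se_size)))
    ys, xs = np.nonzero(residue)
    cy, cx = int(ys.mean()), int(xs.mean())   # center of the eroded residue
    half = roi // 2
    return binary_hand[cy - half:cy + half, cx - half:cx + half]
```

In practice the same center coordinates would be used to crop the gray-level image (e.g. a 300 × 300 palmprint), since the erosion residue necessarily lies inside the palm, away from the fingers.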

2.4.3. Extraction of hand geometry features

The binary image, as shown in Figure 3(c), is used to compute significant hand geometry features. A total of 16 hand geometry features were used (Figure 5): 4 finger lengths, 8 finger widths (2 widths per finger), palm width, palm length, hand area, and hand length. Thus, the hand geometry of each hand image is described by a feature vector of length 1 × 16. The multiple pieces of evidence can be combined by the various information fusion strategies that have been proposed in the literature. In the context of biometrics, three levels of information fusion have been suggested: (i) fusion at the representation level, where the feature vectors of multiple biometrics are concatenated to form a combined feature vector; (ii) fusion at the decision level, where the decision scores of multiple biometric systems are combined to generate a final decision score; and (iii) fusion at the dynamic level, where multiple decisions from multiple biometric systems are merged. The first two fusion schemes are more relevant for a bimodal biometric system and were considered in this work.

Figure 2.6: Hand geometry feature extraction



4.1. Background

In this project …. .



5.1. Background


A. Kumar, D. C. Wong, H. C. Shen, and A. K. Jain, “Personal verification using palmprint and hand geometry biometric,” in International Conference on Audio- and Video-Based Biometric Person Authentication, 2003, pp. 668-678.
A. Kumar and D. Zhang, “Personal authentication using multiple palmprint representation,” Pattern Recognition, vol. 38, pp. 1695-1704, 2005.
S. Pathania, “Palm Print: A Biometric for Human Identification,” 2016.
J. Kodl and M. Lokay, “Human Identity, Human Identification and Human Security,” in Proceedings of the Conference on Security and Protection of Information, Idet Brno, Czech Republic, 2010, pp. 129-138.
S. Sumathi and R. R. Hemamalini, “Person identification using palm print features with an efficient method of DWT,” in Global Trends in Information Systems and Software Applications, ed: Springer, 2012, pp. 337-346.
H. J. Asghar, J. Pieprzyk, and H. Wang, “A new human identification protocol and Coppersmith’s baby-step giant-step algorithm,” in International Conference on Applied Cryptography and Network Security, 2010, pp. 349-366.


Dental implications of eating disorders

Eating disorders are a type of psychological disorder characterised by abnormal or unhealthy eating habits, usually linked with restrictive food intake. The cause of onset cannot be linked to one reason alone, as it is believed that there are multiple contributing factors, including biological, sociocultural and psychological influences. The sociocultural influences are linked with western beauty ideals that have recently become ingrained in modern society, owing to the increasing importance of social media and its dictation of the ideal body type. Studies show that eating disorders are significantly less common within cultures that have yet to be exposed to these ideals. The most commonly diagnosed disorders include anorexia nervosa (AN) and bulimia nervosa (BN), affecting more women than men (both AN and BN occur in female-to-male ratios of 3:1). Anorexia is a persistent restriction of energy intake and can be linked with obsessive behaviours that stem from severe body dysmorphia and fear of gaining weight. Bulimia is defined by repeated episodes of binge eating followed by measures to prevent weight gain, such as forced regurgitation of stomach contents. The effects on an individual’s mental and physical health are commonly recognised; however, dental implications may often be overlooked. The unknown cause of eating disorders increases the extent of their significance for dental health because, without a definite root cause, it is difficult and sometimes even impossible to ‘cure’ an eating disorder; thus, preventing the dental implications is more difficult. The association between eating disorders and oral health problems was initially reported in the late 1970s, so the established link is relatively recent. Oral complications may be the first and sometimes only clue to an underlying eating disorder.
In the US, 28% of bulimic patients were first diagnosed with bulimia during a dental appointment, which highlights the clear and distinct impact that eating disorders can have on teeth. This report will investigate the main dental implications that may be caused by eating disorders. Their significance will be analysed by looking at what causes the dental problems and how directly these can be linked to eating disorders. The extent of their significance will be analysed by looking at the extent of the impact and whether the effects are permanent or reversible.

Oral manifestations of nutritional deficiencies

Anorexia nervosa is characterised by restriction of food intake and an extreme fear of weight gain, so sufferers are often malnourished and vitamin deficient. Aside from the obvious health risks, these factors also lead to several oral manifestations. However, dietary patterns show great variability and will usually differ depending on the individual. Dietary patterns include calorie restricting, eating healthily but at irregular intervals, binge eating, vomiting and fasting for prolonged periods. Therefore, there are limitations to the conclusions we can draw as to the significance of the effects on oral health, since there is an inconsistency in the contents and habits of daily food consumption. When calorie restriction is involved, the body, in an attempt to keep major bodily functions running steadily, will attempt to salvage protein, vitamins and other nutrients, and consequently oral maintenance will be neglected. Studies show that patients with anorexia presented diets containing significantly lower values of all major nutrients compared with controls; specifically, intakes of vitamin A, vitamin C and calcium below RDA (recommended dietary allowance) levels were present in the majority of patients. However, low intakes (below RDA values) of vitamins B1, B2 and B3 were only reported in a few cases. In contrast to these findings, another source states that there is a clear reduced intake of B vitamins in anorexic and bulimic patients. A possible explanation for these results may be the previously discussed inconsistency in the daily intake of individuals with eating disorders, but overall we can assume that nutrient deficiencies of varying severities are present in the majority of the anorexic population. The common deficiencies (vitamin D, vitamin C, vitamin B and vitamin A) are associated with certain disturbances of the oral structure because these vitamins are essential for maintaining good oral health.
A lack of vitamin A is related to enamel hypoplasia, which consists of horizontal or linear hypoplastic grooves in the enamel. Vitamin B deficiencies cause complications such as a painful, burning sensation of the tongue, aphthous stomatitis (benign mouth ulcers) and atrophic glossitis (a smooth, glossy appearance of the tongue, which is often tender). A lack of vitamin A is also responsible for infections in the oral cavity, as the deficiency can lead to the loss of salivary gland function (salivary gland atrophy), which reduces the defence capacity of the oral cavity as well as inhibiting its ability to buffer the plaque acids. Inability to buffer these plaque acids could lead to an increased risk of dental caries. Additionally, vitamin B deficiencies can induce angular cheilitis, a condition that can last from days to years and consists of inflammation focused in the corners of the mouth, causing irritated, red and itchy skin, often accompanied by a painful sensation. There is a consistency in the evaluation of calcium deficiencies among sufferers of eating disorders, and this has a clearly significant impact on oral health. There is an established relationship between calcium intake and periodontal disease, so having an eating disorder increases a person’s susceptibility. The process of building density in the alveolar bone that surrounds and supports the teeth is primarily reliant on calcium. Alveolar bone cannot grow back, so calcium is needed to stimulate its repair. This is important because the loss of alveolar bone can expose the sensitive root surfaces of teeth, which can progress to further oral complications. If patients are not absorbing enough vitamin C, after an extended period there is a chance that they will develop osteoporosis.
Although this is rare and most common amongst individuals with anorexia, it can lead to serious consequences because, alongside the loss of density in the alveolar bone, it can progress to the loosening and, eventually, the loss of teeth: a permanent defect. With anorexic and bulimic patients there is an increased likelihood of halitosis (bad breath) because, in the absence of the necessary vitamins and minerals, the body is unable to maintain the health of the oral cavity. If the vitamin C deficiency that most patients with eating disorders suffer from is prolonged and sufficiently severe, then there is a risk of developing scurvy. In general, therefore, it seems that the nutritional deficiencies caused by anorexia and bulimia significantly impact oral health, in ways ranging from unpleasant breath and physical defects to the permanent loss of oral structures that must be addressed with medical and cosmetic interventions.

Periodontal disease

As explained earlier when discussing calcium deficiency, the risk of periodontal disease may increase if an individual suffers from an eating disorder. General malnourishment is another factor that causes a quicker onset of periodontal disease, which always begins with gingivitis and only occurs in the presence of dental plaque. As discussed above, the relationship between calcium intake and periodontal disease is potentially controversial, except in rare cases of severe nutritional deficiency; patients dealing with extreme cases of anorexia nervosa may fall under this category. Due to the intense psychological nature of this disorder, the extremity of food restriction is likely to progress further as the need to lose weight quickly transforms into an addiction. Studies of nutritionally deficient animals suggest that nutritional factors alone are not capable of initiating periodontal diseases but are able to affect their progression. This would suggest that having an eating disorder does not place an individual at greater risk of initiating periodontal disease compared to an average person, despite their malnourished condition. However, catalysing the progression of gingivitis into periodontal disease does suggest that an eating disorder places patients at a significantly greater risk, because their untreated gingivitis will evolve into periodontitis at a greater rate. This effect is significant because periodontitis is an irreversible condition that causes permanent damage. The evidence is limited, however, as it is based on animal research and may only correspond to humans to a limited degree.

Turning now to the experimental evidence on the idea that dental plaque is an essential etiological agent in chronic periodontal diseases. It has been proven through experiments involving the isolation of human plaque and the introduction of the plaque bacteria into the mouths of gnotobiotic animals that a link exists between the bacteria in dental plaque and periodontal disease. Supporting this idea, epidemiological studies produced evidence to suggest a strong positive correlation between dental plaque and the severity of periodontal disease. Unlike some previous evidence mentioned, different clinical experiments done on both animals and humans show major findings that the accumulation of dental plaque is a result of withdrawing oral hygiene in initially healthy mouths . There is evidence to suggest that bulimics manifest a significantly higher retention of dental plaque so consequently, this disorder put patients at a greater risk of not only advancement into periodontal disease, but an increased risk of severe periodontal disease . As mentioned earlier, periodontal disease only occurs after the development of gingivitis, which consists of three stages: initial lesion, early lesion and established lesion. When an advanced lesion is present, it corresponds to chronic periodontitis: “a disease characterized by destruction of the connective tissue attachment of the root of the tooth, loss of alveolar bone, and pocket formation” . After discussing the increased likelihood of dental plaque being present in the mouths of bulimics, the strong association between dental plaque and periodontal disease can be linked directly to prove the significance of bulimia’s effects on oral health. 
Although the evidence is not as conclusive, anorexic patients are liable to malnourishment, and since nutritional factors aid the development of gingivitis into periodontal disease, there is a significantly increased chance of an anorexic patient’s oral condition transitioning from gingivitis to periodontal disease. This is extremely significant because, unlike gingivitis, the oral damage of periodontal disease is irreversible.

Eating disorders and caries

This increased likelihood of periodontal disease means that an individual is more likely to retain dental plaque, a significant factor that contributes to dental caries. Tooth decay (also known as dental caries) is defined as “the demineralisation of the inorganic part of the tooth structure with the dissolution of the organic substance”. It involves the anaerobic respiration of consumed dietary sugars, where the organic acids formed in the dental plaque can demineralise the enamel and dentine. A possible contributing factor to dental caries is a common unhealthy habit adopted by people with eating disorders: the consumption of zero-calorie acidic drinks, Coke Zero being one example. According to Professor Colon, certain patients will drink as much as 6 litres a day in an attempt to reduce hunger and to help with the process of SIV (self-induced vomiting). During episodes of “binge eating” (more common with bulimia), an individual will consume large amounts of food, usually high in sugar or fat, within a short timeframe, usually with the intention of regurgitating the contents shortly afterwards. Increased amounts of sugary foods are ingested during this period, leading to an increased risk of dental caries. One study showed that prolonged periods of dietary restraint in anorexic patients did not result in changes to the bacteria associated with dental caries, which suggests that malnourishment is not a significant factor in the risk of dental caries. Due to the obsessive personality traits seen in anorexic patients, it is likely that these individuals are more fastidious in their oral hygiene, which reduces dental caries as a risk compared to other complications such as dental erosion, explored later on. Although dental caries does not seem to arise as a direct issue, studies show that patients with anorexia had greater DMFS scores (decayed, missing and filled surfaces) than controls.
This is likely caused by previously discussed factors such as the consumption of low-calorie acidic drinks, rather than by the restricted dietary intake itself.

Bulimia seems to place individuals at a significantly greater risk of dental caries than anorexia. A study of 33 females showed that bulimics had more intense caries when compared to healthy, age- and sex-matched controls. Another more recently documented habit is CHSP (chewing and spitting), where an individual can seemingly “enjoy” the taste of certain foods by chewing the food for some time before spitting it out to avoid consuming any calories. One study shows that 34% of hospitalized eating disorder patients admitted to at least one episode of chewing and spitting in the month prior to admission. This habit can significantly increase dental problems by leading to cavities and tooth decay, presumably due to the high probability of excess residual carbohydrates. This assumption derives from the etiology of dental caries, which involves the action of acids on the enamel surface. When dietary carbohydrates react with bacteria present in the dental plaque, the acid formed initiates the decalcification of tooth substance and subsequently causes disintegration of the oral matrix. Abundant extracellular polysaccharides can increase the bulk of plaque inside the mouth, which interferes with the outward diffusion of acids and the inward diffusion of saliva. Since saliva has buffering properties and acts as a defence against caries by maintaining pH, interference with its supply reduces the defence against tooth decay. Dietary sugars diffuse rapidly through plaque and are converted to acids by bacterial metabolism. Acid is generated within the substance of plaque to such an extent that enamel may dissolve, and enamel caries leads to cavity formation. Binge eating or CHSP increases the acidity of plaque, since ten minutes after ingesting sugar the pH of plaque may fall by as much as two units. To support this scientific explanation, there is evidence supporting the association between carbohydrate intake and dental caries.
For example, the prevalence of dental caries decreased during WWII due to sucrose shortages and rose back to previous levels during the post-war period, following the increased availability of sucrose. Hopewood House (a children’s home) excluded sucrose and white bread from the diet: the children there had low caries rates, which increased dramatically when they moved out. Alongside this, intrinsic factors such as tooth position, tooth morphology and enamel structure also affect the risk of caries development, and these do not link directly to eating disorders because the variables differ throughout the whole population. However, an extrinsic factor that may reduce the incidence of caries is a greater proportion of fat in the diet, because phosphates can reduce the cariogenic effect of sugar. Since individuals with anorexia generally avoid foods with high fat content, they are unlikely to ingest the amount of phosphates necessary to reduce their risk of caries. All of this evidence relates to the significance of eating disorders (specifically bulimia) and the role they play in increasing the likelihood of caries through incidences of binge eating, CHSP, low fat intake and the consumption of acidic drinks high in sugar.
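To put the two-unit pH fall mentioned above in perspective, pH is a logarithmic scale, so a small numerical change corresponds to a large change in acidity. A rough worked calculation (the starting value of about 6.5 and the commonly cited critical pH of about 5.5, below which enamel begins to demineralise, are illustrative figures, not taken from the studies above):

```latex
\mathrm{pH} = -\log_{10}[\mathrm{H^{+}}]
\qquad\Longrightarrow\qquad
\frac{[\mathrm{H^{+}}]_{\text{after}}}{[\mathrm{H^{+}}]_{\text{before}}}
  = 10^{\,\mathrm{pH}_{\text{before}} - \mathrm{pH}_{\text{after}}}
  = 10^{2} = 100
```

So a fall of two pH units, for example from about 6.5 to about 4.5, corresponds to roughly a hundredfold increase in the hydrogen-ion concentration of plaque, taking it well below the level at which enamel starts to dissolve.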

Oral consequences of medication

After discussing dental caries, it is evident that saliva plays an important role in the maintenance of a healthy oral cavity. Twenty women with bulimia and 20 age- and gender-matched controls were studied, and the results showed that the unstimulated whole saliva flow rate (UWS) was reduced in the bulimic group, mainly due to medication. Although the UWS was affected, no major compositional salivary changes were found. This information is contrasted by another study, which found that bulimic patients did not present evidence of lower salivary flow rates but did have more acidic saliva. A third study was consistent with the first and found that both stimulated and resting salivary flow were poor amongst bulimic individuals compared to healthy controls. It also found that salivary pH levels were lower than in the control group but still within the normal range. Due to the range in findings and the limited sample sizes of the studies, these results are inconclusive in places and need to be interpreted with caution. However, it would make sense that habits that accompany eating disorders, such as fasting or vomiting, could cause dehydration and result in a lower UWS.

Although we are unable to determine a strong link between eating disorders and their effect on saliva, there is conclusive evidence to support oral reactions to medication. If an eating disorder has been diagnosed, selective serotonin reuptake inhibitors such as fluoxetine (a common antidepressant), anti-psychotics and anti-cholinergic medication may be prescribed. Smith and Burtner (1994) found that xerostomia (dry mouth) was a side effect of medications 80.5% of the time. Direct oral effects of xerostomia include diminishment or absence of saliva as well as alterations in saliva composition. These medications also have indirect effects on oral health by causing lethargy, fatigue and lack of motor control, which can impair an individual’s ability to practice good oral hygiene technique. The medications have anticholinergic or antimuscarinic effects which block the actions of the parasympathetic system by inhibiting the effects of its neurotransmitter, acetylcholine, on the salivary gland receptors; because acetylcholine cannot bind to its receptors, the salivary glands cannot secrete saliva. The reason this causes such an immense impact on oral health is the importance of saliva’s functions in the mouth. These include protection of the oral mucosa, chemical buffering (as mentioned previously when discussing dental caries), digestion, taste, antimicrobial action and maintenance of tooth integrity. Saliva contains glycoproteins that increase its viscosity and help form a protective barrier against microbial toxins and minor trauma, protecting oral health both chemically and physically. However, a study by Nagler (2004) found that in up to one third of cases xerostomia does not lead to a real reduction in salivary flow rate, so this is a limitation to consider. Patients with xerostomia may experience difficulty chewing, swallowing or speaking, and salivary glands may swell intermittently or chronically.
Physical defects include cracked, peeling lips, a smoothed, reddened tongue and a thinner, reddened oral mucosa (the membrane lining the inside of the mouth). There are links between xerostomia and previously discussed oral complications, as there was often a marked increase in caries among patients experiencing dry mouth, where tooth decay could be rapid and progressive even in the presence of excellent hygiene. Overall, the extent of the impact caused by eating disorders with respect to xerostomia and a decreased salivary flow rate is fairly minimal, for a few reasons. First, evidence related to salivary flow rate is inconclusive and there are several contrasting studies, so a confident assumption linking eating disorders to salivary flow rate cannot be made. On the other hand, there is strong evidence to suggest that xerostomia can be caused by medication, which can then affect the flow of saliva; however, in terms of eating disorders, the link is weak and not exclusive. This is due to the simple fact that medication is taken by a large proportion of the population for different conditions ranging from depression to heart disease. Therefore, eating disorders are not uniquely responsible for causing xerostomia. In addition, xerostomia is a secondary effect, because it is the medication that is responsible for the oral complication, not the psychological disorder. This gives reason to infer that eating disorders do not have a highly significant impact on this aspect of oral health.

Self-induced vomiting:

The most common symptom associated with bulimia is the binge-purge cycle. This involves an individual consuming large quantities of food in a short time period (binging), followed by an attempt to avoid gaining weight by self-induced vomiting or by taking laxatives (purging). Linked with the previously discussed issue of xerostomia, since laxatives are medication, frequent use will significantly increase a patient’s likelihood of alterations in saliva contents and flow rate, which can lead to more significant dental issues. A case study evaluates a 25-year-old female patient who had suffered from bulimia for five years. It was found that this particular individual vomited 5-7 times per day and suffered from swelling on both sides of her face and mandible (lower jawbone). Although her symptoms were painless, physical examination showed that she had bilateral enlargement of the parotid glands, despite a lack of tenderness in response to palpation. She had a reddened posterior pharynx and suffered enamel erosion on the lingual surfaces of her maxillary teeth, likely due to their direct contact with gastric acid and vomitus. After 2 weeks of no purging episodes, parotid swelling could no longer be observed, suggesting that this effect is reversible after a short period. However, after a 6-month gap, the patient returned with more severe bilateral swelling, submandibular gland enlargement and new complaints of tender, painful parotid glands. This suggests that persistent, frequent self-induced vomiting over a substantially extended duration can increase the severity of symptoms and reduce the chance of reversing them. Supporting this, when the patient was advised after this 6-month period to use warm compresses and tart candies, treatments that had previously worked, there was no decrease in her gland size.
This case study demonstrates the sheer significance of bulimia for dental health due to the severity of the consequences. Bulimic habits are difficult to change: even after working with a therapist, the patient experienced several relapses. Given the lack of improvement following the suggested treatments, the next potential option would be surgery to remove the patient’s parotid glands. This would have a serious and significant impact, as there are risks of morbidity and facial scarring. Moreover, after a certain threshold, changes are irreversible and require the intervention of invasive procedures.

The rapid ingestion of large amounts of food followed by forceful regurgitation may lead to trauma in the oral mucous membranes due to the insertion of fingers or foreign objects down the throat. This includes physical defects such as redness, scratches and cuts in the mouth, damage to the soft palate (the upper surface of the mouth), and the previously mentioned salivary gland enlargements, which involve the parotid glands and occasionally the sublingual and submandibular glands. Soft palate damage is often accompanied by cuts or bruises on the knuckles, the result of pressure on the skin from the teeth during an attempt to purge. However, enlargement of the salivary glands will only occur if a binge-purge cycle becomes regular and frequent. Bulimic episodes and certain types of food can increase exposure to gastroesophageal reflux, which often takes place during the night; the patient is therefore unaware of it, increasing the damage caused because no measures are taken to prevent repercussions.

As mentioned earlier in the case study, a common and significant effect of self-induced vomiting is the onset of dental erosion due to acidic contents coming into repeated contact with tooth surfaces. Over time, these chemical alterations can become permanent and have increasingly negative effects.


Dental erosion is arguably the most commonly experienced complication amongst individuals with eating disorders, especially bulimia. It can be defined as the “irreversible process of demineralization of the external layers of tissues of the tooth”. Through the effects of gastric acid, repeated purging wears away tooth enamel until it almost disappears and exposes the sensitive dentine, the layer beneath the enamel. Dietary erosion is also a significant factor and happens mainly due to excessive intake of acidic beverages, a clear issue as discussed earlier when exploring the dietary habits involved in anorexia. Microradiography has shown a gradual demineralisation of the surface enamel to a depth of about 100 micrometres. In a study comparing bulimics to healthy controls, results showed that the bulimia group experienced more dental erosion. There was a direct correlation between erosion and the duration of the disorder, meaning that longer durations can lead to a greater frequency and severity of dental erosion. Another study that supports these findings shows that patients with eating disorders had 5 times the odds of dental erosion compared with controls. For patients with self-induced vomiting, the erosion rate was 7 times higher. Overall, between 35% and 38% of patients with an eating disorder suffer from tooth erosion, but again, patients with self-induced vomiting are at greater risk.

Linking back to the association between the duration of the disorder and its severity, the erosive effects of regular vomiting are observable within the first 6 months. The effects are initially observed on the palatal surfaces of the maxillary anterior teeth, because this is where the acid makes contact. According to a different source, 86% of bulimic patients in a study had tooth erosion, compared to 0% of the non-vomiting group. Similar studies compared the severity of erosion with behaviour following a “purge”, which included rinsing with water, brushing the teeth or taking no action. The results showed a statistically significant difference in dental erosion severity and found that patients who brushed their teeth straight afterwards experienced a greater level of erosion. Brushing is most likely an attempt to remove the unpleasant aftertaste, or perhaps reflects a belief that it will reduce the damage done, despite it having quite the opposite effect. A regular binge-purge cycle causes perimylolysis: the decalcification of the teeth from exposure to gastric acid. This begins with smooth erosion of the tooth enamel, followed by the loss of enamel and eventually dentine on the lingual surfaces of the teeth. Both the chemical and mechanical effects of regurgitation cause this. After a duration of 2 years, the posterior teeth are affected, leading to the loss of occlusal anatomy through eroded surfaces. Tooth shape is affected, and an individual may suffer from an anterior open bite and loss of vertical dimension of occlusion.

Dental erosion wears away the natural protective barrier of teeth, leaving the dentine exposed. This can cause complications such as hypersensitivity to cold, hot and sweet food or drink. The occlusal changes can lead to pain resulting from jaw movements and potentially even trigeminal neuralgia (a chronic pain condition that affects the trigeminal nerve), which causes extreme pain from simple actions such as brushing the teeth, further deteriorating an individual’s dental health. However, there are findings that do not support the previous evidence. Teeth were taken from a deceased patient and, even after 4 years of daily regurgitation, an almost normal thickness of enamel was observed on at-risk surfaces. Calcium-, phosphate- and fluoride-rich crystalline deposits were found, showing that frequent and correct oral hygiene measures can substantially minimise the erosion of enamel. This evidence is a potential limitation because the results are specific to one case. However, we can conclude that there is evidence supporting the possibility of preventing dental erosion, even when an individual’s behaviour includes daily self-induced vomiting. This means that the effect of bulimia on oral health can be slightly less significant, because harmful effects with regard to erosion are avoidable. On the contrary, once erosion has developed, it is a permanent oral defect.


Acid eroded enamel is more susceptible to physical tooth wear than unharmed enamel which consequently places anorexic and bulimic patients at a greater risk. There are two types of physical tooth wear: attrition and abrasion.

Attrition is the loss of tooth surface due to tooth-to-tooth contact. Pathological attrition can result from bruxism: the action of excessive tooth grinding or jaw clenching, which can be attributed to eating disorders and to stress, itself highly prevalent in eating disorder patients. Attrition can lead to exposure of the dentine, which can cause hypersensitivity.

Abrasion is defined as the “pathological wearing away of tooth substance by the friction of a foreign body independent of occlusion”. Due to the excessive brushing habits previously mentioned, abrasion can be an issue for patients with eating disorders, particularly those who partake in SIV, because teeth with soft, demineralised surfaces are more susceptible to abrasion when brushed. Abrasion produces wedge-shaped grooves with sharp angles and highly polished dentine surfaces so, similarly to attrition, it causes a pronounced change in appearance.

The permanent change in appearance caused by tooth wear can lead to further psychological issues relating to self-image. This is a particularly sensitive issue for patients because of the body dysmorphia they experience, which is largely responsible for the onset and development of eating disorders.


After evaluating the different oral complications related to eating disorders, it is clear that there is a range in the extent of the consequences as well as differences in how greatly the eating disorder influences the onset of the oral complication. The most common symptom of eating disorders is general malnourishment due to a reduced dietary intake, which can cause several nutrient deficiencies. After analysing the common deficiencies affecting ED patients (calcium, vitamins A, B and C), we can conclude that the effects can be as serious as permanent loss of oral structures. However, a limitation of this conclusion is that every individual acquires different eating habits, so we cannot confidently say that all patients will suffer from dental complications caused by nutrient deficiencies. The varying diets mean that patients will experience different deficiencies and thus different oral health consequences of varying severity. Retention of plaque is common among ED patients, and this causes both dental caries and periodontal disease. Both of these problems can be avoided with correct dental care, but ED patients are at greater risk due to a higher prevalence of dental plaque. The extent of significance is fairly high, as there is an established link between plaque and these complications. Both occur in stages and develop into increasingly serious dental problems: periodontal disease develops from gingivitis, and dental caries starts in the enamel and dentine before cavities form. This decreases the significance because the issues are reversible up to a certain point. Once the development turns into an irreversible dental problem, the extent of the significance of eating disorders is greatly increased. Again, each individual will have differing levels of dental plaque; however, overall, eating disorders do increase the chances of dental caries and periodontal disease.
A significant cause of xerostomia is medication, including drugs prescribed to ED patients. Although this has an impact, patients with eating disorders are at the same risk of xerostomia as any other individual who has been prescribed such medications. Therefore, this is a secondary effect of eating disorders on oral health, and the extent of its significance is lower. SIV is common; it affects the salivary glands and increases acid erosion. Again, erosion is a permanent defect but, similarly to tooth decay and periodontal disease, it occurs in stages and develops slowly. Acid erosion increases the likelihood of physical tooth wear such as attrition and abrasion; therefore, eating disorders significantly impact the chance of patients experiencing tooth wear.

Eating disorders are hard to treat due to psychologically rooted habits that become hard to break. This means that patients will suffer for long periods of time, commonly years, and so the extent of the impact on dental health is much more significant: the longer an eating disorder lasts, the greater the development of permanent dental problems. If the complications are treatable or improvable, then the extent of significance is lower. In this case, many of the issues are reversible; however, it is likely that they will develop into an irreversible state, which makes the significance much higher. The stigma around mental health, alongside feelings of shame or guilt that a person may experience, will prevent them from going to the dentist or seeking medical help while the complications are less serious. This means that problems are more likely to develop into permanent damage before a patient seeks help. Additionally, mental health has only been studied in depth relatively recently, and there are many aspects we are unsure about, which means there are limitations when discussing how significant eating disorders can be for dental health. However, eating disorders have increased in magnitude, incidence and prevalence, which means that the subsequent oral complications will also increase in these categories. A greater frequency of incidence increases the significance of the impact of eating disorders on dental health. The most common complications discussed can ultimately lead to the extraction of teeth, which cannot be reversed. Therefore, eating disorders do have an overall significant impact on dental health by increasing the likelihood of complications developing. The extent of the significance will vary depending on individuals’ lifestyles, so we can conclude that the impacts are significant but their extent varies among individuals.


Global marketing strategies of Peeps

Organization strategy

Global marketing strategy for Peeps in Japan involves more than selling Peeps products there: it means building the entire process and creating habits, positioning and market share in the Japanese market. Therefore, Peeps needs to know its strategies and implement them effectively. Peeps needs to improve the effectiveness of its product; this is essential for the company because it is the most effective way to grow. Since the Japanese market is highly competitive, Peeps’ organizational strategies must fit this competitive market. However, implementing the strategies alone is not enough: because Japanese consumers are accustomed to high-quality products, Peeps also needs to apply effective quality management processes such as lean management and Six Sigma.

Knowing the market

Knowing the market is the most significant strategy that Peeps needs to master before entering the Japanese market. As soon as Peeps decides to enter the Japanese market, all research must be done carefully and in detail. The context must be understood perfectly, because every country has different behaviors and norms. If Peeps knows the market well, the company can reach a strong position in it.

Cost leadership

Offering the best prices will help make Peeps attractive from the consumer’s perspective. However, a cost leadership strategy needs careful analysis because the product must remain profitable. Once Peeps meets its profitability requirements, it needs to determine its quality standards, because Japanese consumers’ quality expectations are relatively higher than in the US.


Peeps needs to differentiate itself from competitors’ brands in Japan. Differentiation is a significant issue that allows Peeps to gain more market share there. The most visible method is product differentiation: since Peeps makes candies, visual appeal plays a significant role. Peeps could, for example, make Pikachu-shaped candies, which would help differentiate the brand in the Japanese market.

Price differentiation is the second most significant issue for Peeps. Since the Japanese market competes heavily on quality, Peeps needs to make its products high quality at affordable prices. The food industry in Japan is relatively more expensive than that of the US, so people in Japan are looking for high-quality products at affordable prices. As the new player in the Japanese market, if Peeps makes its products cheaper than its competitors, the company can easily differentiate itself in the short run.

Peeps strategic development areas:

Alignment visioning
Detailed planning
Governance management

Supply chain

Going through the global supply chain supports Peeps’ entry into the Japanese market, so managing the global supply chain is one of the keystones of success. There are many components in global supply chain management, such as customer service, customer transactions, the production line and employee engagement (Busse, 206). It integrates the management processes that connect the network of suppliers, manufacturers, warehouses and retail outlets, so that the right kinds of goods are sourced, supplied, produced and shipped in the right quantities, to the right locations, at the right time, and arrive in sound condition. To accomplish successful integration, flows of information (for example, purchase orders, shipping notifications, waybills and invoices), materials (including raw and finished products) and finances (payments and refunds) through the supply chain must be coordinated effectively.

What are the three major areas that successful supply chain management needs?

Supply chain management has an impact on the entire organization. Therefore, success in supply chain management requires a strong approach to three areas: people, process and technology (Wolf, 2011).


Employees are the keystone of supply chain management because workers form the core of a company. Therefore, Peeps must bring in employees who have the skills and knowledge to manage manufacturing, inventory control and the transportation of products. Peeps should communicate its strategic business vision to its employees. The company also needs a strategy to increase employee engagement and to show employees their role in the effectiveness of the business.


Companies create their supply chain management processes to increase customer satisfaction. Processes have different dimensions, such as sourcing, transportation and sales. Peeps needs to build effective processes into its supply chain management system; that will help Peeps improve its process-based total quality management abilities.


Technology has been growing rapidly, and technological developments help companies increase their effectiveness in every respect. For the supply chain, technology has a significant role and the power to connect people and processes. However, adoption and implementation are two major challenges for companies. Technological tools to stabilize supply chain management might be costly for Peeps at first. Nevertheless, Peeps must invest both financially and educationally in its employees, because supply chain management tools such as SAP, big data, data mining and AI require trained users. Therefore, Peeps needs to educate its employees and make sure they can use these tools.

Target audience profile

Many new entrepreneurs resist defining a target market because they want to be able to “sell to everybody”. They may feel that such a definition would limit their audience, and they want their product or service to be available to the broadest market possible (Hawlk, 2017). They do not want to reduce the number of potential customers who might walk through their door. However, determining the target audience profile is a significant step before running a business and investing (Dowhan, 2013). Targeting an audience profile helps the company avoid profit loss. Peeps needs to analyze its current customers scientifically, with surveys or evaluations. Once Peeps has the data from current customers, the company can adapt the target audience profile to the Japanese market. During the marketing communication strategy process, Peeps can develop new areas that will help in the Japanese market.

The demographic of Peeps buyers is interactive individuals between the ages of 15 and 40, of all genders, who are educated and socially engaged. Public influencers are a significant part of the target audience profile for Peeps, because they can advertise the brand and have positive effects on individual targets. The target audience profile is a significant step for Peeps because it will guide the business. As a producer, Peeps’ primary goal is to find the target market and to create an effective marketing campaign based on its target audience. However, Peeps needs to segment its target audience based on geographic location: it can collect data from the US and adapt it to the Japanese market. Based on our research, Peeps has determined that consumers expect high quality at a good price from a company.


Tesla strategy

Tesla is now a public company (IPO in 2010). I feel like Tesla is going for a cost-leadership strategy in that the company is purposefully trying to lower costs to make electric cars affordable to the general population. However, Tesla does not fit perfectly into either generic competitive strategy. Tesla produces very expensive cars and is trying to produce commercial cars. This tells me that Tesla is trying to preserve its image as a luxury brand, but make their cars more affordable to consumers, which is one of the company’s main objectives. Tesla differs from its competitors because of government regulations. Companies that produce fuel cars are subject to regulations, while Tesla has actually acquired credits for complying with regulations and commitments. Tesla’s business model is extremely different, as the government subsidizes the company in exchange for it complying with regulatory credits. Tesla is valued at 50 billion dollars but has racked up billions in debt. The idea behind Tesla is that when electric cars become mainstream, the company will begin to be profitable. Other companies do not have as much leeway selling this position to investors, and they are also not backed by government subsidies like Tesla is. Tesla’s business model is different because the company needs to take into account infrastructures, such as charging stations and batteries. This fits into the overall costs that Tesla charges and incurs.

Tesla went for the brand first: it produced an extremely expensive sports car, and now it is trying to move into the commercial market. Tesla has transformed into a company that offers cars through direct sales, service centers, and a supercharger network. It has gone from a seemingly futuristic company to one that is trying to pull the present into the future.

Tesla has been extremely successful in the past. The company has been able to leverage incremental innovation in order to continue to stay competitive. Tesla needs to be able to cross the chasm. Right now, most people do not want an electric car because they believe it is too technologically advanced (costly) for them.

Tesla should reduce its costs in order for it to reduce its debt and become profitable. This would allow for more shareholder wealth and also allow the company to normalize and integrate into the normal car market.

Yes, Tesla needs to be able to invest in its processes to figure out how to lower costs and be able to leverage itself into the market.

Tesla needs to be able to invest and construct more charging stations. Tesla can go into a strategic alliance with an electrical company to build charging stations around the globe. This will allow Tesla to cross the chasm, something it has yet to fully do.


Gasoline prices tend to fluctuate, and when they are relatively low Tesla sees a decline in sales.
Internal competition from car manufacturers who are trying to get into the electric car market.
Lowering costs to become price competitive and profitable.

Tesla can solve these challenges through the use of strategic alliances and the ability to lower the costs of electricity to charge the cars. Tesla needs to find a way to undercut the price of gasoline enough so that consumers are comfortable investing in an electric car.

Tesla has already done a great job of convincing the U.S. government to provide subsidies to the company in order to allow for more R&D and innovative strategies. Furthermore, Tesla should try to broaden its strategic alliance by vertically integrating itself on the supply chain inputs.

The costs of vertically integrating are always high if the vertical integration ends up not working. Tesla needs to ensure that it has the infrastructure in place in order to successfully vertically integrate down its supply chain. Tesla already is nearly fully integrated towards the top of the supply chain, as it holds auctions and direct sales of its cars. Tesla also has a terrific customer support center.

Tesla is currently a question mark, according to the BCG matrix. Tesla needs to leverage its first mover’s advantage and become a star if it is going to succeed. Tesla must invest more in its research and development, aiming to lower its operating costs in order to become a more profitable company. Right now, Tesla is operating with so much debt; investors will eventually not tolerate this and Tesla will be left behind for a company like Mercedes, which is in the midst of developing its own electric car.
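The BCG classification used above can be made concrete with a small sketch. The cut-off values (10% market growth, relative market share of 1.0) are the conventional textbook thresholds, and the sample inputs are illustrative, not actual Tesla figures:

```python
def bcg_quadrant(market_growth: float, relative_share: float) -> str:
    """Classify a business unit on the BCG growth-share matrix.

    market_growth:  annual market growth rate (e.g. 0.15 = 15%)
    relative_share: unit's market share divided by the largest competitor's share
    Conventional cut-offs: 10% growth, relative share of 1.0.
    """
    high_growth = market_growth >= 0.10
    high_share = relative_share >= 1.0
    if high_growth and high_share:
        return "star"
    if high_growth and not high_share:
        return "question mark"
    if not high_growth and high_share:
        return "cash cow"
    return "dog"

# A high-growth EV market combined with a sub-leader share -> question mark
print(bcg_quadrant(0.20, 0.6))  # prints "question mark"
```

On this reading, "becoming a star" means raising relative share while the EV market is still growing fast, which is exactly the window the paragraph argues Tesla must exploit.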

Build more charging stations in order to cross the chasm
Create more low-cost cars for cost-conscious consumers
Do more to lower operating costs by vertically integrating, which may cost a lot short-term but will pay off in the long-term
Be able to leverage the first-mover’s advantage.
Find a way to get past gasoline prices

Internal stakeholders need to be able to carry out non-waste, lean six sigma approaches. Outside stakeholders need to be able to be patient with the process of Tesla crossing the chasm. Stakeholders need to be able to remain calm in the face of change, and they need to be able to adapt to changing alliances and shifting supply chain allocations.

Tesla is currently being subsidized by the government for a great deal of money; it also remains attractive to investors, even with Elon Musk’s behavior. Tesla remains in a strong strategic position, but it needs to leverage it before another company copies and surpasses it.

Step 1: Hire a research firm to analyze the market around the world and induce corporate clients to join Tesla.
Step 2: Meet with potential partners for strategic alliances in order to get on the same page and understand the needs of both sides.
Step 3: Conduct testing on cars to see if there is an efficient option to deal with the gas price issue
Step 4: Thoroughly analyze companies in terms of the financials in ensuring you are partnering with a compatible company.
Step 5: Partner with the companies in order to gain a competitive advantage and beat out potential competitors.

Tesla needs to modernize the inner workings of the company. Its culture needs to be geared toward the three-pronged approach of people, planet, and profit. Tesla needs to push toward sustainability in order for its message to resonate with both employees and the public.

Tesla should aim to be across the chasm and into the early majority phase of electric car adoption. Tesla has the potential to grow its customer base and sales, but it needs to make its cars more accessible to the middle class.

The price of gas remains a wildcard. Whenever gas prices fall, demand for electric cars drops. According to some projections, the long run favors Tesla, since gas is expected to rise in price as the resource becomes scarcer.

This is a growing market; more and more companies, such as Mercedes and BMW, are trying to penetrate the electric car market. This poses a challenge to Tesla, as the competition it faces is in many cases backed by more resources and revenue. It matters because Tesla must compete not only with gas vehicles but with other vehicles within its own niche. Electric vehicles themselves do not seem new: in 1900s New York City, 30% of all vehicles were actually electric. I think the innovation is better described as architectural than disruptive; a truly disruptive example would be something like a flying car.


Bonding

Elements form compounds in order to become stable. Elements do not combine at random; many factors come into play, the most important being electronic configuration: each electron shell can hold at most 2n² electrons, where n is the shell number (1). Most non-metals achieve stability in two ways: by reacting with metals to create ionic compounds, or by forming molecular compounds with metalloids and other non-metals, known as covalent compounds (2). In most cases a full outer shell consists of eight electrons, except for hydrogen and helium, where a full outer shell holds two electrons. Fluorine is the most reactive element and therefore forms compounds most readily; only two elements do not form compounds with fluorine, namely helium and neon (3).
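The 2n² rule quoted above can be checked with a short sketch:

```python
def shell_capacity(n: int) -> int:
    """Maximum number of electrons in shell n (the principal quantum number)."""
    return 2 * n ** 2

# Capacities of the first four shells (K, L, M, N): 2, 8, 18, 32 electrons
for n in range(1, 5):
    print(f"shell {n}: {shell_capacity(n)} electrons")
```

The n = 2 case reproduces the familiar octet mentioned in the text, and n = 1 the two-electron full shell of hydrogen and helium.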

Ionic bonding is the bonding between a metal and a non-metal. It occurs when there is a transfer of electron(s), driven by the ionisation energy and the electron affinity of the two atoms. Ionisation energy is defined as the energy needed to remove one or more electrons from a neutral atom (4); electron affinity is defined as the energy released when one or more electrons are added to a neutral atom (5). Ionic bonds form between an element with a low ionisation energy and an element with a high electron affinity, so the transfer of electrons happens very easily. The atom with the low ionisation energy readily gives up one or more electrons to achieve a full outer shell, and the atom with the high electron affinity readily accepts those electrons to achieve a full outer shell of its own (6). Ionic structures typically have high melting and boiling points. They do not conduct electricity when solid but do when dissolved or molten: in the crystal the ions are locked in the lattice, but in the melt or in solution the ions become mobile and can carry charge. Ionic structures are hard but also brittle; if a force is applied, a layer of the structure shifts, like charges come to face each other and repel, and the structure springs apart (7). There are four main types of ionic crystal structure:

Rock salt type (NaCl): Cl- ions occupy the corners and face centres of the unit cell, while Na+ ions occupy the body centre and edge centres.
Zinc blende type (ZnS): each Zn2+ is surrounded by 4 S2- and vice versa.
Fluorite type (CaF2): each Ca2+ is surrounded by 8 F- and each F- is surrounded by 4 Ca2+. Anti-fluorite type: the arrangement is reversed, with each positive ion surrounded by 4 negative ions and each negative ion surrounded by 8 positive ions.
Caesium chloride type (CsCl): Cs+ at the body centre with Cl- at the corners, and vice versa (8).
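A compact way to summarise the four lattice types above is a lookup of their cation and anion coordination numbers. The fluorite, anti-fluorite and CsCl values come from the descriptions in the text; the 6:6 coordination of rock salt and 4:4 of zinc blende are the standard values implied by the atom positions given:

```python
# Coordination numbers (cation, anion) for the four ionic lattice types.
ionic_structures = {
    "rock salt (NaCl)":        (6, 6),
    "zinc blende (ZnS)":       (4, 4),
    "fluorite (CaF2)":         (8, 4),
    "caesium chloride (CsCl)": (8, 8),
}

def anions_per_cation(cation_cn: int, anion_cn: int) -> float:
    """For a formula AX_n, the coordination numbers satisfy
    cation_cn / anion_cn = n, the number of anions per cation."""
    return cation_cn / anion_cn

for name, (c, a) in ionic_structures.items():
    print(f"{name}: {c}:{a} coordination, {anions_per_cation(c, a):g} anion(s) per cation")
```

The ratio check reproduces the stoichiometry: fluorite's 8:4 coordination gives two F- per Ca2+, matching the CaF2 formula, while the 1:1 salts all give a ratio of one.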

Metallic bonding is the bonding between a lattice of cations and a sea of mobile, delocalised valence electrons (9). These cations have filled inner shells, each holding its maximum of 2n² electrons. There are two key conditions for metallic bonding:

A low ionisation energy
Sufficient valence electrons

The strength of a metallic bond is reliant on three things;

The number of valence electrons
The charge on the nucleus
The radius of the cation

Metallic structures all have very similar properties. They conduct electricity and heat, due to the delocalised electrons that move through the lattice carrying heat and charge. Because of the non-directional bonds within the structure, metals are malleable and ductile; non-directional means the bonds are not tied to specific electrons but to the collective sea of electrons surrounding the cations (10).

Both localised and delocalised bonds are present in covalent bonding. Localised bonds, better known as sigma and pi bonds, are formed by the overlap of two orbitals from two different atoms. A sigma bond is the overlap of two orbitals along the internuclear axis; a pi bond forms between unhybridised p-orbitals both above and below a sigma bond, giving two regions of overlap. Sometimes hybridisation occurs when atoms form molecules: the atoms create and occupy newly formed hybrid orbitals. There are several levels of this. The first is sp, where one s orbital mixes with one p orbital, giving a linear orientation. The next is sp2, which mixes one s orbital with two p orbitals to create three hybrid orbitals, each with 1/3 “s” character and 2/3 “p” character. The third level, sp3, mixes one s orbital with three p orbitals to create four hybrid orbitals, each with 1/4 “s” character and 3/4 “p” character. A delocalised bond or electron is one that has resonance and therefore no fixed position. The clearest example is benzene, where the ring contains no distinct double or single bonds but instead six delocalised electrons that give the structure increased stability. This picture is known as molecular orbital theory (11).
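The s- and p-character fractions quoted above follow directly from counting the orbitals mixed: one s orbital plus n p orbitals gives n + 1 hybrids, of which a 1/(n+1) share is “s”. A small sketch:

```python
def hybrid_character(num_p_orbitals: int) -> dict:
    """s and p character of an sp^n hybrid: one s orbital mixed with n p orbitals."""
    total = 1 + num_p_orbitals
    return {"s": 1 / total, "p": num_p_orbitals / total}

for label, n in [("sp", 1), ("sp2", 2), ("sp3", 3)]:
    frac = hybrid_character(n)
    print(f"{label}: s = {frac['s']:.2f}, p = {frac['p']:.2f}")
```

This reproduces the fractions in the text: sp is 1/2 s, sp2 is 1/3 s and 2/3 p, and sp3 is 1/4 s and 3/4 p.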

There are three main types of intermolecular force: ion-dipole interactions, hydrogen bonds and van der Waals forces. Ion-dipole interactions occur between an ion and a polar molecule: the charge of the ion lines up with the partial charge of the polar molecule, forming an interaction typically found around dissolved ionic structures. Hydrogen bonding is the strongest of these forces; it forms between a lone pair of electrons on an electronegative atom and a hydrogen atom that is bonded to N, O or F. This bond is very strong, but it weakens the bond between the hydrogen and the rest of its own molecule (12). Van der Waals forces are an umbrella term for four different types of force: dipole-dipole, ion-induced dipole, dipole-induced dipole and London dispersion, listed in descending order of strength. London dispersion is present in all molecules but is the only force in non-polar molecules. Van der Waals forces play a large part in determining the physical state of a substance (13).

Compounds are formed in order to become stable, through many different types of bonding. The strongest of these is the ionic bond: the four different ionic structures, forming different lattices, make it much stronger than the next strongest type, covalent bonding. Although valence bond theory is useful in explaining basic bonding, it gives no explanation of resonance, which gives rise to molecular orbital theory. The weakest type of intramolecular bonding is metallic bonding, due to its non-directional bonds. This type of bonding can also be explained using band theory, which I did not discuss in this essay but which gives a better explanation of the origin of metallic properties. Intermolecular forces are at play in the interactions between these different types of structures. They are found widely in various biological systems, most notably in the hydrogen bonding within DNA, where hydrogen bonds occur between two nucleotides that fold in various ways depending on how the H-bonds interact (14).


Improvement methodologies

It is essential to examine the possible definitions of the two basic elements. Improvement can be defined as “the process of a thing moving from one state to a state considered to be better, usually through some action intended to bring about that better state” (1). This general statement applies to almost all aspects and domains of human life, and it can be argued that our current state of civilization is the result of our improvement actions through time. Methodology can be defined as “the systematic, theoretical analysis of the methods applied to a field of study” (2) and as “a body of methods, rules, and postulates employed by a discipline, a particular procedure or set of procedures” (3) aiming to create better processes. The systematic study of improvement can be traced back to the early decades of the previous century (4)(5)(18). Most of the theories and practices derive their foundations from the business world (1), so it is inevitable to examine a selection of methodologies from that domain. For the purposes of this review, the broad term “improvement methodologies” will be used to cover all the various theories, approaches and aspects that engage with the subject.

According to the literature there is no commonly accepted structural distinction, so many theories and methods are grouped under different brands or umbrellas. The same label can refer to different meanings, terms are frequently used interchangeably, different methods can use the same tools, and approaches are interconnected. Continuous improvement, sometimes called continual improvement, is as a general notion “the ongoing improvement of products, services or processes through incremental and breakthrough improvements” (7), but it can also be defined “as a culture of sustained improvement targeting the elimination of waste in all systems and processes of an organization” (20). It can also refer exclusively to Kaizen (9) as a methodology, to an offshoot of existing quality initiatives (20), to a meta-process for most management systems (8), or even to a completely new approach (20). Some use “continual improvement process” as a philosophy wherein all activities of the business are constantly examined to weed out inefficiencies and find better ways of carrying out tasks (1), or, as Deming described it, “a set of improvement initiatives that increase successes and reduce failures” (20). Moreover, Kaizen as a philosophy is used both in the management field and in everyday life in Japan (19). Continuous improvement makes wide use of the plan-do-check-act (PDCA) cycle, a four-step quality model, and it can include Six Sigma, Kaizen and Lean (6), while others also include TQM (7). Quality improvement, as one of the four main components of quality management (4), can encompass a long list of almost all of the existing methods and theories. Lean thinking is a business methodology that also includes Kaizen, but as a scheduled, planned and controlled set of activities to improve the work within the normal working day.
Kaizen in that form is led by a teacher who makes sure PDCA is followed rigorously (15). Moreover, Focused Improvement (the Five Focusing Steps, known as the Process of On-Going Improvement (14) of the Theory of Constraints) is “an ensemble of activities aimed at elevating the performance of any system, especially a business system, with respect to its goal by eliminating its constraints one by one and by not working on non-constraints”, a principal activity of the complex organizational change process (12). Total Productive Manufacturing (TPM) “includes all activities that maximize the overall effectiveness of equipment, processes, and plants through uncompromising elimination of losses and improvement of performance” (cited in Suzuki 1994), and its objective is for equipment to perform as well every day as it does on its best day, with zero-loss production (11). It is also a process of applying systematic problem-solving methods to manufacturing while aligning the correct method to the correct scenario (13). Some consider Lean Thinking, Six Sigma and Business Process Re-engineering (BPR) the main techniques of Business Process Improvement, while anything falling outside these is referred to as process improvement, which includes TQM, ISO, the European Foundation Quality Model (EFQM), Kaizen and Benchmarking (17). The list of interconnections and variations certainly does not stop here, but it is already clear that the subject is murky and complex enough to create confusion. Seddon (2005) even claimed that the ‘titles’ of the various approaches can be both a distraction and dangerous (17).
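Since several of the methodologies above revolve around the plan-do-check-act (PDCA) cycle, one iteration of the cycle can be sketched in code. The process (a defect-rate metric), the target, and the measured outcome are all hypothetical, purely to make the four steps concrete:

```python
# Minimal PDCA illustration on a hypothetical defect-rate metric.
# All names and numbers are illustrative only.

def plan(current_defect_rate: float) -> dict:
    """Plan: set a target (here, a 10% reduction) and a proposed change."""
    return {"target": current_defect_rate * 0.9, "change": "adjust inspection step"}

def do(change: str) -> float:
    """Do: trial the change on a small scale; here we simulate the measurement."""
    return 0.045  # measured defect rate after the trial (simulated)

def check(measured: float, target: float) -> bool:
    """Check: compare the measured result against the target."""
    return measured <= target

def act(succeeded: bool) -> str:
    """Act: standardise the change if it worked, else feed lessons into re-planning."""
    return "standardise" if succeeded else "re-plan"

baseline = 0.05
p = plan(baseline)
measured = do(p["change"])
print(act(check(measured, p["target"])))  # prints "standardise"
```

The point of the sketch is the loop structure: the output of Act (standardise or re-plan) becomes the input to the next Plan, which is what makes the improvement "continuous" rather than a one-off project.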

Although some consider TQM the “mother of all” of the more recent process improvement methodologies (23), there is no widespread agreement as to what the essential elements of TQM are (25); the concept is quite subjective (26), and it encompasses many sorts of management practice, taking the form of various programs (27). Even though an exact common definition does not exist, it can be described as an organizational change programme focused on continuous improvement and based on customer-oriented definitions of quality (Joss and Kogan 1995: 37) (21), as a system of practices, tools and training methods for managing organizations in order to increase customer satisfaction (24), as a collaborative culture characterised by the same elements (24), or simply as a “management philosophy and business strategy” (Iles and Sutherland, 2001, p. 48) (22). The UK Department of Industry, now the Department for Business, Enterprise and Regulatory Reform, has offered a broader view of TQM. It expresses the view that “TQM is the way of managing for the future, and is far wider in its application than just assuring product or service quality – it is a way of managing people and business processes to ensure complete customer satisfaction at every stage, internally and externally.” (28)

The terms Total Quality Management (TQM), Continuous Quality Improvement (CQI) and Total Quality Improvement (TQI) are often used interchangeably (21), even though there have been dissenters from the view that the terms have the same meaning (22). In general, TQM is supposed to be built upon Deming’s 14 points (23). In all cases, some elements are constantly present in the TQM literature, such as:

1. Leadership engagement and Managerial responsibility for continuous improvement

2. Energetic employee involvement

3. Improvement as integrated ongoing activity within the organization

4. Focused attention on systems with continuous improvement

5. Emphasis on data collection and analysis, using statistics to measure process performance

6. Quality is the end result of complex but understandable processes

7. Organizational success depends on meeting customer needs, including internal customers

8. Most people are intrinsically motivated to try hard and do well in work


Re-(art)iculating the (His)tories of the South Seas Oeuvre

Introduction and background information

This master’s research proposal interrogates the visual representations of indigenous women in the Nanyang, or South Seas, oeuvre by male émigré artists based in Malaya in the 20th century. I consider the relationship between these artists and their artworks and the larger Southeast Asian cultural constructs of gender and sexuality that were borne out of the male colonial gaze. Hence, I re-assess Singapore’s art historical narratives through the lens of gender and post-colonial studies. I conceive my research to answer these questions: What was Nanyang art at the time of its historical emergence? How was it thought about, constructed, reviewed? Under what conditions was it produced, exhibited, and collected?

The search for the answer entails a careful historical account of Nanyang art and its relations with male colonial culture and its resultant gaze, one that takes into account the politics of representation that drives gender and post-colonial studies. I retain a sense of connection to the problematics of modernism, in particular the histories that modernist art historians and historiographers have neglected. What, then, does it mean to attach these émigré artists to the French/British Orientalist tradition, and to see their varying artistic styles, such as Impressionism and Cubism, as secondary matters? Critics have argued that the innovations of these styles were enabled by the prior experiences of travellers to the East. How would it skew the images of these artists to view them as part of the caravan of colonial art tourists, whose works were made possible by the Dutch annexation of Bali? What is required, then, is to re-contextualise Nanyang art from the periphery, and from the perspective of the fin de siècle art movements emerging out of Europe’s Belle Époque.

To most, the body of works in the Nanyang oeuvre by pioneer artists like Liu Kang, Cheong Soo Pieng, Chen Chong Swee and Chen Wen Hsi are regarded to be innovative and avant-garde with their Symbolist search for what is quintessentially Nanyang. Sabapathy explicates:

“…that Nanyang artists adopted an experimental approach, using styles and techniques derived from two sources: Chinese pictorial traditions, and the School of Paris… The diverse styles and techniques collectively known as the School of Paris, manifest the new — modern — status of the artist, and a fresh approach towards art activity. The artist assumes a heroic stature, heightened by a sense of individuality and self-determination, and desires freedom from institutional constraints. Art activity is viewed as a ceaseless ‘search for the new’, uninhibited by aesthetic dogmas, the demands of patronage, and the weight of tradition; in this search the artist ranges freely over the entire history of all art, including that from non-European cultures. Furthermore, the School of Paris was not the creation of Parisians alone; artists from other countries and centres contributed significantly towards its formation.”

In 1952, the four artists sojourned to Bali. A visiting Belgian artist, Adrien-Jean Le Mayeur de Mepres, had introduced the idiom of Post-Impressionism to the pioneering generation of Singapore artists when he exhibited in Singapore at the Young Men’s Christian Association (YMCA) in 1933, 1937 and 1941. More importantly, he instilled the idea of the ‘search for paradise’ typified by colonial artists of the past: the search for a visual expression and perception of Bali as an artistic haven. Like Gauguin, Le Mayeur went in search of the South Seas but found his paradise in Bali. This historic Bali trip occupies a canonical position in local art history for the influential role it played in the development of the Nanyang oeuvre. Articulated and propagated by Lim Hak Tai, the oeuvre’s thematic features, “the reality of the Southern Seas… [and] the localness of the place”, were further developed when the male pioneer artists journeyed to Bali. There, “the one context in Southeast Asia in which art and life appeared to be inextricably meshed… promised the availability of pictorial motifs and subject matter” .

While the works borne out of their Bali trip cover a wide range of pictorial subject matter, figure-types of Balinese women make up a substantial proportion of the works they produced, as they undertook the study of the female body there. In 1969, Chen Chong Swee recollected Le Mayeur’s exhibition of the 1930s in his writings:

“This Belgian artist originally wanted to go to Tahiti as he had a yearning for the type of life led by the Postimpressionist artist Gauguin. On his way there he passed through Bali and found that there was no place on earth like Bali -— its dancing and singing so soul-stirring and its women so vigorous and graceful…. It was around the summer of 1938 that he held a second art exhibition in Singapore…. I remember seeing many of his large landscape paintings done during his travels in India. His works were executed with free-flowing and bold, strong strokes, in bright and gay colours. Figures dominated his Bali paintings. His works, be they sketches done in light colours or bright-coloured oil paints, showed that they were inspired by the brilliant and clear tropical sunlight. His brightly-clad, energetic and graceful dancers, dancing to the beat of the drums and bells, or his women, kneeling beside the loom weaving sarong cloth, fully demonstrated the tranquil and fine life of the Balinese. Le Mayeur’s painting partner (who later became his wife), attired in traditional Balinese costumes, was on hand to receive guests. She offered herself bare-breasted for photographs. This created quite a stir in Singapore.”

This produced an iconography distinct to the oeuvre that would serve as inspiration for later generations of Nanyang artists, especially in their depiction of the human figure through identifiable figure-types, which varied with each artist’s stylisation of the figure. Their pronounced fascination with Bali and the figuration of Balinese women as pictorial subject matter, and the manner in which these women are portrayed against the tropical landscape, call into question the artist’s gaze behind these works. In the catalogue for the 1953 exhibition titled Four Artists to Bali, which presented the paintings produced from the Bali trip, Liu Kang declared, “Working in Bali is as good as working in Rome or Paris”, adding that “whoever hasn’t been to Bali can’t say that he has been to Southeast Asia… It is the Last Paradise” . Surprisingly, the available research and writing on these Bali paintings either glosses over or ignores this questionable male gaze; instead, local scholarship has romanticised the artists, their artworks and their attitudes towards their subjects. In a sense, Bali was reinvigorating, as it spurred their subsequent production of representations of Malayan women after painting semi-nudes of Balinese women. In light of the significance of these artworks within local art historiography, such representations of the brown female body, and the male gaze behind them, warrant a closer examination.

To others like me, the compositions of young, lithe, brown-skinned, indigenous girls and women, sometimes painted semi-nude, beg the question: are these male Nanyang artists any different from the likes of Paul Gauguin and his “primitivist gaze” ? Sabapathy himself drew parallels between the Nanyang artists’ sojourn to Bali and Gauguin’s Tahiti, where “Gauguin’s figurative compositions provided for these artists a schema which was congenial to their own aspirations in the creation of figure-types.” Regarded as one of Modernism’s greatest artists, this bohemian renegade broke free from the shackles of European bourgeois society and went on a soul-searching journey for creative liberation in the Pacific. He achieved immense success and fame as a Post-Impressionist giant (although mostly posthumously) through his compositions of nude, brown-skinned Tahitian girls in his self-constructed imagination of a primitive Edenic utopia. Like “a ‘poacher’ (Pissarro), like other French colonists, he wished to ‘replenish himself’ – his masculinity, his imagination and his purse – at the expense of the Tahitians” .

In the 19th century, French colonial propaganda presented the Pacific Islands, or the South Seas, through romantic visions of an exotic paradise in a bid to encourage French citizens to immigrate to the colonies. One of the ways the French Republic did so was through the 1889 Exposition Universelle in Paris, which was intended to showcase the progress of the Republic and glorify its colonial empire; major ‘exotic’ attractions included a “negro village”, Javanese performers and even a Mexican pavilion that featured a model of an Aztec temple. Elizabeth Childs explains that the exhibitions at the fair were set up for visitors to go from “one ‘colony’ to another, from one exotic spectacle of sight, sound, and smell to another [and] assured visitors of the Other’s distinctive difference and also extended the promise of seamless entry into the Other’s world” . Accordingly, Gauguin was compelled to set sail for Tahiti after attending this exhibition, where he had fallen in love with the exotic cultures on display.

At the Exposition Universelle, colonised people and their cultures were presented as a spectacle of the uncivilised for the civilised world. Among the archival materials of the exhibition is a photograph of a troop of ornamented Javanese women who danced for those who watched them. Such representations of women from colonised places as the exotic Other served only to objectify these women for consumption by the European (male) gaze, upholding and further fuelling the literary stereotypes that drove the colonial enterprise. More than a century later, scholars and critics have acknowledged the ugly reality of Gauguin’s controversial paintings. Re-articulated, Gauguin is re-cast as a fraudulent cad whose canvases laid bare sexual and racial fantasies savagely forged from a Western position of patriarchal, colonialist power. Not only did he satisfy his erotic and exotic fantasies with his pubescent lovers, he also consciously profited from the myth of the noble savage to create a demand and a market for his paintings back in Europe. As he penned to his friend Vincent van Gogh in 1890, “At the atelier of the Tropics, I will perhaps become the Saint John the Baptist of the painting of the future, invigorated there by a more natural, more primitive, and above all, less spoiled life”.

Statement of the problem and justification

This master’s thesis proposal attempts to trace and analyse the lineage of selected 20th-century Malayan émigré artists with regard to their Orientalist depiction and treatment of the female indigenous form and the landscape of South-east Asia. I challenge the dominant narrative that celebrates these artists’ representation of the brown female body within the landscape of the Nanyang. This thesis will examine the social and historical conditions that shaped the discourses of art which, in turn, shaped their aesthetic ideologies and artistic practices. These factors would have affected the reception of their artwork — how it was received and mediated — by the public art market, private patrons, and art institutions. Hence, how is the production of meaning and/or value tied to artworks in the pictorial realm connected to and affected by society’s production of power and subordination, which carries the legacies of imperialism and colonialism?

I will focus on re-reading the works of male artists within the Nanyang oeuvre. By examining the representation(s) of indigenous females, femininity, race, class, and religion, I question the politics of South-east Asian art history and how it is bound to Western tradition and tastes. Despite the geographical distance between South-east Asia and the West, the latter’s influence is inextricable from the visual, which is intertwined with the political. The reigning power positions control the very structure of visuality and its subsequent production of representations, both of which are shaped by economic and social systems as well. It is precisely this politics of vision that has determined not only how art history is represented but also who, what and how such a vision establishes meaning(s), as visuality and representation are invariably related to political, economic and social structures. Much of the work created, whether deliberately or not, appealed to a white European/Western audience, as this particular group had economic control of the market and dictated certain aesthetic tastes.

During the 19th century, these aesthetic tastes were bound to the flourishing European art movement of Orientalism. Highly fashionable, Orientalist paintings usually depicted richly colourful, sensual and exotic domains beyond Europe. Their artistic interpretations included simplified and demeaning depictions of the cultures of the ‘Near East’ — North Africa, Turkey and the Middle East. Such portrayals have had a lasting influence on the creation of a binary between East and West, as they incorporated detrimental views of non-European peoples and cultures. On this basis, Europe claimed social, intellectual and political superiority. These binaries undergirded colonialism and imperialism, functioning as visual propaganda and justification for Europeans’ perceived right to conquer and rule. Framed through the male colonial gaze, complex cultures were often reduced to primitive and exotic stereotypes.

The depiction of the female non-European by (male) European artists had its own particular currency. The vast majority of producers and consumers of art in European societies were men, and men therefore had full control over the representation and propagation of the image of the female and the ideals of femininity. The female non-European was, more often than not, cast in an exotic and erotic manner shaped around a patriarchal colonial agenda. Designed to provide titillation for the male colonial gaze, the perversion of the image of the harem became a conduit for the male European sexual fantasy. Within harem paintings, images of naked women served as passive exotic objects of consumption for the male voyeur. Associations of debauchery, lesbianism and sexual availability were rampant in these Orientalist paintings, fulfilling the male coloniser’s desires for domination and control through projections of the Orientalist fantasy.

Within the oeuvre, the expression of the female subject varies from artist to artist, with some works comparable to the traditional erotic and exotic 19th-century Orientalist paintings of women. However, to claim that the paintings in the South Seas oeuvre are blatant copies of, or completely inspired by, the likes of Paul Gauguin, Adrien-Jean Le Mayeur de Mepres or Pablo Picasso is a gross generalisation. I therefore suggest that it is necessary to investigate the differences in the motivations and meanings produced through these artworks, in relation to the particular functions of these female representations, which manipulate both Orientalism and Primitivism in a specific historical context for the artist’s benefit. Returning to the male colonial gaze adopted by these Nanyang artists, perhaps the more pointed charge against their treatment of the subject matter is not that it is merely scandalous, but that it is, at its very core, a reproduction of long-existing tropes of the female Other.

Scope of Research

Here, I define the ‘South Seas’ as the geographical area located around the Straits of Singapore and Malaysia, or Malaya, which has been established as the Nanyang. Gauguin’s ‘South Seas’ refers to the Pacific Islands, but the premise remains the same: the Pacific Islands and Pacific Islanders had been aestheticised through romanticised, classical references and comparisons to an earthly Garden of Eden filled with exotic, willing women. Bali, in particular, has been constructed as the quintessential site of Orientalism in South-east Asia, famed amongst both European and Asian male artists.

My proposed research therefore investigates the oeuvre within the canon of art historical scholarship and asks whether the racial, sexual, and class antagonisms of 19th-century French and/or British Orientalist concepts have filtered into the 20th century’s emerging practices of local male artists in Singapore, Malaysia, and Indonesia. My selected artists from Malaya are émigrés like Liu Kang, Chen Chong Swee, Chen Wen Hsi, Cheong Soo Pieng, and Lee Man Fong. Their works will be contrasted against those of Indonesian artists like Sudjojono, Hendra Gunawan, and Affandi, which differ vastly in their treatment of pictorial subjects. Not only did these antagonisms define and categorise humanity, they also influenced prevailing attitudes towards erotic and exotic imagery. I attempt to address the problem of the ideological content of such images of women, which rely upon an assumed metaphysical fascination with the exotic Other.

I question how and why these male artists chose the female form set against the tropics, taking the female form, a relatively traditional cult object, and transforming it into modern art’s cult object. By the 19th century, the male nude had been relegated to the background as the female nude emerged into the foreground, becoming an object of delectation not merely because of the allure and fetishisation of the female body, but also because of the constellation of political, economic and social structures then coming into being. The resulting consumerist impetus drove demand for, and emphasis on, female nudity within the Western artistic tradition of the period. Within art and art history, women were treated as passive objects of male desire, artistic mastery and commodification. While the search for a new visual currency was no doubt a factor for these artists, the impulses which drove their ideology have to be scrutinised in terms of formal and aesthetic traditions while being related to the real political, social and economic issues of that particular time period.

Implications/Benefits of the research

In recent decades, studies of gender in Southeast Asian societies have emerged concurrently with the development of more regionally focused Southeast Asian art histories. I situate my proposed research topic at the discursive intersection of these two fields. To do so, I utilise an inter-disciplinary approach spanning art history, visual studies, gender studies, history and post-colonial studies, together with a transnational framework, to establish linkages between the practice of these émigré artists and their European predecessors, who were the forerunners in utilising a male colonial gaze.

The absence of critical discourse surrounding the Nanyang oeuvre encourages my approach of addressing the differences in meaning within paintings of the South Seas oeuvre. As with Donald Rosenthal, the failure of modern and contemporary scholarship to address the differences in meaning of the paintings in historical and critical terms exposes a lack of understanding of Orientalism’s and Primitivism’s relevance and complexity in today’s world. I situate my research within the intellectual climate of post-colonial studies, in particular the works of the profound critics of colonialism: Frantz Fanon, Edward Said and Homi Bhabha. While ‘post-colonial’ refers temporally to the period after colonialism, former colonies are still grappling with the resonant ambiguities and complexities of the social and material effects of colonialism. As Al-Atas asserts, “…we cannot yet speak of alternative discourses if the mainstream is not engaged, critiqued and subverted or an alternative set of conceptualisations and theories presented.”

Refusing to recognise the highly political and seriously negative use of these two concepts is akin to dismissing the complex problem of their colonial connections and a denial of history itself; it neutralises the political implications of such Orientalist paintings and excuses these artists’ problematic attitudes. I therefore wish to avoid the genre of romanticised scholarship that subscribes to the notion of ‘art for art’s sake’ (l’art pour l’art). Such scholarship is representative of the traditional, uncritical art historical method whereby formal and aesthetic concerns are segregated from the complex historical context and legacies which shape the production of such erotic and exotic imagery. This has resulted in a limited viewpoint which does not attempt to understand or question the impact of ideology upon these artists’ conscious or unconscious production of such representations. I therefore intend to evaluate their choices of pictorial subjects and their works through alternative methodologies, examining in greater depth the neglected feminist art historiographical approach.

Proposed Methodology

While I will focus on the visual images of the Nanyang made by these selected artists, my concern is the representation of bodies and nature. In concentrating on the visual images of the exoticised and eroticised female Other(s) in a tropical paradise, I draw upon a selected array of primary sources, such as exhibited paintings by artists and images by photographers of the Indies and Malaya. My work aims to contribute to existing scholarship that analyses colonial representations of landscape and people as historical aspects of visual culture, specifically the ideals and fantasies that informed the artists’ notions. Viewing the images of landscapes and people against the historical context in which they were produced will be my main approach to the visual sources. I will thus draw upon theoretical and methodological paradigms from the multi-disciplinary field of visual culture studies.

With the “pictorial turn”, scholars have turned their attention to the world of images. However, historians and art historians privilege different approaches. While historians acknowledge that images do matter, they often “continue to treat visual material as illustrative, rather than constitutive, of the agendas and problems they explore”, privileging written texts alone as ‘evidence’ of cultural discourses and colonial mentalities. This has been reinforced by the theoretical dominance of the “linguistic turn”. The debate on the utility and value of visual studies for history hence continues: do images alone suffice to serve as primary sources, or are textual sources still required to support interpretive analyses?

Art historians, on the other hand, have long regarded visual sources as containing traces of the past and therefore as being as complex as textual sources such as literature, documentary archives and news media. Hence, I will employ both approaches to yield new insights into the representations of the landscape and people of the South Seas. By examining visual and textual primary sources together, I hope to contend that colonial images of the tropics, while dazzling the eye, veil the controversies of certain representations of the landscape and its peoples.

In An Introduction to the Interpretation of Visual Materials (2001), Gillian Rose begins by positing that the “interpretation of visual images must address questions of cultural meaning and power”. A “critical visual methodology…. [is] an approach that thinks about the visual in terms of the cultural significance, social practices and power relations in which it is embedded; and that means thinking about the power relations that produce, are articulated through, and can be challenged by, ways of seeing and imaging”. Thus, I forward my argument through theoretical interpretation and seek to establish linkages across different contexts and time periods through broad contextual analysis, arriving at the conclusion that these paintings not only cultivate a sense of nostalgia, one of the vehicles for idealising the tropics, but, more pertinently, cast their peoples as ‘Others’.

Literature Review

In 1971, Linda Nochlin published a ground-breaking essay entitled “Why Have There Been No Great Women Artists?”. This was a turning point in art history, bringing forth a radical feminist re-conceptualisation of the discipline: she argued against the meta-historical premise of ‘greatness’ and so-called ‘natural’ assumptions, proposing instead an alternative way of viewing art through its social coordinates. Nochlin challenged the semi-religious conception of the role of the male artist and male scholar in the constitution of the subject of art history. This was imperative in order to “encourage a dispassionate, impersonal, sociological and institutionally-oriented approach [which] would reveal the entire romantic, elitist, individual-glorifying and monograph-producing substructure upon which the profession of art history is based, and which has only recently been called into question by a group of younger dissidents.”

She would go on to produce The Politics of Vision: Essays on Nineteenth-Century Art and Society (1989), in which Nochlin undertakes a “revisionist project” whereby feminism is conceived as both theory and politics. Within this collection, “The Imaginary Orient” begins by referencing the 1982 exhibition and catalogue, Orientalism: The Near East in French Painting, 1800-1880. In the catalogue’s introduction, the crucial issue of Orientalist painting’s problematic associations with political domination and colonialist ideology is raised by the organiser of the exhibition, Donald Rosenthal. Rosenthal maintains that “the flowering of Orientalist painting… was closely associated with the apogee of European colonist expansion in the nineteenth century”, despite his acknowledgement of Edward Said’s critical definition of Orientalism: “…as a mode for defining the presumed cultural inferiority of the Islamic Orient… part of the vast control mechanism of colonialism, designed to justify and perpetuate European dominance”.

Said’s seminal Orientalism (1978) criticised western Orientalist discourse, characterising it as a tool for achieving western imperial hegemony. Establishing a clear link to Antonio Gramsci’s notion of “hegemony”, Said also draws upon Michel Foucault’s concept of power-knowledge relations. He argues that imperial governance is intertwined with western disciplines of knowledge, exposing the complicity of western knowledge with western power. Said’s argument harks back to Friedrich Nietzsche’s will to power, dispelling the objectivity of knowledge and positing that knowledge always serves some form of interest or unconscious purpose. Fundamentally, through what Said terms Orientalism’s “ontological and epistemological distinction”, a theory and practice were devised which divided the world into two unequal halves. Interiorising the notion of western superiority and of eastern inferiority, these supposedly essential differences eventually calcified into compulsions for westerners to undertake the “white man’s burden” of civilising subject races and saving native women from their savage male counterparts.

Rosenthal, however, immediately declines to pursue a Saidian analysis in his own study, asserting that “French Orientalist painting will be discussed in terms of its aesthetic quality and historical interest, and no attempt will be made at a re-evaluation of its political uses”. The Nanyang artists were treated similarly in Singaporean art historical scholarship. However, Alison Carroll’s Gauguin and the Idea of an Asian Paradise (2011) provides a clear framing of how problematic the Nanyang oeuvre is by linking the endeavours of these Asian artists to Gauguin’s legacy. McClintock’s notion of the porno-tropics encapsulates the European fascination with unknown peoples and cultures: “For centuries, the uncertain continents – Africa, the Americas, Asia – were figured in European lore as libidinously eroticized. Travellers’ tales abounded with visions of the monstrous sexuality of far-off lands, where, as legend had it, men sported gigantic penises and women consorted with apes… Renaissance travellers found an eager and lascivious audience for their spicy tales, so that, long before the era of high Victorian imperialism, Africa and the Americas had become what can be called a porno-tropics for the European imagination – a fantastic magic lantern of the mind onto which Europe projected its forbidden sexual desires and fears.”

This is further supported by David Arnold’s concept of “tropicality”, which posits that Europeans viewed the tropics much as they did the Orient: the tropics “represented an enduring alterity, but one which qualifies and extends the Orientalist paradigm, not least by demonstrating that historically Europe possessed more than one sense of ‘otherness’”. The notion of the tropics should therefore not be eclipsed by universalist scholarly notions of ‘cultural imperialism’ or ‘orientalism’ in which their historical and cultural specificities are ignored. This would be followed by Nochlin’s Representing Women (1999), in which feminism is conceived as an aesthetic and political commitment, emphasising the diversity and plurality of methods, perspectives, and opinions. Collectively, her writings ultimately work towards dismantling the phallicity of the master narrative.

John Berger’s Ways of Seeing (1972) offers a complementary insight: his analysis of the female nude, an important category of European oil painting, emphasises that the ‘spectator-owner’ was typically and ideally male, while the “object of vision: a sight”, the owned, was female. One therefore needs to be constantly mindful of the ideologies that underscore visual cultures, as “…women are depicted in a quite different way from men — not because the feminine is different from the masculine — but because the ‘ideal’ spectator is always assumed to be male and the image of the woman is designed to flatter him”. Within art history, Berger highlighted how this convention was a product of the naturalisation of the Western canon, and he ends with the declaration that the same objectifying qualities and assumed male perspective exhibited by traditional paintings of the female nude continue to manifest themselves in new ways through “advertising, journalism, and television.”

Picking up where Berger left off, Laura Mulvey’s Visual Pleasure and Narrative Cinema (1975) presents a second-wave feminist concept of the “male gaze”, derived from Jacques Lacan’s psychoanalytical term, the “gaze”:

“In a world ordered by sexual imbalance, pleasure in looking has been split between active/male and passive/female. The determining male gaze projects its phantasy on to the female form which is styled accordingly. In their traditional exhibitionist role women are simultaneously looked at and displayed, with their appearance coded for strong visual and erotic impact so that they can be said to connote to-be-looked-at-ness…”

Mulvey’s concept of the “male gaze” entered the cultural lexicon because her observation transcended cinema: in all forms of media, the male experience is held up as the norm; it is men who have agency, while women are merely objects of consumption. The domination of the male gaze over the art world was blatantly exposed by the Guerrilla Girls, an anonymous all-female collective. Their 1989 pop-art-inspired poster Do Women Have To Be Naked to Get Into the Met Museum? revealed that “less than 5% of artists in the Modern Art section (of the Metropolitan Museum of Art in New York) are women, but 85% of the nudes are female”.

The disparity between who is object and who is subject matters because, at its core, the male gaze is primarily about power. In the arts, where the currency is selling a narrative or having a discernible voice, it boils down to who gets to tell the narrative and in what way, and who must remain a silent character in the narrative told about them. The “male gaze” therefore refers to the representation of women by the male subject whereby the female body is rendered submissive, objectified and eroticised. Retrospectively, Mulvey’s “male gaze” can be used as a framework to interpret many revered works in the history of art, such as Titian’s Venus of Urbino (1538), Jean-Auguste-Dominique Ingres’ Grande Odalisque (1814), Paul Gauguin’s images of Tahitian women and the works in the Nanyang oeuvre.


Gestational Diabetes Mellitus (GDM)


Gestational Diabetes Mellitus (GDM) is one of the most common conditions affecting women during pregnancy (gestation), in which blood glucose levels rise above normal limits (Baz et al., 2015). Blood glucose levels are normally controlled by the hormone insulin (Kinalski et al., 2002), but some pregnant women develop glucose levels higher than the insulin secreted in their bodies can manage (Kleinwechter et al., 2014).

Diabetes Mellitus other than GDM can be one of two types: “type one”, in which the body does not secrete insulin at all, often called ‘juvenile diabetes’ (Toyoda, 2002), and “type two”, in which the body does not secrete enough insulin and/or there is insulin resistance (the cells do not react to insulin) (Toyoda, 2002).

GDM usually arises after 28 weeks of pregnancy, in the third trimester, and usually disappears after delivery of the baby (Association, 2010). Women with GDM are prone to the risk of preeclampsia and Caesarean section (Ross, 2006), in addition to developing type 2 diabetes later on (Dunne et al., 2003), so they should keep monitoring their blood glucose levels and manage them with their physicians on a regular basis (Kim, 2010). GDM generally presents very few symptoms and signs and is mostly diagnosed by screening tests of blood glucose levels, which are usually above normal limits in blood samples withdrawn during pregnancy (Kalelioglu et al., 2007).


About 3 to 10% of pregnant women are affected by GDM, depending on several factors (Chanprapaph and Sutjarit, 2004). If they are not treated, they may deliver infants at high risk of clinical problems, for example being larger than normal (macrosomia, which may cause delivery complications), jaundice and hypoglycemia (Kinalski et al., 2002). These can also lead to seizures or to the baby being born dead (stillbirth) (Kinalski et al., 2002).

GDM can be treated, and women can decrease these risks effectively by controlling the glucose levels in their blood (Erem et al., 2015). This control can be achieved by following a healthy eating plan and keeping active with physical exercise; if this does not work, anti-diabetic medications (the safest being insulin) become necessary (Erem et al., 2015).


GDM can also be defined as a degree of glucose intolerance during the gestation period (Buchanan and Xiang, 2005). This definition may indicate that a pregnant woman has not previously been diagnosed with diabetes mellitus, or that she developed diabetes mellitus simultaneously with pregnancy (Buchanan and Xiang, 2005). According to this definition, Diabetes Mellitus in pregnancy can be classified into two groups: Gestational Diabetes Mellitus (type A) and Pregestational Diabetes Mellitus (prior to pregnancy) (Association, 2010).

Furthermore, these two groups are classified according to their related risks and how they are managed (Association, 2010), with Gestational Diabetes Mellitus divided into type A1 and type A2. In type A1, the oral glucose tolerance test shows abnormal glucose levels, but fasting and two-hour postprandial (after a meal) levels are normal; following a healthy diet and practicing physical activity is sufficient to manage this type (Mellitus, 2005). In type A2, the oral glucose tolerance test shows abnormal glucose levels, and fasting and two-hour postprandial levels are also abnormal; management therefore necessarily includes the use of anti-diabetic medications such as insulin or other oral drugs (Abell et al., 2015).

Pregestational Diabetes Mellitus is also divided into many subtypes. Some are distinguished by the duration of the disease and the age of onset (Table 1), as in subtypes B, C and D (Association, 2010). Others are distinguished by the organs affected: subtype E, marked by calcified (rigid) vessels in the pelvic region; subtype F, which affects the kidney; subtype R, which affects the retina; subtype RF, which affects both kidney and retina; subtype H, which affects the heart; and subtype T, which arises before kidney transplantation (Association, 2010).

Subtype / Duration of disease / Age at onset

B / Less than 10 years / 20 years or older
C / 10 to 19 years / 10 to 19 years
D / More than 20 years / 10 years or younger

Table 1 Pregestational Diabetes Mellitus Subtypes


GDM develops because of the many hormonal and other changes occurring during gestation, when not enough insulin is secreted to control the rise in blood glucose levels and metabolize that glucose effectively. Insulin is a hormone formed in the pancreas; it helps the body use glucose for energy and helps control blood glucose levels (Kinalski et al., 2002). When insulin binds to its receptors, it activates several protein pathways necessary for glycogen and fatty acid synthesis, as well as glycolysis and the metabolism of carbohydrates and fats to supply energy to cells (Poulakos et al., 2015).

The exact mechanisms causing GDM are not yet known (Poulakos et al., 2015). It is thought that pregnancy hormones may interfere with insulin action by binding to its receptors and displacing it, a phenomenon called insulin resistance (Kahn, 2003). As insulin activates glucose influx into most cells, insulin resistance prevents this action, so glucose remains in the bloodstream and glucose levels rise (Vambergue et al., 2002). As a consequence, more insulin is needed to overcome this resistance, about 1.5 to 2.5 times more than in a normal pregnancy, to ensure a sufficient supply of glucose and nutrients to the growing fetus. Insulin resistance is thus a normal phenomenon arising in the second trimester of pregnancy, but it can progress further, to levels equivalent to those of type 2 diabetes (Becquet et al., 2016).

Placental hormones, such as cortisol and progesterone, may control the mechanism of insulin resistance during pregnancy; estradiol (an estrogen sex hormone), prolactin (luteotropin, or the milk hormone), placental lactogen (chorionic somatomammotropin), other placental hormones, TNFα (tumor necrosis factor alpha), resistin (an adipocyte-specific hormone), and leptin (the satiety hormone) are all also involved in the development of insulin resistance during pregnancy (Abell et al., 2015).


GDM risks affect both the mother and her baby. These risks are associated with, and increased by, unmanaged glucose levels that exceed normal limits. Treatment and good control of these levels can reduce many of the risks significantly (Lee et al., 2007). If not treated or managed, GDM can cause problems for the baby. Babies might be born with a body larger than normal, a condition called macrosomia, as extra glucose in the mother’s bloodstream crosses the placenta and stimulates the baby’s pancreas to secrete more insulin, which in turn makes the baby grow too large (Obstetricians and Gynecologists, 2000). Very large babies (weight > 4 kg) are prone to the risk of becoming stuck in the birth canal during vaginal delivery, causing problems like shoulder dystocia, in which the baby’s head passes through the vagina but the baby’s shoulder gets stuck behind the pelvic bone. Shoulder dystocia can be dangerous, as the baby may be unable to breathe easily while stuck (Draycott et al., 2008). These problems make a Caesarean section, or a decision for early delivery, preferable.

Babies also might be born early (preterm birth) with respiratory distress syndrome, as GDM increases the risk of labor and delivery before the due date. Preterm babies are prone to this syndrome because their immature lungs have not yet formed enough surfactant, which impairs respiration (breathing) (Baz et al., 2015). Babies suffering from this syndrome need respiratory care until their lungs mature and grow stronger (Brower et al., 2004). Babies also might be born with jaundice, in which the skin and the whites of the eyes take on a yellowish color. Jaundice usually disappears when the baby gets enough breastfeeding, with the help of phototherapy (Ross, 2006).

Babies also may develop hypoglycemia (low blood sugar) shortly after birth because their bodies secrete higher amounts of insulin. Severe hypoglycemia may provoke seizures in the baby, requiring intensive care and quick intervention with good feedings and administration of intravenous glucose solution to return blood sugar to normal levels (Cryer et al., 2003). Babies of mothers whose GDM was not treated or managed are at risk of developing type 2 diabetes and obesity later in life (Bellamy et al., 2009). Untreated GDM can also lead to the baby’s death, either before birth or shortly after it (Bellamy et al., 2009).

If not treated or managed, GDM may also increase the mother’s risk of developing high blood pressure (hypertension) and elevated levels of protein in the urine (proteinuria), a condition called preeclampsia (Redman and Sargent, 2005). Preeclampsia usually occurs during the second half of pregnancy or the third trimester. If it is not treated, it can cause many problems for both mother and baby and may lead to death. The only way to cure preeclampsia is to give birth, or to deliver the baby early (preterm birth) by Caesarean section (Redman and Sargent, 2005, Sibai, 2003). If preeclampsia develops earlier, the mother may need bed rest and medicines, possibly requiring hospitalization for adequate care of both her and the baby (Redman and Sargent, 2005).

GDM may also increase the mother's risk of depression. Depression, in turn, can leave her exhausted, sad, nervous, or unable to cope with the changes she is facing (Musselman et al., 2003). She may also develop type 2 diabetes, with all of its related problems, later on (Dunne et al., 2003); Table 2 summarizes these complications.

Fetal complications                          Maternal complications

Fetal distress / fetal death                 Diabetic retinopathy
Birth injury due to shoulder dystocia        Diabetic nephropathy
  and macrosomia                             Diabetic ketoacidosis
Delayed fetal development                    Hypoglycemia (when using insulin)
Neonatal hypoglycemia                        Spontaneous abortion
Neonatal hyperbilirubinemia                  Premature birth
Neonatal hypocalcemia                        Pregnancy-induced hypertension
Neonatal polycythemia                        Hydramnios
Respiratory distress syndrome
Hypertrophic cardiomyopathy
Obesity / diabetes later in life

Table 2 Maternal and fetal complications in pregnancies with carbohydrate intolerance

Risk factors

Every woman should seek health care early, if possible when she first thinks about trying to get pregnant, so her doctor can evaluate her risk of developing GDM. If she develops it, she may need further screening and checkups. These are most likely to occur during the third trimester (the last three months) of pregnancy, when the doctor will monitor the mother's blood sugar level and the baby's health (MacNeill et al., 2001).

The most common risk factor is Polycystic Ovary Syndrome (PCOS), a common endocrine disorder that develops in women of child-bearing age. It is characterized by enlarged ovaries containing small fluid-filled collections (follicles), which can be seen during ultrasound examination. It may lead to infrequent or prolonged menstrual periods, intense hirsutism, weight gain, and acne (Toulis et al., 2009). Other factors include a past history of GDM, glucose intolerance, previous diabetes, or abnormal fasting glucose levels.

Risk also increases when a first-degree relative has a history of type 2 diabetes; with older maternal age, especially for women over 35, and even more so over 45, years of age (Di Cianni et al., 2003); and with ethnicity, where non-white groups carry the highest risk, including people of African, Pacific Island, Caribbean, Hispanic, Native American, and South Asian descent (MacNeill et al., 2001). Further factors are a previous pregnancy that delivered a macrosomic baby (weight > 4 kg), a poor obstetric history (Di Cianni et al., 2003), smoking, and obesity, meaning excess body fat with a body mass index (BMI) of 30 or higher (Mokdad et al., 2003). Genetic factors also play a role: at least 10 genes carry polymorphisms associated with increased GDM risk, the most notable being the TCF7L2 gene (Zhang et al., 2013).
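The BMI cut-off mentioned above can be expressed as a short calculation. The sketch below is purely illustrative (the function names are not from any cited source) and simply encodes BMI as weight divided by height squared, with the obesity threshold of 30:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def is_obese(weight_kg: float, height_m: float) -> bool:
    """Obesity as used above: a BMI of 30 or higher (Mokdad et al., 2003)."""
    return bmi(weight_kg, height_m) >= 30

# e.g. 85 kg at 1.65 m gives a BMI of about 31.2, above the threshold
```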

Risk factors, and even symptoms, are not demonstrable in about 40 to 60 percent of women with GDM; therefore, all women should be screened. Other women may experience some of the common symptoms of diabetes, such as fatigue, increased urination, nasal congestion, thirst, blurred vision, nausea and vomiting, fungal infections, and urinary tract infections (MacNeill et al., 2001).

Most women who have well controlled GDM deliver healthy babies. However, GDM that’s not carefully managed can lead to uncontrolled blood sugar levels and cause problems for both the mother and the baby, including an increased potential for C-section delivery (Jensen et al., 2001).

Diagnosis & Screening

Blood tests are commonly used for diagnosing GDM. There are many screening and/or diagnostic tests for detecting high levels of plasma or serum glucose, as per the WHO diagnostic criteria (Table 3).

Condition                     2-hour glucose   Fasting glucose   HbA1c
                              (mg/dl)          (mg/dl)           (mmol/mol)   (DCCT %)

Normal                        <140             <110              <42          <6.0
Impaired fasting glycaemia    <140             ≥110 & <126       42–46        6.0–6.4
Impaired glucose tolerance    ≥140             <126              42–46        6.0–6.4
Diabetes mellitus             ≥200             ≥126              ≥48          ≥6.5

Table 3 WHO diabetes diagnostic criteria
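The cut-offs in Table 3 can be summarized as a small decision rule. The following sketch is illustrative only (the function name is hypothetical); it uses the fasting and 2-hour values from the table and omits HbA1c for brevity:

```python
def who_classify(fasting_mgdl: float, two_hour_mgdl: float) -> str:
    """Classify glycaemic status from fasting and 2-hour glucose (mg/dl)
    following the WHO cut-offs listed in Table 3."""
    if fasting_mgdl >= 126 or two_hour_mgdl >= 200:
        return "diabetes mellitus"
    if two_hour_mgdl >= 140:
        return "impaired glucose tolerance"
    if fasting_mgdl >= 110:
        return "impaired fasting glycaemia"
    return "normal"
```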

Non-challenge blood glucose tests measure glucose levels in blood samples without requiring the person to drink a glucose solution; the blood glucose level is determined when fasting, two hours after a meal (postprandial), or at any random time. In contrast, challenge tests measure glucose levels in blood samples after the person drinks a glucose solution (Mellitus, 2005).

In the non-challenge blood glucose test, GDM is diagnosed when the plasma glucose level is higher than 126 mg/dl after fasting, or over 200 mg/dl at any random time, confirmed on the following day; no further testing is required after that (Nielsen et al., 2012). It is usually performed at the first antenatal visit. Its advantages are simple administration and low cost; its disadvantages are low performance, low specificity, moderate sensitivity, and a high false-positive rate (Nielsen et al., 2012).

In the screening glucose challenge test (O'Sullivan test), GDM is suspected when the plasma glucose level is 140 mg/dl or higher one hour after drinking a solution containing 50 grams of glucose (Palmert et al., 2002). It is done between gestational weeks 24–28. Its advantages are that no previous fasting is required and it is simple and inexpensive; a disadvantage is that the glucose solution can cause nausea in some women, so artificial flavors may be added.

The oral glucose tolerance test (OGTT) is usually done in the morning after an overnight fast (8–14 hours); the person must have had an unrestricted diet and physical activity during the previous three days. The person drinks a solution containing 100 g of glucose, and blood samples are withdrawn to measure glucose levels at the start and one, two, and three hours thereafter (Stumvoll et al., 2000). A value is considered abnormal when the fasting blood glucose level is greater than or equal to 95 mg/dl, the 1-hour level is greater than or equal to 180 mg/dl, the 2-hour level is greater than or equal to 155 mg/dl, or the 3-hour level is greater than or equal to 140 mg/dl.
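The OGTT thresholds above can be collected into a small checking routine. The sketch below is illustrative only; in particular, the convention of requiring two or more elevated values for a positive test (the common Carpenter-Coustan rule) is an assumption not stated in the passage above:

```python
# Thresholds (mg/dl) for the 100 g, 3-hour OGTT described above
OGTT_THRESHOLDS = {"fasting": 95, "1-hour": 180, "2-hour": 155, "3-hour": 140}

def elevated_values(readings):
    """Return the time points whose reading meets or exceeds its threshold."""
    return [t for t, limit in OGTT_THRESHOLDS.items() if readings[t] >= limit]

def ogtt_positive(readings):
    """Assumes the common convention that two or more elevated values are
    required for a positive test (an assumption, not stated in the text)."""
    return len(elevated_values(readings)) >= 2
```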

Urinary glucose testing measures urine glucose levels, which are high in women with GDM. The dipstick test, a strip containing a reagent for detecting glucose in urine, is widely used, although it performs poorly: the sensitivity of glucosuria for GDM in the first two trimesters is only around 10% (Goldstein et al., 2004).


Prevention

Several measures can lower the risk of developing GDM (Ratner et al., 2008). These include losing extra weight: some weight gain during pregnancy is normal and good for the baby's health, but gaining too much weight too quickly may increase the risk of GDM (Ratner et al., 2008). Increasing physical activity before pregnancy is also effective for preventing GDM (Sanabria‐Martínez et al., 2015), as is stopping smoking, monitoring blood glucose levels regularly, and following a healthy eating plan: more grains, fruits, and vegetables, and less fat and fewer calories (Kim et al., 2007).

A healthy eating plan is an important part of managing GDM (Reader et al., 2006). Following a healthy eating regimen or diet helps keep blood glucose levels within normal limits, provides adequate nutrition for the mother and growing fetus, and achieves the necessary weight changes during pregnancy (Kim et al., 2007). Women with GDM are encouraged to eat small amounts and keep their weight healthy (Reader et al., 2006), to eat carbohydrate at every meal, to eat foods that provide the nutrients especially needed during pregnancy, to eat foods with a high fiber content, and to avoid foods and drinks with large amounts of sugar or a high glycemic index, preferring low-glycemic-index options such as Basmati rice.

Carbohydrates are metabolized into glucose, which is then used to produce energy. To control glucose levels well, it is necessary to distribute carbohydrate over three small meals and snacks daily (Zhang et al., 2006a). Foods containing carbohydrate include milk, yogurt, cereals, multigrain breads, legumes such as red kidney and baked beans, rice (Basmati), pasta, noodles, fruits, corn, potato, and sweet potato. Sucrose (table sugar), fruit juices, soft drinks, cakes, and biscuits have low nutritional value (Zhang et al., 2006a).

Intake of fats, especially saturated fat, should be limited. Healthy fats should be used instead: for example, polyunsaturated oils and margarines, canola oil, olive oil, avocados, and unsalted nuts. To reduce saturated fat intake, low-fat dairy foods and lean meats should be chosen, and processed and takeaway foods should be avoided (Liang et al., 2010).

Protein should be served twice a day in small amounts, because of its importance for the growth of the fetus and the maintenance of the mother's health. Sources include lean meat, eggs, milk, low-fat cheese, and fish (Zhang et al., 2006b; Kim et al., 2007). These foods do not directly affect blood glucose levels (Zhang et al., 2006b).

Calcium and iron are increasingly required as pregnancy progresses, so each should be served twice daily. (For calcium, one serve is equivalent to 200 g of yogurt, 250 ml of milk, or 2 slices of cheese.) Iron from red meat, chicken, and fish is readily absorbed (Zhang et al., 2006b). In general, any nutritious food that does not cause weight gain or raise blood glucose levels can be eaten freely; examples are fruits and vegetables (except corn, beans, potato, and sweet potato, mentioned above) (Zhang et al., 2006b).

Water is considered the best drink for the body; it can be served with fresh lemon for variety. Sugar-free or diet drinks are preferred for people with diabetes (Gray-Donald et al., 2000). However, products containing caffeine and carbonated soft drinks can increase the risk of osteoporosis and alter mood, so they should be consumed only in small amounts (Gray-Donald et al., 2000). Alcohol is forbidden, as it harms the baby. Alternative sweeteners, such as sucralose, aspartame, and acesulfame potassium, are also preferable to natural sugars. For a sample food plan, see Table 4.

Breakfast
  Choose from: ½ cup untoasted muesli / All Bran® / rolled oats (raw); OR 1–2 slices of toast (multigrain, soy & linseed, wholemeal, white, or heavy fruit bread); OR 1 slice of toast with ½ cup baked beans.
  Plus: 250 ml low-fat milk, OR 100 g low-fat fruit yoghurt, OR 200 g artificially sweetened yoghurt.

Morning tea
  Choose from: 4 Vitaweats® with a small amount of reduced-fat cheese; OR ½ English muffin; OR 1 slice of toast with a small amount of reduced-fat cheese.
  Plus: 1 serve of fruit (1 apple, 1 pear, 1 small banana, 2 kiwi fruits, 4 apricots, ½ cup tinned fruit, or 2 tablespoons sultanas).

Lunch
  Choose from: 2 slices of bread; OR 1 medium bread roll; OR 2/3 cup cooked rice (Basmati/Doongara); OR 1 cup pasta/noodles; with tuna, salmon, fresh chicken, egg, roast beef, or reduced-fat cheese.
  Plus: plenty of salad or cooked vegetables (other than potato or corn), PLUS 1 serve of fruit.

Afternoon tea
  Choose from: 250 ml low-fat milk; OR 100 g low-fat yoghurt; OR 200 g artificially sweetened yoghurt.
  Plus: 1 slice heavy fruit loaf, OR 1 crumpet, OR ½ English muffin.

Dinner
  Choose from: 2/3 cup cooked rice (Basmati); OR 1 cup pasta/noodles; OR 1 medium potato and a small corn cob.
  Plus: a small serve of lean meat, fish, chicken, or tofu, with plenty of salad or cooked vegetables, PLUS 1 serve of fruit.

Supper
  Choose from: ½ cup low-fat custard; OR 2 small scoops of low-fat ice cream; OR 100 g low-fat yoghurt; OR 200 g artificially sweetened yoghurt.
  Plus: 1 serve of fruit.

Table 4 Sample food plan

Moderate-intensity physical activity is recommended for women with GDM, as it can help control glucose levels and lower insulin resistance; however, it is best to check with a physician before starting any activity during pregnancy (Sanabria‐Martínez et al., 2015). Regular exercise, such as walking, increases the mother's fitness, prepares her for the delivery of her baby, and helps keep blood glucose levels under control (Sanabria‐Martínez et al., 2015).

Regular walking can be encouraged by using a pedometer (step counter), standing and moving more in the kitchen, taking the stairs instead of the elevator, walking to faraway stores for shopping instead of using the car, forming a 'walking group' with family or friends at a regular time, and taking up gardening (Artal et al., 2007).

A pregnant woman's glucose levels usually return to normal after delivery, but she remains at increased risk of developing type 2 diabetes later in life (Retnakaran et al., 2007). To decrease or delay this risk, it is recommended (Ross, 2006) to achieve and maintain a healthy weight by eating balanced, healthy, and nutritious foods, as previously mentioned; to practice physical activity for at least 30 minutes on most days to reduce any extra weight; and to keep checking glucose levels regularly, at least every 1–2 years (Vijan, 2010).

Treatment & Management

Recent studies indicate that good management and treatment of GDM can reduce its complications (Buchanan and Xiang, 2005). Primary complications for the baby include death, bone fracture, nerve palsy, and shoulder dystocia; primary complications for the mother include the need for premature and/or cesarean delivery. These complications were significantly fewer after treatment, and the need for cesarean deliveries was reduced.

The purpose of treatment is to reduce the risks of GDM for both mother and child. Controlling glucose levels can lower fetal complications (such as macrosomia) and improve maternal health (Artal et al., 2007). If a healthy diet, physical exercise, and oral medication are not enough to maintain glucose levels within normal limits, then treatment with insulin becomes necessary (Westermeier et al., 2015).

Counseling before pregnancy is always a good path to a healthy lifestyle (Artal et al., 2007). Most women can manage their GDM with healthy dietary changes and exercise, as mentioned above. Self-monitoring of blood glucose levels is an important guide for therapy (Saudek et al., 2006). The primary treatment goal is to achieve normal blood glucose levels (Table 5).

Test                    Glucose level (mg per dL)

Fasting                 Less than 96
One hour after meal     Less than 140
Two hours after meal    Less than 120 to 127

Table 5: Treatment goals for Women with Gestational Diabetes
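A self-monitoring log can be checked against the goals in Table 5 mechanically. The sketch below is illustrative only; since the two-hour goal is given as a range (120 to 127 mg per dL) in the source, the stricter bound of 120 is assumed here:

```python
# Upper limits (mg/dl) from Table 5; the 2-hour goal is a range in the
# source, so the stricter bound (120) is assumed for this sketch
GOALS = {"fasting": 96, "one_hour": 140, "two_hour": 120}

def within_goals(readings):
    """True when every self-monitored reading is below its goal."""
    return all(readings[test] < limit for test, limit in GOALS.items())
```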

Some women need anti-diabetic drugs, most commonly insulin therapy (Artal et al., 2007). Self-monitoring can be performed with a handheld capillary blood glucose measurement device (Tang et al., 2000). Blood glucose is most commonly tested on waking in the morning (fasting) and two hours after each meal.

If monitoring shows that glucose levels cannot be maintained within normal limits by these means, or if there are complications such as macrosomia, treatment with insulin becomes necessary (Westermeier et al., 2015). Fast-acting insulin is commonly injected just before meals. Care must be taken to avoid low blood sugar (hypoglycemia) from injecting excess insulin (Westermeier et al., 2015).

Certain oral anti-diabetic drugs may be safer, or less dangerous, for the growing fetus than poorly controlled diabetes (i.e., the lesser of two harms) (Zhu et al., 2016). Metformin is better than glyburide. If glucose levels cannot be controlled sufficiently with a single drug, a combination of metformin and insulin is better than insulin alone (Ashoush et al., 2016). As an oral drug, metformin is preferred to insulin injections; it also helps treat polycystic ovary syndrome during pregnancy, one of the risk factors for GDM, lowers the need for insulin, and helps the mother gain less weight (Song et al., 2016).

Medical Interview

When a pregnant woman is examined for the first time, her doctor should take a history concerning diabetes: whether she developed it in a previous pregnancy and whether she is at risk for GDM (Ito et al., 2015). Screening tests are done, and if they give positive results, the GDM diagnosis is confirmed. She is then referred to a hospital or care center for further checks. These include measuring blood pressure, weight, and heart rate every day during hospitalization; measuring uterine fundus height once every week; examining the pelvis for signs of premature birth; collecting a vaginal culture; and performing blood tests and urinalysis (Kilgour, 2013). A pregnant woman with GDM may further be examined for glycoalbumin and HbA1c once per month; anti-glutamate decarboxylase, anti-insulin, and islet cell antibodies once in early pregnancy; urinary protein and urinary glucose twice every month; urine ketone bodies and albumin once every month; and creatinine clearance to check for diabetic nephropathy (Kim et al., 2007).

Baby growth is measured using ultrasound, where the head circumference or biparietal diameter, femur length, and abdominal circumference of the baby are examined at suitable intervals. If the baby appears much larger than standard limits, terminating the pregnancy with an early delivery may be considered. Ultrasound is also used to screen for congenital anomalies (spine, nervous system, etc.), to check the amount of amniotic fluid, and to assess the overall well-being of the baby (Naylor et al., 1996).

When insulin therapy is considered, special care should be taken with the insulin doses required during pregnancy, delivery, and after birth, as they differ significantly. The need for insulin roughly doubles by the end of pregnancy, decreases during the first stage of delivery, increases slightly in the second stage, and finally decreases rapidly after birth (Itoh et al., 2016).

During delivery, an electrolyte solution containing 5% glucose is administered to the patient at a rate of 100–120 ml/hr, and insulin is administered intravenously through an infusion pump. Blood glucose is measured at 1–2 hour intervals (Hiden et al., 2012).
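As a worked check of the infusion rate above: a 5% solution contains 5 g of glucose per 100 ml, so 100–120 ml/hr delivers about 5–6 g of glucose per hour. The helper below is purely illustrative (the function name is not from any cited source):

```python
def glucose_g_per_hour(concentration_pct: float, rate_ml_per_hr: float) -> float:
    """Grams of glucose delivered per hour; 'percent' means grams per 100 ml."""
    return concentration_pct / 100 * rate_ml_per_hr

# A 5% solution at 100-120 ml/hr delivers roughly 5-6 g of glucose per hour
```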


GDM generally resolves once the baby is born. According to different studies, the likelihood of developing GDM in a second pregnancy, when the first pregnancy involved GDM, is between 30 and 84%, especially within one year of the previous pregnancy, depending on ethnic background (Nohira et al., 2006).

Women with GDM are at increased risk of developing type 2 diabetes mellitus in the future (Nohira et al., 2006).

This risk is highest for women with anti-glutamate decarboxylase, anti-insulin, islet cell, and/or insulinoma antigen-2 antibodies; women who had more than two previous pregnancies; women who were obese; and women who needed insulin to treat their GDM, who have a 50 percent risk of developing diabetes within five years (Lee et al., 2007). Their children also have an increased risk of obesity in childhood and adulthood, as well as type 2 diabetes and glucose intolerance later in life (Lee et al., 2007).


Gestational Diabetes Mellitus (GDM) is one of the most common conditions affecting women during pregnancy, in which blood glucose levels rise above normal limits (Baz et al., 2015). It usually arises after 28 weeks of pregnancy, in the third trimester, and usually disappears after delivery of the baby (Association, 2010). GDM develops when not enough insulin is secreted during pregnancy to control the rise in blood glucose levels (Kinalski et al., 2002); this arises from several causes, including placental hormones and insulin resistance.

The complications of untreated or unmanaged GDM affect both the mother and her baby, making C-section or early delivery more likely. Complications for the baby include macrosomia, respiratory distress syndrome, jaundice, hypoglycemia, type 2 diabetes later in life, and death (Bellamy et al., 2009). Complications for the mother include hypertension and proteinuria, a condition called preeclampsia, which requires hospitalization to avoid the risk of preterm birth, and possibly depression or type 2 diabetes in the future, with all of its related problems.

Risk factors include Polycystic Ovary Syndrome (Toulis et al., 2009), a past history of GDM, glucose intolerance, previous diabetes, and abnormal fasting glucose levels. Risk also increases with a first-degree relative with a history of type 2 diabetes, non-white race, obesity, smoking, and genetic factors.

Blood tests are commonly used for diagnosing GDM; they are either non-challenge or challenge blood glucose tests. They include the non-challenge blood glucose test, the screening glucose challenge test (O'Sullivan test), the oral glucose tolerance test, and urinary glucose testing (Nielsen et al., 2012).

Prevention of GDM is achieved by losing extra weight, increasing physical activity, stopping smoking, and following a healthy eating plan (Ratner et al., 2008).

GDM can be treated, and women can effectively decrease these risks by controlling their blood glucose levels (Erem et al., 2015). This control is achieved by following a healthy eating plan and keeping up physical exercise; if this does not work, anti-diabetic medications (the safest being insulin) become necessary (Erem et al., 2015).

Finally, a diagnosis of gestational diabetes can be upsetting and frustrating. However, working closely with the physician and the health care team to maintain glucose levels within normal limits provides the best outcomes for both the baby and the mother.



Cholesterol, the most prominent member of the sterol family, was originally discovered as a major component of human gallstones by F. Pouletier de la Salle in 1769. M.E. Chevreul named the organic molecule “cholesterine” (chole for bile, stereos for solid) in 1815; the name was later adjusted with the chemical suffix –ol for the alcohol component [REF Review Olson 1998 1]. Over the past 100 years, cholesterol has been extensively studied and linked to a variety of pathologies and tightly regulated metabolic pathways. In its free form (free cholesterol; FC), cholesterol consists of four linked hydrocarbon rings with a hydrocarbon tail on one side, opposing a hydroxyl group [REF]. The two ends create an amphipathic molecule with a hydrophobic and a hydrophilic side. This structure is of great importance in the formation of animal cellular membranes. The hydrophilic hydroxyl group binds to the phospholipid heads in the cell membrane, turning the hydrophobic hydrocarbon tail towards the core of the membrane bilayer. This arrangement increases membrane fluidity and permeability, allowing the cell to change shape [REF bloch 1991 363-381]. The membrane FC/phospholipid ratio is thus essential for membrane rigidity; any imbalance could influence cellular mobility and eventually induce cell death [REF Simons 2000 1721-6 2]. Mechanisms associated with the cytotoxicity induced by accumulation of membrane-bound FC are intracellular cholesterol crystallization, toxic oxysterol formation [REF Björkhem I. 2002 3], and activation of apoptotic signalling pathways [REF Tabas I. 1997 & 2002 4,5]. For this reason, the majority of the cholesterol found in the body exists in its more stable, less cytotoxic, esterified form (cholesteryl esters; CE), which makes up about two-thirds of serum cholesterol. Lecithin-cholesterol acyltransferase (LCAT) drives the esterification of FC in plasma, adding a single fatty acid to the hydroxyl group [REF 6 glomset 1968].
The conversion of unesterified cholesterol to CE enables cells to store and transport cholesterol without the risk of FC-induced cytotoxicity [REF]. Upon hydrolysis by cholesteryl ester hydrolase, cholesterol and free fatty acids are regained for further biosynthesis [REF 36 goedeke].

Besides its eminent role in modulating animal cellular membranes, cholesterol influences a range of pathways, i.a. as the precursor for steroid hormones [REF] and bile acids [REF], and plays a significant role in transmembrane signalling [REF] and cellular proliferation [REF fernandez 7]. Despite the functional diversity among cholesterol-using pathways, the acquisition of cholesterol follows a comparable pattern for most mammalian cells: cellular cholesterol is either synthesized de novo or derived from exogenous uptake from the circulation.



De novo synthesis of cholesterol is mainly found in vertebrates, and in low amounts in plants (not in prokaryotes) [REF Behrman EJ, 2005 8]; it proceeds via the mevalonate (MVA) pathway. The MVA pathway is a fundamental metabolic network that provides essential elements for normal cellular metabolism and is executed in the endoplasmic reticulum (ER) and cytoplasm of the cell. Despite the presence of the MVA pathway in almost all animal cells, the contribution per organ differs. The human brain generates vast amounts of de novo synthesized cholesterol, approximately 20% of the total cholesterol pool and primarily FC, mainly found in the myelin sheaths that insulate axons [REF dietschy turley 2004 9]. Moreover, the hepatic contribution to the de novo synthesized cholesterol pool varies per species: hepatic cells in mice contribute approximately 40% of whole-body cholesterol synthesis, while human liver cells add only 10% to the total pool [REF Dietschy turley 2001 10 REF 30 Goedeke].

The MVA pathway is a highly controlled enzymatic process resulting in the stepwise formation of FC [REF reviewed by 11 tricarico 2015 16067-16084]. Newly formed cellular cholesterol is either used directly as a precursor for metabolites (bile acids, steroids, water-soluble vitamins) or incorporated into the membrane, or converted to CE by acyl-CoA:cholesterol acyltransferase (ACAT) and either effluxed towards the plasma compartment or stored in lipid droplets [REF 12 35 goedeke]. The CE stored within lipid droplets can be converted back into FC by hormone-sensitive lipase (HSL) [REF]. Since appropriate cellular cholesterol levels are critical for normal cell metabolism, intracellular cholesterol levels are tightly controlled by feedback mechanisms that operate at both the transcriptional and post-transcriptional level [REF goedeke 10.11]. Low cellular cholesterol triggers the MVA pathway to upregulate the rate-limiting enzymes, i.a. 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR) [REF], and receptor-mediated exogenous uptake [REF]. High cellular cholesterol levels activate nuclear hormone receptors that in turn trigger transcription of cholesterol efflux-related genes, i.a. ABC transporters, and inhibit HMGCR expression [REF].

Furthermore, the MVA pathway is best known as the target of statins, extensively prescribed drugs that inhibit the rate-limiting enzyme, HMG-CoA reductase. As a result of HMG-CoA reductase inhibition, cholesterol levels decrease in patients suffering from hypercholesterolemia.


The second source of cellular cholesterol is exogenously mediated uptake. Exogenous cholesterol obtained via the diet covers approximately 30% of the total cholesterol pool [REF Kapourchali 2016 13]. Nearly 50% of total dietary cholesterol is absorbed; the remainder is excreted via the feces [REF Clearfield 2003 Crouse 1978; Sudhop 2009 14–16]. Lipid absorption from the intestine is a complex functional collaboration along the whole digestive tract: gastric, intestinal, biliary, and pancreatic. In short, solubilisation of dietary lipids starts in the duodenum and proximal jejunum, where bile acid micelles hydrolyse CE into FC and fatty acids (FA). Micelles absorb the FC and FA and facilitate transport to the enterocytes of the small intestine, where FA is synthesized into triacylglycerol to form triglycerides. Exogenous FC is converted into CE in the ER by ACAT [REF 17]. Due to the hydrophobic character of CE, its transport throughout the body is facilitated by lipoproteins.


Lipoproteins are spherical macromolecular particles consisting of a hydrophobic core and a hydrophilic shell. The lipoprotein shell contains a monolayer of phospholipids (PL), amphipathic molecules, FC, and apolipoproteins [REF], enfolding the hydrophobic content of CE and triglycerides (TG) [REF]. Five lipoprotein classes are distinguished based on their buoyant density: chylomicrons (CM), very low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL). The differences in lipid composition of the five lipoprotein classes are depicted in Table 1.
Mechanisms that are associated with the accumulation of membrane bound FC induced cytotoxicity are intracellular cholesterol crystallization, toxic oxysterol formation [REF Björkhem I. 2002 3] and apoptotic signalling pathway activation [REF Tabas I. 1997 & 2002 4,5]. It is therefore that the majority of the cholesterol found in the body exists in its more stable, less cytotoxic, esterified form (cholesteryl esters (CE)) that take up about 2/3 of the serum cholesterol. Lecithin-cholesterol acyltransferase (LCAT) drives the esterification of a FC molecule in plasma, adding a single fatty acid to the hydroxyl group [REF 6 glomset 1968]. The conversion of un-esterified cholesterol towards CE enables cells to store and transport cholesterol, without the risk of FC induced cytotoxicity [REF]. Upon hydrolyzation by cholesteryl ester hydrolase, cholesterol and free fatty acids are regained for further biosynthesis [REF 36 goedeke].

Besides the eminent role in animal cellular membrane modulation, cholesterol influences a range of pathways i.a. as the precursor for hormone steroidogenesis [REF] and bile acids [REF], plays a significant role in transmembrane signalling [REF] and cellular proliferation [REF fernandez 7]. Despite the functional diversity between cholesterol using pathways, acquisition of cholesterol follows, for most mammalian cells, a comparable pattern. Cellular cholesterol is either de novo synthesized or derived from exogenous uptake from the circulation.



De novo synthesis of cholesterol is mainly found in vertebrates and in low amounts in plants, (not in prokaryotes) [REF Behrman EJ, 2005 8] and derived via the mevalonate (MVA) pathway. The MVA is a fundamental metabolic network providing essential elements for normal cellular metabolism and executed in the endoplasmic reticulum (ER) and cytoplasm of a cell. Despite the presence of MVA pathway in almost all animal cells, the contribution per organ differs. The human brain generates vast amounts of de novo synthesized cholesterol, approximately 20% of the total cholesterol pool and primary FC, mainly found in myelin sheaths that insulate axons [REF dietschy turley 2004 9]. Moreover, the hepatic contribution to the cholesterol pool derived from de novo synthesis varies per species, hepatic cells in mice contribute approximately 40% to the whole cholesterol synthesis, while human liver cells adds only 10% to the total pool [REF Dietschy turley 2001 10 REF 30 Goedeke ].

The MVA pathway is a highly controlled enzymatic process resulting in the stepwise formation of FC [REF reviewed by 11 tricarico 2015 16067-16084]. The newly formed cellular cholesterol is either directly used as a precursor for metabolites (bile acids, steroids, fat-soluble vitamins) or incorporated into the membrane, or converted into CE by acyl-CoA:cholesterol acyltransferase (ACAT) and either effluxed towards the plasma compartment or stored in lipid droplets [REF 12 35 goedeke]. The CE stored within lipid droplets can be converted back into FC by hormone-sensitive lipase (HSL) [REF]. Since appropriate cellular cholesterol levels are critical for normal cell metabolism, intracellular cholesterol levels are tightly controlled by feedback mechanisms that operate at both the transcriptional and post-transcriptional level [REF goedeke 10.11]. Low cellular cholesterol triggers the MVA pathway by upregulating the rate-limiting enzymes, i.a. 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR) [REF], and stimulates receptor-mediated exogenous uptake [REF]. High cellular cholesterol levels activate nuclear hormone receptors that in turn trigger transcription of cholesterol efflux-related genes, i.a. ABC transporters, and inhibit HMGCR expression [REF].

Furthermore, the MVA pathway is best known as the target of statins, extensively prescribed drugs that inhibit the rate-limiting enzyme, HMG-CoA reductase. As a result of HMG-CoA reductase inhibition, cholesterol levels decrease in patients suffering from hypercholesterolemia.


The second source of cellular cholesterol is exogenously mediated uptake. Exogenous cholesterol obtained via the diet covers approximately 30% of the total cholesterol pool [REF Kapourchali 2016 13]. Nearly 50% of total dietary cholesterol is absorbed; the remainder is excreted via the feces [REF Clearfield 2003; Crouse 1978; Sudhop 2009 14–16]. Lipid absorption from the intestine is a complex functional collaboration along the whole digestive tract: gastric, intestinal, biliary and pancreatic. In short, solubilisation of dietary lipids starts in the duodenal and proximal jejunal parts of the intestine, where CE is hydrolysed into FC and fatty acids (FA) within bile acid micelles. The micelles absorb the FC and FA and facilitate transport to the enterocytes of the small intestine, where FAs are re-esterified into triglycerides. Exogenous FC is converted into CE in the ER by ACAT [REF 17]. Due to the hydrophobic character of CE, its transport throughout the body is facilitated by lipoproteins.


Lipoproteins are spherical macromolecular particles consisting of a hydrophobic core and a hydrophilic shell. The lipoprotein shell contains a monolayer of amphipathic molecules: phospholipids (PL), FC and apolipoproteins [REF], enfolding the hydrophobic content of CE and triglycerides (TG) [REF]. Five lipoprotein classes are distinguished based on their buoyant density: chylomicrons (CM), very low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL) and high-density lipoprotein (HDL). The difference in lipid composition of the five lipoprotein classes is depicted in TABLE 1.

Chylomicrons are essential for the transport of exogenous cholesterol from the intestine towards the liver. Within the ER of enterocytes, nascent chylomicron particles are formed by lipidation of a single APOB48 molecule with cellular CE, TG and phospholipids, alongside apolipoproteins [TABLE 1]. The major apolipoprotein classes are de novo synthesized by the intestine and liver [REF] and located in the membrane of lipoproteins. There, the amphipathic apolipoproteins serve as enzymatic cofactors and receptor ligands, regulating lipoprotein metabolism [REF 18]. The function and presence of apolipoproteins differ per lipoprotein class.

Once the chylomicrons enter the circulation via the lymphatic system, circulating APOCs are acquired. APOCs in the membrane of CMs serve as cofactors for lipoprotein lipase (LPL), present on the endothelial cells of adipose tissue and skeletal muscle, which hydrolyses the TG content for energy storage [REF goldberg 1996 19]. Upon hydrolysis, superfluous membrane phospholipids are transferred by the phospholipid transfer protein (PLTP) towards HDL. PLTP, a plasma glycoprotein and a family member of the lipopolysaccharide (LPS)-binding proteins [REF XC Jiang 1999 20], is involved in the metabolism of both the APOB-containing lipoproteins and HDL. Deficiency in PLTP expression results in a marked decrease in plasma levels of APOB-containing lipoproteins [REF 21] as well as HDL [REF 20].

In the circulation, chylomicrons acquire APOE and APOCs from HDL at the expense of APOA-I and APOA-IV, resulting in smaller, TG-poor and APO-enriched remnant particles [REF patrick]. Furthermore, chylomicrons exchange TG for HDL-CE via the cholesteryl ester transfer protein (CETP), which is present in humans but not in mice [REF Ha 1981; Jiao 1990 22,23]. Hepatic clearance of the remaining remnants commences with sequestration in the space of Disse via an APOE-dependent route. Synthesized in many tissues but predominantly the liver, APOE is a constituent apolipoprotein of CM, VLDL and HDL and is essential for lipid transport between tissues, since it binds with high affinity to the LDLr. The liver subsequently converts the remnant content either into bile acids or reuses it for VLDL metabolism.

3.3.2 VLDL/LDL

Hepatic metabolism of VLDL is a highly controlled mechanism facilitating the transport of endogenously produced cholesterol. Within the ER membrane of hepatocytes, a single copy of APOB100 is lipidated with triglycerides and de novo synthesized and/or exogenous cholesterol, subsequently supplemented with newly synthesized APOE and APOCs [REF gibbons 1990; 268 1-13 Spring 1992; 267 14839-45 Tiwari S, Siddiqi SA. 2012 May;32(5):1079-86]. The TGs needed for VLDL are obtained by the liver either from de novo synthesized fatty acids (FA) [REF Cornforth 2002 24], extracted from the circulation as non-esterified FAs, or recycled from lipoprotein remnants cleared by hepatic receptors [REF Gibbons 2003 25]. Since hepatic VLDL metabolism depends on the availability of TGs, de novo synthesized APOB100 undergoes degradation when it is not lipidated [REF]. Once the VLDL particles enter the circulation, interaction with LPL on the endothelial cells reduces the TG content, similar to CM degradation. The remaining TG-depleted VLDL remnant, the intermediate-density lipoprotein (IDL), is either removed from the circulation through hepatic clearance via the LDLr or converted by LPL and hepatic lipase (HL) into low-density lipoprotein (LDL). The LDL particle retains the APOB100 molecule and is subject to LDLr-mediated internalization and degradation.

Subsequently, LDL-derived FC is either reused for endogenous lipoprotein metabolism or excreted via the bile.

3.3.3 HDL

HDL biogenesis is a complex interaction of membrane-bound and circulating plasma proteins and can be divided into five major processes [REF Zannis 2004]. (1) Production and secretion of APOA1 by either the liver or intestine [REF Zannis 1985]. Intestinal APOA1 enters the circulation via CMs and is rapidly transferred towards HDL during hydrolysis [REF], whereas hepatic APOA1 is the origin of nascent pre-β HDL particles. Consequently, targeted APOA1 deficiency in mice results in an 83% reduction of the HDL fraction and subsequent phenotypes [REF reviewed by Hoekstra and van Eck 26]. (2) Via an ABCA1-dependent pathway, hepatic APOA1 incorporates cellular phospholipids, leading to the formation of lipid-poor pre-β HDL particles [REF]. (3) Once released into the circulation, lipid-poor pre-β HDL particles take up excess FC from peripheral cells via ABCA1/G1-mediated efflux to form unesterified cholesterol-enriched discoidal particles. The pivotal role of ABCA1 in HDL biosynthesis is demonstrated in ABCA1-deficient patients (Tangier disease) and knockout mice, where the inadequate transport of cholesterol towards the lipoprotein results in hypercatabolism of lipid-poor nascent HDL particles [REF 27,28]. (4) Subsequently, esterification of HDL-FC initiated by LCAT in the plasma leads to maturation into spherical HDL3 particles [REF Zannis 2006]. Next, HDL3 is converted into larger HDL2 particles via a PLTP-driven acquisition of phospholipids, along with the attraction of apolipoproteins released upon lipolysis (via HL) of VLDL. (5) Circulating HDL2 is transported back to the liver, where scavenger receptor class B type I (SR-BI) mediates selective uptake of FC and CE without internalization or degradation of the HDL particle [REF 1 25 MvE]. The most important property of SR-BI is considered to be its ability to act as the HDL receptor [REF 29,30], mediating bidirectional FC flux. 
In vivo, SR-BI deficiency leads to FC accumulation in HDL particles, resulting in enlarged particles [REF] associated with impaired serum decay and hepatic uptake of [3H]CEt-HDL [REF 31]. The process of extrahepatic uptake of CE and subsequent transport towards the liver is called reverse cholesterol transport (RCT), which is important in limiting the accumulation of cholesterol in extrahepatic tissue.

HDL-cholesterol (HDL-C) can also be cleared from the circulation via alternative routes. First, HDL particles can be enriched with APOE obtained from either extrahepatic tissue or the circulation. APOE on HDL enables removal from the circulation via hepatic LDLr- or LRP1-mediated whole-particle uptake [REF]. The second clearance route is, as previously noted, the ability of HDL2 to transfer CE towards VLDL and LDL through a CETP-mediated exchange. However, CETP is not expressed in rodents, excluding this pathway in our mouse models.

Glucocorticoids (GCs) are a class of corticosteroids, members of the steroid hormone family, and are synthesized within the adrenal cortex (zona fasciculata). Steroidogenesis of GCs is regulated by dynamic circadian rhythms and by stress-induced hypothalamic-pituitary-adrenal (HPA) axis activation [REF 32]. Furthermore, a range of processes is under the influence of GCs, including the stress response and the regulation of inflammation, which together govern the "fight or flight" response and reduce the impact of stressor-induced septic shock [REF]. These characteristics make synthetically produced, as well as naturally occurring, GCs an interesting therapeutic treatment for a variety of inflammatory conditions. Since the 1940s, GCs have been used to treat symptoms of chronic inflammatory conditions such as rheumatoid arthritis [REF Buttgereit 2012 26–29], asthma, skin infections, ocular infections and multiple sclerosis, or as an immunosuppressant for patients following organ transplantation [REF]. It is estimated that at any one time ~1% of the UK adult population receives oral GC therapy [REF van staa 2000 105-11].

Like all steroid hormones, synthesis of GCs requires the ubiquitous substrate cholesterol. Within the mitochondria of the adrenal zona fasciculata, a stepwise, enzyme-controlled pathway leads to the production of GCs. Most enzymes in the steroidogenesis pathway belong to either the cytochrome P450 or the hydroxysteroid dehydrogenase (HSD) family and function unidirectionally. Free cholesterol, derived from either de novo synthesis or receptor-mediated exogenous uptake, is transported towards the mitochondria and binds to the steroidogenic acute regulatory protein (StAR), which mediates cholesterol movement into the mitochondria. Here, P450scc, encoded by CYP11A1, catalyses the first step in steroidogenesis, the conversion of cholesterol into pregnenolone. This rate-limiting step is crucial for the final output, as shown in mice, where both a decline in substrate and inhibition of CYP11A1 result in a lower maximal GC output [REF APOA1, SR-BI KO]. Next, as summarized in Figure XX, the conversion of pregnenolone leads to the formation of the glucocorticoids: cortisol in humans and corticosterone in mice [REF].

Steroidogenesis of GCs is initiated upon stressor-induced activation of the HPA axis, followed by the release of pituitary gland-derived adrenocorticotropic hormone (ACTH) [REF]. ACTH binds to the Gs-coupled melanocortin-2 receptor, present on the adrenal cortex cell membrane, resulting in an instant increase of cyclic adenosine monophosphate (cAMP) [REF]. The main function of adrenal cAMP is controlling the expression of CYP11A1 (P450scc), thereby regulating steroidogenesis. Besides this crucial role, cAMP is involved in various other pathways associated with steroidogenesis, including stimulation of HMG-CoA reductase synthesis for de novo cholesterol production, increased expression of genes involved in the receptor-mediated cholesterol uptake route (SR-BI/LDLr), and stimulation of HSL expression combined with inhibition of ACAT expression, thereby increasing the availability of FC for GC synthesis.

The action of GCs is transduced via binding to the GC receptor (GCr) in the cytoplasm; this initiates translocation of the GCr towards the nucleus, where it triggers genomic mechanisms reviewed by Kadmiel et al. [REF 33]. Any imbalance in plasma GC levels can result in pathological disorders known, respectively, as Addison's disease (low plasma GC) or Cushing's syndrome (high plasma GC) [REF].

Because GCs are potent molecules that influence a variety of pathways, sufficient feedback mechanisms are required. One of these is the direct inhibitory effect of GCs on the expression of corticotropin-releasing factor (CRF) in the brain, suppressing the stimulation of ACTH synthesis [REF].


Despite its importance in mammalian physiology, imbalance in circulating cholesterol levels is implicated in many diseases, such as cancer [REF Montero 2008], diabetes mellitus type 2 [REF Cho 2009] and Alzheimer's disease (AD) [REF arispe 2002 Shobab 2005]. Among the cholesterol-associated diseases, cardiovascular diseases (CVD) are the most frequent cause of death in western society [REF ZL37]. The underlying pathology driving CVD is atherosclerosis, an ongoing thickening of the vessel wall leading to deprivation of oxygen and nutrients in distally located tissues [REF]. Atherosclerosis is characterized as a chronic inflammatory disease driven by high cholesterol levels. The development of atherosclerotic lesions starts with the infiltration of LDL particles into the vascular wall, driven by physical forces (hypertension), chemical insults (hyperglycemia) or genetic alterations [REF R Ross 1999 115-126]. Within the endothelial wall, LDL particles become oxidized through either non-enzymatic or enzymatic pathways [REF]. In addition, it has been proposed that the interaction with endothelial cells, smooth muscle cells or (monocyte-derived) macrophages could drive the oxidation of the LDL particle (oxLDL) [REF Yoshida 2010 1875-1882]. Furthermore, the general hypothesis is that oxidation of LDL particles is not possible in the circulation, due to the strong anti-oxidant defences present in plasma and on the lipoproteins [REF Yoshida 2010].

Modified LDL particles activate the expression of adhesion molecules (P- and E-selectins, intercellular adhesion molecule-1 (ICAM-1) and vascular cell adhesion molecule-1 (VCAM-1)) [REF Glass and Witztum, 2001; Mestas and Ley, 2008]. Subsequently, monocytes react to the inflammatory trigger and migrate into the subendothelial space via diapedesis. Upon entering the subendothelial space, monocytes differentiate into macrophages and internalize the modified LDL particles via scavenger receptor-mediated uptake (scavenger receptor A and CD36) [REF]. To maintain cellular cholesterol homeostasis and avoid cholesterol-induced toxicity, the transporters ABCA1 and ABCG1 efflux excess cholesterol towards HDL particles via the RCT, as described above [REF]. As soon as cholesterol influx exceeds efflux, macrophages and dendritic cells turn into immobile, lipid-loaded cells with a "foamy" appearance (foam cells). The formation and accumulation of foam cells in the subendothelial space are the hallmark of lesion initiation, the so-called fatty streaks [REF 43 ZL]. Lesion progression is a dynamic process including cell proliferation and migration, as well as cell death and presumably emigration of cells [REF ZL 44]. Where early lesions contain primarily foam cells, more advanced lesions are characterized by a variety of cell types accompanied by a small necrotic core. In more advanced lesions, the inflammatory response triggers smooth muscle cell (SMC) migration from the media into the intima. Here, SMCs start proliferating and produce a fibrous cap covering the plaque [REF 46 45 ZL]. Over time, the plaque advances, narrowing the vessel lumen and obstructing the blood flow. Rupture of the fibrous cap exposes the lesion core to the circulation, resulting in activation of blood coagulation and thrombus formation, which causes most acute coronary syndromes [REF].


Development of atherosclerosis is influenced by a range of environmental as well as genetic modulating factors. Due to its chronicity and complexity, atherosclerosis is difficult to study in uncontrolled human cohorts. The development of homozygous animal models provided a controlled setting that enables studying the mechanisms and processes driving lesion progression and regression. The most extensively used animal in CVD research is the mouse, chosen for its low maintenance cost, quick reproduction and the wide availability of models with an atherosclerotic phenotype [REF]. However, the lipoprotein metabolism of mice differs from that of man in crucial aspects that influence the development of atherosclerosis (TABLE 2). The major difference in lipid metabolism between mice and humans is the lipoprotein profile: in mice the predominant lipoprotein is HDL, whereas humans display an LDL phenotype [REF Miranda]. A possible reason for this difference is the absence of CETP expression in mice [REF 34 Barter 2003 160-7]. A second important dissimilarity is that cholesterol transport in mice is predominantly facilitated by VLDL, versus LDL in humans. In sum, mice do not develop atherosclerosis without major interventions such as genetic modification or diet feeding.

5.1.1 APOE-/- MICE

One of the most commonly used models of atherosclerosis is the total body APOE knockout (APOE KO) mouse, developed by the group of Maeda in 1992 [REF]. APOE belongs to the class of lipid transporter proteins and is synthesized by many tissues and cell types, including liver, brain and macrophages [REF]. The most profound role of the APOE ligand is to facilitate binding of TG-enriched lipoproteins to the hepatic LDL receptor [REF Mahley & Ji (HOEKSTRA17]. An additional key role for APOE is found in the brain and adrenals, where it facilitates intercellular cholesterol transport [REF]. Targeted mutation of the APOE gene in mice resulted in severe hypercholesterolemia, driven by accumulation of APOB-containing lipoproteins [REF 35,36]. The most outstanding phenotype of APOE KO mice is the spontaneous development of lesions, even when fed a non-cholesterol-containing diet. Early-stage lesion formation is detectable in young mice around 8 weeks of age, with strong progressive development of the lesions between 12 and 38 weeks [REF t'hoen]. Feeding APOE-/- mice a high-fat, high-cholesterol diet increases plasma cholesterol levels and accelerates lesion development [REF ZL 61 T35 39].

5.1.2 LDLr -/- MICE

A second commonly used model of atherosclerotic lesion formation is the low-density lipoprotein receptor (LDLr) total body knockout mouse. The LDLr is a cell surface receptor, expressed primarily on mammalian hepatic cells, that binds and internalizes lipoproteins carrying APOB100 or APOE, such as LDL, thereby regulating plasma cholesterol levels [REF Ishibashi 1993 TH].

The association between elevated LDL-C levels and increased CVD occurrence is reflected in patients with familial hypercholesterolemia, an autosomal disorder caused by mutations in the LDLr gene [REF]. Mice with a homozygous deficiency of the LDLr gene (LDLr-/-) display 2- to 3-fold increased plasma cholesterol levels, but lack spontaneous lesion development. Feeding LDLr-/- mice either a high-cholesterol diet (1% cholesterol, 4.4% fat) or a western-type diet (0.06% cholesterol and 21% fat) increases plasma cholesterol levels 8- to 16-fold. This increase in total cholesterol is primarily caused by sharply augmented levels of the LDL-C fraction, which in turn induces lesion development over a period of XX weeks [REF].

5.1.3 SR-BI -/- MICE

As noted above, the integral membrane glycoprotein SR-BI is a key player in the metabolism of the anti-atherogenic HDL particles [REF Rigotti 12610-12615]. SR-BI mediates bidirectional flux of FC, CE and PL between lipoproteins and cells. Expression of SCARB1, the gene encoding SR-BI, is found primarily in the liver, steroidogenic tissues and endothelial cells, as well as numerous other organs [REF acton 1996]. Patients with mutations in the gene encoding SR-BI have elevated levels of HDL [REF vergeer 2011 and the rest of the list in the paper]. To study the role of SR-BI in cholesterol metabolism, and in particular in HDL metabolism and the RCT route, in more detail, SR-BI-/- mice were developed [REF Krieger]. These total body SR-BI-/- mice showed a range of cholesterol uptake-related pathologies, including reticulocytosis [REF 37, 38], reduced platelet counts [REF Kaplan 2010, Korporaal 2010, 38,39 Ouweneel unpublished?], increased serum oxidative stress levels [REF 40], reduced hepatic expression of ABCA1 and APOA1 [REF], as well as a reduced maximal output of adrenal-derived glucocorticoids [REF]. In addition, the most profound phenotype of SR-BI knockout mice is the enlarged size and increased levels of HDL particles. Paradoxically, the augmented amount of anti-atherogenic HDL particles did not improve atheroprotection, but resulted in an increased susceptibility to atherosclerosis development upon western-type diet feeding (0.25% cholesterol and 15% fat) [REF van eck 2003 23699]. We hypothesize that this paradox is driven by a loss of functionality of the HDL particles; however, further research on this topic is needed.


Over the years, inhibition of lesion formation and progression has been extensively studied in a variety of animal models. The concept that existing atherosclerotic lesions are capable of regressing dates back ~60 years [REF friedman 1957 586-588]. Since then, several rodent and non-rodent models have been developed to prove this concept. Many rodent models are based on the APOE-/- or LDLr-/- progression models (reviewed by J.E. Feig [REF 2014 13-23]). Increasing the efficacy of the anti-atherogenic RCT route by additional introduction of APOA1 was one of the early successful approaches, resulting in substantial lesion shrinkage, and variations on this theme were likewise successful [REF Tangirala et al., 1999; Belalcazar et al., 2003; Shah et al., 2001; Tian et al., 2015; Wang et al., 2016], including the reintroduction of human APOA1 in LDLr-/- mice, which raised HDL levels and significantly regressed foam cell-rich lesions [REF Tangirala 1999 1816-22]. The potential role of HDL in lesion regression was further underlined by infusion of the APOA1-Milano/PC complex: increasing HDL levels in APOE-/- mice reduced the foam cell content of existing plaques within 48 hours.

In addition to increasing HDL levels, murine models that allow lesion progression followed by lowering of the APOB fraction display lesion regression. Lowering of the APOB fraction can be initiated via reintroduction of bone marrow-derived APOE in APOE knockout mice, resulting in normalized plasma levels [REF 41 vd Stoep 2013 1594-602]. A more invasive model is the aortic arch transplantation, where a lesion-enriched segment of a hyperlipidemic APOE-/- aortic arch is transplanted into a normolipidemic recipient [REF Reis 2001]. Subsequent regression was achieved with both early and advanced lesions, within 3 days or 9 weeks post transplantation [REF Llodra 2004; Trogan 2006; Trogan 2004].

Recently, the Reversa mouse (LDLr-/- APOB100/100 MTTPfl/fl MX1-Cre) was developed: an LDLr knockout mouse with hyperlipidemia driven by lipoproteins containing only APOB100. In this model, the hyperlipidemia can be reversed by expression of the MX1-Cre transgene, which inactivates the gene encoding MTTP [REF Feig 2011, Lieu 2003], resulting in a reduction of the plasma non-HDL fraction. Lowering the non-HDL fraction in the presence of existing lesions resulted in the Reversa mouse in a reduction of the lipid content and an increased presence of collagen, accompanied by egression of CD68-positive macrophages from the plaque.

Most recently, a less technically demanding and time-consuming model has been developed using an antisense oligonucleotide (ASO) targeted to the LDLr mRNA, inducing hypercholesterolemia and subsequent lesion formation in C57BL/6 mice. By using sense oligonucleotides (SOs) targeted to the LDLr, the hypercholesterolemia is reversible, inducing lesion regression [REF Basu 2018 560-567].

Although the mainstay of regression is lowering cholesterol levels, Ross et al. [REF Ross 1999] described atherosclerosis as a chronic inflammatory disease, since immune cells and their responses promote lesion progression at every stage. Modulation of immune cells and pathways to provoke lesion regression is a relatively new therapeutic strategy that could be the focus of future research [REF foks 2013 4573-80].


Primary progressive multiple sclerosis (PP-MS)


Patients with primary progressive multiple sclerosis (PP-MS) pose a challenge in monitoring early disease progression (1, 2). PP-MS patients develop progressive disability without relapse or remission, and the lack of diffuse macroscopic damage and lesion formation complicates the guidance of treatment options (3). The pathophysiological mechanisms of PP-MS are unknown, yet an interplay of neurodegeneration and demyelination has been suggested (4-7). In addition to the well-recognized role of grey matter (GM) pathology, histopathological studies of post-mortem samples from PP-MS patients have demonstrated that normal-appearing white matter (NAWM) shows extensive and diffuse pathology; the possibility to fully characterize NAWM damage could therefore elucidate the mechanisms of MS progression (8-10).

Quantifying water diffusivities throughout WM tissue may elucidate the microstructural changes that characterize NAWM, providing measures that determine clinical outcome over time in diseases such as PP-MS (10, 11). Advanced MRI techniques such as diffusion tensor imaging (DTI) have allowed greater visualization of microstructural changes in PP-MS cerebral white matter (WM) (11) through the ability to identify water diffusion at the microstructural level (12, 13). DTI has been implemented to detect focal MS lesions, but DTI-derived metrics such as fractional anisotropy (FA) and mean diffusivity (MD) lack specificity for demyelination and axonal loss (14). Demyelination and axonal loss have similar impacts on DTI-derived metrics, limiting their use to detecting small modulatory changes in cerebral tissue rather than characterizing the NAWM of PP-MS (15). Due to its dependence on deriving MD and FA, DTI also functions only through the approximation of low b-values (1000 s/mm2), which compromises tissue contrast and image quality (16). DTI is only capable of monitoring microstructural diffusivity patterns in strictly anisotropic environments, limiting its use to WM regions rather than extending to the corticospinal tract (CST), where significant inflammation occurs in PP-MS (17, 18). Therefore, DTI-derived metrics can quantify small modulatory changes in MD and FA, but fail to discriminate the pathological processes underlying complex diseases such as MS (19).

More recently, diffusion kurtosis imaging (DKI) has been introduced as an extension of DTI that overcomes these limitations. In complex microstructural tissue such as cerebral WM, water diffusivity is highly anisotropic and non-Gaussian in nature due to the dense axonal fiber architecture (20). DKI is able to characterize complex microstructural tissues whose diffusion deviates from the Gaussian form. This deviation can be regarded as the kurtosis of the system, and non-Gaussian distributions have a positive kurtosis value (k>0). The kurtosis values allow quantification of diffusion in both anisotropic and isotropic environments (21, 22), making DKI sensitive in its detection of microstructural changes in the WM and CST (23). By estimating the non-Gaussian distribution in WM, DKI provides derived metrics that accurately reflect cerebral neurodegeneration in heterogeneous tissue, allowing for the full characterization of NAWM abnormalities such as intra-axonal damage, axonal loss, extracellular inflammation, gliosis and demyelination (24, 25).
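The deviation from Gaussian diffusion described above is conventionally quantified with the cumulant expansion of the diffusion-weighted signal; the sketch below uses the standard DKI expression from the general literature, with generic symbols (S_0, D_app, K_app) that are not taken from the cited studies:

```latex
% Standard DKI signal representation (cumulant expansion up to b^2):
\ln S(b) \;=\; \ln S_0 \;-\; b\, D_{\mathrm{app}} \;+\; \tfrac{1}{6}\, b^{2}\, D_{\mathrm{app}}^{2}\, K_{\mathrm{app}}
```

Setting K_app = 0 recovers the mono-exponential (Gaussian, DTI) decay, while K_app > 0 captures the non-Gaussian behaviour; this is why at least two nonzero b-values (e.g., 1000 and 2000 s/mm2) are required to estimate D_app and K_app per diffusion direction.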

Empirical diffusion metrics derived from DKI provide only an indirect characterization of microstructure, resulting in ambiguity regarding WM tissue properties (25). Biophysical modeling of the WM has therefore been a recent focus in the MRI field to interpret the diffusion metrics derived from DKI. A WM tract integrity (WMTI) model has been proposed to elucidate the mechanisms behind WM degeneration and its correlation with decreased clinical function (26, 27). Through DKI's estimation of the non-Gaussian probability of diffusion, it can be combined with biophysical models such as WMTI to provide metrics that accurately reflect neural degeneration (28-30). WMTI is a two-compartment model that divides the WM into the intra-axonal and extra-axonal space, providing several metrics that reflect PP-MS disease progression (25). WMTI-derived metrics include the axonal water fraction (AWF), tortuosity (T), intrinsic axonal diffusivity (D_axon), radial extra-axonal diffusivity (D_e,radial) and axial extra-axonal diffusivity (D_e,axial). AWF, D_e,radial and T are sensitive to demyelination and axonal loss, while D_axon and D_e,axial are sensitive to structural changes along the axon bundle in the intra-axonal space (27).
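Under the two-compartment assumption just described, the WMTI literature commonly writes the direction-dependent diffusion signal as a sum of intra- and extra-axonal contributions; the sketch below uses generic symbols (f, D_a, D_e) as an illustration and is not quoted from the cited studies:

```latex
% Two-compartment (WMTI-style) decomposition for gradient direction n:
\frac{S(b,\mathbf{n})}{S_0} \;=\; f\, e^{-b\, D_a(\mathbf{n})} \;+\; (1-f)\, e^{-b\, D_e(\mathbf{n})},
\qquad \mathrm{AWF} = f,
\qquad T = \frac{D_{e,\mathrm{axial}}}{D_{e,\mathrm{radial}}}
```

Here f is the water fraction of the intra-axonal compartment (the AWF), and the tortuosity T is taken as the ratio of axial to radial extra-axonal diffusivity, so that demyelination (which widens the extra-axonal space radially) lowers T.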

While DKI-derived WMTI metrics have been applied to investigate neurodegenerative diseases such as Alzheimer's disease, only a few studies have utilized them to analyze microstructural changes in MS (28-31). With regard to PP-MS, previous studies have used only DTI to explore WM integrity, quantifying increased MD and decreased FA. Still, very little is known about the pathological processes underlying short-term disease progression and the resulting worsened clinical outcome over time in PP-MS (32).

Therefore, the aims of this study were to utilize novel DKI-derived WMTI metrics to characterize the presence and extent of WM abnormalities, to assess the impact of WM abnormalities on clinical disability, and to investigate the sensitivity of WMTI metrics to short-term disease progression in PP-MS.



Twenty-six patients who met the modified McDonald diagnostic criteria and presented with a primary progressive course of MS were prospectively enrolled. Twenty sex- and age-matched healthy subjects (11F/9M; mean age 51.1 years; range 34–63 years) served as controls for the comparison of MRI metrics. Inclusion criteria for PP-MS patients were: (i) age between 25 and 65 years; (ii) an Expanded Disability Status Scale (EDSS) score lower than 6.5 at the screening visit; (iii) disease duration up to 15 years. The use of immunomodulatory drugs was allowed but, if treated, patients had to be on their current treatment for at least 6 months. Exclusion criteria for all subjects were: (i) neuropsychiatric disorders other than MS; (ii) ophthalmological pathologies (e.g., diabetes mellitus or glaucoma); (iii) history of alcohol or drug abuse; (iv) contraindications to MRI. Twenty patients with baseline and 6-month clinical and MRI examinations were included in the longitudinal analysis.

Clinical assessment

All subjects underwent clinical and MRI assessment on the same day. Clinical disability was assessed at baseline and after six and twelve months with the EDSS, timed 25-foot walk (T25FW) test, 9-hole peg test (9-HPT) and Symbol Digit Modalities Test (SDMT). Best-corrected visual acuity (VA) was assessed binocularly at the same time points, using low-contrast Sloan letter acuity charts (1.25%, 2.5%, and 100%) at 4 m (Precision Vision, IL, USA). SDMT raw scores were converted to z-scores. Clinical worsening was defined as an EDSS score increase of one point if the baseline EDSS score was less than or equal to five, an increase of 0.5 points if it was greater than five, or a change of more than 20% in the T25FW or 9-HPT score. Disease activity was defined as the presence of new T2 lesions during the assessment period. Disease progression was defined as clinical worsening and/or disease activity (i) at month six compared with baseline, confirmed at month 12, or (ii) at the clinical follow-up visit 12 months after study termination compared with month 12.
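As a sketch, the clinical worsening rule above can be restated as a small helper function. This is illustrative code, not the authors' software; the function names are hypothetical and the thresholds are simply a transcription of the definition given in the text.

```python
def edss_worsened(baseline, follow_up):
    """EDSS worsening: increase of >= 1.0 point if baseline <= 5.0,
    else >= 0.5 points (thresholds as defined in the text)."""
    threshold = 1.0 if baseline <= 5.0 else 0.5
    return follow_up - baseline >= threshold


def timed_test_worsened(baseline_sec, follow_up_sec):
    """T25FW / 9-HPT worsening: completion time increased by more than 20%."""
    return (follow_up_sec - baseline_sec) / baseline_sec > 0.20


def clinically_worsened(edss_base, edss_fu,
                        t25fw_base, t25fw_fu,
                        hpt_base, hpt_fu):
    # Worsening on any single measure qualifies as clinical worsening.
    return (edss_worsened(edss_base, edss_fu)
            or timed_test_worsened(t25fw_base, t25fw_fu)
            or timed_test_worsened(hpt_base, hpt_fu))
```

For example, a patient with a baseline EDSS of 4.0 worsens at 5.0 (a one-point increase), whereas a patient starting at 5.5 already worsens at 6.0 (a half-point increase).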

MRI acquisition

MRI was performed using a 3.0 T scanner (Philips Achieva, The Netherlands) with an 8-channel SENSE phased-array head coil. The MRI protocol included the following sequences: a) axial dual-echo TSE sequence: TR = 2500 msec, TE1 = 10 msec, TE2 = 80 msec, FOV = 230×230 mm, matrix size = 512×512, 46 contiguous 3 mm-thick slices; b) sagittal 3D T1-weighted turbo field echo sequence: TR = 7.5 msec, TE = 3.5 msec, TI = 900 msec, flip angle = 8°, voxel size = 1×1×1 mm, 172 contiguous slices; c) a twice-refocused spin-echo EPI sequence for DKI with b-values of 1000 and 2000 s/mm2 and 30 directions each (repeated twice with opposite phase encoding), in addition to six b = 0 s/mm2 images (TR = 8550 msec, TE = 89.5 msec, flip angle = 90°, spatial resolution = 1.98×1.98×2 mm).

Lesion segmentation

For the MS patients, T2-hyperintense and T1-hypointense lesion volumes were quantified by a single experienced observer unaware of subject identity, using a semiautomated segmentation technique (DISPLAY, Montreal Neurological Institute [MNI]) as previously described (28).

DKI image processing

DKI data were transferred to an offline workstation and processed using in-house software developed in Matlab (R2015a, MathWorks, Inc., Natick, MA) to derive the following WM tract integrity metrics for a coherently aligned single fiber bundle: AWF, Daxon, De,axial, De,radial, and T. In order to register the lesion masks on the DKI maps and extract the NAWM, the T2-weighted and T1-weighted images were first co-registered to the b = 0 images using the automated affine registration tool FLIRT with boundary-based registration; the resulting transformations were then applied to the corresponding lesion masks. For each subject, that subject's own T2 lesion mask was used to identify which voxels of the considered WM tract were affected by a lesion and to compute the lesion volume inside each tract.

Region-of-interest (ROI) analysis was performed to investigate group differences and tissue-specific microstructural damage in WM tracts. The ROI analysis was restricted to the corpus callosum (CC), posterior thalamic radiation (PTR) and CST, whose well-ordered axonal structure best corresponds to the WM model used to derive the WMTI metrics. All ROIs were co-registered to each subject's diffusion space using the non-linear registration tool FNIRT. Mean values of the different WMTI metrics were then extracted from the patients' NAWM and the healthy controls' WM. Since the WM model used to derive WMTI metrics cannot be applied within WM lesions, mean values of FA and MD, which can be derived from DKI images, were extracted from the T2-hyperintense lesions to obtain an estimate of tissue disruption in macroscopic lesions.

Statistical analysis

Statistical analysis was performed using SPSS 23.0 (SPSS, Chicago, IL). A Shapiro-Wilk test was used to test the normality of the data. The Mann-Whitney and Fisher exact tests were applied to assess differences in age, gender and disease duration between patients and controls, and between progressed and non-progressed patients. An analysis of variance model on ranks was applied to investigate differences in MRI metrics between patients and controls at baseline, taking age and gender into account as covariates. Correlations between MRI metrics and clinical parameters were tested using the non-parametric Spearman rank correlation coefficient. Changes in WMTI metrics over the 6-month follow-up were assessed using the non-parametric Wilcoxon test. A logistic regression analysis followed by a receiver operating characteristic (ROC) curve analysis was performed to assess the performance of WMTI metrics in discriminating between progressed and non-progressed patients. Lastly, an analysis of variance model on ranks was applied to investigate differences in MRI metrics between progressed and non-progressed PP-MS patients. Statistical significance was set at p < 0.05. Given the exploratory nature of this study, adjustment for multiple comparisons was not performed.

Standard protocol approvals, registrations, and patient consents

Written informed consent was obtained from all participants before the beginning of the study procedures, according to the Declaration of Helsinki. The protocol was approved by the Institutional Review Board of the Icahn School of Medicine at Mount Sinai.
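The non-parametric pipeline described above can be sketched with SciPy on synthetic data. The metric names and values below are invented for illustration only and do not reproduce the study's data or results (the actual analysis used SPSS).

```python
# Illustrative sketch of the group-comparison steps on synthetic data;
# the AWF and EDSS values are hypothetical, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
awf_controls = rng.normal(0.40, 0.03, 20)   # hypothetical AWF, healthy controls
awf_patients = rng.normal(0.36, 0.03, 26)   # hypothetical AWF, PP-MS patients

# Normality check (Shapiro-Wilk), then a non-parametric group comparison.
_, p_norm = stats.shapiro(awf_patients)
u_stat, p_group = stats.mannwhitneyu(awf_controls, awf_patients)

# Spearman rank correlation between a metric and a clinical score.
edss = rng.uniform(1.5, 6.0, 26)            # hypothetical EDSS scores
rho, p_corr = stats.spearmanr(awf_patients, edss)

# Paired baseline vs 6-month comparison (Wilcoxon signed-rank test).
awf_followup = awf_patients[:20] - rng.normal(0.01, 0.005, 20)
w_stat, p_paired = stats.wilcoxon(awf_patients[:20], awf_followup)
```

Each step mirrors one test named in the text: Shapiro-Wilk for normality, Mann-Whitney for the unpaired between-group contrast, Spearman for clinical correlations, and Wilcoxon for the paired longitudinal change.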


To establish a baseline for comparison between controls and PP-MS patients, EDSS, age, lesion volume, and clinical assessments were recorded.

Table 1. Demographic and clinical characteristics of the study groups at baseline

                           | HC (n=20)   | PP-MS baseline (n=26)
Females, n                 | 11          |
Age, yrs                   | 51.1 ±9.80  | 50.9 ±10.3
DD, yrs                    |             | 8.8 ±4.6
EDSS, median score (range) |             | 4.0 (1.5-6.0)
SDMT, z-score              |             | -2.10 ±1.44
9HPT, seconds              |             | 34.09 ±17.9
T25FW, seconds             |             | 7.34 ±2.16
VA 1.25%                   |             | 20 ±12
VA 2.5%                    |             | 31 ±11
VA 100%                    |             | 54 ±5
Total T2 lesion load       |             | 5.56 ±7.43
T2V CC, mL                 |             | 0.12 ±0.17
T2V CST, mL                |             | 0.60 ±0.84
T2V PTR, mL                |             | 0.10 ±0.15

Eight of the 20 patients who repeated MRI at month 6 exhibited sustained disease progression, based on the EDSS (n = 1), T25FW (n = 2), 9-HPT (n = 1) or new T2 lesions (n = 1) alone; on both T25FW and 9-HPT worsening (n = 2); or on both EDSS worsening and new T2 lesions (n = 1). No significant group differences were observed between controls and patients with PP-MS with regard to age and sex (p = 0.60 and p = 0.50, respectively). Patients' and controls' demographic and clinical features are reported in Table 1. At the 6-month follow-up, patients were screened again, and the resulting values reflect changes within the PP-MS sample over time (Table 2).

Table 2. Demographic and clinical characteristics of the study group at 6-month follow up.

                           | PP-MS baseline (n=20) | PP-MS follow-up (n=20) | p-values
Females, n                 |                       |                        |
Age, yrs                   | 50.15 ±11.08          | 51.15 ±11.08           |
DD, yrs                    | 9.1 ±4.9              | 10.1 ±4.9              |
EDSS, median score (range) | 4.0 (1.5-6.0)         | 4.0 (2.0-6.0)          |
SDMT, z-score              | -2.17 ±1.49           | -2.27 ±1.38            |
9HPT, seconds              | 33.14 ±14.6           | 34.98 ±24              |
T25FW, seconds             | 7.13 ±2.03            | 7.17 ±2.19             |
VA 1.25%                   | 19 ±11                | 22 ±12                 |
VA 2.5%                    | 31 ±10                | 30 ±12                 |
VA 100%                    | 54 ±5                 | 53 ±7                  |
Total T2V, mL              | 6.19 ±8.09            | 6.90 ±8.77             |
T2V CC, mL                 | 0.12 ±0.18            | 0.15 ±0.21             |
T2V CST, mL                | 0.67 ±0.94            | 0.85 ±1.24             |
T2V PTR, mL                | 0.12 ±0.16            | 0.16 ±0.48             |
No significant group differences were observed when comparing progressed with not progressed patients with regard to age, sex and disease duration (p=0.816, p=0.468 and p=0.877 respectively).

Between-group comparison of WMTI metrics at baseline

To characterize the presence and extent of WM abnormalities, DKI-derived WMTI metrics were extracted from the CC, CST, and PTR of controls and PP-MS patients. Compared with controls, PP-MS patients showed widespread changes in all analyzed tracts.

Figure 1. DKI-derived WMTI metrics in the corpus callosum, corticospinal tract, and posterior thalamic radiation.

AWF (A), tortuosity (B), De,axial (C), De,radial (D), and Daxon (E) were measured for the selected ROIs: CC, CST, and PTR. Comparison between controls (n=20) and PP-MS patients (n=26) showed significant decreases in AWF, tortuosity, and De,axial for all analyzed tracts. DKI images were processed using Matlab software (MathWorks). Statistical analysis included the Shapiro-Wilk test and the Mann-Whitney and Fisher exact tests (**p<0.01, ***p<0.001).

AWF values were significantly decreased in the CC (Figure 1a; p<0.001), and T values decreased in the CST and PTR, and significantly in the CC as well (Figure 1b; p<0.01). De,axial values were also significantly decreased in the CC and CST (Figure 1c; p<0.01). De,radial showed a widespread increase in the CC and CST (Figure 1d; p<0.001, p<0.01), reflecting demyelination within the NAWM of PP-MS patients. Interestingly, Daxon values showed no significant difference between groups, suggesting that Daxon is limited in its sensitivity to axonal loss and degeneration.

Longitudinal analysis of WMTI metrics

To investigate the sensitivity of WMTI to short-term disease progression in PP-MS, DKI-derived WMTI metrics were compared between baseline and the 6-month follow-up in PP-MS patients.

Figure 2. Longitudinal analysis of DKI metrics of PP-MS patients at baseline and 6-month follow up.

AWF (A), tortuosity (B), De,axial (C), De,radial (D), and Daxon (E) were measured for the selected ROIs: CC, CST, and PTR. Comparison between PP-MS patients at baseline (n=20) and at 6-month follow-up (n=20) showed significant decreases in AWF, tortuosity, and De,axial for all analyzed tracts. DKI images were processed using Matlab software (MathWorks), and DISPLAY (Montreal Neurological Institute) software was used for lesion segmentation. Statistical analysis included the non-parametric Wilcoxon test and an analysis of variance model on ranks (*p<0.05, **p<0.01, ***p<0.001).

Over the 6-month period, progressed patients showed a significant decrease in AWF, T, and De,axial values. AWF showed its most significant decrease in the CC, and also decreased within the CST (Figure 2a; p<0.001, p<0.01). T decreased significantly within the CC of progressed PP-MS patients after follow-up (Figure 2b; p<0.01). The decreased AWF and T values within the CC and CST of PP-MS patients suggest widespread axonal loss. De,axial values reached only modest significance, decreasing in the CST (p<0.05), also suggesting the presence of axonal deterioration during short-term disease progression. De,radial values showed a widespread increase in all analyzed tracts, particularly in the CC, suggesting rapid short-term demyelination within the NAWM of PP-MS patients (Figure 2d; p<0.001, p<0.01, p<0.05).

WMTI correlation with motor, visual and cognitive disability at baseline

To assess the impact of WM abnormalities on clinical disability, WMTI values were correlated with clinical parameters using the non-parametric Spearman rank correlation coefficient. AWF, T, De,axial and De,radial of the CC were significantly correlated with EDSS (rho=-0.456, p=0.001; rho=-0.470, p=0.001; rho=-0.301, p=0.042; rho=0.321, p=0.030, respectively). With the exception of Daxon and De,axial, all metrics from the CC correlated with cognitive impairment as measured by the z-SDMT score (AWF rho=0.457, p=0.025; T rho=0.486, p=0.016; De,radial rho=-0.446, p=0.029). AWF of the CST was significantly correlated with EDSS (rho=-0.366, p=0.012) and 9-HPT (rho=-0.393, p=0.047). AWF of the PTR was significantly associated with VA at 100%, 2.5% and 1.25% contrast (rho=0.476, p=0.014; rho=0.463, p=0.017; rho=0.556, p=0.003, respectively). Over 6 months, a significant decrease in AWF and a significant increase in T2 lesion load were detected in the body of the CC, in the PTR and in the CST. No significant difference was detected for any of the other WMTI metrics in NAWM (p=0.067-0.794) or for FA and MD within macroscopic lesions (p=0.191 and p=0.135, respectively). To assess the ability of WMTI metrics to discriminate between progressed and non-progressed patients, a ROC curve analysis was performed. Baseline AWF values in the CST significantly discriminated clinically progressed from non-progressed patients (p=0.021, area under the curve [AUC]=0.854, 95% confidence interval [CI]=0.687-1.000, sensitivity 75%, specificity 75%) (Fig. 3), and at follow-up progressed patients showed lower AWF values in the CST than non-progressed patients (0.360±0.029 vs 0.406±0.032, p=0.004). This suggests that AWF is the most sensitive marker of axonal loss and neurodegeneration in PP-MS, highlighting its potential role as an early biomarker to aid the detection and treatment of PP-MS.

Figure 3. ROC curve for baseline AWF and disease progression.

Results of receiver operating characteristic (ROC) curve analysis with disease progression as outcome variable and baseline AWF as predictor (area under the curve = 0.854; 95% confidence interval [CI] = 0.687-1.000; p=0.021; sensitivity 75%, specificity 75%).
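The discriminative performance reported here can be illustrated with a minimal AUC computation: for a single continuous predictor, the ROC area equals the fraction of (progressed, non-progressed) pairs ranked on the expected side. The AWF values below are synthetic, chosen only to mimic the reported direction of the effect (lower CST AWF in progressed patients); they are not the study data.

```python
# Minimal ROC-AUC sketch for a single continuous predictor
# (hypothetical CST AWF values, 8 progressed vs 12 stable patients).
import numpy as np

rng = np.random.default_rng(1)
awf_progressed = rng.normal(0.36, 0.02, 8)    # hypothetical: reduced AWF
awf_stable = rng.normal(0.41, 0.02, 12)       # hypothetical: preserved AWF

# Lower AWF is expected in progressed patients, so a pair counts as
# "correctly ranked" when the progressed value lies below the stable value.
n_pairs = len(awf_progressed) * len(awf_stable)
correct = sum(p < s for p in awf_progressed for s in awf_stable)
auc = correct / n_pairs
```

This pairwise-counting formulation is equivalent to normalizing the Mann-Whitney U statistic by the number of case-control pairs, which is why a single-predictor ROC analysis and the Mann-Whitney test convey the same ranking information.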


In this study, DKI-derived WMTI metrics characterized the pathological mechanisms responsible for NAWM damage in PP-MS at baseline and at 6-month follow-up, and their impact on different clinical domains was evaluated.

Since treatments for PP-MS are now available, there is an increasing need to identify those PP-MS patients who will have a more severe outcome over time. A few relatively recent studies have attempted to identify measures able to predict the clinical outcome in patients with progressive MS, and found that lower baseline EDSS scores and short-term changes are associated with a higher risk of subsequent clinical worsening (33-36). Miller et al. (2018) investigated the relationship between brain volume and clinical deterioration over 3 years in PP-MS and found that the rate of brain volume loss was most rapid in the severe subgroup, who experienced clinical progression, and least rapid in the stable subgroup, who did not experience confirmed progression. Our results were similar in that AWF values in the CST significantly differentiated between progressed and non-progressed patients at the 6-month follow-up.

In PP-MS, in addition to the well-known role of GM damage, there is extensive histopathological and MRI evidence that NAWM is affected, albeit to a lesser extent, by the same pathological processes that characterize WM lesions, namely inflammation, demyelination, axonal injury, macrophage infiltration and gliosis (37, 38). Even if little is known about the pathological relationship linking WM and GM damage in PP-MS, some evidence (39) suggests that WM changes predict subsequent GM abnormalities, rather than the opposite. Furthermore, abnormalities in NAWM, rather than in WM lesions, show a greater association with GM damage developing over a 2-year follow-up. Against this background, our analysis of WMTI adds to the current knowledge about prognostic factors of disease progression in PP-MS.

DKI is an extension of DTI that incorporates non-Gaussian diffusion effects (40, 41), potentially unveiling intra- and extra-axonal processes and providing higher specificity than DTI to underlying disease mechanisms. In line with this, the between-group comparison of WMTI-derived parameters showed abnormal values for all WMTI metrics estimated in the NAWM of PP-MS patients. We found widespread decreased values of AWF, T and De,axial and increased values of De,radial. These results are consistent with those of a recent study conducted on RR-MS patients (42). Specifically, the ROI analysis restricted to the CC revealed statistically significant differences in all WMTI metrics except Daxon: mean values of AWF, T and De,axial were decreased and De,radial was increased, reflecting chronic axonal degeneration and demyelination. The similarity of findings across studies in RR-MS and PP-MS patients is in line with the view, largely recognized in recent years, that the primary-progressive phenotype is part of the MS spectrum, showing mainly quantitative rather than qualitative differences from the other phenotypes (2, 36). In this study, we hypothesized that the increased pathological specificity of DKI-derived metrics to the processes that underlie disease burden and disability would be reflected in clinically meaningful correlations (42). Indeed, we found statistically significant correlations between WMTI metrics and cognitive, motor and visual scores. At baseline, the SDMT, a measure of processing speed, attention and working memory, correlated with all WMTI metrics except Daxon and De,axial; the 9-HPT, a measure of manual dexterity, and the EDSS correlated with AWF in the CST; and visual acuity (1.25%, 2.5%, 100%) correlated with AWF in the PTR.
The presence of significant correlations between clinical disability and WMTI metrics specific for demyelination and axonal damage supports the concept that the clinical-radiological paradox in PP-MS is due to the lack of pathological specificity of conventional MRI measures.

Our most interesting finding, however, relates to AWF, which seems to capture the ongoing, progressive axonal loss over time as well as its clinical impact. Progression over time was predicted by AWF values, and not by lesion load, in the CST, suggesting a predominant role of NAWM abnormalities over macroscopic WM damage in PP-MS. In this light, AWF appears to be the most sensitive WMTI marker of tissue disruption and predictor of disability. Finally, the significant AWF reduction in all the examined tracts over 6 months confirms that the prevalence of neurodegeneration over demyelination is the main pathological mechanism sustaining PP-MS evolution. Additionally, unlike in RR-MS (43), diffusion parameters within macroscopic lesions did not change significantly over the 6-month follow-up, confirming once more the prominent role of microscopic tissue damage over focal lesions in this clinical phenotype.


Our findings support the role of WMTI metrics such as AWF, T and De,axial as a specific set of WM pathology markers and suggest that these novel metrics may allow for a better characterization of NAWM in PP-MS. WMTI metrics can help distinguish patients with a faster disease progression from those with a stable course, opening a window on the mechanisms underlying PP-MS progression. In line with our results, Bodin et al. (2016) stated that in PP-MS the inflammatory disease activity measured as new lesion formation is not the primary mechanism of disability progression. The authors suggest that a notable component of brain volume loss and disease worsening in PP-MS is independent of concurrent inflammatory activity. We can speculate that there are two different populations among PP-MS patients: while all PP-MS patients show significant neurodegeneration over time, only subjects showing prevalent neurodegenerative damage at baseline develop significant clinical worsening over the short term, suggesting that "neurodegeneration" progresses faster than "demyelination". Further longitudinal studies on larger samples are warranted to explore the relationship between WMTI metrics and GM pathology to better understand the sequence of events driving clinical progression (44).

It is important to note that the WM model used to derive our WMTI metrics relies on several, albeit common, assumptions regarding the WM microstructure. In particular, it assumes that Daxon < De,axial and that axonal fibers are organized in a relatively parallel fashion along a single direction, which explains our choice of specific WM tracts for the ROI analysis. The overparameterization of WM models negatively influences the quantitative estimation and biological accuracy of the system (45, 46). Recent research has suggested that Daxon > De,axial, a condition that is not met by WMTI. Moreover, WMTI-derived metrics are only valid for voxels containing axons that are highly aligned in a single bundle, limiting the ROIs from which metrics can be derived. Jespersen et al. have suggested that applying a Watson distribution of axon orientations, as in the WM model Neurite Orientation Dispersion and Density Imaging (NODDI), would circumvent the strongest assumption of WMTI-derived metrics, resulting in more accurate metrics and expanding the range of appropriate ROIs. Future investigations should focus on implementing models that assume Daxon > De,axial rather than Daxon < De,axial, and on combining the Watson model from NODDI with WMTI to provide more accurate and extensive WM-derived metrics (46-48). Doing so would provide more accurate, cost-effective and timely tools to diagnose MS and predict clinical disability over time, allowing flexibility in treatment options and clinical plans for over 2.5 million people worldwide. Despite these limitations, this is the first study that, applying DKI in PP-MS patients, has highlighted the sensitivity of this technique in discriminating disease progression by means of a specific MRI marker of axonal damage.


To what extent was the Battle of Kursk a Soviet victory?

Section 1: Identification and Evaluation of Sources

This investigation will explore the question: To what extent was the Battle of Kursk a Soviet victory? It will mainly focus on the period of the Battle of Kursk, an unsuccessful German assault on the Soviet salient around the city of Kursk from July 5 to August 23, 1943, but it will also discuss the preparation for Operation Citadel, the German offensive toward Kursk that led to the battle, in order to examine the long-term causes of the German defeat at Kursk.

The first source that will be evaluated is Colonel David M. Glantz's report, "Soviet Defense Tactics at Kursk, July 1943," written in 1986. The purpose of this source is to assess the Soviet tactics at Kursk and how they developed over the course of the battle from July 5, 1943. The content of this source contrasts the Soviet tactics before and during the Battle of Kursk, shows how the Soviet tactics adapted to the German blitzkrieg, and evaluates how effective the Soviet adjustments were. The origin of this source is valuable because Glantz is an American military historian known for his books on the Red Army during World War II and the chief editor of the Journal of Slavic Military Studies, proving that he is knowledgeable on this topic. The date of publication, 1986, is another value, since it indicates that Glantz was able to analyze sources from other military historians; in fact, he cited documents from militaries around the world, including the US Army Foreign Military Studies. One limitation of this source is that it does not take into account the German situation, such as the fact that Germany was running low on reserves and resources during the Battle of Kursk, which is a significant factor that contributed to the outcome of the battle. The purpose of this source is another limitation for historians, since it does not present a balanced argument as to whether the result of the Battle of Kursk was a German tactical failure or a Soviet tactical victory.

The second source evaluated in depth is Robert M. Citino's presentation at the International Conference on World War II at the National WWII Museum in 2013. The purpose of this presentation is to provide a tactical (how the generals planned the battle) and operational (how it really turned out) approach to the Battle of Kursk in a way that entertains the audience. The content of the source outlines the course of the battle, from its beginning to Hitler's order to halt the operation, through two perspectives: a military tactical perspective and an operational perspective. One value of the source is that the address provides insights into the preparation for the Battle of Kursk and does not focus only on the course of the battle; by covering the Germans' and the Soviets' preparations, it gives the audience a better understanding of the outcome. Another value is that Citino's address presents multiple perspectives: those of the German and Soviet generals, and the German and Soviet factors that contributed to the outcome of the battle. However, one limitation is that Citino had only about 70 minutes to present, so he may not have given all the details. The purpose, to present the facts in an entertaining way, is also a limitation, as some of his points may be exaggerated or blurred for the sake of entertainment.

Section 2: Investigation

The Battle of Kursk (July 5-August 23, 1943) was an unsuccessful German assault on the Soviet salient around the city of Kursk during World War II. Prior to Operation Citadel, the German offensive to take Kursk that led to the battle, the Heer (German Army) was facing a shortage of infantry and artillery. To initiate the offensive, the Germans moved the majority of their Panzer units near Kursk, increasing the chance of a Red Army counterattack and weakening other fronts. Furthermore, German industry was unable to replace damaged war equipment. On the other hand, the intelligence the Soviets gathered on German troop concentrations spotted at Orel and Kharkov (map in Appendix A) alerted them in advance, enabling them to fortify Kursk. It seemed as if the Red Army had the upper hand. Yet the Soviet defenses were less effective against the Germans than hoped, and the Soviets suffered heavy losses during the battle. Indeed, some historians argue that the battle was not a Soviet victory: the Western Allies saved the Soviets by landing on Sicily, after which Hitler halted Operation Citadel and pulled his forces out of the Battle of Kursk for fear of a Western Allied invasion of mainland Europe.

The Germans were ill prepared for Operation Citadel, as they had neither the resources, such as manpower and oil, nor the industrial capacity to initiate and sustain the offensive. Prior to the operation, the German army suffered from the results of Operation Barbarossa, which left a shortage of infantry and artillery. By 1943, it had lost a considerable portion of its elite forces, replaced by newly recruited soldiers who were undertrained. Although it had managed to assemble around 777,000 men for the operation, its actual combat strength was equivalent to two-thirds of its nominal strength.[10] Even these newly recruited soldiers could not eliminate the shortage of men: by the start of Operation Citadel, units were in total 470,000 men understrength. Germany was also facing fuel shortages for the Luftwaffe; in fact, the Luftwaffe could not sustain an intensive air effort for more than a few days after the operation began. Along with the shortage of resources, Germany lacked the industrial capacity to sustain the offensive. On May 4, when Hitler called his senior officers and advisors to discuss Operation Citadel, Albert Speer, the Minister of Armaments and War Production, explained to Hitler the limits of German industry's ability to replace losses: German industry could not replace the aircraft and tanks that would be damaged over the course of the operation. Because the shortage of resources and the lack of industrial capacity left Germany ill prepared for the offensive, Hitler's top officers, Colonel General Jodl and Colonel General Guderian, opposed the operation; Guderian even advised Hitler to "Leave it (Kursk) alone." But Hitler initiated the operation, and the Battle of Kursk began. As the operation continued, the problems became evident.
The Germans faced a gradual decrease in the total number of their tanks and aircraft as the operation continued, steadily losing strength and air support, which prevented them from advancing at full momentum. The lack of infantry became a bigger problem during the operation. For the Germans to undertake a successful offensive in 1943, they had to hold their ground against counterattacks while advancing into the Soviet defenses; however, the deficit in infantry divisions made it harder for them to secure their original front and new territories.[12] This meant that the German Panzer units had to carry out defensive and offensive operations at the same time, slowing the Germans down. In addition, they had reallocated 70% of all the tanks located on the Eastern Front for the operation, leaving the other fronts defenseless against Soviet attacks.[10] Soviet counterattacks, concentrated on German weak points, slowed the German advance even further. A slowed German offensive meant that the Germans could not conduct their primary strategy, blitzkrieg, and could not win the decisive victory they needed. The prolonged battle caused problems, as the majority of elite German forces and resources were devoted to it; this weakened other fronts, and when the Allies landed on Sicily, Hitler had to promptly halt the operation to reinforce Italy (map in Appendix B) in fear of an Allied invasion of Italy through Sicily.

Although the Red Army prepared thoroughly against the German offensive and attempted counterattacks, the result of Operation Citadel was not a complete Soviet victory; in fact, the Allied forces saved the Soviets. Before the Germans began their offensive, the Soviets were able to fortify Kursk after receiving intelligence about German troop concentrations at Orel and Kharkov (map in Appendix A) and details of an intended German offensive in the Kursk sector. They constructed three main defensive belts around Kursk, each subdivided into several zones of fortification, and interconnected the belts with barbed-wire fences, minefields, anti-tank ditches, anti-tank obstacles, dug-in armored vehicles, and machine-gun bunkers. They hoped to draw the Germans into a trap and destroy their armored vehicles, creating an optimal condition for a counterattack. In addition, special training was provided to Soviet soldiers to help them overcome tank phobia. However, the defenses were less effective than the Soviets had hoped: German tank losses were lower than Soviet expectations, and although the Soviet defenses slowed the German advance, it was still faster than the Soviets had anticipated. As the battle continued, the Soviets were in danger of being encircled by the German Panzer units. Despite the strong Soviet defense, the German generals still considered a German victory likely, and the Germans were able to eliminate Soviet units effectively. During the Battle of Kursk, the Soviets lost roughly three times more men, two times more tanks and four times more self-propelled guns than the Germans. Nevertheless, Hitler cancelled Operation Citadel on July 12 to reinforce Italy, as the Western Allies had invaded Sicily, in fear of an Allied invasion of Europe.
Therefore, it is safe to say that the Soviet defenses, well prepared as they were (Professor Geoffrey Wawro described them as a "Maginot line put on steroids"), were not the main reason the Germans halted the offensive; rather, it was the invasion of Sicily. Some historians, like Dennis Showalter, argue that the Battle of Kursk was a tactical defeat for the Red Army because of the Soviet losses.

In conclusion, the outcome of the Battle of Kursk was a German failure rather than a Soviet victory. Although the Soviets were heavily prepared for a German offensive, the major reasons for the German defeat were that German attention was too narrowly focused on Kursk and that, ill prepared as they were, the Germans could not win a swift victory. A German victory seemed likely until the Allied invasion of Sicily, which saved the Soviets, as Hitler had to halt the offensive and pull his forces out of the Battle of Kursk for fear of a Western Allied invasion of mainland Europe.

Section 3: Reflection

When investigating history, a historian encounters many limitations. For example, identifying the number of deaths may seem reliable, since it is quantitative data. A problem I came across, though, is that it is very hard to prove the reliability of casualty figures, as the numbers vary according to the origins of the sources; governments have often deflated their casualties to inflate their military pride. Perhaps the biggest task of a historian is to minimize bias. During my investigation, I noticed that this task is much easier to fulfill when there are multiple sources from multiple origins, so that historians can compare each source to minimize its bias.

Winston Churchill once said, "History is written by victors." This was true when I was investigating the Battle of Kursk. While I was researching, I had to consider the fact that the Allied perspective dominates examinations of the battle. A number of sources exaggerated the Allied successes and neglected German failures. This was a challenge as I tried to give a balanced argument. To do so, I explored further by reading sources written by German generals who were involved in Operation Citadel. When investigating, historians struggle to find balanced arguments when most of the sources are dominated by one perspective.

Were the moon landings a hoax?

When Neil Armstrong first stepped foot on the moon on July 20th, 1969 at 10:56 p.m. EDT, the world was forever changed. After losing many space battles to the Russians, America had finally taken its rightful place as number one yet again. But maybe not. Maybe the government set it up. Maybe Armstrong actually set foot on a set in Hollywood. Or Area 51. Maybe the United States, so desperate to finally get the upper hand, had set the whole thing up. Since all of the footage and photos came from NASA, how can you prove it was real? After all, didn't we learn that if it can't be proved wrong, it must be true? Maybe not, but the evidence, or lack thereof, is convincing to some. A 1999 poll said that 6 percent of Americans thought the landing was fake while another 5 percent were not sure. This number is far higher among young people. The Smithsonian reported that 27% of Americans aged 18-24 "expressed doubts that NASA went to the Moon" in a 2004 poll. In the end, critics of the landing say there are too many inconsistencies in the footage, like the lack of stars in the sky or the waving of the American flag in an environment that should have no wind. NASA responds with evidence from the rocks that were recovered in the missions. So, which is it? History? Or horseshit?

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard…”

NASA says that these famous words from JFK are what gave life to the Apollo program. The 1962 speech made a promise to make it to the moon within the decade, and NASA got right to work. Set against the backdrop of the Cold War, the "Great Space Race" between Russia and the United States was intense, and Russia was winning. Sputnik 1 was the first man-made satellite to orbit the Earth. And it was Russian. Four years later, Yuri Gagarin was the first person ever to be in space: another Russian. Desperate to get on top again, the United States began the Gemini missions, which were set up to practice important tasks for the moon landing like space walks and orbital docking. All had been going fairly well until a simulation on the Apollo-Saturn (AS) 204 resulted in a flash fire that killed three astronauts: Gus Grissom, Ed White, and Roger Chaffee. The accident also killed the Apollo program for 21 months.

In October of 1968, the program resumed, and more tests were run to practice things like actually reaching the moon, testing the equipment for the landing, and even orbiting the moon without landing. In July of 1969, astronauts Neil A. Armstrong and Edwin E. Aldrin were the first humans to ever set foot on the moon. The landing was broadcast live on television thanks to a 7-pound camera that had been stowed away in a part of the Eagle. The camera was released by Armstrong himself, who pulled a lanyard on the ladder on his way down to the moon's surface. His partner, Aldrin, then pushed a TV circuit breaker, and the world was able to watch man's first steps on the moon live. The broadcast was watched by 600 million people, which by today's standards is three times the Super Bowl audience. An impressive number even today, but in 1969 those 600 million viewers accounted for one fifth of the world's population: a far more impressive number that held the viewership record for years to come.

In total, 12 astronauts, all Americans, have set foot on the moon through the Apollo program. These astronauts returned over 840 pounds of rock and dirt from the moon, which have led to a deeper understanding of the moon and how it was formed. They left behind an American flag and a plaque that read "Here men from the planet Earth first set foot upon the moon. July 1969 A.D. We came in peace for all mankind." The entire Apollo program cost the government 20.4 billion dollars, the same as 120 billion dollars today. The program was cancelled in 1972 for "mundane reasons such as budget decisions and NASA's research goals."

It's a beautiful story. The United States beats Russia once and for all, accomplishes its goals, and makes "one giant leap for mankind." But is it a true story? Americans seem to be unsure. A 2001 special on Fox called Conspiracy Theory: Did We Land on the Moon? summarized and presented the evidence that the landing was all a hoax. The following information is all from that film. Top proponents of the conspiracy say that the astronauts did go to space; they just didn't land on the moon. Instead, Armstrong and Aldrin spent 8 days just orbiting the Earth, waiting to reenter. Conspiracy theorists believe it is technically impossible to land on the moon, and they say this is why no other country has tried it. Most conspiracy theorists believe that the moon landing was actually produced and filmed in the infamous US Air Force facility, Area 51. They say that the set of the lunar landing is still standing in the facility, which is why Area 51 is so heavily guarded. The evidence of this? Conspiracy theorists point to Russian satellite images that show craters that look like those on the moon, and an airplane hangar that could be used as a sound stage.

Besides the above claims, conspiracy theorists mainly rely on inconsistencies for their evidence. They begin with the lack of engine noise in the landing videos. The sound of the engine and rockets, prevalent in the take-off videos, cannot be clearly heard during the landing, and conspiracy theorists say you would not actually be able to hear the astronauts over the engine roar. They also say that the landing vehicle, the "Eagle," landed far too easily on the moon. They point to Armstrong's test in the desert that was attempted a few months before the landing. The test was a complete disaster and resulted in Armstrong having to eject as the vehicle crashed and exploded. The next time a landing vehicle was seen was the flawless landing on the moon's surface. As the vehicle landed on a surface that Armstrong described as "almost like a powder," conspiracy theorists say there should have been a blast crater created by the blasters on the landing vehicle. Yet not only is there no blast crater seen in any photos or videos from Armstrong's landing, there are no photos or videos of a blast crater from any of the six Apollo missions. Conspiracy theorists also say that as the vehicle created this crater (which it didn't, but it should have), moon dust should have been kicked up by the vehicle and then fallen onto its feet. Yet there is no dust on the feet in the photos from the landing. Conspiracy theorist and self-proclaimed moon landing investigator Bill Kaysing, who previously served as an engineer for the company that designed the Apollo rockets, says that the lack of dust is "conclusive evidence of the hoax."

The next inconsistency that conspiracy theorists point to is the video of the American flag that Armstrong plants on the moon. It appears to be waving, which is problematic since there should be no wind on the moon. Conspiracy theorists say this is clear evidence that there was a breeze on a set in Area 51 and the videos could not have been taken in space. They also look at the still photography taken on the moon. The photos were taken by the astronauts, who had cameras strapped to their suits. The issue is that the man who designed the cameras says there was no way for Armstrong or Aldrin to see what they were actually capturing. With the way the cameras were attached, the astronauts could not adjust them for good angles or framing with their hands; they had to use their bodies and make their best guess. Yet many of the photos have objects and people perfectly framed. Conspiracy theorists say the photos are framed too well and are of too high quality to have been taken "blindly."

They also inspect the photos themselves and point out inconsistencies in lighting angles and backlighting; they look at the shadows and believe they indicate multiple light sources, even though the scene should be lit only by the sun. Conspiracy theorists compare the backgrounds of the photos and say that some have identical mountains and craters even when they were supposedly taken miles apart. Conspiracy theorists say this is a sign of bad photoshopping, or the 1969 equivalent of Photoshop. The final inconsistency pointed out is that in all of the photos and videos, the sky is pure blackness. No stars are visible in the sky, which conspiracy theorists say is because the landing was filmed on a set. Paul Lazarus, producer of Capricorn One, a movie about faking a landing on Mars, says that he believes the technology was and is available for NASA to fake the landing, as he did in his movie with Mars. Even former NASA astronaut Brian O'Leary, who also served as a scientific advisor to the Apollo program, said in the special that it was certainly plausible NASA could have pulled off the hoax.

If this is all a hoax, though, what happened to astronauts Gus Grissom, Ed White, and Roger Chaffee, who burned to death in the simulation? Conspiracy theorists have an answer for that too. They were executed. Not formally, but rather the "accidental" fire in the simulation was set on purpose. One of the biggest advocates for this theory is the family of one of the victims, Gus Grissom. Grissom was an open critic of the space program, and both his wife and son believe that, at the very least, NASA is withholding information from them about what really happened. Conspiracy theorists take this idea much further and say that government officials purposely set the fire to silence the critical Grissom before he learned too much, or before he could tell the public too much. For their evidence, conspiracy theorists cite the mysterious circumstances around the fire, the lack of investigation details released, and the fact that the pod in which the astronauts died is now forever locked away in a military facility and cannot be investigated.

Those beliefs, or at least some aspects of them, are held by somewhere near 10% of the nation. But why? Conspiracy theories are always fun to think about, but why do Americans actually believe their government would or could pull this off? An article from the Smithsonian tries to offer some answers. It suggests that it is mainly young people who believe the conspiracy theory because they were not around during the time of Apollo. Another factor that makes young people the most skeptical is the plethora of websites pushing the conspiracies, which young people can access more easily than ever before.

The most convincing point the article makes, however, and the one that I relate to the most, is the growing distrust of the government. After government scandals like Watergate and the Lewinsky scandal, we have become so distrusting of government and politicians that I, for one, think the government is capable of almost anything. This distrust in my generation has led to theories like the idea that 9/11 was an inside job, and it has perhaps caused a rise in belief in theories like the moon landing hoax.

How do those involved in the landing respond to the claims? Well, Buzz Aldrin punched a conspiracy theorist in the face. NASA published a fact sheet in 1977 listing why the moon landing was not a hoax and said that the discussion and argument is “an insult to the thousands who worked for years to accomplish the most amazing feats of exploration in history. And it certainly is an insult to the memory of those who have given their lives for the exploration of space.”

For rebuttal, NASA and other government officials have attempted to explain some of the inconsistencies pointed out by conspiracy theorists, but a NASA spokesman said that replying to all of the claims would be unnecessary and a waste of time. One of their explanations is that the flag looked like it was blowing in the wind because the inertia from planting it in the ground kept it moving. As for the lack of stars, officials say that since the moon reflects sunlight, the glare would have made it almost impossible for the astronauts or the cameras to see the stars. They also say the exposure settings on the cameras help explain why no stars can be seen. In response to the lack of a crater caused by the blasters, scientists argue that the vehicle's blasters were running too low and the vehicle was not directly over the surface long enough to cause a crater. What about the multiple light sources seen in the photos? Spaceflight historian Roger Launius of the Smithsonian's National Air and Space Museum says that there are in fact multiple light sources present on the moon. "You've got the sun, the Earth's reflected light, light reflecting off the lunar module, the spacesuits, and also the lunar surface."

NASA's basic argument is that between bad photography equipment and simple science, all of these inconsistencies can be explained. NASA just isn't willing to take the time or energy to explain every single claim. Conspiracy theorists say this is an easy excuse for the things NASA cannot explain, and they don't buy the explanations it does give. They base their arguments on circumstantial evidence of discrepancies and a few outlandish claims. The government plays the role of the defense in the argument, trying to combat what theorists say with science and the general premise of a trustworthy government.

So, which is more convincing? If the government were on trial accused of falsifying information and I were a jury member, I certainly couldn't convict it. As someone who does not trust the government at all, I think the conspiracy theorists do make some strong points. There are some genuinely concerning inconsistencies that they point out that are not fully addressed by the government. With that said, some of their claims are so absurd and farfetched that they risk discrediting themselves. On the other hand, the government has some good explanations that make sense. However, just because it can explain one or ten of the inconsistencies doesn't mean it is being fully truthful. The government simply is not able to completely discredit all of the claims. But conspiracy theorists cannot completely verify the claims either, and the burden of proof lies with those making the claims.

So where do the conspiracy theorists go wrong? For starters, they have the appeal-to-the-person fallacy in their logic. "You can't trust them government people," yells the man in the tinfoil hat… The entire premise of this theory is that the government does bad things and cannot be trusted. Although it is not one person being attacked, it is still an appeal to the person, as they are criticizing the morals and intentions of the government. There are also signs of the appeal-to-authority fallacy in a couple of cases. For example, the man who worked for the company that designed the rockets is not an expert on photographic exposure by any means, but he is presented as an expert and reliable source based solely on his loose connection to the project. This fallacy is not seen in all of their logic, but it can be seen in some of the presentation of their ideas. You can also see the fallacy of appeal to ignorance riddled throughout their arguments. Since NASA has all the evidence, NASA can't be trusted, and NASA's evidence might be deceitful, their argument cannot be proven false; therefore their arguments must be true, right? Not so much. Conspiracy theorists rely on the fact that they cannot be proven wrong as evidence, when it simply is not. Finally, the conspiracy theorists suffer from the false dilemma fallacy. Anything that is not consistent with their understanding of science, or anything they believe cannot be fully understood, must be a government conspiracy, and they do not leave any room for other explanations.

As for human perception pitfalls, assuming the claims are false, the conspiracy theorists suffer most from the pitfall of misinterpretation of random events.

Considerations for distance learning that impact student performance

With the rapid development of the internet, many colleges and universities have begun to offer online courses as a viable alternative to traditional face-to-face instruction. According to Reyneke (2018), distance learning is in high demand around the world because it allows access for many students who do not enjoy the luxury of studying full time at contact-based universities by providing flexibility in terms of time, space, and finances. The term distance learning is often associated with e-learning, mobile learning, or online learning. According to Paulsen (2002), distance learning is characterized by the separation of teachers and learners (which distinguishes it from face-to-face education), the influence of an educational organization (which distinguishes it from self-study and private tutoring), the use of a computer network to present or distribute some educational content, and the provision of two-way communication via a computer network so that students may benefit from communication with each other, teachers, and staff. This instructional practice utilizes a wide range of tools and technology; therefore, the minimum requirements for successful distance learning include hardware such as a computer and webcam or a mobile device, applications like Google Meet, Zoom, or WebEx, and a stable internet connection.

Perceived Strengths of Distance Learning

Distance learning is said to have numerous advantages and applications. The effectiveness in educating students, the cost-effectiveness in combating the rising expense of higher education, and the ability to provide a world-class education to anyone with a broadband connection are just a few of the most crucial (Lorenzetti, 2013).

In a qualitative study on learners' perspectives on web-based learning conducted by Petrides (2002), participants stated that when replying in writing rather than verbally, they tended to think more deeply about the subject areas, showing that distance learning allowed for greater reflection than face-to-face classroom discussion and let them write carefully about their ideas. Flexibility with time is another strength of the online learning environment that researchers have identified (Petrides, 2002; Schrum, 2002). Petrides (2002) reported that participants in an online course said it was easier to work in collaborative groups because there was no need to alter everyone's schedule. Convenience is also an advantage reported in the online learning literature. Distance learning gives students more leisure time to do other things, like being physically active, because there is no time constraint from having to be in class at a certain location or at a certain time. This can be either a good thing or a bad thing, as students can find more time for self-improvement, or it can become a form of distraction (Mangis, 2016).

Distance Learning Amidst a Pandemic

Due to the pandemic, distance learning has been abruptly implemented in full force, resulting in a slew of issues that have impacted the performance of students (Dubey & Pandey, 2020). Chung (2020) stated that one of the greatest challenges regarding distance learning is internet connectivity. In the Philippines, due to a lack of adequate infrastructure and access to technology, it has been questioned whether the country has been able to successfully implement technology for constructivist learning. Recent studies conducted in the Philippines have shown that the accessibility of resources significantly impacts students' distance learning journey and will widen the education gap (Villanueva & Nunez, 2020). Furthermore, some students from middle- to low-income households or distant regions do not have internet access and devices. Albay Rep. Joey Salceda further expressed that only 17% of Filipino students have internet access at home while only 3.74% have mobile phones. Moreover, only about 5% of students have internet access at home stable enough for online learning activities (Daguno-Bersamina & Relativo, 2020).

Although distance learning offers flexibility by minimizing limitations on study in terms of time, access, place, pace of education, and method of study (Dzakiria, 2012), students face other distinct challenges. Leontyeva (2018) stated that in an online environment, the quality of distance learning has both advantages and disadvantages, with availability as a primary advantage and the psychological aspect greatly impacting students. When it came to performing tasks like tests and course requirements, the majority of students reported feeling less anxious. Meanwhile, a significant disadvantage of distance learning was determined to be the lack of open, full-fledged communication with professors and fellow students. The lack of communication, not only with the professor but also among the students themselves, hampered the students' confidence and awareness. It has also been reported that students have difficulty using the setup, and the inefficiency of the technology makes lessons more difficult to grasp. Students' isolation and underdeveloped communication skills are other issues with distance learning options (Northenor, 2020).

Furthermore, based on the study "Online Teaching and Learning in Higher Education during the Coronavirus Pandemic: Students' Perspective", it has been determined that students easily get distracted and lose focus as a result of teachers' lack of well-implemented strategies to keep them focused, but also due to their lack of prior experience with this style of instruction. In addition, environmental disruptors such as noise made by family members or neighbors and the lack of adequate learning space also influence how long students can concentrate while learning online. Another study revealed that students frequently encounter difficulties such as time management, coping with personal stress, deficient IT skills, and a lack of proficiency in English as the language of instruction (Geduld, 2013). Distance learning, according to a recent study by Akaaboune et al. (2021), continues to be a concern, as their findings revealed lower levels of exam performance for students as well as challenges with academic dishonesty.

A recent study by Cacault (2021) determined that attending lectures via live-streaming platforms impacts both ends of the learning spectrum, with low-ability students' performance declining while high-ability students see an increase in their academic performance. Furthermore, the data demonstrated a wide range of impacts on students' academic performance, potentially widening the gap of educational inequality. This was also in line with the results of a study by Bettinger (2017), which established that individuals with lower prior GPAs suffered much more unfavorable consequences from attending online courses. Cacault further specified that students have a general preference for face-to-face interaction with peers and professors: when offered a choice between live-streamed lectures and face-to-face classes, students opted for face-to-face classes, and live-streamed lectures only mildly decreased in-person attendance. This is reinforced by the findings of Weil et al. (2014), wherein students still did not want to forego the opportunities of being in class with peers and faculty. Even though they value online activities, students still favor experiencing class in person.

Making remote learning work for all students, according to Armstrong and Mensah, is difficult. The best tools can be in place, but without equitable access to those tools by all students, adequate preparation time and training for faculty, and the adoption of existing curricula or the development of brand-new course syllabi, it will be difficult to replicate the in-person learning experience online. Studies have further revealed the complexity of distance education, highlighting the numerous variables involved in any educational setting, not to mention other factors affecting the field, such as social, economic, and geopolitical challenges (Saba, 2000). This complexity requires different adjustments from both students and professors.

Al-Adwan (2018) stated that students' perceptions of online learning are influenced by a great number of factors, with "relative advantage, complexity, social influence, perceived enjoyment and the self-management of learning" coming out on top. Al-Adwan further remarked that if the online learning environment is appropriately constructed to have a beneficial impact on students' academic performance, students will be more inclined to fully integrate themselves into it. On another note, online learning environment developers should align their vision and objectives when designing meaningful and customized learning applications that address, point by point, students' academic and non-academic requirements, which in turn results in an increase in performance across the board.

Moreover, Ali (2020) highlighted the importance that staff readiness, confidence, student accessibility, and motivation play in online distance learning. For Higher Education Institutions to successfully assimilate online learning platforms, proper implementation of ICT support and its integration into the online learning environment are needed. Ali further noted that online learning changed the classroom from a teacher-controlled to a learner-controlled environment, shifting the educator's function to that of a facilitator. With this in mind, it is crucial to emphasize that having the proper attitude and perceptions about online distance learning will have a significant impact on the efficiency and effectiveness of distance learning integration.

Faculty readiness to integrate themselves into the digital learning platform, according to Dubey and Pandey (2020), is measured by whether they have prepared the learning modules in advance, their competency in handling existing technologies for online learning, and their ability to address potential challenges as quickly as possible. They also proposed that, in light of the inherent problems of distance learning, solutions for quickly adapting to the changing environment should be implemented, such as instilling awareness in professors and students, strengthening the IT infrastructure of learning institutions, and conducting online training for non-academic staff to integrate them seamlessly into the learning environment.

According to Gopal's research, the quality of the instructor is the most important aspect influencing student satisfaction during online classes. This necessitates a high level of efficiency from the lecturer during lectures. The lecturer needs to understand students' psychology to deliver the course content effectively. If the teacher can deliver the course content properly, it affects students' satisfaction and performance.

In the Philippines, where COVID-19 infections are still growing as of this writing, the continued implementation of distance learning is inevitable. Several aspects must be considered as the Philippines implements this method of education. In addition to the more obvious concerns about internet speed, material costs, and delivery methods, instructor capacity, the learner's circumstances and context, and the efficiency of the learning environment must also be considered, as these can greatly impact the performance of students.


Booklet about Indian Architecture

A : Investigating


Personal interest: The reason I chose this project is that architecture holds significant value in my life and is a topic which raises a lot of questions in my mind. Being raised in a family which often converses about civil architecture and engineering, I have been interested in these fields from a young age. My father himself is a civil engineer and one of my greatest inspirations in life, so I wanted to look deeper into the basis of architecture, as this is the background of engineering.

Context: My topic falls into the global context of orientation in time and space because, essentially, I strengthened my understanding of the evolution of Indian architecture and why it changes through time, which relates to this context. (See appendix a)

Goal: My goal was to create a short yet informative booklet about Indian architecture and why it is so crucial to where architecture stands in today's world. To understand how architecture has reached its modern high point, one must look back into the past and discover how it began. I wanted to achieve this specific goal because I am very drawn to architecture, and I wanted the audience of my product to learn something new after reading the booklet.


The project seeks to explore the architecture of India. I chose this specific project as I have always been very interested in architecture; buildings from all around the world caught my eye, and the variations in housing alone always engrossed me. I initially became interested in architecture when I moved to Dubai in 2004. The modern houses in this country amazed me, and throughout my life I have become very interested in the design of buildings.

This project builds on my prior learning in MYP 4 Geography in year 10. During this course we had a unit about civilisations, and I did a research project about the ancient Indus Valley civilisation. During that project I developed research skills and learned to keep an open mind, because some of the new information I learnt about this civilisation was shocking. The project will also build on my English Language & Literature skills, which I have developed over the course of 5 MYP years. My personal project will develop my understanding of architecture by increasing my knowledge of the subject through research. It will develop my skills in researching and finding reliable information, as I will look through a range of different sources but only utilize valuable information. The project may be challenging for me as it will test my understanding of architecture in India. It will also challenge my tendency to believe all the information I read; I will have to make sure I find the same specific information in at least two or more sources before I use it in my booklet.


For my selection of sources I began by searching the internet; then I had to pick and choose a series of sources by reading the information and analysing each webpage. Although there was a large range of digital resources available, I had to be selective because not all sources that I came across were fully trustworthy, so I could not use every piece of information I found. Then I went online to book websites to find suitable books relating to architecture, and I found three books which I ended up purchasing from bookshops in India. The first book I used is Masterpieces of Traditional Indian Architecture by Satish Grover; this book gave me real-life examples of Indian architecture to look at and analyse. The second book, Architecture in India since 1990 by Rahul Mehrotra, describes the change within the culture behind Indian architecture since 1990. The book taught me that pluralism, fusion and hybridity are the dominant traits of cultural change in twenty-first-century India, and the resultant architecture reflects the fabric of one of the world's largest and most populous states. Architect, educator and author Rahul Mehrotra has been at the forefront of the Indian contemporary architecture scene for more than two decades and demonstrates his valuable understanding through this book. It was useful to me because it allowed me to fathom what architecture was like in the past and its development through time. The final book I used was Percy Brown's Indian Architecture (Hindu Period), which provides information regarding Hindu architecture specifically and showed me the innovation which took place in the past. I believe my source selection is favorable, as I used a range of sources and I conducted CRAAP tests for each of my sources (see appendix d).



These are my success criteria for the product:

1. The booklet should have at least 7 pages (A4).

2. The booklet should have a range (at least 5) of different photos, including some pictures taken by myself.

3. A range (at least 7) of sources should be used – primary and secondary information.

4. The booklet should have information about modern Indian architecture.

5. The focus of the book should be Indian Architecture.

6. Information about Indian architecture in the past and present should be included.

7. The book should have a contents page.

8. The booklet should have a title.

9. The booklet should have a front cover.

10. The booklet should have a mini blurb.

11. The booklet should have page numbers at the bottom of each page.

12. The booklet should be divided into at least 5 different chapters.

13. The back cover should have information about the author.

14. The booklet should have a citations page.

I chose these criteria because they would ensure that I make an explanatory, high-quality booklet. Each specific criterion holds its own importance (see appendix c). I had to ensure that my criteria mention everything a booklet should include – a front cover, title, back cover and blurb. A blurb is crucial to include in a booklet (n.d.).


The personal project procedure required extensive time planning in order to manage time efficiently. To plan out my time, I created a Gantt chart (see appendix b).

To record my process I kept a process journal; each time I worked on my project I evaluated what I did (see appendix f). Through the process journal I made use of organization and reflection skills, as I organized my tasks and reflected on what I did each day I worked on the project.

I believe the website ManageBac has also been very useful to me in organizing the basis of my project, as it allowed me to have everything in one place. I also kept track of my mentor meetings there (see appendix h).


In completing my project, I faced two major obstacles in managing my time. The first was gathering all my sources – books and websites. The second was putting all the information into my own words. I believe I planned my research time well, as I had given myself enough time to research each chapter, plus additional time to check all my research. I had to use organizational skills while planning my time and choosing which sources to use. I also had to use reflection skills to reflect on my process every day I worked on the project.



My goal was "to compile an informative booklet about Indian architecture and its evolution along the years, including information about the past, present and ideas for the near future". I believe my product has certainly met this goal. I believe my product reflected the global context well whilst also achieving the goal and meeting all my success criteria (see appendix e). One can see that my product responds to the global context by looking solely at the front and back cover of the booklet (see appendix i).


Throughout the project I have had to demonstrate critical thinking skills: identifying problems and developing aims, goals and objectives when I began the project; making inferences and drawing conclusions from my gathered research; and identifying gaps in knowledge and formulating key questions before I began researching. I also had to use creative thinking skills, such as making intuitive judgements whilst searching the web and planning my time. I had to use transfer skills by applying my knowledge, understanding and skills across subjects to create my final product. For example, I used my knowledge of the Indus Valley from the MYP 4 Individuals & Societies course, and my English Language & Literature skills, which I have developed over all my five MYP years, whilst writing the booklet.


Through the personal project I have developed better communication skills. When I began my project I had minimal contact with my mentor, who was fairly busy, and thus I had to solve problems on my own. However, the further I got into the project, the more support I needed, so I had to take the initiative to contact my mentor and book meetings with them through Google Calendar. During the meetings with my mentor I had to use active listening techniques to understand them and receive appropriate feedback.

I also had to use social skills, such as negotiating goals and limitations with my mentor to ensure the satisfaction of us both. I helped my peers when appropriate and encouraged them. Furthermore, I often asked my peers for advice and then respected and made use of their feedback.



My product met every one of my criteria (see appendix e), as I had set reasonable criteria from the start which I did not have to change. For example, one criterion was that my product should have at least 7 A4 pages, and in the end it had 20 pages. My objective was to include interesting and valuable information that would engage the reader; the intention was never to pile extensive amounts of information onto the reader and leave them lost within my booklet. I never wanted readers to be overwhelmed by the sheer volume of information, so I aimed for my product to be a short booklet and not a book. I think my outcome was of satisfactory quality, because I had to rush the last few chapters and therefore my last few sections are quite short. I believe my product would also be of higher quality if I had it formally printed and bound. I used to think that writing a booklet would be easy, but I have now learnt that the process of writing an entire booklet is quite long and every element must be precisely completed.


Completing my booklet has certainly extended my knowledge of and insight into Indian architecture, because I read many different books and websites and picked up a lot of new information. The project has also shown me how Indian architecture connects to the global context of "Orientation through space and time", as there is so much history and background to the architecture within the country. I now have a grasp of my topic and know much more than I did at the start of the project.


I believe that the personal project has assuredly helped me grow as a learner. Through the project I have come to know that I learn best under pressure, when there are deadlines I need to meet. The project has shown me that deadlines help organize a person, and the pressure of meeting a deadline drives me, forcing me to work hard. I learnt about the fundamentals of researching and the value of picking out the right sources. The project has taught me how valuable books are: nowadays I rely heavily on the internet as my main source of research, and the personal project has reminded me how much information you can gain from reading a solid, hard-copy book. Some mistakes I made in research were that I kept choosing images which were not acceptable: I forgot that I had to use pictures which were free to use or share, even commercially, and I often forgot to filter for them on Google Images, which took up a lot of my time. I had to ensure I was using the correct images, as this falls under media literacy skills. I found that the process journal was a substantial device which allowed me to record my achievements at every step of the project. The process journal helped me because I wrote down what my next steps in learning would be and constantly referred back to it. The personal project has helped me apply many IB Learner Attributes, such as being an inquirer, being knowledgeable, being reflective, being a communicator, being a thinker and, ultimately, being open-minded. Through the personal project I have had to use social and collaboration skills: respecting and accepting sociocultural difference whilst researching; considering and analysing different opinions, points of view, ideas and preferences; being empathetic towards any information which shocked me; and respecting different opinions and points of view whilst reading the books, as I had to respect the viewpoints of the authors.
I learnt that I am very emotionally attached to India, and the research was very meaningful to me. I used support when I needed it; for example, I had meetings with my supervisor (see appendix h) and I consulted bookshops for research. I met my goal and I am rather satisfied with my product, but moreover I am satisfied with the whole learning experience and happy that this project has helped me learn so much about my home country. This project has helped me grow as an academic student because I have learnt about the importance of time planning and academic honesty, which are both crucial skills in the DP and in college. This process has greatly inspired me: I have built a stronger interest in architecture and I would like to learn about architecture in other countries besides India. As well as this, the research for my product has hooked me on the subject of civil engineering. In the future, I would like to learn more about architecture and civil engineering, as they are both imperative in the modern era. The process has also inspired me to write more booklets and informative texts, because I want people to feel the way I felt about architecture after researching it, and I would definitely like to share my knowledge. The project has left me an inquirer, and I intend to nurture my curiosity in the future.


Ahluwalia, R. (2017). Taj Mahal dropped from tourism booklet by Indian government. The Independent. Retrieved 3 December 2017, from

Architecture from India | ArchDaily. (2017). ArchDaily. Retrieved 5 November 2017, from

Architecture: Past, Present, and Future. (2017). Retrieved 26 June 2017, from

Ajanta Ellora Caves > Images, Paintings, Tours & History @Holidify. (2017). Retrieved 27 November 2017, from

Brown, P. (1942). Indian Architecture (Hindu Period). Bombay: D B Taraporala Sons & Co.

File:Brihadishwara Temple at Sunset – Thanjavur – India 02.JPG – Wikimedia Commons. (2009). Retrieved 5 November 2017, from

Free Image on Pixabay – Hawa Mahal, Indian Architecture. (2017). Retrieved 22 October 2017, from

“Great Bath” Mohenjo-Daro. (2017). Retrieved 5 November 2017, from

Grover, S. (2005). Masterpieces of Traditional Indian Architecture. Greater Kailash, Delhi: Roli Books Pvt Ltd.

India, A. (2017). Architecture Buildings In India. Retrieved 7 September 2017, from

“Indian Architecture.” Indian Architecture – Architecture Styles Of India – Architecture In India – Indian Architecture Styles,

Indo-Persian culture. (2017). Retrieved 22 October 2017, from

Kalder, D. (2009). What’s the point of blurbs?. the Guardian. Retrieved 4 February 2018, from

Lotus Temple. (2017). Retrieved 14 November 2017, from

Mehrotra, R. (2011). Architecture in India since 1990. Berlin, Germany: Hatje Cantz.


Symmetry. (2017). Flickr. Retrieved 12 November 2017, from

Welch, A. (2017). Indian Architecture: India Buildings – e-architect. e-architect. Retrieved 5 November 2017, from

What is a Blurb? (with pictures). (2018). wiseGEEK. Retrieved 4 February 2018, from

10 Masterpieces showing Diversity in Indian Architecture. (2015). Retrieved 23 November 2017, from


Appendix a) Process Journal Entry 1 – June 21 2017

Global Context

Today I chose the global context which best fits my project. The global context that will give context to my personal project is "Orientation in time and space". The lines of inquiry I am aiming to pursue are "Turning points in humankind" and "Explorations of humankind", because I am planning to show how architecture was discovered and the value it has in the world. I must research the impact architecture has on humankind, initial architectural structures, the causes and consequences of architecture, the evolution of architecture, and modern architecture in comparison to the past. The resource I consulted today was the Global Context presentation. The global context ensures that my project has a background theme, and it will also help guide me while doing my project.

Appendix b) Gantt Chart/ Process Journal Entry 2 – Saturday the 24th of June

Today I created a Gantt chart for my personal project. This chart is very important because it will help me with my time planning, and I will refer back to it throughout my entire project. A resource I consulted was an old Gantt chart I created last year in my design class; it helped me create one today because the template was similar. It was difficult to plan out all my tasks, as I had to ensure I was not missing any before I allocated time to them. A challenge I faced was also estimating how much time each of my tasks would require, because I did not know how long specific tasks such as researching would take. Today I put to use the skill of organization, which is helping me self-manage. Over the next few weeks, I aim to gather a good amount of research for the document. An organizational skill I used today was "plan and manage activities to develop a solution or complete a project", when I created the chart. Another skill I put to work today was "select and use applications effectively and productively", because I used Google Sheets as the software to make my chart.

The gantt chart

This image is fairly zoomed out and the tasks are hard to read, so I shall list the tasks here:

Research about the elements that make up a good book
Research about Indian architecture in the past
Research about Modern Indian Architecture
Research about architectural variations around the country
Research about architecture in the future
Develop my own Ideas about possible architecture in the future
Ensure detailed explanations about all topics are written
Make sure all research is complete
Carry out any additional research which is required
Check that all sources are cited properly
Convert research into well organized paragraphs/sections for the book
Find approved (free to use or share, even commercially) images for the book and cite them
Look through my personal photos and pick out photos to include
Add the photos to the allocated paragraphs/sections
Create a title for the book and write a blurb
Design a front and back cover for the book on "Canva"
Check all elements of book and make sure everything is complete

Appendix c) The Criterion Table

The Criteria

Why is this criterion important?

1. The booklet should have at least 7 pages (A4).

Because a booklet should be of a decent length, enough for the reader to pick up new information.

2. The booklet should have a range (at least 5) of different photos, including some pictures taken by myself.

Because a selection of images is required in order for the booklet to be interesting and so that the reader can view something visual as well.

3. A range (at least 7) of sources should be used- primary and secondary information

Sources are vital because this is where the information comes from; a large variety should be used to ensure that I have good information.

4. The booklet should have information about modern Indian architecture.

Modern architecture should be mentioned in the book because it gives the reader a perception of today’s world.

5. The focus of the book should be Indian Architecture.

The focus must be Indian Architecture because that is what my goal states.

6. Information about Indian architecture in the past and present should be included.

Because the aim is for the reader to grasp knowledge about what architecture in India was like in the past and also how it has evolved today- this also allows my project to link to my global context.

7. The book should have a contents page.

This is vital because every informative, nonfiction book includes a contents page- allowing the reader to easily navigate throughout the information.

8. The booklet should have a title.

A title is imperative in order to catch the audience’s attention. A title is a basic norm which represents the booklet- it summarises what the booklet will be about.

9. The booklet should have a front cover.

A front cover is required as it is always present in any booklet or even book (which I found whilst conducting primary research in bookstores; see appendix g). A front cover's goal is always to attract the audience.

10. The booklet should have a mini blurb.

A blurb, according to “The Guardian” and “” is required to advertise the work. Again, it is used to interest the reader in your work.

11. The booklet should have page numbers at the bottom of each page.

Having page numbers not only makes the booklet look sophisticated and formal, it is also required in order for the contents page to function correctly.

12. The booklet should be divided into at least 5 different chapters

Dividing the booklet into chapters allows the reader to read what they are interested in without having to read the entire book. Chapters are also required in order to organize information. Five chapters was the minimum I decided on, because this would ensure that I have enough information for a booklet.

13. The back cover should have information about the author (me).

A section about the author is necessary in order to give the reader background on who wrote the booklet; thus the reader can decide whether this affects the information and whether there is any bias in the booklet.

14. The booklet should have a citations page.

A citations page is essential because any information which is not my own must be cited. There should be no signs of plagiarism in the booklet whatsoever, as this falls under information literacy skills.

Appendix d) The CRAAP test exemplar

The source:

Mehrotra, R. (2011). Architecture in India since 1990 . Berlin, Germany : Hatje Cantz.


When was the information published or posted?

The book was published in 2011

Has the information been revised or updated?

No; it was moderated before publishing, but once published it cannot be updated.

Does your topic require current information, or will older sources work as well?

My topic requires both current and old sources, as I am looking into architecture from the past as well as the present day.

Are the links functional?

This is a book, therefore there are no links.


Does the information relate to your topic or answer your question?

Yes, it relates directly to my topic because I am looking into Indian Architecture and the whole book is based specifically around Indian Architecture.

Who is the intended audience?

The intended audience is anyone who would like to learn about Indian Architecture – for example, civil engineers, architects or even pupils.

Is the information at an appropriate level?

I believe this information is at an appropriate level, as it is a book published for people who want to learn about architecture, who will most likely be at a decent level of understanding.

Have you looked at a variety of sources before determining this is one you will use?

Yes, I looked at all kinds of different books regarding my topic before purchasing this one.

Would you be comfortable citing this source in your research paper?

Yes, I would be comfortable citing this source in my research paper, as it is a reliable, solid published source.


Who is the author/publisher/source/sponsor?

The author is Rahul Mehrotra, who is an architect himself.

What are the author’s credentials or organizational affiliations?

Rahul Mehrotra is principal of architecture firm RMA Architects of Mumbai, India and is Professor of Urban Design and Planning and Chair of the Department of Urban Planning and Design at the Harvard Graduate School of Design (GSD) in Cambridge, USA. (RMA Architects, n.d)

Is the author qualified to write on the topic?

Yes, certainly, as he has completed many architectural projects.

Is there contact information, such as a publisher or email address?

Yes, there is a publisher & contact information at the back of the book.


Is the information supported by evidence?

Yes, the whole book includes several examples of real life architectural works.

Has the information been reviewed or refereed?

It was moderated before publishing.

Can you verify any of the information in another source or from personal knowledge?

Yes, many websites agree with the information given in the book.

Does the language or tone seem unbiased and free of emotion?

Yes, the information is written in a sophisticated and serious tone.

Are there spelling, grammar or typographical errors?

No, this is a high quality published book.


What is the purpose of the information? Is it to inform, teach, sell, entertain or persuade?

The purpose of this information is to teach and inform people who are interested in architecture, yet it is also a published book that is sold, so it can be viewed as a means of financial gain for the author. One may also be entertained or persuaded by the book.

Do the authors/sponsors make their intentions or purpose clear?

Yes, the title is clear and their intentions are clear and well explained throughout the book.

Is the information fact, opinion or propaganda?

Facts, with a small amount of opinion.

Does the point of view appear objective and impartial?

Yes the point of view appears objective and impartial.

Are there political, ideological, cultural, religious, institutional or personal biases?

None; the author is not biased whatsoever.

Appendix e) Table of testing the Product against Criterion

The Criteria

Was the criterion met?

1. The booklet should have at least 7 pages (A4).

Yes, the booklet had more than 7 pages.

2. The booklet should have a range (at least 5) of different photos, including some pictures taken by myself.

Yes, the booklet had 14 images.

3. A range (at least 7) of sources should be used- primary and secondary information

Yes, 15 sources for information were used.

4. The book should have information about modern indian architecture.

Yes, information regarding modern architecture is included.

5. The focus of the book should be Indian Architecture.

Yes, the book is solely based around Indian Architecture.

6. Information about Indian architecture in the past and present should be included.

Yes, information regarding Indian architecture in the past & present is included.

7. The book should have a contents page.

Yes, there is a contents page.

8. The booklet should have a title

Yes, the title of the book is “Indulging Indian Architecture”.

9. The booklet should have a front cover.

Yes, the booklet has a front cover designed by me.

10. The booklet should have a mini blurb.

Yes, the booklet has a mini blurb as follows : “Explore Indian Architecture and learn about its connection to the country’s diverse culture. Dating all the way back from 3000 B.C.E until modern day. With a population of 1.3 billion, India is full of creative minds which contribute to the country’s modernised architecture.”

11. The booklet should have page numbers at the bottom of each page.

Yes, every page has a page number at the bottom of the page.

12. The booklet should be divided into at least 5 different chapters

Yes, the booklet has 8 chapters.

13. The back cover should have information about the author (me).

Yes, the back cover contains an about the author section stating : “In essence, a 15 year old girl who is very fascinated by the subject of Architecture. Connecting to my roots, I have written this booklet about my eminent home country, India. Indian Architecture is something which heavily assisted us towards the architecture we have in the modern era. The country has always withheld advanced architecture which has evoked many questions for me. Follow me, in my journey of discovering compelling architecture.”

14. The booklet should have a citations page.

Yes, the booklet has a references section which includes the sources of every image used, every website I visited and every book I read.

Appendix f) Process Journal Exemplar

Entry 17 – Sunday the 1st of October 2017

Today I started to take all my gathered research and put it into my own words. This took a good amount of time because I had to use a wide range of vocabulary that would fascinate my reader, and to find these words I made heavy use of an online thesaurus. I had to make sure that my own writing differed from that found on websites, which was quite challenging because, no matter how hard I tried, some sentences were almost impossible to fully modify. I had to put information and media literacy skills to use today: understanding that each website or image I use must be sourced; accessing information to be informed and to inform others; reading critically and for comprehension; reading a variety of sources of information (books, websites); collecting research from a variety of print and digital sources and making connections between them; utilizing different media to obtain different perspectives; and utilizing appropriate multimedia technology. Whilst converting the research into my own paragraphs I had to demonstrate critical and creative thinking skills, as the work had to assuredly be my own, and transfer skills, as I incorporated the information into my own work. My next steps in learning are to make sure the writing in my paragraphs is disciplined and sophisticated, so that it engages the reader. The next time I work on my project, I must look at the quality of the writing.

Appendix g) Using Books as a source of Information

Entry 10 – Wednesday the 13th of July 2017

Today I went to three different bookshops in Delhi (as I am here for the summer vacation). My aim for the day was to look for books which could help me in my research, and I had to be very selective. I wanted to utilize different media to obtain different perspectives: I had been using a lot of websites, but I wanted to use books because they would give me a grasp of the authors' knowledge and viewpoints. I wanted to use many different sources in my project, so I already had a list of 10 books I was looking for before going into the bookstores. The list included books such as Percy Brown's "Indian Architecture (Hindu Period)", Satish Grover's "Masterpieces of Traditional Indian Architecture", Rahul Mehrotra's "Architecture in India since 1990", Adam Hardy's "The Temple Architecture of India" and, lastly, Bindia Thapar's "Introduction to Indian Architecture". I purchased three of these exceptional books and cited them immediately, as I knew they would all contribute to my research.

Citations :

Grover, S. (2005). Masterpieces of Traditional Indian Architecture. Greater Kailash, Delhi: Roli Books Pvt Ltd.
Mehrotra, R. (2011). Architecture in India since 1990. Berlin, Germany: Hatje Cantz.
Brown, P. (1942). Indian Architecture (Hindu Period). Bombay: D B Taraporala Sons & Co.

Appendix h) Mentor Meetings

Appendix i) Product Front & Back Cover


The parotid gland / parotid gland tumors

Literature Review


The parotid gland is the largest salivary gland. It is located on each side of the face between the zygomatic arch and the angle of the mandible, and is divided into a superficial lobe and a deep lobe by the plane of the facial nerve and its branches, which pass through it. Salivary gland neoplasms are rare, constituting approximately 3% to 6% of all tumors of the head and neck region (1). Of these, parotid gland tumors constitute 60-75%. Pleomorphic adenoma is the most common parotid gland tumor (PGT), followed by Warthin's tumor.

Pleomorphic adenoma (PA): Most benign tumors can be easily cured by wide local excision, but pleomorphic adenoma, which is the most common salivary gland tumor, has a propensity for local recurrence. Simple enucleation is discouraged.

Depending on the tumor size and location, conventional superficial parotidectomy (SP) or more recent techniques such as partial superficial parotidectomy (PSP) or extracapsular dissection (ECD) may be performed for resection of benign parotid gland tumors located in the superficial lobe of the gland. The proximity of the tumor to the facial nerve is an important factor in determining the surgical approach. Damage to the facial nerve should be avoided, and care should also be taken to prevent other complications. Malignant and deep lobe tumors most often require a total parotidectomy (TP) with preservation of the facial nerve.

Minimising damage to the facial nerve is one of the primary objectives of parotid surgery and has encouraged the development of alternative surgical techniques, including limited superficial parotidectomy, extracapsular dissection and selective deep lobe parotidectomy (28).


The concept of surgical excision of a parotid tumor was introduced by Bertrandi in 1802. Initially this surgery involved an extensive approach, causing serious disfiguration and disability (2). In the first half of the twentieth century, the surgery performed was tumour enucleation: a rapid shelling out of the lump with limited exposure and a high risk of tumor rupture; other forms of subtotal removal were also used (5–7). These approaches were practiced to reduce risk to the facial nerve, and also because of a lack of understanding of the biology of these mixed tumours. Recurrence was noted in 23–31 per cent of patients so treated (3:3).

In the second half of the twentieth century, the approach to parotid tumors changed. Patey and Thackray recommended conservative superficial parotidectomy as the standard operation for parotid tumors lateral to the facial nerve. It was reported in the British Journal of Surgery that this standardisation of parotidectomy techniques had revolutionised the surgery of the parotid gland (3:10). They were of the opinion that incomplete excision and implantation were the most important factors responsible for the recurrence of primary mixed tumours (3). Codreanu performed the first total parotidectomy with preservation of the facial nerve. In 1958, Beahrs and Adson (2:6) described the surgical technique of current parotid gland surgery together with the relevant anatomy.

The decision to excise parotid lesions is made on multiple factors, including the need to obtain a definitive histological diagnosis, suspicion of malignancy, growth of the lesion or associated discomfort, as well as patient preference for removal (3).

Even though superficial parotidectomy (SP) dramatically reduced recurrence rates to 1-4% (31:15), it almost invariably caused complications such as facial nerve paralysis (temporary or permanent), loss of parotid function and poor esthetics. The justification for superficial parotidectomy was the concept that the best way to protect the nerve was complete exposure and dissection. Complications such as hematoma, sialocele, fistula, Frey's syndrome and greater auricular nerve injury were more frequent after parotidectomy than after enucleation (2:9,10).

More recently, for benign and low-grade malignant superficial parotid tumors, more conservative approaches are being followed, which have fewer of the aforementioned complications compared with SP or total parotidectomy (TP). These are: 1. partial superficial parotidectomy (PSP), in which only the branches of the facial nerve associated with the tumor are dissected and the tumor is excised with a generous cuff of normal tissue around it; and 2. extracapsular dissection (ECD), in which only the tumor with a minimal cuff of tissue just outside its capsule is excised (5), with no intentional dissection of the facial nerve. The preferred approach is chosen depending on the location and size of the tumor and its proximity to the facial nerve.

Surgical time is less for PSP and ECD as compared with SP or TP. There is also a higher risk of facial nerve weakness in SP or TP as compared to PSP or ECD.

Parotid gland: surgical anatomy and facial nerve

The parotid gland is the largest salivary gland and is purely serous. Each parotid gland lies in the parotid space, which surrounds the posterior part of the mandibular ramus. The gland is divided into a superficial lobe and a deep lobe by the plane of the facial nerve passing through it. The retromandibular vein serves as a radiographic landmark between the two lobes, as it lies close to the facial nerve. A dense sheath derived from the superficial musculoaponeurotic system (SMAS) overlies the surface of the parotid. The superficial lobe constitutes approximately 80% of the gland; it lies lateral to the facial nerve, antero-inferior to the external auditory canal, and extends down to the angle of the mandible. The remaining 20%, the deep lobe, extends through the stylomandibular tunnel, which is formed anteriorly by the mandibular ramus and posteriorly by the stylomandibular ligament. The location of a mass in the parotid gland should be defined as superficial, deep, or extraglandular to plan the surgical approach that carries the lowest risk of damage to the facial nerve.(4)

The parotid duct, or Stensen's duct, arises from the superficial lobe, runs over the masseter muscle and anterior to the buccal fat, and pierces the buccinator muscle. It then opens at a papilla on the buccal mucosa opposite the second maxillary molar.

An accessory gland is present in approximately 21% of parotid glands(4:3). It is located just superior to the main parotid duct, superficial to the masseter muscle, and drains via a single excretory duct into Stensen's duct. Pathologies that affect the parotid gland can also affect the accessory gland. Several lymph nodes are associated with the external surface of the parotid gland, and occasionally a lymph node can be found in the deep lobe as well.

There are 3 important nerves which are related to the parotid gland: the facial nerve, the greater auricular nerve, and the auriculotemporal nerve.(1)

1. The greater auricular nerve lies on the parotid tail and divides into anterior and posterior branches. It innervates the skin of the face over the parotid gland, the mastoid process, and the region near the tragus and earlobe. It also supplies sensory innervation to the parotid gland itself.

2. The auriculotemporal nerve is a branch of the mandibular division of the trigeminal nerve. It carries parasympathetic fibers from the otic ganglion to the parotid gland.

3. The facial nerve exits posterior to the styloid process from the stylomastoid foramen.

Facial nerve:

The facial nerve exits the skull at the stylomastoid foramen, posterior to the styloid process, and gives off three branches(48):

1. The posterior auricular nerve, which innervates the auricularis muscles,

2. A branch to the posterior belly of the digastric muscle, and,

3. A branch to the stylohyoid muscle.

It then penetrates the parotid fascia and enters the parotid gland, where it divides into upper zygomaticofacial and lower cervicofacial divisions, which in turn give rise to five terminal branches:

• temporal branch

• zygomatic branch

• buccal branches which are superficial and deep

• marginal mandibular branch, and

• cervical branch

Benign parotid tumors(BPT):

Parotid tumors are rare, accounting for approximately 1% to 3% of all head and neck tumors.(6:1) Of these, 75%–85% are benign.(6:1,2,3) The most common benign and malignant parotid tumors are pleomorphic adenoma and mucoepidermoid carcinoma, respectively.(6) Neoplasms constitute 75% of all parotid masses; the remainder are non-neoplastic processes such as cysts and inflammatory lesions. Benign parotid tumors are more likely to occur in females than in males, with the exception of Warthin tumor. The fifth to sixth decade is the predominant age of occurrence, and these tumors occur most commonly in Caucasians.

Preoperative evaluation should also establish the size, extent, location and features of the mass. It is very important to know whether a neoplasm is benign or malignant prior to surgery; moreover, some benign lesions, such as stones or cysts, may not require surgery at all.

Clinical presentation:

Clinical evaluation begins with patient history. Differential diagnosis of a non-neoplastic disease is important. Redness, fever, swelling and elevated WBC count signify an infection(acute or chronic sialadenitis) or obstruction(sialolithiasis). Intraoral examination may identify purulence or calculi at the parotid duct. Benign lymphoepithelial lesions are non-neoplastic glandular swellings associated with autoimmune diseases, like Sjogren’s syndrome.

Benign neoplasms usually present as painless, slow-growing, well-circumscribed, mobile masses and may remain asymptomatic for months to years. Symptoms like pain, fixity, rapid growth, and facial nerve weakness suggest malignancy. Small, low-grade malignant tumors may sometimes present as asymptomatic, benign-appearing lesions. Hence, fine-needle aspiration should be used when in doubt, as it aids in distinguishing benign from malignant tumors.

Classification:

WHO classification of benign salivary tumors(6,55):

• Benign epithelial tumors:

o Pleomorphic adenoma

o Myoepithelioma

o Basal cell adenoma

o Warthin tumour

o Oncocytoma

o Canalicular adenoma

o Sebaceous adenoma

o Lymphadenoma:

 Sebaceous

 Non-sebaceous

o Ductal papillomas:

 Inverted ductal papilloma

 Intraductal papilloma

 Sialadenoma papilliferum

o Cystadenoma

• Soft tissue tumours:

o Haemangioma

o Vascular malformations

o Benign lymphoepithelial cysts

o Lipoma

o Lymph node

o Cystic hygroma

o Congenital anomalies

Ichihara et al(6:21) proposed a classification of benign parotid tumors into superficial, deep, and lower pole (the region inferior to the marginal mandibular nerve) tumors. The lower pole tumors were subdivided into superficial and deep tumors based on their relation to the marginal mandibular nerve. Lower pole tumors had a WT:PA ratio of 2.5:1, a male predominance (1.6:1), and on average older patients than superficial tumors (57.4 vs 52.2 years). The authors concluded that for lower pole tumors, identification and preservation of the marginal mandibular nerve, without wide dissection, might be sufficient. This is different from "true deep lobe" tumors; hence preoperative localization, which is possible using CT/MRI, is important.

Pleomorphic adenoma(PA)

It is the most common tumor of the salivary glands, representing 45% to 75% of all salivary gland tumors. It is also known as benign mixed tumor, as it is believed to be derived from both epithelial and mesenchymal components in varying proportions.

Pleomorphic adenomas most commonly occur in the parotid gland. Approximately 90% of PA are found lateral to the facial nerve, and these may sometimes extend into the deep lobe; the remaining 10% arise in the deep lobe. Clinically, PA presents as a unilateral, painless, mobile, well-demarcated, slow-growing mass. It is more common in women and middle-aged individuals; multiple and/or bilateral PA have been reported very rarely. The cut surface is whitish gray and rubbery, with a capsule of variable thickness.

Histology: PA consists of a mixture of epithelial and myoepithelial components. The epithelial component may take the form of ducts, nests, cords, or solid sheets of cells. The myoepithelial component consists of cells that appear plasmacytoid or spindled in a fibrocollagenous, myxochondroid, or chondroid background, and is considered responsible for the characteristic myxoid or chondroid stroma.

Almost all pleomorphic adenomas have focally thin capsules, and about one-fourth demonstrate satellite nodules or pseudopodia. These pseudopodia are believed to be one reason for recurrence, another being tumor spillage; hence a wider excision has traditionally been preferred for PA. However, recent studies have demonstrated that limited parotidectomy has recurrence rates similar to wider excision.

Recurrence, if it occurs, is challenging to manage and can appear 5-7 years after the primary surgery. Malignant transformation occurs in 3% to 15% of PA, and the risk increases with observation; therefore surgical excision at the time of diagnosis is preferable.


Myoepithelioma

Myoepitheliomas are rare benign salivary tumors (1.5% of all salivary tumors and 0.8% to 3.4% of benign parotid tumors(6)). They are homogeneous, white, and well demarcated, with a smooth surface. Microscopically, they represent one end of the spectrum of mixed tumor, in which ductal structures are extremely rare to absent. They are composed predominantly of spindle cells; epithelioid and clear cells may also be present.

Clinical features are similar to those of pleomorphic adenoma, and no gender predilection has been noted. The majority of these neoplasms behave in a benign manner, although malignant transformation has been reported.

The treatment is surgical excision.

Basal Cell Adenomas

Basal cell adenomas constitute approximately 2.4% to 7.1%(6) of benign parotid tumors and are the third most common benign parotid tumors after PA and WT. They occur more commonly in women and in the fourth to ninth decades of life, presenting as a slow-growing, asymptomatic mass similar to other benign neoplasms. It is a well-circumscribed, solid tumor with a gray-white to pink-brown cut surface. Parotid basal cell adenomas are encapsulated and histologically characterized by basaloid-appearing epithelial cells. Four histologic subtypes have been identified: 1) solid, 2) trabecular, 3) tubular, and 4) membranous.

The treatment is surgical excision with a cuff of normal surrounding tissue.

Papillary Cystadenoma Lymphomatosum: Warthin Tumors or adenolymphoma

Warthin tumor was first described by Aldred Warthin in 1929. It constitutes 25% to 32% of benign parotid tumors and is the second most common benign salivary gland neoplasm after pleomorphic adenoma. It is found almost exclusively in the parotid gland, occurs more commonly in males (ratio 2:1), is associated with cigarette smoking, and is more prevalent in Caucasians. Compared with other benign tumors, WT presents in an older population (50-60 years of age), and it occurs bilaterally 5-12% of the time.(6)

Clinically, WT presents as an asymptomatic, slow-growing mass, often in the tail of the parotid gland at the angle of the mandible. These are usually ovoid, encapsulated masses with a smooth or lobulated surface. On sectioning, the tumor is commonly cystic, containing mucoid, brown fluid.

Microscopically, WTs contain lymphoid tissue and eosinophilic epithelial papillae that project into cystic spaces. The cystic lining is arranged in two layers of cells and the cystic lumen may contain thick secretions or cellular debris.

Technetium scans can aid in preoperative diagnosis, as WTs concentrate this isotope and give the appearance of a "hot gland". FNAB shows a mixture of mature lymphocytes and oncocytic-appearing epithelial cells. WT shows essentially no malignant potential.

Treatment is surgical excision, although its extent is debated, as there are reports of recurrence.

Oncocytoma (Oxyphilic Adenoma, Oncocytic Adenoma)

Oncocytoma constitutes less than 1% of all salivary gland tumors. It is a benign neoplasm of oncocytes that arises most commonly in the parotid gland. Oncocytic cells can also be seen in other benign and malignant salivary pathologies.

It usually presents in the fifth to sixth decade of life, and the gender distribution is almost equal. Oncocytomas are painless, slow-growing tumors that are well circumscribed, lobulated and encapsulated, and may have an orange or red hue. Histologically, the tumor has a granular appearance because of abundant hyperplastic mitochondria. It shows increased uptake of pertechnetate and hence can be seen on radionuclide scans.

Treatment is surgical excision. The prognosis is good, and malignant transformation is rare.

Management of benign parotid gland neoplasms:

Tumors of the parotid gland should be removed completely, generally with an adequate cuff of surrounding normal tissue, while the facial nerve is carefully dissected and preserved. The extent of facial nerve dissection and the amount of parotid tissue resected depend on the location, size, and histology of the tumor.(98)

Small tumors that are located in the parotid tail may require dissection of only the lower division of the facial nerve, saving the upper division from unnecessary dissection. The tumor is removed with the surrounding parotid tissue.

For larger tumors of the superficial lobe, a complete superficial parotidectomy is usually required. Deep-lobe tumors require excision of the deep lobe with careful preservation of the facial nerve. Parapharyngeal tumors are most commonly excised through a cervical-parotid approach.(99)

Diagnostic aids:


Ultrasonography(USG):

USG is an effective tool for distinguishing cystic from solid tumors and for characterizing the anatomy of superficial lobe tumors. Its disadvantage is that it cannot properly visualize deep lobe tumors or the facial nerve. Benign neoplasms are usually hypoechoic with smooth margins. PA are distinguished from WT by poor or absent vascularization, whereas WT are hypervascular. PA have a lobulated contour, posterior acoustic enhancement and, sometimes, internal calcification. WT usually contain anechoic cystic areas internally.

CT and MRI:

Computed tomography(CT) and magnetic resonance imaging(MRI) provide excellent anatomic information because of the high fat content of the parotid gland. CT and MRI help in evaluating the relationship of the tumor to the gland and adjacent structures, including bone and soft tissues. CT aids in evaluating the location of a stone in sialolithiasis, the magnitude of infection and the presence of an abscess, and also helps in distinguishing solid from cystic lesions.(56) The distinction between benign and malignant lesions is poor on CT.

MRI is more sensitive in characterizing neoplasms and best displays spread to adjacent soft tissues or perineural spread.(Cummings) One advantage of MRI is the ability to suggest the diagnosis of pleomorphic adenoma when a hyperintense, well-localized mass is noted on T2-weighted images(52). Warthin tumor shows a heterogeneous signal on T2-weighted images, with bright signal in areas of cyst formation.

CT and MRI are effective in evaluating the size, extent and location of the tumor, but they are poor on their own in diagnosing individual histologies, and distinguishing between benign and malignant lesions.

Fine needle aspiration biopsy(FNAB):

FNAB is a very important diagnostic tool for differentiating benign and malignant tumors. In cases of malignancy, it helps the surgeon in counselling the patient and planning for extensive dissection and cervical lymph node dissection; it also spares the patient unnecessary surgery by differentiating inflammatory from neoplastic conditions. The reported sensitivity and specificity of FNAB range from 74% to 84% and from 88% to 98%, respectively(52:16,20,19).
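As a reminder of how such figures are computed, sensitivity and specificity follow the standard definitions (this is a general statistical note, not specific to the cited studies):

```latex
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}
```

where TP, FN, TN and FP denote true positives, false negatives, true negatives and false positives; in the context of FNAB, a "positive" result is a cytological diagnosis of malignancy.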

Image-guided FNAB may be needed for clinically non-palpable or deeply located tumors. Where FNAB is non-diagnostic, ultrasound- or CT-guided core needle biopsy(CNB) can be used as an alternative. According to recent studies(6:31), CNB is safer and more accurate than FNAB, especially in cases of malignant tumors.

Nuclear imaging:

Nuclear imaging is useful in confirming the diagnosis of Warthin tumor and oncocytoma, as these have increased uptake of pertechnetate and appear as "hot nodules" on radionuclide scans. This is helpful in differentiating pleomorphic adenoma from WT in patients with surgical contraindications.(Cummings:6)


Ultrasound elastography:

This new technique is still under investigation. A mechanical force is applied (with a probe), after which ultrasound is used to measure the deformation of the tissues. It can be used to differentiate between different histologies of parotid tumors.(6:32)

Hence, in cases of parotid tumors, a thorough patient history, clinical examination and preoperative diagnostic tests should be carried out before proceeding to surgical management.

Surgical management:

The goal of a parotid tumor surgery is to completely remove the tumor so as to reduce the risk of recurrence, and to preserve the function of facial nerve and its branches. For a benign parotid tumor located in the superficial lobe of the parotid gland, the surgical trends that are in current practice are:

Superficial parotidectomy(SP):

SP is considered the standard procedure in the treatment of benign parotid tumors. In this procedure, anterograde dissection of the entire course of the facial nerve(FN) is carried out, and the whole superficial lobe along with the tumor is removed "en bloc".

The standard incision is an 'S'-shaped curvilinear incision (modified Blair incision). It begins in the preauricular skin crease, curves around the ear lobe and mastoid process, and extends into a cervical skin crease. Depending on the extent and location of the tumor, the incision may be modified appropriately. The skin flap should be elevated in a plane superficial to the parotid capsule: the anterior flap is raised to the masseter muscle, and the posterior flap is elevated until the mastoid process and the posterior portion of the sternocleidomastoid are exposed. Injury to the peripheral branches of the facial nerve can be avoided by staying close to the skin.

The greater auricular nerve is encountered first, running from the posterior border of the sternocleidomastoid(SCM) muscle to the pinna, parallel to the external jugular vein; it is divided or retracted. Dissection continues, separating the posterior edge of the parotid gland from the SCM and, superiorly, from the external auditory canal. This sharp dissection is continued until the tragal pointer is identified; the FN is located deep to this point. Blunt dissection is continued until the junction of the posterior belly of the digastric and the SCM is identified. The main trunk of the FN is located approximately 4mm superior and parallel to this junction(52).

After identification of the main trunk of the FN, careful dissection is carried out along its surface and the bifurcation of the FN is noted. Blunt dissection of the entire course of the facial nerve is done carefully, separating the entire superficial lobe along with the pseudocapsule and excising it. The dissected facial nerve is preserved with all its branches intact. Absolute hemostasis must be achieved before wound closure begins. A suction drain is placed, the wound is closed in two layers, and a sterile gauze dressing is applied.

Locoregional parotid reconstruction(58,59):

After total, superficial, or partial parotidectomy, patients are usually left with a cosmetic defect that causes disfigurement and decreases quality of life. The goals of reconstruction are to prevent Frey's syndrome, correct any facial nerve abnormality, and improve esthetics by restoring the facial contour. The different types of flaps used in reconstruction are:

• SCM muscle flap, which can be superiorly or inferiorly based. It is the most commonly used flap. Can be used with or without acellular dermis.

• Superficial muscular aponeurotic system(SMAS) flap: used for small defects. Can also be used in conjunction with SCM flap.

• Temporoparietal fascial flap: based on the superficial temporal artery. Used for defects larger than 3cm. Can be used as an interposition flap.

• For larger defects, the supraclavicular artery island(SAI) flap and the anterolateral thigh(ALT) free flap can be used.

• Cervicofacial advancement flaps are used where the tumor involves the overlying skin.

Identification of facial nerve:

Surgical landmarks of the facial nerve include: (21)

1. The tympanomastoid suture line: a palpable hard ridge deep to the cartilaginous part of the external auditory canal. The facial nerve lies 2-6mm deep to it.

2. The tragal pointer: considered the most important landmark. The facial nerve lies approximately 1cm deep and inferior to it. As the tragal cartilage is dissected free from the parotid fascia, its medial aspect acts like a blunt "pointer". Its disadvantage is that it is mobile, asymmetrical and blunt.

3. The posterior belly of the digastric muscle: it lies deep to the sternocleidomastoid, and lateral retraction of the sternocleidomastoid muscle exposes it. The facial nerve is located approximately 1cm above the digastric, near its insertion at the mastoid tip.

The facial nerve can also be identified by retrograde dissection of one or more of its peripheral branches, or by using a facial nerve monitor.


Partial superficial parotidectomy(PSP):

In this technique, the parotid tumor is removed with the associated part of the gland, and the facial nerve is dissected only in the vicinity of the tumor. There are two approaches to dissecting the FN for this technique: anterograde and retrograde.

For anterograde dissection of the FN, the same steps are followed as in SP until the main trunk of the facial nerve is identified; dissection of the branches is then continued depending on the location of the tumor. Only the branches related to the tumor are dissected; the remaining branches are protected.

For retrograde dissection, a standard 'S'-shaped preauricular incision is made according to the size and extent of the tumor. The skin flap is raised to the superior, anterior and inferior borders of the gland. The peripheral branch or branches of the facial nerve are identified first, and retrograde dissection of the nerve is then carried out to an extent that depends on the size and extent of the tumor. The remaining branches are protected, and the main trunk of the facial nerve does not need to be intentionally identified.

In both approaches, the tumor is removed along with a corresponding healthy rim of the gland, usually 1-2cm. Hemostasis is achieved, layered suturing is done, and a suction or Penrose drain is placed.

Anatomic landmarks for branches of the facial nerve (11):

Buccal branch: runs parallel to Stensen's duct and divides into two branches, running upward and downward along the duct.

Marginal mandibular branch: extends anteriorly and inferiorly within the parotid gland, crosses superficial to the retromandibular vein, and may have up to 3 branches.

Zygomaticotemporal branch: crosses the zygomatic arch 8 to 35mm anterior to the anterior concavity of the bony external auditory canal, superficial to the superficial layer of the temporal fascia.(11)

In a study of 363 PSPs conducted by O'Brien(52:24), the incidence of temporary and permanent facial nerve weakness was 24% and 2.5%, respectively, and recurrence was reported in 0.8% of patients. He concluded that for tumors located in the superficial lobe, the treatment of choice is PSP, and indicated that PSP can be performed for excision of most superficial lobe malignant tumors. Lim et al(52:25) reviewed 43 cases of superficial lobe malignant tumors treated by PSP. The overall survival rate and disease-free rate at 5 years were 88% and 79%, respectively. Recurrence occurred in 6 high-grade(n=16) and 2 low-grade(n=27) cases. They concluded that PSP "with appropriate postoperative radiotherapy may be an acceptable procedure in the treatment of low-grade parotid cancers confined to the superficial lobe if the facial nerve is sufficiently distant from the tumor."(52:25)

In a comparative study of 101 patients with benign tumors(52:26), 52 underwent PSP with 0.5 to 1cm tumor-free margins, and 49 underwent SP or total parotidectomy(TP). Early complications were observed in 40% of PSP patients and 100% of SP or TP patients, and temporary facial nerve weakness was significantly more frequent in the SP or TP group. During a 4-year follow-up period, no recurrences were noted in either group. Hence the authors considered the PSP technique justified for the removal of benign parotid tumors.


Extracapsular dissection(ECD):

In this technique, the parotid tumor is dissected immediately outside the tumor pseudocapsule without intentionally identifying or dissecting the trunk or branches of the facial nerve, unless the pseudocapsule is in close proximity to one of the branches.(14)(25) This is the most conservative and practical approach in the surgery of benign parotid tumors.

The standard 'S'-shaped preauricular incision is given and can be modified depending on the size and extent of the tumor. The skin flap is raised immediately superficial to the parotid fascia, and the borders of the tumor are marked with ink. The incision into the gland is made at least 1cm from the edges of the tumor to allow good access, taking care to avoid injury to the greater auricular nerve. The normal parotid tissue is retracted away, revealing loose tissue planes and 2-3mm of tumor capsule. Finding a safe plane of dissection is the key to ECD, and good hemostasis is essential. Pulling on the tumor has to be avoided to reduce the chance of capsule rupture. As dissection continues deeper, it proceeds slowly, and proximity to facial nerve branches can be evaluated with the help of a nerve stimulator. After resection of the tumor, the parotid fascia is sutured back. This prevents the loss of contour seen in SP, and sometimes PSP, eliminates dead space, and reduces the chance of Frey's syndrome. A suction or Penrose drain is placed and the skin is closed in 2 layers.

The safety margin of ECD, especially in cases of pleomorphic adenoma, has been debatable. Some authors(57:4) insist on a 2cm safety margin for PA, while others(57:3) state that a thin connective tissue margin is sufficient for safe removal and minimal recurrence rates. Resection directly adjacent to the tumor capsule is unavoidable in 30% of cases(57:5). According to the literature, there is no significant difference in recurrence rates among the different techniques(57:5,8); in fact, tumor spillage plays a more important role in recurrence. No recurrences were reported in a study of 67 PAs operated by ECD over a mean follow-up period of 7.4 years(57:7).

The rates of postoperative complications after ECD are significantly lower than after SP(57:8,9,10). ECD also provides favourable esthetic results: as most of the healthy parotid tissue remains in place, there is no major difference between the healthy and operated sides. In cases of recurrence, a patient initially operated by ECD is at an advantage over one operated by conventional parotidectomy.(57)

ECD cannot be applied to malignant or deep lobe tumors. The ideal candidates are small, benign tumors in the parotid tail, although tumors elsewhere in the superficial lobe can also be removed by ECD by experienced surgeons. Care has to be taken to prevent rupture of the tumor capsule. Intraoperatively, ECD may have to be converted to formal parotidectomy, so the surgeon must be well experienced in conventional parotidectomy. FNAB should routinely be used for preoperative confirmation of the benign nature of the tumor in cases planned for ECD.

Complications of parotidectomy:

Parotid surgery is often followed by complications. The goal of surgery is to minimise complications as far as possible, yet they do occur in some cases. The frequency of complications is, in general, proportional to the degree of invasiveness of the surgery.(57)


General:

• Hemorrhage or hematoma

• Seroma

• Infection

• Necrosis of skin flap

• Hypertrophied scar


Specific to parotid surgery:

• Facial nerve weakness (temporary or permanent)

• Frey’s syndrome

• Greater auricular nerve weakness

• Salivary fistula or sialocele

• Cosmetic deformity

• Recurrence

Table 1 Complications of parotid gland surgery

Facial nerve weakness:

The risk of facial nerve injury is proportional to the length of nerve dissected; hence the rate of facial paresis after conventional SP is higher than after limited parotidectomy or ECD. Weakness can affect one or many branches. The marginal mandibular branch is the most commonly affected, as it is long and very sensitive, leading to temporary weakness of the lip; this usually improves in 4 to 6 weeks(13). If nerve function returns within 3-6 months, the paresis is considered temporary; recovery can sometimes take up to one year. In some cases there is permanent facial paralysis, in which nerve function does not return to normal. For severe facial nerve damage, reconstruction can be done using nerve grafting.

In a study of 894 patients(17), 395 ECDs and 499 SPs were performed. The rate of temporary facial palsy was 10.6% after SP, lasting 35.1 days, and 11.4% after ECD, lasting 36.7 days. At one-year follow-up, 9 patients after ECD and 3 patients after SP still had facial palsy.

Facial nerve function can be graded using the House-Brackmann grading system, which ranges from Grade I (normal) to Grade VI (total paralysis).

Frey’s syndrome:

Frey's syndrome is believed to be caused by abnormal regeneration of the parasympathetic fibers of the parotid gland, which instead connect to the subcutaneous sweat glands. Postsurgically, the surface of the parotid is left exposed to subcutaneous tissue, allowing this abnormal regeneration to occur. In conventional parotidectomy, reconstruction of the parotid defect using an SCM or SMAS flap helps prevent Frey's syndrome to a great extent. Acellular dermis (AlloDerm) is also used in minor defects.

After ECD there is little residual defect, and suturing the parotid capsule back together helps prevent Frey's syndrome.

In a study of 349 patients(29), 44% of the SP group and 1.3% of the ECD group developed Frey's syndrome. Management includes reassurance of the patient and use of anticholinergics or benzodiazepines; if there is no improvement, botulinum toxin injections can be given. Radiotherapy can also be used.

Greater Auricular nerve weakness:

The GAN enters the parotid fascia at the tail of the parotid, so for tumors in the parotid tail, preservation of the GAN is difficult; it can be preserved about 60% of the time[]. Injury or sacrifice of the GAN causes dysesthesias of the cheek and earlobe, which are usually temporary.

In a study of 377 patients treated by ECD, GAN weakness was the most common complication, occurring in 10% of cases; 5% developed seroma, 3% hematoma, and 2% salivary fistula.


Recurrence:

The rate of recurrence after enucleation was very high (20% to 40%). The introduction of superficial parotidectomy reduced recurrence rates to 1% to 4%. Presently, according to the literature, there is no significant difference in the recurrence rates of SP, PSP or ECD.(references) Pleomorphic adenoma is the most prone to recurrence, owing to its histology with satellite nodules and pseudopodia. Recurrence is believed to be due to intraoperative tumor spillage rather than the extent of resection.

In case of recurrence, a patient who had initially been operated by ECD is at an advantage over one operated by SP.


Politically Significant Music

With reference to the relevant academic literature – whether musicological or sociological – describe the ways in which scholars have written about politics in relation to popular music. Then, making use of well-chosen examples of artists and/or music from the twenty-first century, evaluate the extent to which those authors’ concepts remain relevant.


Music has been undeniably effective as a medium for conveying political ideas and carrying political conversation. Through the course of this essay, I aim to examine the degree to which this remains true, considering the context in which this medium continues to stay relevant. I focus largely on two concepts: the interpretive value of music in relation to politics, and the hierarchy of politically significant music. Drawing on several musicological and journalistic writings, I aim to determine what aspects of popular music make it an effective agent in bridging the gap between positions of power and the working class. Throughout the essay, the tool I use to compare or rebut arguments is postmodernism. With its roots in broad skepticism, subjectivism, and relativism, postmodernism makes it possible to question the significance of an objective hierarchical classification of music, especially in a twenty-first-century society where the critical and analytical tools employed in the study of popular music are being continually developed and refined.

The Interpretive Value Of Music

Music falls into the natural role of being a moderating connection between conflict and resolution for evaluating the quality of conflict resolution. Compared to language as prose, which tends to delimit interpretation, music as a medium serves to liberate interpretation (O'Connell, Castelo-Branco 2010). This remains valid since bracketing conflict with resolution becomes immediately limiting, because it ignores the existence of a middle ground. By defining war and peace as a singular reading of conflict we take an equivocal position that calls into question its fixity as a concept. The beauty of music as a medium for conflict resolution lies in the fact that its cultural politics resides not only in its lyrical expression but in the nature and character of its journey and the poetry in its words. It opens up opportunities to contest expressive meaning, interpretation, and cultural capital. As Tricia Rose states, "it is not just what one says, it is where one can say it, how others react to what one says, and whether one has the means with which to command public space" (Rose 1991). Cultural politics is not simply poetic politics; it is the struggle over context, meaning, and public space. In fact, Lawrence Kramer argues that without musical hermeneutics the value of music is meaningless; he seeks to show how, to interpret music verbally, is to give it a legible place in the conduct of life (Kramer 2011). However, in the same manner that O'Connell and Castelo-Branco see limitations in the interpretation of language as opposed to music, music, especially in a political context, is itself a complex language, and so sometimes poses the same issues faced by the prose it tries to counteract. As is the case with any freedom to interpret, it can be exploited to convey, through the romanticization of political disruption, ideas that are not necessarily honourable.
Postmodern musicology asserts that music must be understood in terms of the particularity of its relations to its various contexts, broadly conceived.

The Hierarchy Of Politically Significant Music

In order to effectively study politically significant music, we must move away from the implication that the hierarchy of politically significant music exists in a perpetual form. As Jacques Attali shows, the ever-evolving nature of popular music is prophetic. He states, "It has always been in its essence a herald of the times to come. Thus, as we shall see, if it is true that the political organization of the twentieth century is rooted in the political thought of the nineteenth, the latter is almost entirely present in embryonic form in the music of the eighteenth century" (Attali 1985). He argues that in a society where music has turned from an immaterial pleasure into a commodity, it is the music illustrative of the evolution of our entire society that sits at the top of the influential pyramid. Considering the fluctuating nature of prominent culture, the topic of the hierarchy of socially relevant music cannot be addressed without discussing Theodor Adorno and his analysis of mass culture and the products of the music industry. A German intellectual and classical pianist with a love for challenging, European music, Adorno expressed his deep distaste for popular music and how it champions the destruction of creative and intellectual thinking: "Even the best-intentioned reformers who use an impoverished and debased language to recommend renewal, strengthen the very power of the established order they are trying to break" (Adorno 1970), essentially stressing that listening to pop or low-brow music made you no better than the repressive, capitalist industry that manufactures it, giving way to the growth of an economic consumerist system that supports the well-off. As musicologist Tia DeNora points out, "Adorno's project begins philosophically with a critique of reason" (DeNora 2003).
However, while Adorno's socio-musical work holds considerable seriousness in its validity, it fails to acknowledge a difference between mainstream commercial pop and less formulaic popular sounds. His insistence that all pop music was based on an economic system that gave power to a few and had one fundamental characteristic, standardization, limited him from appreciating the varied and often anti-establishment stances that pop music took. Instead, he remained steadfast in his belief that progress was found only in music that dismantled traditional approaches to harmony and replaced them with new sounds to stimulate the intellect as well as stir the soul, making it possible to deem his work irrelevant, since the world can be seen as the product of multiple perspectives all of which have some truth. Mendall (2006) understands that the hierarchy lies in the process of composition ("the important thing is not how you justify it, the important thing is how it sounds") and asserts the issue that "the only way you can be recognized is if in the intellectual environment they accept you or reject you." However, Mendall refuses to erase his identity and political views from his music, which might marginalize him within this musical culture, even though he is conscious that the complex relationship between his musical identity and cultural identity could be a straitjacket. This argument rests on a hypocrisy: he maintains a certain authority and legitimacy as a composer of contemporary Western art music while criticizing the system it represents (Randall 2017). In connection with Mendall's thinking, a left-leaning group in the 1950s and 60s, the Scratch Orchestra, saw two oppressive blocs: 'serious art music' and the 'commercialization of pop'. They rejected both serious music and commercial pop and argued that music should be freer and more open in terms of who could participate.
Alan Lomax disagreed and celebrated pop, as did a cultural school that theorised the messages communicated through pop culture: fashion choices, record covers. Pop culture enables the mobilisation of media to challenge dissociations and asymmetries of spectatorship in a way that art music cannot. The blind denigration of pop culture is flawed, and this line of thinking is enabled by postmodernism (Lomax 1966). Dave Randall asserts the importance of a post-modern perspective and argues that the premise of all three positions is problematic. He stresses that they all start with the assumption that the social impact of music is determined by style (Randall 2017). All the styles are inter-fluid, which is why taking independent positions might be problematic. This is supported by the concept of post-structuralism, which holds that all aspects of human experience are textual: everything we know about ourselves and the world is based on language. The arbitrary binary of high-brow and low-brow demonstrates that its motivation lies in cultural dominance. Ania Loomba proposes in her critical review of postcolonial theories that this is the result of the postcolonial perspective. Scholars need to think, she argues, "about how subjectivities of genre are shaped by questions of class, gender and context". Western art music represents the "colonized stance" while popular music represents the opposition. Differentiating the popular from 'higher' forms of music, which especially include western art music and certain types of jazz, fails to acknowledge the colonial, elitist and racist philosophies that have shaped hierarchies of the "high" and the "low" in western art forms. This, however, casts suspicion on critique that does not come from the outside, since it is then an exercise in self-criticism, polluted by postcolonial subjectivity (Loomba 1998). In the context of western pop culture in the 21st century, the colonized stance can be understood to have been replaced by the capitalist stance. This implies that just as high brow would have to exist in a colonial setting to be politically relevant, if we assume that popular music holds any power of influence in the present postcolonial setting, it requires a capitalist economy to exist.

M.I.A.: The British-Sri Lankan musician and activist was instrumental in expanding the conversation of political music to places that had largely been excluded. Released alongside her third album 'Maya', 'Born Free', a music video/documentary directed by Romain Gavras, covers the blanket topic of oppression, put in a horrifying context that is fictional yet imaginable, thus increasing its shock value simply because it is conceivable. M.I.A.
exercises ideological power and resistance through signs and language, inspired by the genocide against Tamils in Sri Lanka, coming face to face with third-world war zones, slums and border towns, and outwardly challenging institutions that have been ignored even by the US. Music enables insurgence in the most undisguised and visible manner. An undeniably powerful piece of cinema, 'Born Free' is an excellent example of how music as a device can be used in an interpretive manner, such that it can prove more effective than addressing the issue directly. It is a medium where witness and documentary truth-telling coexist with an aesthetics of verbal and visual play. When an element of performativity is introduced, it becomes that much easier to digest. "Packaging inherent politics in the form of pleasurable dance music", as Perera explains, M.I.A. recites issues in a conceivable manner such that the guerilla stories of a distant war zone merge with the insurgent metropolitan tactics of survival (Perera 2016). The sentiment M.I.A. conveys in 'Born Free' is so palpable because the irrepressible energy of her music is inseparable from the technological conditions of its emergence: a landscape of pre-recorded samples, computer-generated mixes, file-sharing and the internet, as Meenakshi Durham rightly points out in her analysis of the production of M.I.A.'s music. It is drum-driven dance music that compulsively engages. The aggressiveness of the track is brought out by a soundtrack of sirens, heavy machinery, electronics, explosions and shrieking. The graphic intensity with which it is presented is bound to garner backlash; statements questioning the validity of pop culture were made, such as "M.I.A.'s message about ethnic cleansing is diluted with too many shocking images to make a serious point" (The Guardian 2010). Pop culture surprisingly garners more offense than real-life atrocities, perhaps because we allow it to seep more into our lives, trusting in the safety of its being a passing trend. Pop culture and pop artists hold a unique position in this sense: even when their ethics are questioned, they still enable conversation.

Beyonce: The concept that only high-brow music is worthy of political notice, if we take Adorno's view of high brow as western art music, holds almost no ground in the 21st century. However, postmodernism allows us to reimagine the meaning of high brow. If highbrow, as Adorno saw it, was any music that was autonomous and strayed from the traditional, then it is possible to count numerous 21st-century pop stars as composers of highbrow music. With a focus on Beyonce, a monolith in pop culture and the first of her kind to achieve record levels of success, the justification of her 'highbrow'-ness stems from the claim that "Formation is both provocation and pleasure; inherently political and a deeply personal look at the black and queer bodies who have most often borne the brunt of our politics. All shapes and shades of black bodies are signaled here and move – dare we say "forward"? – in formation. Even the song's title is subversive, winking at how we have constructed our identities from that which we were even allowed to call our own." (The Guardian 2016) It is journalistic writing such as this, consumed in enormous amounts by the masses, that enables pop stars to gain some sort of political validity. However, this privileges the capitalist stance, focusing on how much money a musician makes rather than on what the music actually conveys. As the New Yorker's Alex Ross commented, "once you accept the proposition that popularity corresponds to value, the game is over for the performing arts. There is no longer any justification for giving space to classical music, jazz, dance, or any other artistic activity that fails to ignite mass enthusiasm" (New Yorker 2017). Pushing the capitalist agenda even further, Beyonce addresses race issues in such a manner that her language dictates that her solution requires embracing the class divide; consider the last line of Formation, "Always stay gracious, the best revenge is your paper".
Where paper stands as the universal sign for cash, Beyonce presents a convenient conclusion to the discussion of racial inequality. This brings into question whether pop music should be granted the importance of being politically admissible, because the stardom of artists such as Beyonce overshadows the opinions of other African-American artists who take different, arguably more appropriate approaches, such as arguing that the US needs less inequality, brought about by a redistribution of wealth. Take, for instance, hip-hop artist Killer Mike, who voiced his belief that everyone deserved economic freedom but met a much sparser reception, since this happened around the time 'Formation' dropped and garnered a massive amount of interest. Had 'Formation' been subjected to criticism in the conventional way that high-brow music is, perhaps it would hold a different social significance; perhaps there would be louder questions about the monetization of racial struggles. Her abiding interest in money, however, can also be seen as having played to her advantage, not in a reductive materialist sense, but because she has a deep understanding of how money informs social and romantic relations.


Where theorists like Adorno have bordered on the hyperbolic in their condemnation of pop culture and its inability to be politically accurate, postmodernism celebrates the ambiguity and mediocrity that come with popular music commenting on politics, because of its ability to be more conceivable. It suggests that everything is constructed, nothing is real, and as such it can question truth formations and the political status quo. Any discussion of pop music in relation to politics calls for society to ultimately operate in a way that is not ignorant. While music as a medium of conflict awareness opens doors for communication, the danger that it romanticizes violence always persists: it becomes inaudible in the commodity and hides behind the mask of stardom. In a culture that accepts the democratization of music with open arms, it might be time to face the reality that is the death of the highbrow. Subsidiarily, studying pop music in relation to politics and employing tools of musicological analysis is not necessarily an aim in itself. Instead, by studying how this music functions in society, both as a medium for art and for news, we can address many issues of broader relevance. It makes it possible to bring to the surface, in the global sphere, the unspeakable violence of small, hidden wars and existing class and racial struggles. We desperately need a more measured critique of the political connections of pop music in order to attain fair assessments of its costs and benefits.


O'Connell J., Castelo-Branco S. (2010) Music and Conflict. Chicago: University of Illinois Press
Jeong H. (1999) Conflict Resolution: Dynamics, Process and Structure. Ashgate Publishing
Attali J. (1985) Noise: The Political Economy of Music. Manchester: Manchester University Press
Loomba A. (2005) Colonialism/Postcolonialism. New York: Routledge
El-Ghadban Y. (2009) Facing the Music: Rituals of Belonging and Recognition in Contemporary Western Art Music. American Ethnologist
Goehr L. (1994) Political Music and the Politics of Music. The American Society for Aesthetics
Perera S. (2016) Survival Media: The Politics and Poetics of Mobility and the War in Sri Lanka. New York: Palgrave Macmillan
Adorno T. (1997) Aesthetic Theory. London: Bloomsbury Publishing
DeNora T. (2003) After Adorno: Rethinking Music Sociology. Cambridge: Cambridge University Press
'High Brow, Low Brow, Who Cares?' Culture Track, 8 April 2016
Ross A. (2017) 'The Fate of the Critic in the Clickbait Age' The New Yorker, 13 March
McFadden S. (2016) 'Beyoncé's Formation reclaims black America's narrative from the margins' The Guardian, 8 Feb
Wilson S. (2015) Music at the Extremes: Essays on Sounds Outside the Mainstream. Jefferson, North Carolina: McFarland & Company
Rose T. (1991) "Fear of a Black Planet": Rap Music and Black Cultural Politics in the 1990s. The Journal of Negro Education
Pickard A. (2010) 'Does MIA's Born Free video overstep the mark?' The Guardian, 28 April
Kramer L. (2011) Interpreting Music. London: University of California Press
Lomax A. (2017) Folk Song Style and Culture. Routledge

Discography/Videography
Romain-Gavras, 2010. M.I.A, Born Free [online video]
IONCINEMA, 2018. Matangi / Maya / M.I.A. | 2018 Sundance Film Festival [Online Video]
M.I.A, 2010. Born Free [CD]. USA: N.E.E.T., XL, Interscope
Beyonce, 2016. Formation [CD]. USA: Parkwood, Columbia


Digital Gasoline/diesel Meter

Chapter 1: Introduction

1.1 Problem Summary

Gasoline/diesel pumps often cheat the common man nowadays. Numerous pumps today are manipulated so that the display shows the amount as entered, but in truth the quantity of gasoline/diesel entering the customer's tank is much less than the displayed value. The pumps are being used for the profit of the pump owner. This results in huge profits for the gasoline/diesel pumps, but at the same time the common man is being cheated. Most of the two-wheelers and four-wheelers in India have analog meters, which do not accurately show the quantity of gasoline/diesel present in the vehicle, and it is also not possible to cross-verify the quantity of gasoline/diesel filled at the pump.

In addition, in this current and competitive world, goods are being digitized owing to the benefits and ease of use this brings. So we took up a project named "Digital Gasoline/diesel Meter". It consists of a digital screen showing the precise quantity of gasoline/diesel contained in the tank. The project addresses the facts above and works out a correct solution for indicating the exact availability of gasoline/diesel in the tank digitally. A level sensor and an ECM/ECU are used to find the gasoline/diesel height, which is economical and also precise.

This document focuses on the study of a variety of gasoline/diesel level measuring sensors appropriate for our development. A number of issues with respect to the current level measurement technique are identified and so an improved alternate digital sensing technology has been suggested, described and justified.

1.2 Aim and Objectives

In this current, fast-running world everything is going digital, to be easily readable and to give correct results. With this idea we started a project, the "Digital Gasoline/diesel Meter", which shows the precise amount of gasoline/diesel remaining in the tank, unlike the formerly used gasoline/diesel meter in which a needle moves to give a rough estimate of the gasoline/diesel left.

A gasoline/diesel indicator is an instrument used to indicate the level of gasoline/diesel contained in the tank. Normally used in cars, bikes and auto rickshaws, these may also be used in other tanks, including underground storage tanks.

The value will be shown in numerical digits such as 1, 2, 3 …10 liters. This project largely focuses on the indication of the gasoline/diesel level in automobile vehicles. In recent times we constantly hear about gasoline/diesel fraud: many pumps have cheated the common man, displaying the quantity as entered while the amount of gasoline/diesel actually entering the vehicle's tank is much less than the displayed value. In India, most automobile vehicles have analog meters, which is why it is not convenient to accurately find out the amount of gasoline/diesel available in two-wheelers and four-wheelers, nor to verify the amount of gasoline/diesel filled into the vehicle at the pump.

The main objective of this project is to create a digital display showing the correct amount of gasoline/diesel contained in the vehicle's tank, which also helps in cross-inspecting the quantity of gasoline/diesel filled at the pump.
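The cross-inspection objective can be illustrated with a small sketch: compare the quantity the pump claims to have dispensed against the measured rise in tank level. The function name, signature and the 0.2 L tolerance below are illustrative assumptions, not part of any actual pump or vehicle firmware.

```python
# Sketch: cross-checking the pump's displayed quantity against the measured
# rise in tank level. All names and the tolerance value are assumptions.

def detect_short_fill(level_before_l, level_after_l, pump_display_l,
                      tolerance_l=0.2):
    """Compare the tank-level rise with the pump display.

    Returns (delivered, shortfall, cheated); a shortfall beyond the
    sensor tolerance suggests under-delivery.
    """
    delivered = level_after_l - level_before_l   # litres actually received
    shortfall = pump_display_l - delivered       # positive => under-delivery
    cheated = shortfall > tolerance_l            # allow for sensor tolerance
    return delivered, shortfall, cheated

# Example: the pump claims 10 L, but the tank level only rose by 9.2 L
delivered, shortfall, cheated = detect_short_fill(3.0, 12.2, 10.0)
```

In practice the tolerance would be chosen from the level sensor's rated accuracy, so that normal measurement noise is not flagged as cheating.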

1.3 Problem

1.3.1 Gasoline/diesel Gauge

A gauge is a device used to indicate the level of gasoline/diesel stored in a tank. It is basically used in cars, two-wheelers and other automobile vehicles, and may also be used for other types of tanks.

The system consists of two prominent parts: the sensing unit and the indication of the gasoline/diesel level. The sensing unit typically uses a float-type level sensor to measure the gasoline/diesel level, while the indicator system measures the electric current flowing through the sensing unit and indicates the gasoline/diesel level.

The two main measurement techniques are:

• Conventional float type

• Microcontroller based measurement technique

Commonly, the fundamental gasoline/diesel indicator system makes use of a resistive float-type sensor to measure the level of gasoline/diesel in the tank. It consists of the sender unit, responsible for measuring the level of gasoline/diesel in the tank, and the gauge unit, responsible for displaying the measured level to the driver.
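A minimal sketch of how the gauge side might turn the sender's resistance into a fuel fraction follows. The 240–33 ohm empty-to-full range is a common sender convention used here purely as an assumption; a real sender's curve is often non-linear and would need calibration.

```python
# Sketch: mapping a resistive float sender's resistance to a fuel fraction.
# The 240 ohm (empty) to 33 ohm (full) range is an assumed convention.

R_EMPTY = 240.0  # ohms when the tank is empty
R_FULL = 33.0    # ohms when the tank is full

def fuel_fraction(resistance_ohms):
    """Linearly map sender resistance to a 0.0-1.0 fuel fraction."""
    frac = (R_EMPTY - resistance_ohms) / (R_EMPTY - R_FULL)
    return max(0.0, min(1.0, frac))  # clamp out-of-range sensor noise

# The midpoint resistance corresponds to roughly half a tank
half = fuel_fraction((R_EMPTY + R_FULL) / 2)
```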

1.3.2 Flow Sensors

Flow sensors are used to measure the quantity of fuel in the tank. They are made from aluminum and its alloys, and are calibrated during the manufacturing process to proper standards and specifications.

A level sensor has to work under every possible atmospheric condition. Not only is it exposed to a great deal of temperature change and vibration, it must also withstand the fuels themselves, including ethanol, methanol, acidic sulphur compounds and fuel additives that can affect the sensor's reliability.

1.3.3 Analog Fuel Meter

Around the globe, most vehicles are fitted with an analog fuel meter. It shows three levels of fuel: Empty, Half and Full. Therefore it is not possible to read the actual amount of fuel available in the tank. In Fig 1 we can see an analog meter, which shows the fuel level using a needle, so we do not get accurate knowledge of the fuel available in the tank. With so little knowledge of the fuel present, we can get into trouble from running low on fuel.

Fig 1: Analog Meter

In the project we are going to implement digital fuel meter.

1.4 Literature Survey

Firstly, we went to different service stations, garages and industries to find out the basic problems faced by the common man, and talked with the public to learn more about the problems they face while operating their vehicles. The basic problems we found were the theft of fuel at petrol pumps and the inaccurate indication of fuel in the vehicle, which tends to cause trouble.

That made us take up this project. We wanted to create a digital meter with a liquid crystal display that shows the accurate amount of fuel left in the tank, with the help of new flow sensors, level sensors and an ECM/ECU.

Secondly, we searched for information on the internet to find out more about current trends and the digital meter. We found material on analog-to-digital conversion and papers on the topic, and thoroughly read papers and patents from authors around the world to learn more about it.

Thirdly, we took the help of our internal guide and the faculty members of our college to learn more about the sensors, the materials of the system, the circuit diagram, etc. They advised us to refer to various books and references to get equipped with the details of the topic, i.e. the "Digital Fuel Meter."

Nitin Jade and co-authors worked on a modified digital fuel indicator system, covering the advancement of the existing fuel meter to a new digital unit. They replaced the sensors in use with new level sensors with accuracy of 85-95%, and made an effective design that prevents petrol theft at the various petrol pumps in the country.

Umesh P. Hade and Prof. A. R. Suryawanshi reviewed the mileage indication of vehicles, implementing a new digital meter in the system with new flow-type sensors and microcontrollers. They evaluated various mileage techniques which can use fuel more economically and avoid fuel wastage.

Avinashkumar and his team published a paper on a digital fuel level indicator for two-wheelers with accurate distance-to-empty indication. Their analog-to-digital conversion had two parts, a sender and a receiver; earlier units lacked an accurate signal-generating unit, and they devised their system to address the problems arising from this, leaving room for later development of the devices. The main advantage was a device that can accurately measure the level of fuel in the tank and correctly determine the distance it can travel with the remaining amount of fuel.

1.5 Tool/Materials

1.5.1 Flow Meter

It is a device used to measure the linear, non-linear, volumetric or mass flow rate of a liquid.

Fig 2: Flow Meter

The different types of flow meters are:

• Mechanical Flow Meter

o Piston Meter

o Gear Meter

o Variable Area Meter

o Turbine Flow Meter

o Woltman Meter

o Single Jet Meter

o Paddle Meter

• Pressure Based Meters

o Venturi Meter

o Orifice Meter

o Cone Meters

• Optical Flow Meters

• Sonar Flow Meters

• Vortex Meter
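Among these, the turbine flow meter is a convenient example for a digital system: it emits an electrical pulse train whose count is proportional to the volume passed. A minimal totalizer sketch follows; the K-factor of 450 pulses per litre and the class name are illustrative assumptions, since the real value is stamped on each calibrated meter.

```python
# Sketch: totalizing a turbine flow meter's pulses into litres passed.
# The K-factor (pulses per litre) is an assumed, meter-specific constant.

class FlowTotalizer:
    def __init__(self, pulses_per_litre=450.0):
        self.k = pulses_per_litre
        self.pulses = 0

    def on_pulse(self, count=1):
        """Accumulate pulse counts (called from the meter's pulse output)."""
        self.pulses += count

    def litres(self):
        """Total volume passed so far, in litres."""
        return self.pulses / self.k

tot = FlowTotalizer()
tot.on_pulse(900)  # 900 pulses at 450 pulses/L is 2 L
```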

1.5.2 Level Sensors

They detect the level of fuel in the tank.

Fig 3: Level Sensor

The different types of level sensors are:

• Float Sensors

• Capacitance Sensors

• Magnetic Sensors

• Pneumatic Sensors

The basic considerations are:

• Density and viscosity

• Chemical composition

• Atmospheric conditions

Level Detection and Measurement by Using a Float Sensor

Fig 4: Working Principle of Level Sensor

Basic Principle: It works on the principle of buoyancy, which states that the upward force on a body immersed in a liquid is equal to the weight of the liquid displaced by it.

Construction: It consists of a float, a sensor unit, a permanent magnet and a switch controller at the end of the rod. A scale is attached to indicate the level of liquid displaced.

Working: Finding the level of a liquid is usually carried out with the help of a float sensor. The float moves up and down with the liquid and transfers this movement to the rod, which indicates the reading on the scale.

It is very simple, accurate and usable for all types of units.

It needs many types of equipment including pressure vessels.

It is largely used in storage tanks for precisely measuring the level of liquid in them.

1.5.3 Fuel Tank

It is a container which is safe for flammable fluids. Fuel tanks are manufactured through a casting process and are made available in a variety of sizes and shapes.

Fig 5: Fuel Tank
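To turn a sensed fuel height into a number of litres, the tank geometry must be known. A minimal sketch for an idealized rectangular tank follows; the dimensions are illustrative assumptions, and a real, irregular tank would instead use a calibrated height-to-volume lookup table.

```python
# Sketch: converting a sensed fuel height to litres for an idealized
# rectangular tank. The dimensions below are illustrative assumptions.

TANK_LENGTH_CM = 40.0
TANK_WIDTH_CM = 25.0

def litres_from_height(height_cm):
    """Volume of fuel up to the sensed height, in litres."""
    volume_cm3 = TANK_LENGTH_CM * TANK_WIDTH_CM * height_cm
    return volume_cm3 / 1000.0  # 1000 cubic cm = 1 litre

# A 10 cm fuel height in a 40 cm x 25 cm tank holds 10 L
```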

1.5.4 Digital Fuel Meter

Fig 6: Digital Fuel Meter

Fig 6 is the basic circuit diagram of the digital fuel meter.

It uses a float sensor, a battery, an ignition system, a relay switch, a resistance and a coil. The float sensor sends signals electrically to the display unit, which converts the signals into readable form and displays the level numerically on the screen.
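How the ECM/ECU side might turn the sensor's electrical signal into the number shown on the screen can be sketched as below. The 10-bit ADC, 5 V reference, the empty/full voltages and the 12 L capacity are all illustrative assumptions, not values from the actual circuit.

```python
# Sketch: converting a raw ADC reading of the sender voltage into the
# litres figure shown on the display. All constants are assumptions.

ADC_MAX = 1023               # 10-bit ADC full-scale count
V_REF = 5.0                  # ADC reference voltage
V_EMPTY, V_FULL = 0.5, 4.5   # assumed sender output at empty / full
TANK_CAPACITY_L = 12.0       # assumed tank capacity

def display_litres(adc_reading):
    """Litres to show on screen, rounded to one decimal place."""
    volts = adc_reading * V_REF / ADC_MAX
    frac = (volts - V_EMPTY) / (V_FULL - V_EMPTY)
    frac = max(0.0, min(1.0, frac))  # clamp readings outside the valid band
    return round(frac * TANK_CAPACITY_L, 1)
```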

Chapter 2: Design: Analysis, Design Methodology and Implementation Strategy

In this chapter, we discuss the development work we did over the course of the semester. It deals with different types of canvas, such as the Ideation, Product Development, AEIOU and Observation matrices, relates to the processes we carried out during the project, and highlights all the details of the work.

2.1 Observation Matrix

Fig 7: Observation Matrix

In this matrix or framework, we went to different places of work and met different people to learn about the general problems they face. The users and stakeholders are the persons we met and had long discussions with.

The basic activities carried out by the general system used in vehicles are shown here. They include passenger transport, goods carriage, heavy-loaded wheels, etc.

The stories shown in the figure relate to the project and suggested that we take it up. The sad and happy stories individually suggest what is beneficial and what is not.

2.2 Ideation Canvas

Fig 8: Ideation Canvas

It relates to the system currently employed. The cheating factor, and the failure to accurately display the fuel level in the tank, are the basis for taking up this project. The ideation canvas tells us what is going on in the current scenario and what is needed in the future. The basic solution is to convert from analog to digital using new float sensors and equipment, and to accurately show the level of fuel in the tank digitally.

2.3 Product Canvas

Fig 9: Product Canvas

This canvas deals with project-related topics and how to implement, build and carry the product into the market. It states the purpose of the project: to go from analog to digital, show the fuel level numerically, add an LCD screen in the interior, and remove theft of petrol at the petrol pumps.

It benefits the general public: students, the common man, sports stars, etc.

The experience of the product is good, as reviewed by the public, because now they can be sure of the fuel level in the tank, displayed digitally on the screen, and with its help they cannot be cheated by the petrol pump.

The change it needs is to be made less expensive, by using other types of sensors, and to have a simpler operating principle.

Public reviews based on this experience have been good.

2.4 AEIOU Matrix

Fig 10: AEIOU Matrix

It is a general matrix related to Activities, Environment, Interactions, Objectives, and Users (AEIOU).

Activities include those related to the current analog system used in vehicles. These highlight the drawbacks of the system and the problems arising from it, making this an important part of the matrix, as it frames the problem the project addresses.

Environment relates to the atmospheric conditions affecting the system: fuel tanks should be kept away from the engine and direct sunlight, the system's wiring must be well insulated, and the sensors must work in all kinds of environments.

Interactions with the common man helped us refine the project and carry out our work.

Objectives define the project work. The most important objective of our project is to convert the analog system to a digital one that accurately measures the fuel level in the tank and displays it on screen numerically.

2.5 Computer Aided Design

Fig 11: Circuit Diagram

With the help of CAD/CAM software, we made a rough circuit diagram of the basic components of the system. It includes a fuel tank, sensors, wiring, an ECU/ECM module and an LED display. The sensors sense the level of fuel in the tank and pass the signals on to the next part, which interprets them, performs the calculations and produces the results on the screen.

Fig 12: Components of the Digital Fuel Meter

Chapter 3: Summary

The current conventional system that uses float-type sensors is traditional and not very accurate; the microcontroller-based method is more precise than the former.

The Digital Fuel Meter is much more precise and reliable than the analog meter, and is expected to cost less than the former. It digitalizes the system using a new type of sensor and an ECM/ECU module, displaying the reading on a screen. It precisely measures the amount of fuel in the tank and also notifies the driver of the distance that can be travelled on the remaining fuel.
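The distance-remaining figure mentioned above is a simple product of the fuel left and the vehicle's mileage; a minimal sketch, with assumed example numbers rather than measured values:

```python
# Hedged illustration of the range estimate the meter could report:
# distance = fuel remaining (litres) x mileage (km per litre).
# The mileage figure is an assumed example, not a measured value.

def estimated_range_km(fuel_litres: float, mileage_km_per_l: float) -> float:
    """Distance the vehicle can travel on the remaining fuel."""
    if fuel_litres < 0 or mileage_km_per_l <= 0:
        raise ValueError("invalid fuel level or mileage")
    return fuel_litres * mileage_km_per_l

print(estimated_range_km(6.0, 45.0))  # e.g. 6 L left at 45 km/L -> 270.0 km
```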

This semester, our team decided to work on the “Digital Fuel Meter”. It indicates the level of gasoline/diesel numerically, and theft detection can also be performed.

It provides much more accurate results, up to 90–95% accuracy, compared with the analog meter.

Our design is very helpful for the ordinary customer, as it prevents them from being cheated.

3.1 Advantages

The following are the advantages of the digital fuel meter.

• It indicates the amount of fuel in the tank.

• It also shows the fuel remaining and the distance the vehicle can travel on it.

• Theft of petrol at petrol pumps can be reduced considerably.

• It also indicates the vehicle's mileage through proper calculations.

• It is more precise than the analog meter.

• It reduces reading errors to a large extent.

• The parallax errors that occur in analog meters are reduced.

• It shows the results numerically.

• The system has no moving parts, which reduces the risk of damage.

3.2 Disadvantages

The following are the disadvantages of the digital fuel meter.

• It is costly.

• Construction is difficult.

• It requires high maintenance.

• The sensors need separate maintenance and care.

• The LCD is battery-powered, so when the battery discharges the display becomes dim and hard to read.

• Readings can vary if there are voltage fluctuations.

3.3 Future Scope

This device can be improved or modified in the near future.

• It can help to locate the vehicle.

• It can be extended to help control vehicle theft.

• It can be used to help control the speed of the vehicle by indicating it on the screen.




Biodegradable synthetic polymers – starch blends: a review


Plastic packaging materials play an important role in the food industry due to their toughness, light weight, and flexibility, properties that ceramics and metals cannot match (1). Nevertheless, the environmental impact of traditional plastics, mainly associated with waste-disposal problems due to their low biodegradability and the difficulty of recovering all used plastic for recycling (2), has prompted the development of environmentally friendly commodities (3), (4). Nowadays, industry is looking to introduce biodegradable polymers into the market, aiming to replace oil-based materials and contribute solutions to environmental problems (5).

Biodegradable polymers are defined as polymers that can be transformed into carbon dioxide, water, methane, and other low-molecular-weight products after undergoing a degradation process mediated by living organisms (bacteria, fungi, yeast, algae, insects, etc.) (6) under specific conditions of light, temperature, and oxygen (aerobic or anaerobic) (7), (8), (9). Nonetheless, the polymer's chemical structure and origin also strongly influence the degradation process (3). The time biodegradation takes ranges from a few weeks to several months, depending on the environmental conditions and the inherent molecular structure of the polymer (6). When these polymers degrade, the production of harmful substances is reduced, because the residues can be incorporated into the natural geochemical cycle (10), (9).

Bio-based and biodegradable polymers have an extensive range of applications, such as pharmaceuticals, biomedicine, horticulture, agriculture, consumer electronics, automotive, textiles, and packaging, the latter perhaps being the most common (11). To date, many biodegradable polymers are available, namely poly(lactic acid) or polylactides (PLA), polycaprolactone (PCL), poly(butylene adipate terephthalate) (PBAT), polyhydroxybutyrate (PHB), polyhydroxyalkanoates (PHAs), and polyesteramide (PEA) (12), (4). Nonetheless, in some cases their high production cost prevents them from being considered substitutes for traditional polymers (13). An attractive alternative is the development of biomaterials from natural, low-cost raw materials, given their extensive availability and renewable character. In this sense, starch-based materials constitute an interesting approach to obtaining environmentally friendly materials with potential for massive use (13).

Starch is a polysaccharide obtained from tubers, roots, and cereals. It has long played an important role as a food ingredient, and it is starting to be used in industry, e.g. the paper and board, industrial binder, pharmaceutical, and textile sectors (14). Native starches exhibit high degradation rates, but they suffer shortcomings associated with poor mechanical properties and processability (15), (16), (17). Therefore, several methods have been studied to meet industry requirements and improve functional properties, e.g. modification (physical, chemical, enzymatic, and genetic) (18), plasticization, and blending starch with biodegradable polymers, the last of which seems the most promising route to enhanced mechanical and thermal properties (19), (17). In addition, different types of nanoreinforcement, such as nanofibers, nanocrystals, and starch nanocrystals, have also been investigated over the last decade (20).

Regarding packaging films, it is known that they must show specific characteristics of strength, rigidity, and permeability that cannot be achieved by a material composed entirely of starch; blending starch with biodegradable polymers, however, allows the material to meet these requirements (21). In general terms, blending biodegradable polymers reduces material cost and gives control over the properties and biodegradation rate desired in the final product (22). Therefore, the aim of this review is to summarize succinctly the current state of the art of starch blends with biodegradable polymers for packaging materials.


Biodegradable polymers are classified into two main groups according to their origin: biologically derived (natural) polymers and synthetic polymers (23). Natural biopolymers come from living organisms, meaning they are available in large quantities from renewable sources; synthetic polymers, on the other hand, are man-made, produced from non-renewable sources such as petroleum, coal, or natural gas (24), (3). However, there is no clear-cut line separating these groups: e.g. poly(glycolic acid) can be obtained either by chemical synthesis from oil-derived starting materials or by fermentation with living organisms (25). This review uses “synthetic” to mean man-made.

2.1 Natural polymers

2.1.1 Biodegradable polymers obtained through fermentation

This group mainly comprises polyesters and neutral polysaccharides produced by microorganisms when they have access to a feed reserve of carbon and an energy source (6). The market currently shows important advances in research on polyhydroxyalkanoates (PHAs), a group derived from hydroxybutyric and hydroxyvaleric acids. PHAs have high molecular weight, and the main chain can bear n-alkyl substitutions. In general terms, these polymers biodegrade slowly (on the order of years) and are biocompatible thermoplastics (25).

The most studied compounds of this group are poly(3-hydroxybutyrate) (PHB), obtained by fermentation with the bacterium Alcaligenes eutrophus, and poly(hydroxybutyrate-co-hydroxyvalerate) (PHBV) (3).

Recent research on PHB production opens the possibility of obtaining it from water hyacinth, one of the most notorious aquatic weeds. One of the most important advantages of this compound is its heat tolerance, with a melting point around 175 °C, which allows its application in packaging films (26).

2.1.2 Biodegradable polymers from chemically modified natural products

This group is dominated by polysaccharides, polymers formed by many sugar units (glucose and fructose); the joining of these monomers results in a polymeric material (25). Here we find cellulose and starch, which have been the most studied materials because of their potential to replace oil-based polymers at large scale and low cost (27).

Starch is a cheap, abundant material; nevertheless, its main problems are poor mechanical properties and water solubility, which is why techniques such as plasticization or blending have been proposed to produce commercially competitive commodities (28).

Cellulose is a tough material obtained from vegetable sources. Compared with starch, it is relatively resistant to biodegradation, although it can still be degraded under aerobic or anaerobic conditions (6).

2.2 Synthetic polymers

This group comprises materials derived from petroleum sources that, after polymerization, become biodegradable polymers according to the EN 13432 standard for biodegradability (29). These biopolymers are generally biologically inert and have predictable properties. Furthermore, they can be mass-produced (23), (2).

Polyesters are the most representative polymers of this group; in turn, they can be classified as aliphatic or aromatic. The aliphatic polyester group includes, for example, poly(caprolactone) (PCL), poly(lactic acid) (PLA), poly(butylene succinate) (PBS), poly(butylene succinate adipate) (PBSA), poly(glycolic acid) (PGA), and poly(vinyl alcohol) (PVA); the main problem associated with them is their melting point around 60 °C, which excludes them from some applications (29). Aromatic polyesters contain aromatic rings or cyclic ether moieties (30).

PLA is one of the most studied compounds because of its high biodegradability, biocompatibility, good processability, and relatively low cost (31). Industrially, companies such as Chronopol, Feberweb, Cargill Dow LLC, and Mitsui Chemicals produce and commercialize PLA (6). The synthesis of this polymer occurs in several steps, but mainly consists of producing lactic acid by bacterial fermentation followed by polymerization (26). To improve the properties of PLA, it has been blended with several hydrophilic polymers, i.e. poly(ε-caprolactone) (PCL), poly(vinyl alcohol) (PVA), poly(ethylene glycol) (PEG), pluronics (triblock copolymers of PEG and poly(propylene oxide)), hyaluronic acid, and poly(vinyl acetate) (PVAc) (32).

PCL is obtained by chemical synthesis from crude oil. Commercially, it has been used extensively in the packaging industry to make compostable bags; nonetheless, a material based only on PCL is not economically viable, which is why PCL is nowadays mixed with a proportion of natural biopolymers (6).

PVA has excellent gas-barrier properties, high strength, tear resistance, adhesion, flexibility, water absorption, and bonding characteristics (33). Industrially, it is used in the manufacture of biodegradable films and in the production of adhesives and paper coatings (6).



Starch is usually plasticized before blending (34), (28). Plasticization yields a thermoplastic starch (TPS), characterized principally by the destructuring of the semi-crystalline structure of native starch (35). The final properties of the TPS differ depending on the nature of the plasticizer added. In general terms, plasticizers increase flexibility and fluidity by weakening the strong interactions between molecular chains; they also increase moisture permeability and reduce density and viscosity by increasing both free volume and chain mobility (36), (37), (38). Not all of these properties are necessarily obtained at the same time, however, and unfortunately TPS remains a very hydrophilic material (28).

Blending TPS with biodegradable polymers is one of the most promising advances for food-packaging applications (39). In general terms, blending aims to reduce production cost, improve barrier and mechanical properties and dimensional stability, decrease the hydrophilic character of starch, and increase biodegradability (34), (28). When macromolecules (e.g. PVA, PLA, PCL, PHB, PBSA, etc.) are blended with TPS or native starch, they form a complex with amylose, yielding a starch blend; it is important to note that amylopectin does not interact and remains in its amorphous state (40).

3.1 Starch/PVA

Gelatinization is a common method for blending starch with PVA; other methods may not be practical because of the narrow gap between the thermo-degradation temperature and the melting temperature (38). Another important factor is their compatibility (both components are polyols), which enables them to form a continuous phase on blending (39). Regarding biodegradability, both starch and PVA are biodegradable in several microbial environments, although the biodegradability of PVA depends on its degree of hydrolysis and molecular weight (39), (41).

The presence of PVA in the blend increases mechanical strength, water resistance, and weatherability (38). In this blend, both PVA and starch can be plasticized into a thermoplastic material, commonly by the casting method (42) with glycerol in aqueous medium as plasticizer.

3.2 Starch/PLA

Owing to the low miscibility between TPS and PLA, additives are required, i.e. compatibilizers, amphiphilic molecules, or coupling agents, together with good melt-blending techniques (34). Poly(hydroxy ester ether), methylene diphenyl diisocyanate (MDI), PLA-graft-(maleic anhydride), PLA-graft-(acrylic acid), PLA-graft-starch, and poly(vinyl alcohol) have been used as compatibilizers in this blend (43). Wang et al. (9) reported no major change in the glass transition temperature (Tg) when starch is added to PLA, while both the tensile strength and the elongation of the blend decrease (9).

3.3 Starch/PCL

PCL is a hydrophobic biodegradable polyester; hence, during blending with starch an undesirable phase separation occurs (44). To increase the compatibility between the two components, an interfacial agent or compatibilizer must be added. Sugih et al. (44) assessed the behavior of two interfacial agents, PCL-g-glycidyl methacrylate (PCL-g-GMA) and PCL-g-diethyl maleate (PCL-g-DEM), in PCL/starch blends, while Singh et al. (45) report the introduction of poly(ethylene glycol) (PEG) into PCL to improve interfacial properties.

The benefits of this blend are evident: adding PCL overcomes the weaknesses of pure TPS and starch, while the reduced crystallinity of PCL simultaneously favors enzymatic degradation (46), (47).

3.4 Starch/PHB-HV

Reis et al. (48) assessed blends of polyhydroxybutyrate-hydroxyvalerate (PHB-HV) with maize starch at different contents, prepared by casting; the blends showed a lack of interfacial adhesion between starch and PHB-HV and heterogeneous dispersion of the starch granules in the PHB-HV matrix. This shortcoming can be reduced by pre-coating the starch with poly(ethylene oxide) (PEO), which improves the PEO–PHBV adhesion (9).

3.5 Starch/PBSA

The addition of starch to PBS improves its flexibility and shortens its biodegradation time, making it possible to expand its applications to packaging and flushable hygiene products (9), (49).

3.6 Ternary Blends

Liao and Wu (32) studied ternary blends of PCL, PLA, and starch (with acrylic-acid-grafted PLA70PCL30 as compatibilizer) to overcome the shortcomings in brittleness and processing properties, and also to reduce the overall cost.

Rahmah et al. (50) present research on a hybrid blend compounded from linear low-density polyethylene (LLDPE), PVA, and starch, in which the mechanical and thermal properties are evaluated.


4.1 Hydrogels

Hydrogels are defined as hydrophilic polymer networks able to absorb large quantities of water (51). Xia et al. (52) and Zhai et al. (51) assessed the preparation of starch/PVA hydrogels and their properties.

4.2 By Casting

Casting mainly consists of mixing the starch and the biodegradable polymer in an aqueous suspension, which is then spread on a hot anti-adhesive coated surface held at a specific temperature; the water evaporates, leaving the blend film (53). In spite of its simplicity, this method can only be used as a laboratory-scale process (20).

4.3 Gelatinization/crystallization

This technique is used when the thermal degradation temperature of the biodegradable polymer is close to its melting temperature, and mainly consists of gelatinizing the starch and the polymer in the presence of a cross-linking agent or plasticizer and water (9). Gelatinization refers to the loss of the semi-crystallinity of starch granules in water at a specific temperature inherent to the type of starch (54).

4.4 Reactive blending

Blending/extrusion usually takes place under specific conditions that promote chemical interaction between the functional groups of the compatibilizer and the biopolymers (55). Chemically, reactive blending implies covalent bonding, rather than weak van der Waals association, between starch and the biopolymer (40).

4.5 Irradiation/cross-linking

Recently, irradiation has become one of the techniques used to promote chemical reactions between polymer molecules (cross-linking). In this vein, Mubarak et al. (56) present a study of a PVA/starch blend cured by UV radiation. UV irradiation also modifies surface properties; Sionkowska (57) studied its effect on PVA surfaces.

There are different types of irradiation, e.g. UV irradiation, γ-irradiation, electron-beam irradiation, and ultrasonic treatment. UV irradiation requires the presence of photosensitizers (e.g. the benzoic acid family) to induce changes in the substrate. γ-irradiation is an ionic, non-heating, environmentally friendly cross-linking treatment for starch that improves functional properties. Electron-beam irradiation is an excitation technique based on the generation of radicals from the breaking of C–H bonds, which mainly induces compatibility between polymers. Ultrasonic treatment uses sound waves beyond the audible frequency range (>20 kHz) and is useful for improving chemical activity (58).

It is important to bear in mind that irradiation is not only a blending technique; it can also be used as a starch-modification process. Gani et al. (59) studied the effect of γ-irradiation on the functional and morphological properties of bean starch, observing reductions in amylose content, swelling index, pasting properties, and syneresis, and increases in solubility index, transmittance, and water absorption.

Note that, depending on the type of processing, one or more components of the blend may be modified.


Food packaging materials need to exhibit specific characteristics, e.g. optical properties, resistance, moldability, and barrier properties against light and water (Vilpoux and Avérous (60)). Each of these requires a specific technique to be quantified, so this review describes some of the most relevant ones.

Only the purpose of each technique is described here, not the details of how it is performed.


5.1 Differential Scanning Calorimetry (DSC)

This technique allows analysis of the crystalline structure of the polymeric blend by showing the enthalpy changes that occur during melting (22). The thermal study mainly permits identification of the glass transition temperature (Tg) and the melting temperature (Tm), which are then compared against a 100% crystalline standard to determine the degree of crystallinity (61), (46). In addition, DSC is useful for determining the degree of miscibility between the components of the blend: a single phase is directly indicated by a single glass transition temperature (22).
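The crystallinity comparison described above can be sketched numerically. A common form of the calculation is Xc (%) = 100·ΔHm / (w·ΔH°m), where ΔHm is the measured melting enthalpy, ΔH°m that of a 100% crystalline standard, and w the mass fraction of the crystallizable polymer in the blend; all figures below are illustrative, not measured data.

```python
# Minimal sketch of the DSC degree-of-crystallinity calculation.
# dhm:     measured melting enthalpy of the sample (J/g)
# dhm_100: melting enthalpy of a 100% crystalline standard (J/g)
# w:       mass fraction of the crystallizable polymer in the blend

def crystallinity_percent(dhm: float, dhm_100: float, w: float = 1.0) -> float:
    if dhm_100 <= 0 or not 0 < w <= 1:
        raise ValueError("invalid reference enthalpy or mass fraction")
    return 100.0 * dhm / (w * dhm_100)

# e.g. a blend containing 70 % polymer, measured enthalpy 25 J/g,
# against an assumed 93 J/g reference value:
print(round(crystallinity_percent(25.0, 93.0, 0.70), 1))
```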

5.2 Scanning electronic microscopy (SEM)


This technique allows observation of the surface morphology, including possible fractures. Arrieta et al. (62) mention using this technique in their studies to observe the microstructure of the film, coating it with a layer of gold to attenuate reflectance.

5.3 X-ray diffraction

This technique allows analysis of the crystalline structure of the sample. Belibi et al. (63) used this method, invoking Bragg's law to calculate the distance between crystal planes from the diffraction angles.
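Bragg's law, as referenced above, is nλ = 2d·sin θ; solving for the interplanar distance d gives a one-line calculation. The default wavelength below corresponds to Cu Kα radiation (a common laboratory X-ray source); the 2θ angle is an example value, not data from the cited study.

```python
# Bragg's law: n * lambda = 2 * d * sin(theta), solved for the
# interplanar spacing d. Wavelength in angstroms (Cu K-alpha default);
# the diffraction angle is given as 2-theta, as reported by diffractometers.
import math

def interplanar_distance(two_theta_deg: float, wavelength: float = 1.5406,
                         n: int = 1) -> float:
    """Interplanar spacing d, in the same units as the wavelength."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

print(round(interplanar_distance(20.0), 2))  # d for a peak at 2theta = 20 deg
```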


5.4 Tensile tests

This test is basically based on analysis of the Young's modulus, strength, and elongation at break, performed on a mechanical tensile tester or a dynamometer (63). The ageing and storage conditions of the sample (e.g. temperature and relative humidity of the storage environment) are determinant in this test; hence, it is important to maintain the same conditions for a set period before testing (47).

In this kind of test, generating a stress–strain curve is essential to evaluate the variations in Young's modulus and yield point as a function of the blend composition (61).
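As a sketch of how the Young's modulus is read off the stress–strain curve discussed above, it is the slope of the initial (linear, elastic) region; the strain/stress pairs below are invented illustrative data, not measurements.

```python
# Young's modulus as the slope of the elastic region of a stress-strain
# curve, estimated by a least-squares fit through the origin.
# Inputs: dimensionless strains and stresses in MPa -> modulus in MPa.

def youngs_modulus(strains, stresses):
    """Least-squares slope through the origin over the elastic region."""
    if len(strains) != len(stresses) or not strains:
        raise ValueError("need matching, non-empty data")
    num = sum(e * s for e, s in zip(strains, stresses))
    den = sum(e * e for e in strains)
    return num / den

# Invented small-strain data for illustration:
strain = [0.001, 0.002, 0.003, 0.004]
stress = [2.0, 4.1, 5.9, 8.0]     # MPa
print(round(youngs_modulus(strain, stress)))
```

In practice the elastic region must be selected first (e.g. strains below the yield point), since including plastic deformation would underestimate the modulus.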




6.1 Nano composites- fillers in starch matrix

The addition of nanofillers to reinforce starch blends is one of the latest advances. The resulting nanocomposites can be made from inorganic or natural materials and can be defined as thermoplastic polymers with a loading of 2–8% of nanoscale inclusions (64).

Nanofillers come in different forms: nanoparticles (spherical or polyhedral), nanotubes, and nanolayers, in all cases exhibiting a large surface area that improves the adhesion between the components of the polymer blend (65).

Xie et al. (66) present a comprehensive review of the different types of nanofillers used in plasticized starch-based matrices, mainly phyllosilicates (montmorillonite, hectorite, sepiolite, etc.), polysaccharide nanofillers (nanowhiskers/nanoparticles from cellulose, starch, chitin, and chitosan), and carbonaceous nanofillers (carbon nanotubes, graphite oxide, and carbon black), among others. That work concludes that the most utilized nanofillers are phyllosilicates, owing to their availability, low price, and high aspect ratio. Polysaccharide nanofillers, on the other hand, require acid hydrolysis in their preparation, which makes their use less environmentally friendly.

The principal properties affected by the addition of nanofillers to starch-based materials are improved thermal stability, an increased biodegradation rate, a better oxygen barrier, and reduced hydrophilicity (63), (66).

6.2 Starch- based nanocrystals

Starch nanocrystals are prepared by subjecting native starch granules to hydrolysis for an extended time without exceeding the gelatinization temperature; this procedure hydrolyzes the amorphous regions, releasing the crystalline lamellae (58). This type of nanofiller has recently gained traction because of its low cost (starch is abundant), renewability, and eco-friendly character (58).

Le Corre et al. (67) reviewed the use of starch nanocrystals as reinforcements in elastomer-based matrices, showing a positive reinforcing effect evidenced by increases in both stress at break and relaxed storage modulus.

The main benefits of adding starch-based nano-biocomposites are increased strength at break and Tg, but there are also associated disadvantages, e.g. increased water absorption and decomposition temperature (66).



1. Kutz, Myer. Handbook of Environmental Degradation of Materials. Waltham : Elsevier, 2012.

2. Chemical syntheses of biodegradable polymers. Okada, Masahiko. Aichi, Japan, 2002, Progress in Polymer Science, Vol. 27, pp. 87-133.

3. Biodegradable Polymers. Vroman, Isabelle and Tighzert, Lan. 2, 2009, Materials, Vol. 2, pp. 307-344.

4. Recycling of bioplastics, their blends and biocomposites: A review. Soroudi, Azadeh and Jakubowicz, Ignacy. 2013, European Polymer Journal.

5. Starch Based Biofilms for Green Packaging. Ali, Roshafima R., et al. [ed.] World Academy of Science, Engineering and Technology. Skudai, Malaysia, 24 October 2012, Vol. 6, pp. 508-512.

6. Biodegradable Plastics from Renewable Sources. Flieger, M., et al. 1, Prague, 2003, Folia Microbiol, Vol. 48, pp. 27-44.

7. Biodegradability and Mechanical Properties of Poly(vinyl alcohol)-Based Blend Plastics Through Extrusion Method. Kopčilová, Martina, et al. 2012, J Polym Environ, Vol. 21, pp. 88-94.

8. Introduction of Environmentally Degradable Parameters. Guo, Wenbin, et al. 5, Tianjin, 2012, Vol. 7.

9. Properties of Starch Blends with Biodegradable Polymers. Wang, Xin-Li, Yang, Ke-Ke and Wang, Yu-Zhong. 3, Chengdu, China, 2003, Journal of Macromolecular Science, Vol. C43, pp. 385-409.

10. Chemical syntheses of biodegradable polymers. Okada, Masahiko. 1, Aichi, Japan, 2002, Progress in Polymer Science, Vol. 27, pp. 87-133.

11. European Bioplastics. [Online]

12. Biodegradable Multiphase Systems Based on. Avérous, Luc. 3, Strasbourg, France, 2004, Journal of Macromolecular Science, Vol. C44, pp. 231-274.

13. Natural fiber-reinforced thermoplastic starch composites obtained by melt processing. Gironès, J., et al. 2012, Composites Science and Technology, Vol. 72, pp. 858-863.

14. Laycock, Bronwyn G. and Halley, Peter J. Starch Applications: State of Market and New Trends. [ed.] P. Halley and Luc Averous. Starch Polymers: From Genetic Engineering to Green Applications. Queensland, Australia: Elsevier, 2014, 14, pp. 381-419. DOI: 10.1016/B978-0-444-53730-0.00026-9.

15. Effects of plasticizers on the structure and properties of starch–clay. Tang, Xiaozhi, Alavi, Sajid and Herald, Thomas. [ed.] J.F. Kennedy and J.R. Mitchell. 3, Manhattan, April 2008, Carbohydrate Polymers, Vol. 74, pp. 552-558. DOI: 10.1016/j.carbpol.2008.04.022.

16. Xu, Xuan, Visser, Richard G.F. and Trindale, Luisa M. Starch Modification by Biotechnology: State of Art and Perspectives. [ed.] Peter Halley and Luc Avérous. Starch Polymers: From Genetic Engineering to Green Applications. Newnes, 2014, 4, pp. 79-102.

17. Preparation and Properties of Plasticized Starch/Multiwalled Carbon Nanotubes Composites. Cao, Xiaodong, et al. 2, July 2007, Journal of Applied Polymer Science, Vol. 106, pp. 1431-1437.

18. Various techniques for the modification of starch and the applications of its derivates. Neelman, Kavlani, Vijay, Sharma and Lalit, Singh. 5, Bareilly, India, May 2012, International Research Journal of Pharmacy, Vol. 3.

19. Bioplastic: A Better Alternative For Sustainable Future. Marjadi, Darshan and Dharaiya, Nishith. 2, Gujarat, India, 2011, Vol. 2, pp. 159-163.

20. Visakh, P.M., et al. Starch-Based Bionanocomposites: Processing and Properties. [ed.] Youssef Habibi and Lucian Lucia. Polysaccharide Building Blocks: A Sustainable Approach to the Development of Renewable Biomaterials. Wiley, 2012, 11, pp. 289-308. DOI: 10.1002/9781118229484.ch11.

21. Preparation and Characterization of Starch/PVA Blend for. Parvin, Fahmida, et al. Dhaka, Bangladesh: Trans Tech Publications, 2010, Advanced Materials Research, Vols. 123-125, pp. 351-354.

22. A Review of Biodegradable Polymers: Uses, Current Developments in the Synthesis and Characterization of Biodegradable Polyesters, Blends of Biodegradable Polymers and Recent Advances in Biodegradation Studies. Amass, Wendy, Amass, Allan and Tighe, Brian. Birmingham, UK, 1998, Polymer International, Vol. 47, pp. 89-144.

23. Biodegradable polymers as biomaterials. Nair, Lakshmi and Laurencin, Cato. Virginia, USA, 2007, Progress in Polymer Science, Vol. 32, pp. 762-798.

24. Foaming of Synthetic and Natural Biodegradable Polymers. Marrazo, Carlo, Di Maio, Ernesto and Iannace, Salvatore. 2007, Journal of Cellular Plastics, Vol. 43, pp. 123-133.

25. Biodegradable Polymers. Zhang, Zheng, et al. Piscataway, New Jersey.

26. Study of Bio-plastics As Green & Sustainable Alternative to Plastics. Laxmana Reddy, R., Sanjeevani Reddy, V. and Anusha Gupta, G. 5, Pradesh, India, 2013, International Journal of Emerging Technology and Advanced Engineering, Vol. 3, pp. 82-89.

27. Chemical modification of starch by reactive extrusion. Moad, Graeme. Clayton South, Australia, 2011, Progress in Polymer Science, Vol. 36, pp. 218-237. DOI: 10.1016/j.progpolymsci.2010.11.002.

28. Starch-based biodegradable blends: morphology and interface properties. Schwach, Emmanuelle and Avérous, Luc. Reims-Strasbourg, France: Society of Chemical Industry, 2004, Polymer International, Vol. 53, pp. 2115-2124. DOI: 10.1002/pi.1636.

29. Bioplásticos. Remar, Red de Energía y Medio Ambiente. 2, Navarra, Spain, 2011.

30. Chemical syntheses of biodegradable polymers. Okada, Masahiko. Aichi, Japan, 2002, Progress in Polymer Science, Vol. 27, pp. 87-133.

31. New heat-resistant. Kazem, Mohammad, et al. Pisa; Modena; Barcelona, 2013, Bioplastics Magazine, Vol. 8.

32. Preparation and characterization of ternary blends composed of polylactide, poly(ε-caprolactone) and starch. Liao, Hsin-Tzu and Wu, Chin-San. 1-2, Taiwan, China, 2009, Materials Science and Engineering A, Vol. 515, pp. 207-214.

33. Synthesis, characterization and antibacterial activity of biodegradable starch/PVA composite films reinforced with cellulosic fibre. Priya, Bhanu, et al. Pradesh, India, 2014, Carbohydrate Polymers, Vol. 109, pp. 171-179.

34. Properties of thermoplastic starch from cassava bagasse and cassava starch and their blends with poly (lactic acid). Teixeira, Eliangela de M., et al. San Carlos, Brazil / Albany, USA: Elsevier, 2012, Industrial Crops and Products, Vol. 37, pp. 61-68. DOI: 10.1016/j.indcrop.2011.11.036.

35. Effects of relative humidity and ionic liquids on the water content and glass transition of plasticized starch. Bendaoud, Amine and Chalamet, Yvan. Saint-Etienne, France, 2013, Carbohydrate Polymers, Vol. 97, pp. 665-675.

36. Byun, Youngjae, Zhang, Yachuan and Geng, Xin. Plasticization and Polymer Morphology. [ed.] Jung H. Han. Innovations in Food Packaging. 2nd ed. Academic Press, 2014, 5, pp. 87-108.

37. Natural-based plasticizers and biopolymer films: A review. Adeodato Vieira, Melissa Gurgel, et al. 3, Campinas, March 2011, European Polymer Journal, Vol. 47, pp. 254-263. DOI: 10.1016/j.eurpolymj.2010.12.011.

38. Structure and properties of urea-plasticized starch films with different urea contents. Wang, Jia-li, Cheng, Fei and Zhu, Pu-xin. Chengdu, China, 30 January 2014, Carbohydrate Polymers, Vol. 101, pp. 1109-1115.

39. Starch-based completely biodegradable polymer materials. Lu, D.R., Xiao, C.M. and Xu, S.J. 6, Quanzhou, China, 2009, eXPRESS Polymer Letters, Vol. 3, pp. 366-375. DOI: 10.3144/expresspolymlett.2009.46.

40. Product overview and market projection of emerging bio-based plastics. Shen, Li, Haufe, Juliane and Patel, Martin K. 4, June 2009, Bioproducts and Biorefining, pp. 25-40.

41. Biodegradation Studies of Polyvinyl Alcohol/Corn Starch Blend Films in Solid and Solution Media. Azahari, N. A., Othman, N. and Ismail, H. 2, Pulau Pinang, Malaysia, 2011, Journal of Physical Science, Vol. 22, pp. 15-31.

42. Effect of a Complex Plasticizer on the Structure and Properties of the Thermoplastic PVA/Starch Blends. Zhou, Xiang Yang, et al. Guangzhou, China, 2009, Polymer-Plastics Technology and Engineering, Vol. 48, pp. 489-495. DOI: 10.1080/03602550902824275.

43. Preparation and characterization of thermoplastic starch/PLA blends by one-step reactive extrusion. Wang, Ning, Yu, Jiugao and Ma, Xiaofei. Tianjin, China, 2007, Polymer International, Vol. 56, pp. 1440-1447.

44. Synthesis and Properties of Reactive Interfacial Agents for Polycaprolactone-Starch Blends. Sugih, Asaf K., et al. Groningen, The Netherlands: Wiley Periodicals, 2009, Journal of Applied Polymer Science, Vol. 114, pp. 2315-2326. DOI: 10.1002/app.30712.

45. Biodegradation of poly(ε-caprolactone)/starch blends and composites in composting and culture environments: the effect of compatibilization on the inherent biodegradability of the host polymer. Singh, R.P., et al. 17, 2003, Carbohydrate Research, Vol. 33, pp. 1759-1769. DOI: 10.1016/S0008-6215(03)00236-2.

46. Thermal properties and enzymatic degradation of blends of poly(ε-caprolactone) with starches. Rosa, D.S., Lopes, D.R. and Calil, M.R. 6, Itatiba, Brazil, 2005, Vol. 4, pp. 756-761. DOI: 10.1016/j.polymertesting.2005.03.014.

47. Properties of thermoplastic blends: starch–polycaprolactone. Avérous, L., y otros. 11, 2000, Polymer, Vol. 41, págs. 4157-4167.

48. Characterization of polyhydroxybutyrate-hydroxyvalerate (PHB-HV)/maize starch blend films. Reis, K.C., y otros. 4, 2008, Journal of Food Engineering, Vol. 89, págs. 361-369.

49. Current progress on bio-based polymers and their future trends. Babu, Ramesh P, O’Connor, Kevin y Seeram, Ramakrishna. 8, 2013, Progress in Biomaterials, Vol. 2.

50. Mechanical and Thermal Properties of Hybrid Blends of LLDPE/Starch/PVA. Rahmah, M., Farhan, M. y Akidah, N.M.Y. 8, 2013, International Journal of Chemical, Nuclear, Metallurgical and Materials Engineering, Vol. 7, págs. 292-296.

51. Syntheses of PVA/Starch grafted hydrogels by iradiation. Zhai, Maolin, y otros. Beijing : s.n., 2002, Carbohydrate Polymers, Vol. 50, págs. 295-303.

52. Controlled preparation of physical cross-linked starch-g-PVA hydrogel. Xiao, Congming y Yang, Meiling. Quanzhou : s.n., 2006, Vol. 64, págs. 37-40. DOI:10.1016/j.carbpol.2005.10.020.

53. Properties of starch based blends. Part 2. Influence of poly vinyl alcohol addition and photocrosslinking on starch based materials mechanical properties. Follain, N., y otros.

54. Meré Marcos, Javier. Estudio del procesado de un polímero termoplástico basado en el almidón de patata amigable con el medio ambiente. Madrid, España : Unversidad Carlos III de Madrid , 2009.

55. Preparation of PHBV/Starch Blends by Reactive Blending and Their Characterization. Avella, M. y Errico, M.E. Arco Felice, Italy : s.n., 2000, Journal of Applied Polymer Science, Vol. 77, págs. 232-236.

56. Preparation and characterization of ultra violet (UV) radiation cured bio-degradable films of sago starch/PVA blend. Khan, Mubarak A., y otros. Dhaka, Bangladesh : s.n., 2006, Carbohydrate Polymers, Vol. 63, págs. 500-506. DOI:10.1016/j.carbpol.2005.10.019.

57. Surface properties of UV-irradiated poly(vinyl alcohol) films containing small amount of collagen. Sionkowska, Alina, y otros. Torun, Poland : s.n., 2009, Applied Surface Science, Vol. 225, págs. 4135-4139. DOI:10.1016/j.apsusc.2008.10.108.

58. Wittaya, Thawien. Rice Starch-Based Biodegradable Films: Properties Enhancement . [ed.] Ayman Amer Eissa. Structure and Function of Food Engineering. 2012, 5.

59. Modification of bean starch by g-irradiation: Effect on functional and morphological properties. Gani, Adil, y otros. Kashmir, India : s.n., 2012, LWT – Food Science and Technology, Vol. 49, págs. 162-169. DOI:10.1016/j.lwt.2012.04.028.

60. Vilpoux, Olivier y Avérous, Luc. Starch- Based plastics. Technology, use and potencialities of Latin American starchy tubers. 2004, Vol. 3, 18, págs. 521-553.

61. Mechanical Properties of Poly(e-caprolactone) and Poly(lactic acid) Blends. Simoes, C. L., Viana, J. C. y Cunha, A. M. Guimaraes, Portugal : s.n., 2009, Journal of Applied Polymer Science, Vol. 112, págs. 345-352.

62. Electrically Conductive Bioplastics from Cassava Starch. Arrieta, Alvaro A., y otros. 6, Córdoba- Medellín , Colombia : s.n., 2011, J. Braz. Chem. Soc., Vol. 22, págs. 1170-1176.

63. Tensile and water barrier properties of cassava starch composite films reinforced by synthetic zeolite and beidellite. Belibi, Pierre C., y otros. Yaoundé, Cameroon : s.n., 2013, Journal of Food Engineering, Vol. 115, págs. 339-346.

64. Physicochemical properties of starch–CMC–nanoclay biodegradable films. Almasi, Hadi, Ghanbarzadeh, Badak y Entezami, Ali. Tabriz, Iran : s.n., 2010, International Journal of Biological Macromolecules, Vol. 46, págs. 1-5. DOI:10.1016/j.ijbiomac.2009.10.001.

65. Ahmed, Jasmin, y otros, [ed.]. Starch-Based Polymeric Materials and Nanocomposites: Chemistry, Processing, and Applications. s.l. : CRC Press, 2012. 978-1-4398-5116-6.

66. Starch-based nano-biocomposites. Xie, Fengwei, y otros. Brisbane, Australia : s.n., 2013, Progress in Polymer Science, Vol. 38, págs. 1590-1628.

67. Dufresne, Alain, Thomas, Sabu y Pothan, Laly, [ed.]. Biopolymer Nanocomposites: Processing, Properties, and Applications. s.l. : John Wiley & Sons, 2013. 978-1-118-21835-8.


Averrhoa carambola L – therapeutic properties



Inflammation is a complex biological response of vascular tissue to harmful stimuli such as pathogens, damaged cells, and irritants. Its role in pain is straightforward, as inflammation is usually accompanied by pain. When inflammation occurs, the body's white blood cells release chemicals into the blood or the affected tissues to protect the body from foreign substances. These chemicals increase blood flow to the area of injury or infection, resulting in redness and warmth. Inflammation is normally acute: it begins as the body starts to fight the antigen and ends when the immune system stops producing the chemicals. In chronic inflammation, by contrast, the body continues to produce the chemicals that cause inflammation (Gollapalli et al., 2012).

Figure 1.1: The inflammation process

Most anti-inflammatory drugs act by inhibiting the cyclooxygenase pathway of arachidonic acid metabolism, which produces prostaglandins. Cyclooxygenase, the enzyme responsible for prostaglandin production, occurs in two isoforms, COX-1 and COX-2; arachidonic acid is also metabolized by a separate lipoxygenase pathway. Prostaglandins are key local hormones that carry local messages, deliver and strengthen pain signals, and induce inflammation. Anti-inflammatory agents are therefore needed to treat inflammatory disease, which has led to the use of herbal plant extracts possessing anti-inflammatory activity. Herbal medicines are widely used by communities to treat mild and chronic ailments, and plants have long been popular sources of drugs: many of the drugs currently available in the market are derived from plant extracts (P. Dasgupta et al., 2013).

Figure 1.2: The arachidonic acid metabolism in inflammation

The modern allopathic system has developed many costly and sophisticated diagnostic methodologies that are at times beyond the reach of ordinary people. Moreover, many modern synthetic drugs may harm more than they help because of their serious side effects. Narcotic drugs such as opioids, and non-narcotic drugs such as hydrocortisone, are widely used for the management of inflammation and pain, and all are well known for their toxicity and side effects. Long-term use of the nonsteroidal anti-inflammatory drugs (NSAIDs), for example, produces gastrointestinal ulcers with the potential for internal bleeding. In contrast, traditional plant-based medicines are favoured as being safer, with fewer harmful effects, and comparatively less expensive than many allopathic medicines available in the market.

Averrhoa carambola L, locally known as star fruit, is the fruit of the carambola tree of the family Oxalidaceae and is one of the herbal plants with the ability to treat mild and chronic ailments. The fruit is found mainly in Brazil, Australia, and Southeast Asia, including Malaysia. The fruits are green when small and unripe, turning yellow or orange when mature and ripe. They are crunchy, with a crisp texture, and when cut in cross-section are star-shaped, which gives the fruit its name. The flesh is light yellow to yellow, translucent, and very juicy, without fiber. The odour of the fruit resembles oxalic acid, and its taste varies from very sour to mildly sweet or sweet.


Figure 1.3: The Averrhoa carambola L fruit

Scientific name: Averrhoa carambola L
Kingdom: Plantae
Subkingdom: Tracheobionta
Superdivision: Spermatophyta
Division: Magnoliophyta
Class: Magnoliopsida
Subclass: Rosidae
Order: Geraniales
Family: Oxalidaceae
Genus: Averrhoa Adans.
Species: carambola

Table 1.1: Taxonomical classification of Averrhoa carambola L

Sanskrit: Karmaranga
English: Starfruit or Chinese gooseberry
Hindi: Kamrakh or Karmal
Malay: Belimbing
Tamil: Thambaratham or Tamarattai
Filipino: Balimbing or Saranate
Indonesian: Belimbing
Gujarati: Kamrakh

Table 1.2: Vernacular names of Averrhoa carambola L

In India, Averrhoa carambola L fruits are available in September and October and again in December and January. In Malaysia, they are produced throughout the year, although some trees fruit heavily in November and December and again in March and April; there may even be three crops. Weather conditions account for much of this seasonal variability. The fruits naturally fall to the ground when fully ripe, but for marketing and shipping they should be hand-picked while pale green with just a touch of yellow.

A survey of the literature reveals that the anti-inflammatory activity of this plant is of considerable ethnomedicinal value. The properties of Averrhoa carambola L fruits have been attributed to the presence of tannins, flavonoids, alkaloids, fixed oils and fats, and saponins (Biswa et al., 2012). Previous research found that leaf, bark, fruit, and stem extracts of Averrhoa carambola L act effectively as analgesic, anthelmintic, hypotensive, anti-ulcer, antimicrobial, and antioxidant agents, owing to active phytoconstituents such as flavonoids, saponins, and tannins (P. Dasgupta et al., 2013). These active phytoconstituents are responsible for the plant's ability to treat mild and chronic ailments (Farizka et al., 2015).

Some phytochemical studies have been recorded, but further progress is needed. Additional investigations are required to identify the individual bioactive compounds responsible for the plant's modes of action and pharmacological effects. Little information is currently available on phytoanalytical, clinical, or toxicity studies of this plant, so primary investigations such as clinical trials, phytoanalytical studies, toxicity evaluation, and safety assessments remain to be carried out in future work. Moreover, the plant has so far been evaluated mostly preclinically. If these claims are assessed clinically and scientifically, the plant could continue to provide good remedies and help mankind treat various ailments.

Plants have played a significant role in human health care since ancient times, and traditional plants play a great role in the discovery of new drugs. A majority of the human population worldwide is affected by inflammation-related disorders. Herbal medicines work through an orchestral, multi-target approach, unlike modern allopathic drugs, which are single active components targeting one specific pathway. Several plants have been shown to be sources of anti-inflammatory agents. For many centuries, medicinal plants have been used widely, as crude material or as pure compounds, as a source of a wide variety of biologically active compounds for treating various disease conditions, and they play an important role in the development of potent therapeutic agents.

Among the plants with anti-inflammatory potential is Achillea millefolium L, a perennial herb native to Europe that is highly regarded in traditional medicine for its anti-inflammatory properties. It has traditionally been applied externally to treat wounds, burns, and swollen and irritated skin. Studies have shown that two classes of secondary metabolites, phenolics and isoprenoids, contribute most to its anti-inflammatory properties. The topical anti-inflammatory activity of the sesquiterpenes is caused by inhibition of arachidonic acid metabolism, while in vitro studies show that the crude plant extract and its flavonoids inhibit human neutrophil elastase and matrix metalloproteinases, both associated with the inflammatory process (Benedek B et al., 2007).

Another plant with anti-inflammatory potential is Aconitum heterophyllum, commonly known as Ativisha in Ayurveda, which is widely used to treat diseases of the nervous system, rheumatism, digestive disorders, and fever. The ethanolic root extract of Aconitum heterophyllum contains alkaloids, glycosides, flavonoids, and sterols, and plants containing these chemical classes have been reported to possess potent anti-inflammatory effects through inhibition of prostaglandin pathways. Administration of the extract has been observed to reduce the weight of wet cotton pellets in a dose-dependent manner, and at higher doses its inhibition of inflammation approached that of diclofenac sodium. The ethanolic root extract has thus been reported to inhibit sub-acute inflammation by interrupting arachidonic acid metabolism (Santosh Verma et al., 2010).

Adhatoda vasica L, an indigenous herb of the family Acanthaceae, has been used worldwide in indigenous systems of medicine as a remedy for colds, whooping cough, chronic bronchitis, cough, asthma, and rheumatic painful inflammatory swellings, and as a sedative expectorant, antispasmodic, and anthelmintic. The drug is employed in different forms such as fresh juice, decoction, infusion, and powder, and is also given as an alcoholic extract, liquid extract, or syrup. The plant contains alkaloids, flavonoids, terpenes, sugars, tannins, and glycosides (Prajapati ND et al., 2003). The anti-inflammatory potential of the ethanolic extract has been determined using the carrageenan-induced and formalin-induced paw edema assays in albino rats, in which it produced dose-dependent inhibition of paw edema (Wahid A Mulla et al., 2010).

Lycopodium clavatum, commonly known as club moss, has been reported to have a wound-healing effect. Four extracts prepared with ethyl acetate, chloroform, petroleum ether, and methanol, together with an alkaloidal fraction from the aerial parts, were assessed for their effect on capillary permeability using the acetic acid-induced method in mice. Only the chloroform extract and the alkaloid fraction displayed a marked anti-inflammatory effect compared with indomethacin as the standard drug (Orhan I et al., 2007).

Ricinus communis Linn, another plant with anti-inflammatory activity, is found almost everywhere in the tropical and subtropical regions of the world. The free-radical-scavenging and anti-inflammatory activities of the methanolic extract of Ricinus communis root were studied in Wistar albino rats. The extract exhibited significant anti-inflammatory activity in the carrageenan-induced hind paw edema model and significant free-radical-scavenging activity by inhibiting lipid peroxidation; the observed pharmacological activity may be due to phytochemicals such as alkaloids, flavonoids, and tannins in the extract (Ilavarasan R et al., 2006).

The Cassia fistula tree, widespread in the forests of India, is a further example of an anti-inflammatory plant. The whole plant has medicinal properties useful in the treatment of inflammatory diseases, rheumatism, skin diseases, anorexia, and jaundice. Cassia fistula bark extracts exhibited significant anti-inflammatory effects in both acute and chronic models of inflammation in rats. Reactive oxygen species (ROS) are associated with the pathogenesis of various diseases such as atherosclerosis, diabetes, and the aging process, and play an important role in the pathogenesis of inflammatory diseases; flavonoids and bioflavonoids are the main constituents responsible for the anti-inflammatory activity of Cassia fistula (Venkataraman S et al., 2005).

The leaves and bark of Thespesia populnea are used to produce an oil for the treatment of fracture wounds and as an anti-inflammatory, and a poultice is applied to ulcers and boils, mostly in southern India and Sri Lanka. The ethanolic extract of Thespesia populnea shows significant anti-inflammatory activity in both acute and chronic models. Phytochemical studies indicate that the ethanolic bark extract contains alkaloids, carbohydrates, proteins, tannins, phenols, flavonoids, saponins, and terpenes (Vasudevan M et al., 2007).

The results for other pharmacological activities of Averrhoa carambola L are very encouraging and indicate that this plant should be studied more extensively to confirm their reproducibility and to reveal other potential therapeutic effects. Previous research investigated the anti-inflammatory properties only of the leaves of Averrhoa carambola L. In view of this, the main objective of the present study is to investigate the in vivo anti-inflammatory activity of the ethanolic fruit extract of Averrhoa carambola L, with a view to developing an effective drug for the treatment of related diseases, since the treatments currently available for inflammation are not well tolerated and are often ineffective. This research thus aims to establish the ethanolic fruit extract of Averrhoa carambola L as a new source of anti-inflammatory activity that is effective, efficient, and has a safer toxicity profile.



Cabrini et al. (2010) reported that ethanolic extracts from Averrhoa carambola L leaves, together with their ethyl acetate, hexane, and butanol fractions and two isolated flavonoids, reduced cellular migration and croton oil-induced ear edema in mice. These results justify the traditional use of Averrhoa carambola L for skin inflammatory disorders.

Chang et al. (2000) investigated Averrhoa carambola L intoxication in 20 patients, 19 of whom were uremic hemodialysis patients while one had advanced chronic renal failure without dialysis. Eight patients, including the patient with advanced chronic renal failure, did not survive despite haemodialysis intervention. There is no report of Averrhoa carambola L fruit toxicity in patients with normal renal function; in renal failure patients, however, consumption of the fruit causes high mortality even after dialysis.

Neto et al. (1998) reported on six uremic patients in a dialysis program who were intoxicated after ingesting two to three fruits, equivalent to 150-200 ml of fruit juice. The patients developed a variety of manifestations ranging from hiccups, nausea, agitation, and insomnia to mental confusion, and there was one case of death due to hypotension and seizure. To characterize the hypothetical neurotoxin in the fruit, the authors injected an extract into rats, which provoked persistent convulsions.

Shui G et al. (2004) analyzed the polyphenolic antioxidants present in the juice and residue extract of Averrhoa carambola L fruit using liquid chromatography and mass spectroscopy. The antioxidant activity was attributed mainly to phenolic compounds characterized as epicatechin, L-ascorbic acid, gallic acid, and proanthocyanidins, and was found to be much higher in the residue than in the extracted juice.

Carolina et al. (2005) conducted a chromatographic isolation of the convulsant fraction from the aqueous extract of Averrhoa carambola L. The neurotoxic fraction, AcTx, given to experimental animals (rats and mice), produced behavioral changes by acting on gamma-aminobutyric acid (GABA) receptors. This excitatory neurotoxin, probably a GABAergic antagonist, may be responsible for seizures in renal patients and animal models.

Narain et al. (2001) reported the composition of Averrhoa carambola L fruits during maturation: the pH of the fruit increased with advancing maturity, with values of 3.44, 2.71, and 2.40 for ripe, half-ripe, and green mature fruits respectively. An increase in calcium content was observed at the ripe stage (4.83 ± 0.27 mg/100 g of edible fruit), significantly different from the content of fruits at the green mature (3.55 ± 0.85 mg/100 g) or half-ripe (4.83 ± 0.27 mg/100 g) stages. The titratable acidity, reducing sugar, and tannin contents of the fruits varied significantly among the different stages of maturity.

Sripanidkulchai B et al. (2002) reported that the stem of Averrhoa carambola L exhibited antibacterial activity by inhibiting Staphylococcus aureus and Klebsiella sp., with minimal bactericidal concentrations (MBC) of 15.62 mg/ml and 125 mg/ml respectively.

Mia Masum Md et al. (2007) found that the methanolic extract of Averrhoa carambola L bark, together with its carbon tetrachloride, petroleum ether, chloroform, and aqueous soluble fractions, inhibited the growth of various Gram-positive and Gram-negative bacteria.

Herderich et al. (1992) identified glycosidically bound constituents of star fruit (Averrhoa carambola L) using HRGC and HRGC-MS techniques. The constituents were obtained from the extracts by Amberlite XAD-2 adsorption followed by methanol elution. The compounds identified were ionone derivatives, namely 4-hydroxy-β-ionol, 3-hydroxy-β-ionol, 4-oxo-β-ionol, 3-hydroxy-β-ionone, 3-oxo-α-ionol, 3-oxo-retro-α-ionol (2 isomers), 3-oxo-4,5-dihydro-α-ionol, 3-oxo-7,8-dihydro-α-ionol (blumenol C), 3,5-dihydroxy-megastigma-6,7-diene-9-one (grasshopper ketone), 3-hydroxy-β-damascone, 3-hydroxy-5,6-epoxy-β-ionone, 3-hydroxy-5,6-epoxy-β-ionol, 3,4-dihydro-3-hydroxyactinidol, vomifoliol (blumenol A), 4,5-dihydrovomifoliol, and 7,8-dihydrovomifoliol (blumenol B). Several of these constituents are easily degraded upon heat treatment at the natural pH of the fruit pulp, rationalizing the formation of a number of C13 aroma compounds recently reported as star fruit volatiles.

Shah et al. (2011) reported an anthelmintic assay of the aqueous extract of Averrhoa carambola L leaves at concentrations of 10 mg/ml, 50 mg/ml, and 100 mg/ml, using albendazole as the reference standard. The leaves showed significant anthelmintic activity in a dose-dependent manner.

Ahmed et al. (2012) stated that Averrhoa carambola L fruits showed analgesic activity in the writhing test and the radiant heat tail-flick test. The fruit extract exhibited significant peripheral and central analgesia in the acetic acid-induced writhing model in Swiss albino mice at doses of 200 mg/kg and 400 mg/kg, with writhing inhibition of 37.13% and 42.76% respectively.

Genasekara LCA et al. (2011) reported that the ripe fruit pulp of Averrhoa carambola L significantly decreased blood glucose levels over an 8-week treatment conducted in healthy male Sprague Dawley rats, with treated rats showing lower blood glucose levels than normal controls.

The World Health Organisation (WHO) (2003) estimates that about 80% of the population living in developing countries relies almost exclusively on traditional medicine for primary healthcare needs. WHO has listed over 21,000 plant species used around the world for medicinal purposes, and about 2,500 plant species belonging to more than 100 genera are used in the indigenous systems of medicine in India.

Kupeli et al. (2007) stated that Daphne pontica Linn contains flavonoid constituents such as the daphnodorins, isolated from its roots, which were reported to have anti-tumour activity. Several Daphne species have been used against inflammatory disorders, and Daphne pontica in particular has been used widely for the treatment of rheumatic pain and inflammatory ailments; its extracts inhibit the production of prostaglandins and interleukins involved in inflammation.

Chau et al. (2004) stated that Averrhoa carambola L possesses potential hypoglycemic effects due to several insoluble fiber-rich fractions, including insoluble dietary fiber, water-insoluble solids, and alcohol-insoluble solids isolated from the pomace of the fruit. These fiber-rich fractions help to adsorb glucose, reduce amylase activity, and delay the release of glucose from starch, implying that they might help in controlling postprandial glucose.

Sen T et al. (1991) investigated and evaluated the anti-inflammatory activity of the methanolic fraction of a chloroform extract of Pluchea indica roots. The extract showed significant inhibitory activity against pedal inflammation induced by carrageenan, serotonin, hyaluronidase, histamine, and sodium urate, and also inhibited carrageenan-induced paw edema and cotton pellet-induced granuloma formation.

Soncini R et al. (2011) reported the hypotensive effects of the aqueous extract of Averrhoa carambola L leaves and investigated the underlying mechanisms in the isolated rat aorta. The effects of the aqueous extract on mean arterial pressure were studied in vivo in anesthetized rats. The study showed that the aqueous leaf extract, administered intravenously at 12.5-50.0 mg/kg, induced dose-dependent hypotension in normotensive rats.

Zhang et al. (2007) stated that Averrhoa carambola L juice showed inhibitory effects in human liver microsomes towards seven CYP isoforms, including CYP1A2, CYP2A6, CYP2D6, CYP2C8, CYP2C9, and CYP2E1. Among these isoforms, CYP2A6 was the most strongly inhibited.

Goncalves ST et al. (2006) reported that Averrhoa carambola L leaves have anti-ulcerogenic potential. Hydroalcoholic extracts of the leaves gave significant, dose-dependent anti-ulcer effects against ethanol-induced gastric mucosal injury at 400 mg/kg, 800 mg/kg, and 1200 mg/kg, with the protective action produced at the highest doses. The partial anti-ulcer activity could be due to the mucilage, triterpenoids, and flavonoids contained in the extracts.

Vasoconcelos et al. (2006) stated that Averrhoa carambola L leaf extracts produced electrophysiological changes in the normal guinea pig heart. In six animals, the extracts induced various kinds of atrioventricular block, increased the QT interval, depressed the cardiac rate, and increased the QRS complex duration.

Muir CK et al. (1980) reported on the toxicity of Averrhoa carambola L fruit extracts, which produced convulsions when injected into the peritoneal cavity at doses exceeding 8 g/kg.

Li et al. (2012) studied the biotransformation of dihydro-epi-deoxyarteannuin B using suspension-cultured cells of Averrhoa carambola L. One novel sesquiterpene, 7α-hydroxy-dihydro-epi-deoxyarteannuin B, and one known sesquiterpene, 3α-hydroxy-dihydro-epi-deoxyarteannuin B, were obtained upon the addition of dihydro-epi-deoxyarteannuin B. The study concluded that cultured cells of Averrhoa carambola L can hydroxylate sesquiterpene compounds in a regio- and stereoselective manner. The inhibitory effects of 7α-hydroxy-dihydro-epi-deoxyarteannuin B and 3α-hydroxy-dihydro-epi-deoxyarteannuin B on the proliferation of K562 and HeLa cell lines were (59.29 ± 0.99, 84.04 ± 0.27 μmol/mL) and (40.63 ± 1.45, 41.54 ± 0.82 μmol/mL) respectively.

Rao YK et al. (2006) reported that four compounds from Phyllanthus polyphyllus Linn, one benzenoid and three arylnaphthalide lignans isolated from the whole plant, showed inhibitory effects on the production of NO and of the cytokines TNF-α and IL-12. Since TNF-α and IL-12 are key pro-inflammatory cytokines secreted during the early phase of acute and chronic inflammatory diseases such as asthma, rheumatoid arthritis, and septic shock, the use of Phyllanthus polyphyllus Linn as an anti-inflammatory remedy in traditional medicine may be attributed to these benzenoid and arylnaphthalide lignan compounds.

Silva et al. (2008) evaluated the anti-inflammatory potential of the hydroalcoholic extract of Piper ovatum leaves, using carrageenan-induced pleurisy in rats and croton oil-induced ear edema in mice as models. The results indicated that the amide fractions piperlonguminine and piperovatine showed the greatest inhibitory activity against topical inflammation induced by croton oil.

Chen L et al. (2008) highlighted that the fruit rinds of Garcinia mangostana have been used as a traditional medicine for the treatment of skin infections and trauma. The xanthones α- and γ-mangostin are the major bioactive compounds found in the fruit hulls of mangosteen. These xanthones exert their biological effects by blocking inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2); the two mangostins decrease prostaglandin E2 (PGE2) levels through inhibition of COX-2 activity and NO production. It was concluded that α-mangostin inhibits PGE2 release more potently than either histamine or serotonin.

Macready (2012) stated that flavonoids are best known for their antioxidant and anti-inflammatory properties, and that they also help to support the cardiovascular and nervous systems. Previous research showed that flavonoids help to block the production of messaging molecules that promote inflammation, being able to inhibit the enzymes lipoxygenase (LOX) and cyclooxygenase (COX).

Chandra S et al. (1995) investigated the anti-inflammatory potential of an ethanolic root extract of Swertia chirata through pharmacological screening in animal models, using the carrageenan-induced rat paw edema test. The extract significantly reduced edema formation at the 400 mg/kg dose level and showed 57.81% (p < 0.001) inhibition of edema volume at the end of the third hour, indicating that the ethanolic extract of S. chirata helped to reduce inflammation.

Kumar et al. (2013) reported that various parts of Averrhoa carambola L. are used as a folk remedy for many symptoms, including inflammation. The significant pharmacological activities of A. carambola fruits are credited to their tannins, flavonoids and saponins. Further investigation is therefore necessary to isolate the pharmacologically active compounds, which could be used in the production of novel drugs for various types of diseases.

Orwa et al. (2009) reported that Malaysians take the leaves of Averrhoa carambola L., fresh or fermented, as a treatment for venereal diseases. A leaf decoction is taken to relieve rectal inflammation, the fruits are used to treat coughs, biliousness and beri-beri, and a syrup prepared from the fruits helps to alleviate internal haemorrhoids.

Margen et al. (1992) reported that the star fruit, Averrhoa carambola (Oxalidaceae), is found in America, Brazil, Australia, South-East Asia (including Malaysia), southern China, Taiwan and India. The A. carambola tree usually measures 3 to 5 m in height and can reach a maximum of 10 m; it has a finely fissured, light brown bark and 15 to 20 cm long leaves. It bears a large, indehiscent, yellowish-green berry fruit, 5 to 8 cm long, with a characteristic shape resembling a five-pointed star; each cell of the fruit contains five arillate seeds.

Gidwani BK et al. (2009) reported that the aerial parts of many species of Lantana camara Linn are widely used in folk remedies, for instance against cancer and tumours. A tea prepared from the leaves and flowers is taken against fever, stomachache and influenza, and the plant also shows antimalarial, antibacterial and anti-diarrhoeal activities. Studies have reported that the aqueous extract of Lantana camara leaves is highly effective and safe for the treatment of hemorrhoids, and that it has promising analgesic, anti-inflammatory and anti-hemorrhoidal activities.

Martin et al. (1993) described the first toxic effects of Averrhoa carambola L. in humans in a case study in which intractable hiccups developed in eight patients on a regular haemodialysis programme after ingestion of the fruit. Hiccups occurring in dialysis patients after ingestion of A. carambola fruit were apparently not regarded as a threat until 1998.

Ferreira et al. (2008) studied the effects of hydroalcoholic extracts of the leaves of Averrhoa carambola L. on fasting blood glucose. Animals treated with the hydroalcoholic extracts showed significantly lower fasting blood glucose, while their livers showed higher glucose production from L-alanine. The treatment did not affect glucose uptake in soleus muscles, as inferred from the incorporation of glucose into glycogen and from lactate production. Hence, the study suggests that the decline in fasting blood glucose promoted by treatment with the hydroalcoholic extract of A. carambola was mediated neither by inhibition of hepatic gluconeogenesis nor by increased glucose uptake by muscles.

Asmawi MZ et al. (1993) reviewed the wide use of Emblica officinalis for its anti-inflammatory and antipyretic activities. Anti-inflammatory activity was found in the water fraction of a methanol extract of the plant leaves, whose effects were tested on the synthesis of mediators of inflammation such as leukotriene B4, platelet activating factor (PAF) and thromboxane. The water fraction of the methanol extract inhibited the migration of human PMNs at relatively low concentrations.

Luximon-Ramma et al. (2003) reviewed the antioxidant activities of Averrhoa carambola L. and found strong correlations between antioxidant activity and the total phenolics and proanthocyanidin contents. Flavonoids seemed to contribute less to the antioxidant potential of the fruit, and a very poor correlation was observed between ascorbate content and antioxidant activity. The fruit was concluded to be a substantial source of phenolic antioxidants that may confer potent health benefits.

Balasubramaniam et al. (2005) isolated β-galactosidase (EC. from Averrhoa carambola L. fruit and fractionated it into four isoforms, β-galactosidase I, II, III and IV, using a combination of ion-exchange and gel-filtration chromatography. The β-galactosidases exhibited molecular weights of 84, 77, 58 and 130 kDa, respectively, β-galactosidase I being the most prominent isoform. The purified β-galactosidase I was highly active in hydrolyzing (1→4)β-linked spruce galactan and a mixture of (1→3)β- and (1→6)β-linked gum arabic galactans. It also exhibited the capacity to depolymerize and solubilize structurally intact pectins and to modify alkaline-soluble hemicelluloses, in part indicating changes that occur during ripening.

Chavan M et al. (2009) studied the anti-inflammatory activity of caryophyllene oxide isolated from an unsaponified petroleum ether extract of the bark of the Annona squamosa plant. Caryophyllene oxide was given at doses of 12.5 and 25 mg/kg body weight, and the unsaponified petroleum ether extract at a dose of 50 mg/kg body weight; caryophyllene oxide showed a significant effect against inflammation.

Wu et al. (2009) reported on the potential hypocholesterolemic activity of different insoluble fibers prepared from Averrhoa carambola L. with or without micronization processing. After micronization, the cation-exchange and water-holding capacities of the pectic polysaccharide-rich insoluble fibers were effectively increased from 8.5 to 22.4 mL/g. The micronized insoluble fibers reduced serum triglyceride and total cholesterol concentrations by 15.6% and 15.7%, respectively, by enhancing the excretion of cholesterol and bile acids in the faeces. The study suggests micronization of the fruit fibers as a new approach that may improve the physiological functions of food fibers in fiber-rich functional food applications.


Briquetting for minimization of waste.


Industries in Nasik currently generate tonnes of oil-contaminated cotton waste annually, a quantity that seems set to double in the coming years. Small-scale industries throw it away or burn it, whereas medium- and large-scale industries send it for disposal and hence bear high disposal costs. The disposal practices in use, landfilling and incineration, have an adverse impact on the environment. Since oil-contaminated waste is hazardous, the present investigation focuses on recycling the waste so as to reduce its environmental impact and solve the disposal issue. Briquetting is one means of waste minimization: it involves collecting combustible materials that are unusable due to their lack of density and compressing them into a solid fuel of convenient shape that can be burnt like wood or charcoal. The resulting briquette is analyzed so that it can serve as a fuel for industrial purposes such as heating and firing boilers.


Keywords: oil contaminated cotton waste, recycling, disposal, briquetting, fuel.




The increase in industrialization in India has led to an increase in the generation of industrial wastes. The major area of concern with respect to industrial waste management is the hazardous waste generated during various industrial processes; because these processes vary, large quantities and many types of hazardous waste are generated. As per the CPCB's report, it was estimated that in 2010 there were about 41,523 hazardous-waste-generating industries in India. The quantity of hazardous waste generated by these industries was about 7.90 million tonnes per year, an increase of 27% over the previous year's generation, and a similar or steeper trend was expected in future. Alongside this growth in generation, the existing disposal facilities are overburdened by the additional load. Hazardous waste is thus an emerging concern for a country like India. With increasing stringency in environmental legal requirements, hazardous waste disposal is becoming more difficult and costly by the day.

The traditional disposal methods used in hazardous waste management, landfilling and incineration, exert tremendous pressure on the environment, particularly on ecosystems. These methods generate large quantities of greenhouse gases that contribute to global warming and increase the overall carbon footprint. Incineration plants also consume varied natural resources such as petroleum, natural gas and electricity, creating a burden on existing resources.

Thus, exploring recycling opportunities for industrial waste is the need of the hour.

This study has therefore chosen one major industrial hazardous waste, oil-contaminated cotton waste, and examines its recycling potential.

The focus is on recycling the oil-contaminated cotton waste generated by the automobile industries of Nasik, a district in the state of Maharashtra, India.


Nasik is a fast-growing city in Maharashtra, India. The city has evolved drastically on the industrial front since its inception; it is vibrant and industrially active, provides employment to many people and attracts immigrants from all over the country. It has five major MIDC areas: Satpur, Ambad, Gonde, Dindori and Sinnar. Satpur and Ambad lie within the city, whereas Gonde, Dindori and Sinnar surround it. The major category of industries is automobile (engineering), and Nasik therefore hosts one of the biggest automotive sectors, manufacturing two-wheelers, three-wheelers, four-wheelers and heavy vehicles.

Mahindra and Mahindra Ltd, Mahindra Sona Ltd, Bosch India and Thyssen Krupp Automotive Engines Ltd are the major automotive industries flourishing in the city. Their presence has attracted a large number of vendors, and Nasik is consequently considered a hub of automobile manufacturing industries. Many vendors, such as Lear Automotive, JBM Auto Pvt Ltd, Shareen Auto Pvt Ltd and Supreme Autoshell Pvt Ltd, manufacture different automotive components for the bigger industries. There are thus many micro, small, medium and large industries manufacturing vehicle parts and assemblies, contributing a large share of the total industries in the MIDC areas.

The manufacture of automobiles requires large amounts of oil, such as cutting oils, lubricating oils, engine oils, motor oils, spent oils and quenching oils, and there are also service centers for the maintenance of vehicles and their allied parts throughout the city. Cotton cloth is used to clean these oils from machines, spills etc., leading to the generation of oil-contaminated cotton waste. With the increasing use of cotton waste, its disposal is a matter of great concern.

Moreover, used cotton waste falls under the hazardous waste category as defined by the CPCB, so it must be managed properly as far as safety is concerned.

To determine the exact number of automobile industries and their contribution to oil-contaminated cotton waste, a survey was conducted in the MIDC areas.


Data was obtained from the Nashik Industries Manufacturing Association (NIMA) directory.

Part I

Data on the number of industries in the Nasik MIDC areas and the categories they fall under was obtained.

The data was summarized as percentages, as shown below:

It was observed that the engineering sector comprised 44%, the highest share among all categories.

The engineering sector included automotive components, machining job work, heat treatment, forgings, abrasives, clamping equipment, tool rooms, rolling mills etc.

Figure No 1.1: Different categories of industries in Nasik

Part II

Within the engineering sector, a bifurcation into sub-components was made.

The data was summarized as percentages, as shown below:

The automobile component was the highest (54%) among the sub-components.

The automobile component specifically included fabrication, press components, allied automotive parts and service centers.

Figure No 1.2: Sub components of Engineering Industries

Hence, from the above data, it was clear that Nasik has a major share of the automobile sector, and it was evident that these industries would generate oil-contaminated cotton waste.

Part III

A data survey was then conducted among the large- and medium-scale automobile industries.


Name of Company

Name of Concerned Person

Concerned Department

Questions:

1. Type of industry

2. Manufacturing process

3. Number of shifts in which the company works

4. Quantity of oil-soaked cotton waste generated per month

5. Disposal method

6. Any other data

Table No 1.1: Questionnaire

A questionnaire was prepared to find out the generation of oil-contaminated cotton waste by industries. Different industries were visited, and enquiries were made about the quantity of waste and its disposal. The majority of industries generating cotton waste belonged to the automobile category. The data was recorded daily, and the monthly quantum of waste generated was obtained.

Following is a summary of data collected:

The waste generated was basically oil contaminated cotton cloth, cotton gloves, rags etc.

MSDSs were obtained for the various oils used for different purposes.

Data regarding issuance of cloth before use (uncontaminated) and after use (contaminated) was recorded.

Line supervisors thoroughly checked the processes to avoid any case of spills and leaks.

The concerned EHS representative supervises and ensures the fulfillment of the hazardous waste management objective which is to identify, collect, handle, store and dispose of hazardous waste materials as per legal requirements.

The hazardous waste generated in each department is stored in red-colored bins fixed in that department.

At regular intervals (weekly or monthly) waste is transferred to the scrap yard where the waste is placed in HDPE bags.

The monthly consumption data was noted for reference purposes.

Usually at the end of the month, the authorized hazardous waste collection party, a member of the CHWTSDF, comes and collects the waste.

The oil-contaminated cotton waste is sent either to scrap dealers for sale, to Mumbai Waste Management Limited (MWML) at Taloja, or to Maharashtra Enviro Power Ltd at Ranjangaon, Pune.

There is no on-site recycling or disposal of oil-contaminated cotton waste in the plants; a third party is responsible for disposal.

After compilation of the data, it was noticed that a large amount of waste was generated on the premises, the quantity depending upon the size of the industry, and that steps to minimize this waste should be taken.

Figure No 1.3: Oil Contaminated Cotton Waste generation per month

From the above figure, it was observed that large-scale industries generated above 500 kg/month, medium-scale industries between 50 and 500 kg/month, and small-scale industries less than 50 kg/month.
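These scale cut-offs can be expressed as a small helper function; a minimal sketch using only the kg/month thresholds quoted above (the example quantities passed to it are hypothetical):

```python
def industry_scale(waste_kg_per_month: float) -> str:
    """Classify an industry by its monthly oil-contaminated cotton waste,
    using the survey cut-offs: >500 kg/month large, 50-500 kg/month medium,
    <50 kg/month small."""
    if waste_kg_per_month > 500:
        return "large"
    elif waste_kg_per_month >= 50:
        return "medium"
    return "small"

# Example classifications (hypothetical quantities):
print(industry_scale(620))  # large
print(industry_scale(120))  # medium
print(industry_scale(20))   # small
```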

Figure No 1.4: A Pictorial view of Oil Contaminated Cotton Waste at an industry’s premises.




Hazardous Waste Management:

According to the CPCB, hazardous waste (HW) is defined as any substance, whether in solid, liquid or gaseous form, which has no further use and which, due to its physical, chemical, reactive, toxic, flammable, explosive, corrosive, radioactive or infectious characteristics, causes or is likely to cause danger to health or the environment, whether alone or in contact with other wastes or the environment; it should be considered as such when generated, handled, stored, transported, treated and disposed of. This definition includes any product that releases a hazardous substance at the end of its life if indiscriminately disposed of. HWs from various anthropogenic sources can be classified into (i) solid wastes, (ii) liquid wastes, (iii) gaseous wastes and (iv) sludge wastes. An efficient hazardous waste management protocol needs to be executed; otherwise HW may cause land, surface water and groundwater pollution.


Hazardous Waste Generation Scenario in India:

The hazardous waste generated in the country per annum is estimated at around 4.4 million tonnes, while the estimate of the Organisation for Economic Co-operation and Development (OECD), derived by correlating hazardous waste generation with economic activity, is nearly five million tonnes annually. The estimate of around 4.4 million tonnes per annum is based on the 18 categories of waste that appeared in the HWM Rules first published in 1989. Of this, 38.3% is recyclable, 4.3% is incinerable and the remaining 57.4% is disposable in secured landfills. Twelve states (Maharashtra, Gujarat, Tamil Nadu, Orissa, Madhya Pradesh, Assam, Uttar Pradesh, West Bengal, Kerala, Andhra Pradesh, Karnataka and Rajasthan) account for 97% of total hazardous waste generation; the top four waste-generating states are Maharashtra, Gujarat, Andhra Pradesh and Tamil Nadu, whereas states such as Himachal Pradesh, Jammu & Kashmir and all the north-eastern states except Assam generate less than 20,000 MT per annum.
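The percentage split above translates into approximate annual tonnages; a minimal arithmetic sketch using the 4.4-million-tonne estimate and shares quoted in the text:

```python
# Approximate split of India's estimated annual hazardous waste generation
# (total and percentage shares as quoted in the text).
TOTAL_TONNES = 4.4e6  # 18-category estimate, tonnes per annum

shares = {"recyclable": 0.383, "incinerable": 0.043, "secured landfill": 0.574}

tonnes = {category: TOTAL_TONNES * share for category, share in shares.items()}
for category, t in tonnes.items():
    print(f"{category}: {t:,.0f} tonnes/year")
# The three shares together account for the whole 4.4 Mt total.
```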

Legislative Framework

The Ministry of Environment & Forests (MoEF) promulgated the Hazardous Waste (Management & Handling) Rules on 28 July 1989 under the provisions of the Environment (Protection) Act, 1986. In September 2008 these rules were repealed and new rules entitled "Hazardous Waste (Management, Handling and Transboundary Movement) Rules, 2008" (hereafter the HW (M, H & TM) Rules) were notified; they were further amended in 2009 and 2010. According to the HW (M, H & TM) Rules, any waste which, by virtue of any of its physical, chemical, reactive, toxic, flammable, explosive or corrosive characteristics, causes or is likely to cause danger to health or the environment, whether alone or in contact with other wastes or substances, is defined as "hazardous waste"; this includes wastes generated mainly from the 36 industrial processes referred to under Schedule I of the Rules. In addition, some wastes become hazardous by virtue of the concentration limits and hazardous characteristics listed under Schedule II of the Rules.

Based on the data provided by the State Pollution Control Boards (SPCBs) and Pollution Control Committees (PCCs), the Central Pollution Control Board (CPCB) has compiled a state-wise inventory of hazardous-waste-generating industries. The hierarchy in hazardous waste management is to reduce, reuse, recycle and re-process, with disposal in an environmentally sound manner as the final option for wastes having no potential for value addition. A disposal facility may have only a secured landfill (SLF), only an incinerator for organic wastes, or a combination of secured landfill and incinerator. At present there are 26 common Hazardous Waste Treatment, Storage and Disposal Facilities (TSDFs) in operation spread across the country, in 12 states (Andhra Pradesh, Gujarat, Himachal Pradesh, Karnataka, Kerala, Madhya Pradesh, Maharashtra, Punjab, Rajasthan, Tamil Nadu, Uttar Pradesh and West Bengal) and in the UT of Daman, Diu, Dadra & Nagar Haveli. Thirty-five new TSDF sites have been notified by the respective state governments and are at different stages of development.

The rules for hazardous waste handling, storage, transport, treatment and disposal are given in this chapter.

History of Biomass Briquetting in India

Since the beginning of the 1980s, three different briquetting technologies have been introduced into India: PARU, the screw extruder and the piston press. PARU was a Korean briquetting machine; between 1982 and 1986 seventy entrepreneurs bought the technology. Many of these plants became non-functional within 3 months to 2 years of start-up, and none are now in operation. The high failure rate occurred because the licensees used inferior materials in the construction of the equipment (to increase their profit margins) and altered the design without consulting the developer. A lack of operating instructions, insufficient training of operators, and inadequate maintenance and management also contributed to the failures.

Entrepreneurs in south India imported twenty screw extruders from Taiwan. Although the briquettes were well accepted by customers, there was excessive wear in the presses due to the use of rice husk, a particularly abrasive material, as the feedstock (Clancy, 2001).

The screw extruder is considered more appropriate to the Indian power-supply situation, since the downtime associated with a power disruption is significantly less than that for a piston press (half an hour compared to four hours). The disadvantages of this type of press are its higher investment cost compared to the piston press and the need for skilled welding to repair the screw. The piston press is the technology that has been most widely used on a commercial basis in India with any degree of success. It was first introduced in India in 1981 with the importation of a piston press produced by a Swiss company, Fred Haussmann Corporation. Although a few more Haussmann presses were imported, there was no major importation since the costs were prohibitive. However, a number of manufacturers saw an opportunity to produce a product with good market potential, and in 1993 thirty-five plants were identified using this indigenously manufactured equipment.

Research work carried out in the field of briquetting:

Sriram N, Sikdar D and Sunil Mahesh Kumar Shetty (Nov 2014) presented work on the briquetting of cotton waste. In this research, cotton waste from Gomti Industry, Bangalore was used to make briquettes and obtain energy efficiently by burning them, with solid waste from a flour mill as the binder. The composition, compressive strength, calorific value, moisture content, thermal efficiency and proximate analysis of the briquettes were determined.

Ch. A. I. Raju, M. Satya, U. Praveena and K. Ramya Jyothi (March 2014) studied the development of fuel briquettes using locally available waste. Teak leaves, sugarcane waste and cloth waste were used to form briquettes, with flour as the binder. The briquettes underwent proximate and ultimate analysis. Comparison of the results showed that the cloth-waste briquette had high moisture content, low ash content and high calorific value; the teak-leaf briquette had low moisture content and high ash content; and the sugarcane-waste briquettes were the most stable of all. They can be recommended for small-scale industries.

Madhurjya Saikia and Deben Baruah (2013) used teak leaves, rice straw and banana leaves for wet briquetting. Physical parameters such as the Briquette Durability Index (BDI), Impact Resistance Index (IRI) and calorific value were calculated. The results showed that durability increased with pressure, whereas impact resistance was constant for all the briquettes.

V.R. Briwatkar, Y.P. Khandetod, A.G. Mohod and K.G. Dhande studied the thermal properties of biomass briquetting fuel made from mango leaves, acacia leaves, sawdust and dry cow dung. The proximate analysis of the briquettes, their degree of densification and their thermal properties were studied. The results showed that the 25:25:40 combination was the better-quality fuel among those tested.

In a similar way, a considerable amount of research has been done on briquettes using different raw materials, and their properties have been analyzed. These papers are cited in the references section.




A briquette (or briquet) is a compressed block of coal or other combustible biomass material, such as charcoal, sawdust, wood chips, peat or paper, used as fuel and as kindling to start a fire. The term comes from French and is related to the word "brick".

Briquetting is the process of converting a low-bulk-density material into high-density, energy-concentrated fuel briquettes.

Biomass densification represents a set of technologies for the conversion of biomass into a fuel. Also known as briquetting, it improves the handling characteristics of the materials for transport, storage etc. This technology can help expand the use of biomass in energy production, since densification improves the volumetric calorific value of a fuel, reduces the cost of transport and can improve the fuel situation in rural areas. Briquetting is one of several agglomeration techniques broadly characterized as densification technologies; residues are agglomerated to make them denser for use in energy production. Raw materials for briquetting include waste from wood industries, loose biomass and other combustible waste products.

On the basis of compaction, the briquetting technologies can be divided into:

High pressure compaction

Medium pressure compaction with a heating device

Low pressure compaction with a binder.

Depending upon the type of equipment used, briquetting technologies can be divided into:

1. Piston Press Densification

This comes under high-pressure compaction. There are two types of piston press: die-and-punch technology and the hydraulic press.

In die-and-punch technology, also known as ram-and-die technology, the biomass is punched into a die by a reciprocating ram at very high pressure, compressing the mass to obtain the briquette. The standard briquette produced is 60 mm in diameter, and the power required by a machine of 700 kg/h capacity is 25 kW.

The principle of operation of the hydraulic press is the same as that of the mechanical piston press; the main difference is that energy is transmitted to the piston from an electric motor via a high-pressure hydraulic system. The process consists of compacting the biomass first in the vertical direction and then in the horizontal direction. The standard briquette weighs 5 kg and measures 450 mm x 160 mm x 80 mm; the power required is 37 kW for 1800 kg/h of briquetting. This technology can accept raw material with a moisture content of up to 22%. Oil hydraulics allows a speed of 7 cycles per minute (cpm), against 270 cpm for the die-and-punch process, in which the ram moves approximately 270 times per minute; this slowness of operation helps to reduce the wear rate of the parts.
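The power and throughput figures quoted above imply a specific energy consumption per kilogram of briquettes; a minimal sketch of that arithmetic:

```python
def specific_energy(power_kw: float, throughput_kg_per_h: float) -> float:
    """Energy consumed per kg of briquettes (kWh/kg) at rated throughput."""
    return power_kw / throughput_kg_per_h

# Figures from the text: die-and-punch press 25 kW at 700 kg/h,
# hydraulic press 37 kW at 1800 kg/h.
die_punch = specific_energy(25, 700)
hydraulic = specific_energy(37, 1800)
print(f"die-and-punch: {die_punch:.3f} kWh/kg")  # ~0.036 kWh/kg
print(f"hydraulic:     {hydraulic:.3f} kWh/kg")  # ~0.021 kWh/kg
```

On these quoted figures, the hydraulic press is the more energy-efficient per kilogram of output, although (as the text notes) its investment cost is higher.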

2. Screw Press Densification

The compaction ratio of screw presses ranges from 2.5:1 to 6:1 or even more. In this process, the biomass is extruded continuously by one or more screws through a tapered die which is heated externally to reduce friction. Under the high pressures applied, the temperature rises and fluidizes the lignin present in the biomass, which then acts as a binder. The outer surface of briquettes obtained through this process is carbonized, and each briquette has a central hole which promotes better combustion. The standard briquette diameter is 60 mm. A screw press can produce denser and stronger briquettes than a piston press.
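To see what the quoted compaction ratios mean for bulk density, the following sketch scales an assumed loose-material density; the 150 kg/m³ starting value is illustrative only and does not come from the text:

```python
# Effect of the screw-press compaction ratios quoted in the text (2.5:1 to 6:1)
# on bulk density. LOOSE_DENSITY is a hypothetical example value.
LOOSE_DENSITY = 150.0  # kg/m^3 (assumed loose biomass density)

densities = {ratio: LOOSE_DENSITY * ratio for ratio in (2.5, 4.0, 6.0)}
for ratio, dense in densities.items():
    print(f"{ratio}:1 compaction -> {dense:.0f} kg/m^3")
```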

3. Roller Press Densification

In a briquetting roller press, the feedstock falls between two rollers rotating in opposite directions and is compacted into pillow-shaped briquettes. Briquetting biomass in this way usually requires a binder. This type of machine is used for briquetting carbonized biomass to produce charcoal briquettes.

4. Pelletizing Densification

Pelletizing is closely related to briquetting, except that it uses smaller dies (approximately 30 mm), so the smaller products are called pellets. The pelletizer has a number of dies arranged as holes bored in a thick steel disk or ring, and the material is forced into the dies by two or three rollers. There are two types of pellet press: flat/disk and ring. They produce cylindrical pellets between 5 mm and 30 mm in diameter and of variable length, with good mechanical strength and combustion characteristics. Pellets are suitable as a fuel for industrial applications where automatic feeding is required.

Pellet presses can produce up to 1000 kg of pellets per hour but require large capital investment and energy input.

5. Manual Presses and Low pressure Densification

These machines are either designed specifically for the purpose or adapted from implements used for other purposes; manual clay-brick-making presses are one good example. They are used for both raw biomass feedstock and charcoal. The advantages of low-pressure briquetting are low capital costs, low operating costs and the low-skilled labor needed to operate the technology. Low-pressure techniques are particularly suitable for briquetting green plant waste such as coir or bagasse (sugar-cane residue): the wet material is shaped under low pressure in simple block presses or extrusion presses. The resulting briquette has a higher density than the original material but still requires drying before it can be used, and the dried briquette has little mechanical strength and may crumble easily.

Applications of Briquetting:

Briquettes have numerous applications, both industrial and domestic.

They are often used as a development intervention to replace firewood, charcoal or other solid fuels: with the current fuel shortage and ever-rising prices, consumers are looking for affordable alternative fuels, and briquettes fill this gap for:

Domestic uses like cooking and water heating.

Heating productive processes such as tobacco curing, fruits, tea drying, poultry rearing, distilleries, bakeries etc.

Clay products manufacturing in brick kilns, tile making, pot firing, etc

Fuel for gasifiers to generate electricity

Powering boilers to generate steam

In textile process houses for dyeing, bleaching etc.

Advantages of Briquetting:

Dependency on conventional energy sources like wood and coal is reduced.

They are easy to handle, transport and store.

They are uniform in size and quality.

The process helps to solve the residual disposal problem.

Briquettes are cheaper than conventional energy sources such as coal and lignite, which cannot be replenished.

Briquettes contain virtually no sulphur, so sulphurous atmospheric emissions and the associated corrosion are avoided.

They have a consistent quality, high burning efficiency, and are ideally sized for complete combustion.

Raw materials are easily available, hence costs are reduced.

The briquetting industry generates employment for many people.

Limitations of Briquetting:

In monsoon regions or humid weather, briquettes may loosen and crack.

Briquettes can be used only as a solid fuel, unlike liquid fuels, which can be used in internal combustion engines.

Combustion characteristics can be poor depending on the raw materials used.




This project covers everything from preparation of the mixture required for briquette formation to analysis of the briquettes thus formed.

The project was conducted at GangoTree Eco Technologies Pvt Ltd, ManikBaug, in Pune.

1) Raw materials used in formation of briquette:

Oil-contaminated cotton waste is the primary material used; it was obtained from the automobile industries of Nasik and Pune. The binder used is waste flour obtained from a nearby flour mill. Dried powdered forms of different seeds, available at GangoTree itself, are used as filler.

2) Preparation of mixture:

Oil-contaminated cotton waste was first cleared of any solid particles such as metal pieces, springs, and nuts that might inhibit the process of briquette formation.

The waste was then shredded into fine pieces of approximately 4 to 5 mm using the waste shredder available at GangoTree.

The mixture was prepared using cotton waste, binder, and filler. Briquettes were made with 100%, 90%, 80%, 70%, 60%, 50%, 40%, and 30% cotton waste.

The proportion of binder was kept constant whereas that of filler was increased.

3) Briquettes Formation:

A briquetting machine was made available on site.

The machine was preheated for 5 minutes at 100 °C. After 5 minutes, the pre-weighed mixture was placed into the mould of height 16 cm and diameter 10 cm.

The mixture was compressed manually using the piston until it reached maximum compression. External heat was provided for better densification for about 10 minutes at 80 °C. The machine was then allowed to cool for 5 minutes.

The bottom plate was removed and the briquette was unmoulded. The whole process is shown below.

Process Diagram for briquetting:

Figure No 4.1: Briquetting Process

The whole process is shown in the form of photographs below:

Plate No 4.1: Raw Materials Collection Plate No 4.2: Shredding of cotton waste

Plate No 4.3: Preparation of Mixture Plate No 4.4: Briquette Formation (Wet)

Plate No 4.5: Briquetting Machine

Plate No 4.6: Sun Drying of Briquettes Plate No 4.7: Dried Briquettes

4) Analysis of Briquettes:

The briquettes were analyzed for their physical, combustion, proximate, ultimate, and gaseous emission properties.

The procedure and details are discussed in this chapter, whereas the results are discussed in the next chapter.

Different properties of briquettes were analyzed:

1. Durability Index:

Durability index was determined using a vibration test. The sample was placed on a vibration machine for 10 minutes, and its weights before and after the test were noted.

Durability index is calculated by:

DI = (final wt. / initial wt.) × 100


An index value above 90 is considered good for transportation and handling purposes.
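As a quick illustration, the durability index calculation can be sketched in Python (the function name and sample weights are hypothetical, chosen only to show the arithmetic):

```python
def durability_index(initial_wt_g, final_wt_g):
    """Durability index from the vibration test: percent of mass retained."""
    return (final_wt_g / initial_wt_g) * 100

# Illustrative weights (grams) before and after 10 minutes of vibration
di = durability_index(250.0, 238.0)
assert di > 90  # values above 90 are considered good for transport and handling
```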

2. Shatter indices

The briquette was dropped ten times onto a concrete floor from a height of 1 m. The weight of the briquette before and after shattering was noted and the percent loss of material calculated. The shatter indices of the briquette were calculated as below (Madhava, 2012):

Percent weight loss = ((w1 - w2) / w1) × 100

% shatter resistance = 100 - % weight loss


w1 = weight of briquette before shattering, g

w2 = weight of briquette after shattering, g

These tests were used to determine the hardness of the briquettes.


Shatter resistance above 90% is considered to be good for transportation and handling purposes.
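The shatter test arithmetic can likewise be sketched in Python (the function name and weights are illustrative assumptions, not measured values):

```python
def shatter_resistance(w1_g, w2_g):
    """Percent shatter resistance from weights before (w1) and after (w2)
    ten drops onto a concrete floor from a height of 1 m."""
    percent_weight_loss = ((w1_g - w2_g) / w1_g) * 100
    return 100 - percent_weight_loss

# Illustrative weights in grams
resistance = shatter_resistance(250.0, 240.0)
assert resistance > 90  # above 90% is considered good for handling
```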

3. Bulk Density:

The bulk density test was carried out using a cylindrical container of 1000 ml. The container was weighed empty to determine its mass, then filled with the briquette sample and weighed again.

Bulk Density = Mass of briquette sample (kg) / Volume of measuring cylinder (m³)
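A minimal sketch of the bulk density calculation, assuming the 1000 ml container described above (the masses are illustrative):

```python
def bulk_density(empty_container_kg, filled_container_kg, volume_m3=0.001):
    """Bulk density in kg/m^3; the default volume corresponds to the
    1000 ml (0.001 m^3) cylindrical container used in this test."""
    return (filled_container_kg - empty_container_kg) / volume_m3

# Illustrative masses: 0.20 kg empty container, 0.80 kg filled with sample
density = bulk_density(0.20, 0.80)  # ~ 600 kg/m^3
```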

4. Percentage Moisture Content (PMC):

Moisture content is the amount of water in the briquettes and is a determinant of briquette quality; lower moisture content indicates a higher calorific value. The weight of the briquette immediately after formation and its final weight after drying for one day were recorded.

Moisture Content = ((W2 - W3) / (W2 - W1)) × 100


W1 = weight of crucible, g

W2 = weight of crucible + sample, g

W3 = weight of crucible + sample, after heating, g
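Putting the crucible weights together, the moisture calculation can be sketched in Python (the weights are illustrative assumptions):

```python
def percent_moisture_content(w1_g, w2_g, w3_g):
    """PMC from crucible weights: w1 = empty crucible, w2 = crucible + wet
    sample, w3 = crucible + sample after drying."""
    return ((w2_g - w3_g) / (w2_g - w1_g)) * 100

# Illustrative weights in grams: 100 g of sample losing 10 g of water
pmc = percent_moisture_content(20.0, 120.0, 110.0)  # ~ 10% moisture
```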

5. Proximate Analysis:

This is the standard procedure that determines the bulk components making up any fuel.

I) Percentage Volatile Matter (PVM):

A 5 g sample was taken in a crucible covered with a lid. The crucible was placed in a muffle furnace for 10 minutes at 550 °C.

High volatile matter indicates a highly reactive fuel and thus a high burning rate.

PVM = [(W1-W2) / (W1-W0)] *100


W0= Weight of empty crucible, g

W1= Weight of crucible + sample, g

W2= Weight of crucible + sample after 10 minutes, g

II) Percentage Ash Content (PAC):

A 5 g sample was taken in a crucible. The crucible was placed in a muffle furnace for 4 hours at 550 °C.

Low Ash Content indicates better utilization as fuel.

PAC = [(W2-W0) / (W1-W0)] *100


W0= Weight of empty crucible, g

W1= Weight of crucible+ sample, g

W2 = Weight of crucible + ash remaining after heating, g

III) Percentage Fixed Carbon:

It is the carbon left after the volatile matter has been driven off.

PFC = 100- (PVM+PAC)


PVM = Percentage Volatile Matter

PAC = Percentage Ash Content
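The three proximate analysis quantities can be tied together in a short Python sketch (the crucible weights are invented for illustration only):

```python
def percent_volatile_matter(w0_g, w1_g, w2_g):
    """PVM: w0 = empty crucible, w1 = crucible + sample,
    w2 = crucible + sample after 10 minutes in the furnace."""
    return ((w1_g - w2_g) / (w1_g - w0_g)) * 100

def percent_ash_content(w0_g, w1_g, w2_g):
    """PAC: w0 = empty crucible, w1 = crucible + sample,
    w2 = crucible + ash after 4 hours in the furnace."""
    return ((w2_g - w0_g) / (w1_g - w0_g)) * 100

def percent_fixed_carbon(pvm, pac):
    """PFC is whatever remains after volatile matter and ash are accounted for."""
    return 100 - (pvm + pac)

# Illustrative crucible weights in grams for a 5 g sample
pvm = percent_volatile_matter(20.0, 25.0, 21.5)  # ~ 70%
pac = percent_ash_content(20.0, 25.0, 20.5)      # ~ 10%
pfc = percent_fixed_carbon(pvm, pac)             # ~ 20%
```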

6. Heating Value:

Heating value is the energy released as heat when the briquette undergoes complete combustion with oxygen under standard conditions.

Heating Value = 2.326 (147.6 C+ 144 V)


C = Percentage Fixed Carbon

V = Percentage Volatile Matter
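A sketch of this empirical relation in Python (the input percentages are illustrative, not measured values from this project):

```python
def heating_value(pfc, pvm):
    """Heating value from percentage fixed carbon (pfc) and percentage
    volatile matter (pvm), using the empirical relation quoted in the text:
    HV = 2.326 * (147.6 * C + 144 * V)."""
    return 2.326 * (147.6 * pfc + 144.0 * pvm)

# Illustrative proximate values: 20% fixed carbon, 70% volatile matter
hv = heating_value(20.0, 70.0)  # ~ 30312
```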

7. Calorific Value:

It is a very important characteristic of a fuel, indicating the amount of heat developed by the complete combustion of a given mass with oxygen in a standardized calorimeter. It is thus defined as the amount of heat liberated during complete combustion of a unit mass of biomass, usually expressed in kcal/kg.

The calorific value of the fuel was determined using a bomb calorimeter.

A known mass (1.0 g) of the given fuel is taken in the crucible, and the crucible is supported over a ring. A fine magnesium wire touching the fuel sample is stretched across the electrodes. The bomb lid is tightly screwed on and the bomb is filled with oxygen at 25 atm pressure. The bomb is then lowered into a copper calorimeter containing a known mass of water. The stirrer is operated and the initial temperature of the water is noted. The electrodes are then connected to a 6-volt battery, the sample is ignited, and heat is liberated. Uniform mixing is continued until the maximum temperature is attained.

Heat liberated by the fuel = Heat absorbed by water, apparatus, etc.

The calorific value of the briquetted fuel is determined by using equation as below:

Calorific value (kcal/kg) = ((W + w)(T2 - T1) - E) / X


W = weight of water in calorimeter (kg)

w = water equivalent of the apparatus

T1 = initial temperature of water (°C)

T2 = final temperature of water (°C)

X = weight of fuel sample taken (kg)

E= Correction factor (for fuse wire and cotton thread), (cal)
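One plausible reading of the bomb calorimeter relation, sketched in Python; it assumes the correction E is subtracted from the heat absorbed before dividing by the fuel mass, and that T1 and T2 are the initial and final water temperatures respectively (all input values are illustrative, and units must be kept mutually consistent):

```python
def calorific_value(water_kg, water_equiv_kg, t_initial_c, t_final_c,
                    fuel_kg, correction):
    """Calorific value: heat absorbed by the water and apparatus, less the
    fuse-wire/cotton-thread correction, per unit mass of fuel burned.
    This arrangement of the terms is an assumption made for illustration."""
    heat_absorbed = (water_kg + water_equiv_kg) * (t_final_c - t_initial_c)
    return (heat_absorbed - correction) / fuel_kg

# Illustrative run: 2.0 kg water, 0.5 kg water equivalent, 2.5 deg C rise,
# 1.0 g (0.001 kg) fuel sample, 0.25 correction
cv = calorific_value(2.0, 0.5, 25.0, 27.5, 0.001, 0.25)  # ~ 6000
```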

8. Ultimate Analysis:

The determination of Carbon (C), Hydrogen (H), Oxygen (O), Sulphur (S), and Nitrogen (N) in biomass constitutes the ultimate analysis, often referred to as elemental analysis. It helps in determining the quantity of air required for combustion and the volume and composition of the combustion gases of the fuel. Standard methods were used for calculating the percentages of carbon, hydrogen, nitrogen, and sulphur. Elemental analysis was conducted at Accurate Analysers Pvt Ltd, Nasik, a NABL-accredited lab,

whereas the percentage oxygen content is calculated as

%O = 100 – %( C + H + N + S + Ash)
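The oxygen-by-difference calculation in a brief Python sketch (the elemental percentages are illustrative):

```python
def percent_oxygen(c, h, n, s, ash):
    """Oxygen estimated by difference from the other elemental percentages
    plus ash, as in %O = 100 - %(C + H + N + S + Ash)."""
    return 100.0 - (c + h + n + s + ash)

# Illustrative elemental percentages for a biomass sample
o = percent_oxygen(45.0, 6.0, 1.0, 0.5, 7.5)  # 40.0
```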

9. Gaseous Emission Analysis:

The briquettes were burnt in a cooking stove and analysis of TPM, SO2, NO2, and CO was carried out. Workplace air monitoring was done to measure these parameters, with the briquettes burnt for 8 hours (as per standards). A handy sampler was used to measure TPM and gaseous emissions such as SO2 and NO2, whereas an ORSAT apparatus was used to measure CO.

This test was conducted at Accurate Analysers Pvt Ltd, Nasik.

Photographs of the equipment used in the analysis are shown below:

Plate No 4.8: Vibration Test Plate No 4.9: Universal Oven

Plate No 4.10: Crucibles

Plate No 4.11: Muffle Furnace Plate No 4.12: Bomb Calorimeter

Plate No 4.13: Desiccators Plate No 4.14: Weighing Balance

Plate No 4.15: Gaseous Sampler Plate No 4.16: ORSAT Apparatus

Plate No 4.17: Experimental Setup for Emission Analysis


Influence of the transfer of finances with regard to political leanings

1. Introduction

The globalisation aspects of modern society have transformed the economies of societies all over the world, with the resultant environment providing corporate entities and entire governments with access to resources outside their borders. The globalised economic environment has created a means for these entities to cooperate with other actors outside these national boundaries, as is the case with foreign direct investments (FDIs) and inter-government loans. In the current global environment, there is a concern regarding the bargaining power that the financial obligations of countries provide in both political and economic fields (Sun et al. 2016). Financial aid to countries has proven to produce economic improvements in the recipient countries (Petras 2017), but there are also concerns regarding the goals of these investments as well as the nature of the expected returns (Lumumba-Kasongo 2011; Bond and Garcia 2015). While prior researchers have analysed the case of China's investments in Africa (Scoones et al. 2016), it is also interesting to understand the influence that China's developmental loans have in the country's immediate environment. This research seeks to analyse the influence that these developmental loans have had in the Asia-Pacific region, with a specific focus on the political outcomes of these financial inputs.

While close strategic alliances between countries are not a new phenomenon, there remain reservations amongst analysts regarding the influence that the transfer of finances or other resources has on political leanings (Sun et al. 2016). As evidenced in past economic interactions such as the investments made by the United Kingdom (UK), Russia, Brazil, the United States (US), and China, there are distinct correlations between financial loans and policies governing trade and democratisation (Keukeleire and Hoojimaaijers 2014). The agreements that these countries make after the distribution of these financial resources have been identified as opportunity-seeking in their capacity to mobilise the recipient country's resources through the subsequent bilateral relationships between these nations (Rondinelli 2013). As a result, these alliances are secured with strategic investments by the donor countries, with the resultant benefits including free trade agreements, development partnerships, and bilateral agreements that have economic and political importance on a national scale. However, aside from securing these relationships, Cooley (2015) also finds that developmental loans have linkages to policies that increase the donor's access to the recipient's natural resources while also promoting exportation to the donor nation.

Currently, there exists a wealth of evidence of the economic influence that developmental loans and foreign direct investments have on recipient countries (Shaw 2016; Sun et al. 2016; Petras 2017). While there is a distinct focus on the impact of actions by western countries, the Asia Pacific region and China's participation in the development of the region is an issue that requires additional scholarly scrutiny. The global influence of this economic powerhouse has been felt in many African countries and has received the requisite attention, but it is also necessary to understand the effects that China's investments closer to home have on the region's political landscape. One key consideration is that the regional nature of these relationships can also influence other policies such as migration and infrastructural arrangements (Yeh and Wharton 2016). The proposed research thereby focuses on the Asia Pacific region as the key target for the analysis, with the primary variables being the developmental loans that China provides to the countries in the region as well as the subsequent political outcomes of these investments.

2. Literature Review

As highlighted in the introduction, developmental aid can not only influence the economic landscapes of the donor and recipient countries but also change the political alliances between nations. While China has not historically been a participant in the global developmental aid framework, the country has made great strides in its investments in developing countries in the Asia Pacific and African regions. However, compared to other countries that have a long history of contributing loans in the global environment, there is an increased urgency to understand the political influence that China stands to gain from becoming a donor economy. Moreover, there is also a need to improve the knowledge base on this issue considering the adversarial approaches that past researchers have used when analysing the economic moves that China makes. Looking at the country not only as a strong economic force but also as a single entity in the Asian peninsula will thereby be the purpose of this literature review, which will enable the identification of changes in political influence resulting from Chinese developmental loans to countries in the Asia-Pacific region.

History of China’s Participation in Developmental Finance

According to Zhou and Ziong (2017), China has had a presence in the Asian Development Bank and the African Development Bank Group since the mid-1980s, providing the country with a platform for reducing poverty and increasing development in African and Asia Pacific nations. The developmental loans that China provides to recipient countries are meant to lay a foundation of respect and friendship with China, with the aim being the strengthening of ties between these countries rather than a master-slave relationship (Jayasuriya 2015). However, it is also difficult to ignore the fact that China is an economic powerhouse with vast financial resources that it can utilise as a means for seizing more opportunities and increasing the regions over which it can exert its influence. Moreover, China also offers a lucrative investment source for countries that abstain from seeking similar financial inputs from western countries (Zhou and Ziong 2017). Notably, this makes China's developmental loans a target for scrutiny due to the country's willingness to invest in countries in which the western developmental agencies prefer not to invest.

China’s investment-related participation in the Asia Pacific region dates back to the formation of the Asian Infrastructure Investment Bank (AIIB), which was developed to provide financing for airports, railways, roads, and other infrastructure projects in the region’s nations (Cai 2016). According to Petras (2017), the continued investment in these sectors of the economy has a gradual effect of increasing the country’s ability to continually influence favourable compromises in the countries receiving its financial investments. One of the key challenges that political analysts perceive regarding China’s influence is the developmental loans that the country has received from Japan in the past, with investments in the 1979-1983 period alone totalling $1.4 billion (Thuan 2017). Even as these investments can be seen as providing Japan with influence over China, it is also necessary to acknowledge China’s need for infrastructure development and the uncovering of natural resources as a primary goal of the loans. The resultant relationship saw China minimise Japan’s political influence over the country while also propelling China’s development forward on its path to becoming an economic powerhouse.

The issue of China increasing its influence in the Asia Pacific region has been debated since the country announced its loan intentions to the tune of $492 million in 2006 as well as its later additions of $1 billion in 2013 (Dornan and Brant 2014). While there has been historical competition between China and Taiwan in providing aid in the region, China has taken an increasingly active role in providing both financial and workforce resources to improve infrastructure and industry in the Pacific region. Countries such as Cambodia have direct military, diplomatic, and political ties to China through its role as a developmental and military benefactor as a result of the over $3 billion that Cambodia has received as developmental grants since the 1990s (Thuan 2017). Other countries such as Sri Lanka also have a similar need for China's funds, with the aim being infrastructure development in a country that has benefitted by gaining key deep-water and transnational transport networks. In these cases, it is easy to find grounds for China's capacity to influence policy directives with these Asian countries owing to the ties developed through bilateral economic relationships.

Evident Goals of China’s Developmental Funding

As Cai (2016) highlights, China's goal as a development partner is not to influence issues such as democratisation in the recipient countries but rather to develop a foundation of business that can propel both donor and recipient economies forward. As a result, there is minimal evidence of China's dependence on analyses of matters such as regime type as a decision-influencing factor when determining the destination of the country's developmental loans. Yeh and Wharton (2016) note that infrastructure-oriented investments boost the recipient country's income status while also making it a key partner to China's economy through the resultant bilateral agreements between these countries. This act of increasing the economic bargaining power of recipient countries provides China with additional support for its national agenda while also connecting the recipients with initiatives such as the 21st Century Maritime Silk Road and the Silk Road Economic Belt (Ye 2015). As shown in figure 1, surveys of the country's perceptions of the Silk Road initiative's contributions indicate an increase in positive results regarding its contributions to China's political bargaining position, economy, and national security.

Figure 1: Review of China’s perceptions on the Silk Road between January 2014 and May 2015 (Source: Ye 2015)

While the financial elements of China’s developmental goals are an essential detail in the focus on the country’s influence on recipient nations, it is also imperative to consider the grounds on which China bases its investments. Llanto et al. (2015) note that China’s approach differs from those of other BRICS members in that it has minimal expectations regarding the ability to spur rapid economic growth without resorting to the liquidation of state control. On the contrary, the developmental agenda of China’s loans is focused on building loyalty by providing evidentiary improvements in economic growth rather than the practical seizure of vital resources for long-term benefits to the donor economy. According to Llanto et al. (2015), the emphasis that China places on infrastructure as a source of economic development is rooted in a developmental approach that seeks to reduce poverty without interfering in the governance of recipient nations as highlighted in Figure 2. As a result, there is minimal pressure on countries receiving China’s developmental loans to switch their alliances for them to benefit fully from the availability of economies which can provide export revenue while also minimising political allegiances between these countries.

Figure 2: Model framework for infrastructure as a poverty reduction mechanism (Source: Llanto et al. 2015)

With the absence of political demands, China’s loans appear more lucrative to Asia Pacific countries when compared to the perceivably patronising demands made by western investor nations such as Australia (Heilmann et al. 2014). Notably, this might also be attributed to the fact that China’s investments are more infrastructure-related while those made by western members of BRICS are primarily targeted towards influencing economic and political reforms in the recipient countries (Shaw 2016). As a result, while BRICS investments increase the amount of indebtedness that nations have to their donors, China’s investments come with less stringent requirements such as conditions for outsourcing decisions to include Chinese labour and companies. Evidently, this allows countries that are at the limits of their debts to acquire these concessional loans with limited restrictions on non-performance on the loans during periods of debt distress within these recipient nations (Cai 2016). However, Bader (2015) also notes that this lack of attention to changes in policy frameworks makes the loans that China makes subject to high-level corruption, financial mismanagement, and frustration of local business efforts as a direct consequence of China’s approach to delivering financial aid.

Military Influence of China’s Developmental Loans to Asia Pacific Countries

Even as China continues increasing its outward-facing economic developments, it is also essential to acknowledge its role as a military power with influential positions such as its military presence in the South China Sea. Llanto et al. (2015) note that even as China provides financial aid in the Pacific Islands region, there is limited pressure for the countries that receive this foreign aid to also support China's positions through its increased diplomatic influence over them. As a result, there is minimal expectation that China seeks to replace other key security partners such as the United States, Australia, and Japan in the region through extensive defence cooperation. Nonetheless, the US State Department is highlighted as perceiving an enhancement in China's strategic position in the Asia Pacific region as a threat to Washington's influence over the region and as a source of imbalance to the existing ties between the US and Asia Pacific countries (Saunders 2013). The lack of a military agenda is thereby perceived as an advantage for Chinese-sourced developmental loans when compared to the more reform-oriented approaches of western elements of the BRICS donor countries.

The South China Sea dispute is among the key issues causing concern amongst analysts of China's investments in vital resources such as the Hambantota port in Sri Lanka, for which the latter country was unable to complete payments and subsequently granted China managerial control (Campbell 2017). The resultant deal provides China with a lucrative military position from which it can increase its capacity to cover the South China Sea and gain support from the supplicant countries in its position on the issue. In comparison, the restrictions in the loan agreements that other BRICS countries sign with recipient nations are meant to ensure that the rule of law is applied in the use and repatriation of the loaned funds. Therefore, even as China's actions can be perceived as strategic, Yeh and Wharton (2016) note that it is vital to also consider that the loans that the country provides do not include any restrictions on the acquisition of loans from other nations and organisations. Lamour (2017) also considers that these notions of Asia Pacific countries as vulnerable to manipulation are outdated since they pit the recipient nations as unwilling participants in the global economy.

Political Influence of China’s Developmental Loans to Asia Pacific Countries

From a political perspective, there is an apparent need for China to avoid isolation from the neighbouring Asia Pacific region due to the maritime resources that these strategically positioned nations have influence over. However, as Yeh and Wharton (2016) argue, the lack of conditional measures attached to the loans that China provides to its development partners is also key in maintaining their allegiance in the event of hostile economic or military action from other large economies such as the US. The strength of the economic ties between these nations and China thereby becomes a point of inducement to pre-empt conflict and minimise China's exposure to pressure from outside forces. The result is that rather than siding with China when these countries negotiate with the US, for instance, their response is a lack of complicity in taking any action against China's interests, thus limiting the hold that the US has on the region (Goh 2014). However, there is a theme of direct competition with US contributions as highlighted in Table 1, which further increases the tools of diplomatic persuasion that China can bring to the table when seeking to improve the quality of its political influence in the region.

Asian Region    US Exports    US Imports    US Total      Chinese Exports   Chinese Imports   Chinese Total
Northeast       $216,078      $567,966      $478,044      $409,679          $381,550          $791,229
Southeast       $91,609       $123,892      $215,501      $139,109          $155,616          $294,725
Southwest       $20,475       $35,205       $55,681       $44,198           $21,627           $65,825
Asia Pacific    $328,162      $727,064      $1,055,226    $592,986          $640,516          $1,486,288
Total           $1,287,442    $2,103,641    $3,391,083    $1,429,000        $1,132,000        $2,561,000
% to Asia       25.5%         34.6%         31.1%         41.5%             56.6%             58%

Table 1: Comparison of Chinese and US trade with Asia Pacific nations in 2008 (Source: Saunders 2013)

Overall, there is an evident theme of China's provision of developmental loans and the bilateral agreements into which it enters with recipient countries serving as a means for investing the financial resources available in the donor economy as a result of its producer elements. However, it is impossible to deny that the reduced restrictions that China places on the use of funds also minimise the risk of its being perceived as a less favourable option when compared to other BRICS alternatives (Heilmann et al. 2014). Nonetheless, the gains experienced in the recipient countries are also in question in places such as Cambodia, where the nation's stance in delaying agreements with the US erodes Cambodia's reputation on an international scale while also providing China with increased political leverage in the region (Shambaugh 2015). The imbalanced dependence that such countries have on China as a partner also increases with the lack of effective diversification of developmental funds to include other international investors. Consequently, the resultant environment is one where China has a dominant presence in the Asia Pacific region due to its financial contributions even as it lacks an evident political target for these investments.

While the key goal of China's developmental loans may not be the establishment of influence over the Asia Pacific region, there is still an evident pattern of intensification of strategic rivalry with the US as both countries seek to partner with the region's nations. However, China's approach of attaching fewer restrictions to its loans improves its appeal as a source of developmental funds for the region while also providing it with key bargaining power over the diplomatic and political environment of the region (Goh 2014). Here, China is perceived as a fair source of the capital necessary for economic development, making it a destination for nations seeking funds that are not attached to hegemonic projects as is the case with BRICS countries. Even as China seeks to minimise the influence that it gains through these direct investments, it also obtains a capacity to influence the political leanings of recipient countries through its partnerships (Saunders 2013). Therefore, China's influence on the political environment in the Asia Pacific region is a direct result of its developmental loans to countries in the region even as it avoids the direct imposition of its policies and ideologies on these recipient nations.

Gaps in the Literature

One key outcome of the literature review is the identification of a thematic of non-restrictive terms for Chinese loans in the Asia Pacific compared to investments from other BRICS countries. However, while there is expansive literary evidence of the political influence that BRICS developmental loans have in this region due to the existence of policy change and regulatory requirements, the less stringent nature of China’s terms limits the breadth of correlations to political goals. The capacity to maintain control over strategically located infrastructure assets such as sea and airports built using its developmental loans nonetheless indicates that there is an underlying link to military and political influence obtained indirectly through these financial investments in Asia Pacific countries. Moreover, while the literature also analyses the statistical elements of BRICS investments, the correlations to political gains for the US and other western countries remain the focus of scholarly research. Therefore, there is a need for improvements to the knowledge base regarding the soft power that China and Asia Pacific countries gain or lose through China’s developmental loans within the region.

3. Research Aims, Objectives, and Questions

Aims and Objectives

The proposed research aims to quantify the impact that China’s development loans to Asia Pacific countries have had on the political environment within this region. The researcher will utilise the following objectives to guide the research exercise:

1. To analyse the trends of China’s developmental loans in the Asia Pacific region

2. To identify the policy changes that recipient countries have made as a result of receiving developmental aid from China

Research Questions

While the research will focus on China as the donor country, it will also be essential to include comparisons of investments from other BRICS countries to provide a comparative view of the possible political outcomes resulting from increases and decreases in Chinese investment in the Asia Pacific region. To ensure full coverage of the research aims and objectives, the research will utilise the following research questions to benchmark its progress:

1. What perceivable political impact have China’s developmental investments in the Asia Pacific region had?

2. How have China’s financial loans in the Asia Pacific region compared to those of other BRICS countries over the years?

3. What policy changes have been effected in the recipient countries as a result of China’s developmental loans?

4. Methodology

The complexity of the issue under analysis necessitates the utilisation of a research methodology and philosophy that will allow for the comprehensive analysis of the data available in the field. For the proposed study, the researcher seeks to use a pragmatic methodology to facilitate an effective review of the political influence of China’s developmental loans in the Asia Pacific region. The mixed methods approach was selected due to its inclusion of elements from both qualitative and quantitative methodologies, thereby enhancing the study’s capacity to utilise both theoretical and empirical data in the analysis (Yin 2013). Moreover, Creswell (2013) also notes that this methodology can improve researchers’ capacity to understand ongoing processes, which makes it a beneficial tool for perceiving relationships between analysed phenomena. The use of developmental funds to influence political environments is widely documented as highlighted in the literature review and provides evidence of thematic links between donor-recipient bilateral agreements and strategic political and economic advantages for donors. As a result, content analysis of extant literature will be essential in developing a theoretical understanding of the relationships between the analysed variables and enable the identification of factors whose correlations necessitate further review.

According to Creswell (2013), grounded theory allows researchers to perceive the meta-theoretical linkages that exist in analysed data sets, thereby providing foundations for the development of theoretical understandings based on actionable information. Given that the proposed research will base its quantitative analysis on data sets from the content analysis, it will be essential to derive a priori knowledge from the content analysis regarding the necessary analytical approach. For this research, it is essential to include an ontological analysis of the literature to establish a derivative understanding of the concepts introduced through the content analysis. Bryman (2015) argues that ontological research philosophies provide a means for researchers to analyse concepts both alongside and abstracted from their parent classes, thereby improving their capacity to make objective conclusions about the identified data sets. This approach provides the research with a firm understanding of the theoretical linkages between the variables and ensures that the data populating the analysis stage of the research maximises the utility of the study’s findings.

5. Data Sources and Empirical Support

The descriptive evidence will be drawn from the World Bank, the International Monetary Fund (IMF), the Asian Infrastructure Investment Bank (AIIB), and scholarly sources for figures on the investments that BRICS countries made in the Asia Pacific region between 1990 and 2015. The review will seek to include up to 40 Asia Pacific countries, for a total of roughly 1,000 country-year observations, with the amounts that these countries receive from China and its fellow BRICS members acting as the dependent variable. The primary independent variable is the political leaning of the recipient country, i.e. whether it maintains hostile or friendly relationships with China, BRICS members, and neighbouring nations. The robustness of this data will be gauged by whether the patterns identified in individual countries persist across other Asia Pacific nations to which China or its competing BRICS partners direct their developmental aid. The thematic review of the content analysis will also provide backing for these findings by indicating their conformity, or lack thereof, to the actual political environment in the region. This approach provides a means for collating both qualitative and quantitative data for a more effective analysis of an issue of great relevance to the global community.
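The country-year panel structure described above can be sketched as a small, illustrative table. The country names, column names, and figures below are invented placeholders, not actual aid data; the real panel would hold roughly 1,000 country-year rows.

```python
import pandas as pd

# Hypothetical illustration of the country-year panel: each row is one
# (country, year) observation with the aid received (dependent variable)
# and a coded political-alignment score (independent variable).
panel = pd.DataFrame({
    "country":   ["Laos", "Laos", "Cambodia", "Cambodia"],
    "year":      [2014, 2015, 2014, 2015],
    "donor":     ["China", "China", "China", "China"],
    "aid_usd_m": [120.0, 150.0, 300.0, 340.0],  # loans received, USD millions (invented)
    "alignment": [1, 1, 1, 1],                  # +1 friendly, 0 neutral, -1 hostile
})

# Aggregating by donor and year is one way to produce the comparative
# donor-level totals the review calls for.
totals = panel.groupby(["donor", "year"])["aid_usd_m"].sum()
```

A full 40-country panel covering 1990–2015 would simply extend this layout, with one such block per donor country to support the BRICS comparison.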

6. Data Analysis Strategy

An analysis of the variables involved highlights the complexity of quantifying the political environment outside a qualitative context. However, DuGoff et al. (2014) note that the continued interactions of state actors can result in patterns that increase the complexity of randomising selection, thereby necessitating the use of propensity scores to capture these interactions effectively. The research will estimate the quality of predictors such as per capita GDP, previous developmental aid from China and other BRICS countries, the official stance on issues such as control over the South China Sea, IMF participation, and existing disputes as measures of correlation with subsequent investments. Candidate predictors will need to report p-values of 0.25 or below to pass this deliberately liberal screening threshold, with the qualifying variables carried forward into a subsequent regression analysis. The matching of similar observations will also allow for a more comprehensive analysis by providing a measure of the influence that the independent variable has on the dependent variable when all other control variables remain constant (Harrell 2015). Consequently, identifying the statistical significance of the demographic and socio-political environments in Asia Pacific countries that have received Chinese aid will prove essential in quantifying the relationship between developmental aid and political outcomes in the region.
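The predictor-screening step at the 0.25 threshold can be sketched as follows. This is a minimal illustration, not the study’s actual procedure: the predictor names and p-values are invented placeholders, and the cut-off is applied in the direction conventional for liberal screening in propensity-score model building, i.e. predictors whose p-values fall at or below the threshold are retained.

```python
# Hypothetical univariable p-values for the candidate predictors named
# in the text; these numbers are invented placeholders, not study results.
CANDIDATE_PVALUES = {
    "gdp_per_capita":      0.03,
    "prior_china_aid":     0.01,
    "south_china_sea_pos": 0.18,
    "imf_participation":   0.47,  # fails the liberal 0.25 screen
    "existing_disputes":   0.22,
}

SCREEN_ALPHA = 0.25  # liberal screening threshold, not a significance level

def screen_predictors(pvalues, alpha=SCREEN_ALPHA):
    """Return, sorted by name, the predictors whose p-value clears the screen."""
    return sorted(name for name, p in pvalues.items() if p <= alpha)

retained = screen_predictors(CANDIDATE_PVALUES)
```

The retained predictors would then feed the propensity-score estimation and the subsequent regression analysis described above.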

7. Expected Outcomes of the Research

One of the key outcomes expected from this research is empirical evidence of the political changes, or lack thereof, that the countries in the Asia Pacific region have experienced as a result of China’s investment in the region. The research is expected to confirm the hypothesis of an increased affinity for policies that foster bilateral relationships with China. Moreover, the researcher expects that the study will also uncover an increased affinity among recipient countries for cooperating more closely with China when making decisions regarding vital issues such as governance of the South China Sea. Although the military elements of developmental aid also factor into the issue of politics, the researcher expects that these are provided with the same intention of improving ties rather than influencing political outcomes for the recipient countries. In this manner, the researcher expects that the study will offer insight into the issue while diverging little from prior studies that indicate an increased affinity for China’s development loans in the Asia Pacific region. Therefore, it is expected that this research will provide a comprehensive comparison of political outcomes in the region over time while also contributing to the literature on China’s contributions in its immediate environment.

8. Implications of the Research

The literature analysis reveals a wealth of evidence regarding the role of China in improving infrastructure in Asia Pacific countries through the developmental loans that it disburses across the region. However, there is limited scholarly evidence of the political intentions and outcomes of these investments, with Yeh and Wharton (2016) noting that the majority of the literature leans negatively towards China’s seemingly benevolent contributions to its neighbours. Cases that provide China with direct access to political assets are well documented but limited to extremes, such as the transfer of assets developed using Chinese funds as a means of repaying debt (Thuan 2017). As a result, this research has the potential to highlight the political outcomes for countries that conform to China’s bilateral loan agreements. Given the lack of strict regulations on loan provision and utilisation in China’s developmental aid programs, it will be interesting to observe the political changes that result from reduced dependence on restriction-oriented loans from other BRICS countries and organisations. Therefore, this research can make a valid contribution to the current knowledge base regarding shifting political environments in the Asia Pacific region, as well as the role China plays in influencing these outcomes.


Race, class, and gender – importance in sport

A good place to start with a discussion about race, class, and gender and their importance in sport is the way that sociologists and social theorists broadly talk about these issues. There seems to be a recurring theme with anyone who writes theoretically about them: social theorists are regularly critiqued for not including enough of one social inequality or another, lacking nuance in gender, race, class, ability, sexuality, and so on. It is possible, in fact probable, that all theories on these subjects could be critiqued in this way. The field of social inequalities is simply too broad. It covers such vast distances of experiences, ideologies, cultures, institutions, and structures that some argue it is impossible to have singular “grand theories” (not necessarily in the historical sense, but simply ones that attempt to cover things holistically) that can accomplish such a task.

The way I hope to accomplish this task throughout this section, as well as other sections in this area, is by doing what Patricia Hill Collins calls a strategy of “dynamic centering”. This strategy of studying social inequalities involves “foregrounding selected themes and ideas while moving others to the background” (2008:68). In her case, this means emphasizing different aspects of oppression and resistance in different ways, at different times. The benefit of dynamically centering ideas is that it allows the author, as well as the reader, to examine particular types of social inequality more closely. Patricia Hill Collins is best known for her work on intersectional research, and readily acknowledges that it, too, is generally “partial”. The comparative nature of looking at race and class, or race and gender, allows us to understand the similarities and differences in those works (Collins 2008).

As a final thought, Collins answers such criticism eloquently:

“There is a rush to tidy up the messiness of always having to say race and class and gender and sexuality and ethnicity and age and nationality and ability by searching for overarching terms that will capture this complexity. The term “difference” tries to do this kind of heavy lifting, typically unsuccessfully. If we are not careful, the term “intersectionality” runs the same risk of trying to explain everything yet ending up saying nothing.” (Collins 2008)

This section begins by discussing separately the race, class, and gender theories that are pertinent to sport. But throughout, there is an attempt to interject intersectional ideas about what is missing in those theories, or how they relate. This is not to say that it will be able to address every form of intersecting or overlapping oppression. Given the limitations of space, that is not possible and probably not a fruitful endeavor.

This essay will, however, address the major theories as well as related works to understanding sport through race, class, and gender. For the purposes of clarity, I will split those “theoretical camps” along those lines. After the discussion of each separately, there will be a section to address intersectional research and its fit into sport. Finally, the conclusion will address insights from the sociology of sport and how those are more useful to broader understandings of race, class, and gender.

In general, there is a theoretical consistency in studying race, class, and gender in sport. As a major cultural and economic institution, sport is generally one of the most widely understood and simultaneously one of the most theoretically underdeveloped areas of sociology (Carrington 2013). Cases have been made by sociologists of sport (whether they derive from different backgrounds is another matter) as well as journalists that sports do indeed matter (Carrington 2012). One needs to look no further than our own university to see recent examples of why sport can be an important cultural institution.

In February 2014, Michael Sam, an All-American defensive lineman for the Tigers, came out as gay and became the first openly gay man drafted into the major American sports: football, baseball, basketball, and hockey (Connelly 2014; Wagoner 2014). A year later, Missouri football players joined campus protests by the group Concerned Student 1950 over the “racist, sexist, homophobic, etc., incidents that have dynamically disrupted the learning experience” of students on campus (Tracy and Southall 2015). It also dovetailed with the protests over racial injustice that had happened two hours away in Ferguson, where another unarmed black man was shot by police. The protest by the players appeared to be the tipping point, as it resulted in the resignations of both the Chancellor and the President. The players wielded the most power available to them by threatening to boycott the next football game against BYU, a move that would have cost the University one million dollars.

The events of the past few years at the University are just one of many microcosms where sport is increasingly relevant and political. With the current protests against police brutality and racial injustice by NFL players, started by Colin Kaepernick, sport has come into the limelight for its focus on inequality. All of this is to say that sport is a key aspect of society, and worth investigating further. Overall, sport can be better understood by using the breadth of literature and theory that exists outside of itself. But there is also a reciprocal nature to this question, as the literature on inequalities could greatly benefit from studying sport and adopting understandings from the studies located there.

Theories and Studies of Race, Class and Gender that are important to sport.

A field as wide as “social inequalities” could be a large enough umbrella to fit nearly any sociological study. Therefore, it is a somewhat difficult task to pick out just a small number of sociological theories that would directly benefit the subfield of sport. Among the studies worth exploring with more space and time are the critical race theories (CRT) that can be very important in understanding racial dynamics. Here, I will focus on a few “branches” of the larger social inequalities “tree” that would be worth adopting further into sport.


Race may be the most theoretically developed area of sport. The first sport sociologist, Dr. Harry Edwards, wrote what is considered one of the first sport studies, The Revolt of the Black Athlete (1969), as well as the first Sociology of Sport textbook (1973). Edwards was also directly linked to helping create the idea for the 1968 Olympic protest by John Carlos and Tommie Smith, whose photograph is now known worldwide. Dr. Edwards’s writings on race were truly transformative and ahead of their time, changing understandings about race and the structural inequalities facing African Americans. Although the term had not yet been coined, his work would likely now be considered intersectional, given its ideas centered around masculinity, race, and class.

As far as race theories (or theorists) that help us understand the sociology of sport, a good place to start is with Michael Omi and Howard Winant’s racial formation theory. Omi and Winant have written multiple updates to their 1994 text, helping clarify the theory and including more relevant examples. They define racial formation as “the sociohistorical process by which racial identities are created, lived out, transformed, and destroyed” (Omi and Winant 2014:109). Historically, the black/white binary has dominated the way people think and talk about race (Bonilla-Silva 2014). Racial formation complicates this idea by understanding race as a process.

One of the key concepts of Omi and Winant’s theory is what they call “racial projects”. Racial projects are a space in which social structures and cultural representations clash. Many theoretical paradigms in race (but also class and gender) focus primarily on either a) structural phenomena that are unable to account for cultural patterns, meanings, and identities, or b) systems of culture, identity, and signification. Frequently, theorists are uncomfortable with the ambiguity and murky nature of operating between those two boundaries (the implications of this will be discussed further in a later essay). It is in this space that racial projects exist.

The authors define a racial project as “simultaneously an interpretation, representation, or explanation of racial identities and meanings, and an effort to organize and distribute resources (economic, political, cultural)” (Omi and Winant 2014:125). Racial projects can occur on both the large scale and the small scale, and can be carried out by anyone regardless of their social position.

Included in these ideas of racial projects are things as small as the decision to wear dreadlocks and as large as voting rights laws or civil rights movements (Omi and Winant 2014:125). Using this definition, we should consider the actions and discourse over social media between Jeremy Lin and Kenyon Martin. Lin is the first American of Taiwanese descent to play in the NBA, and his story as a whole has been widely studied in sociology. Kenyon Martin is a retired African American NBA player. To summarize the recent issue, Lin debuted in the most recent NBA season with dreadlocks and was criticized by Martin for wanting to “be black”, saying “Do I need to remind this damn boy that his last name Lin?”, to which Lin responded:

“At the end of the day, I appreciate that I have dreads and you have Chinese tattoos [because] I think its a sign of respect. And I think as minorities, the more that we appreciate each other’s cultures, the more we influence mainstream society” (Begley 2017)

Both the initial act of wearing dreadlocks and the two responses could be seen as different types of racial projects. Lin challenged the system as well as the cultural signification and history that are deeply embedded in dreadlocks. Martin had a racial project of his own, one that sought to reaffirm the structures and cultural significance of hair choice. Lin’s response was yet another, one that sought to subvert the system and reclassify the understanding of cultural appropriation and its ties to race.

Although the Lin/Martin example is a micro-instance, it is reflective of how sport can recapitulate our ideas about race through racial projects. Similarly, one could argue that Jack Johnson beating the “great white hope”, Joe Louis fighting the Italian Primo Carnera in 1935, and certainly Jesse Owens in the 1936 Olympics could be termed “sporting racial projects”.

This idea of “sporting racial projects” is scarcely developed. Carrington discusses the invention of the natural black athlete as a “global sporting racial project” that was an attempt to “other” blackness into a sub-humanized category (2010). In Carrington’s own words, “Sports help to make race make sense and sport then works to reshape race” (2010:66). Although he provides the basic definition and the connection to sport, he merely scratches the surface of the possibilities and importance of racial projects to sport.

All of the previously discussed University of Missouri examples are sporting racial projects. Many of them have pushed unconscious ideas about race to the foreground of discourse. Consider Michael Sam and the implications for race and sexuality: he simultaneously subverts ideas about sexuality and masculinity while reaffirming ideas about blackness and athletics. The football team’s protest is a racial project challenging the power structure of white-dominated and white-centered institutions (both sport and the university). Simultaneously, white boosters, administrators, and fans run “counter” projects that overlap and compete; their emphasis on colorblindness operates between the structural and cultural levels.

Racial formation theory in general, and racial projects specifically, are useful theoretical tools for understanding the dynamics of sport and how it intersects with race. But there have been critiques from many different areas about how “useful” a theory it actually is. Feagin and Elias (2013) critique racial formation theory for not being explicit enough in its critique of the racial framework as they see it. Feagin has especially built his career on what he coins “systemic racism theory” (Feagin 2013, 2014). It directly confronts the hierarchical nature of racial oppression in the United States and implicates whites in the process. The theory includes more grounded ideas than racial formation, discussing the different levels of inequality and how whites use power to oppress racial minorities (although Feagin often employs a black/white essentialism). To Feagin and Elias, there is not enough critical theory for racial formation to “work” as a theory. Ironically, Feagin and Elias’s systemic racism theory is also critiqued, as it privileges race and does not include enough theory of gender and sexuality as components of oppression (Harvey Wingfield 2013).

The arguments that have been levied against racial formation theory are valid: there is a substantial lack of critical theory implied in it. Omi and Winant (2013) have argued that their theory still works, as its goal is not to pin down the racial classification system as it currently exists. Its much more ambitious goal is to speak to race as it operates across time and space. Still, it is debatable whether it succeeds in doing so.

It would be particularly helpful to revisit the idea regularly with current events that weren’t discussed. Colin Kaepernick, Jemele Hill, Trump, NFL Owners, and the recent World Series racism have all happened in the last couple weeks. Racial formation can help us better historically place these ideas and what they mean, as well as understanding social movements that occur within and around the “field” of sport (in a Bourdieusian sense).

Other contemporary theories of race that are useful in understanding sport include the aforementioned Feagin theory of systemic racism, and Bonilla-Silva’s theory of racialized social systems and its implications for colorblind ideologies (Bonilla-Silva 1997, 2014).


With class and stratification being a core tenet of sociology, it would be impossible to list every theory and branch, or even every school of thought, here. So I will not address Marx or conflict approaches to class, even though they are most definitely pertinent. Although their contributions to the general field of sociology are numerous and critical, I am of the opinion that there are more important theories for the study of sport.

Bourdieu (2011) says capital appears in three guises: economic, cultural, and social. Economic capital is directly convertible into money, or institutionalized as property. Cultural capital is convertible into economic capital, or institutionalized as educational credentials. Social capital consists of connections, convertible into economic capital, or institutionalized as a title of nobility.

He goes on to argue that there are three forms of cultural capital. The embodied state consists of long-lasting dispositions of the mind and body, and includes external wealth converted into an integral part of the person (habitus). It can be obtained unconsciously, like an accent. Because it is in some ways linked to biological capacity, it is often misrecognized as legitimate competence rather than as capital. Finally, it derives a scarcity value from its position in the distribution of cultural capital, and the profit of this form of cultural capital is distinction. According to Bourdieu, the transmission of cultural capital through families:

is “no doubt the best hidden form of hereditary transmission of capital, and it therefore receives proportionately greater weight in the system of reproduction strategies…” (p.245)

The second form of cultural capital is the objectified state. Objects can be appropriated materially (economic capital) or symbolically (cultural capital). The objectified state is defined in its relationship with cultural capital in its embodied form, and can be wielded as a weapon and a stake in the struggles which go on in the fields of cultural production.

Finally, there is the institutionalized state (education). Bourdieu states that academic ability itself is a product of time and cultural capital. Viewing education as cultural capital helps us see education’s role in the social structure. We could view education as a “certificate of cultural competence” (p.248). This finally makes it possible to compare “conversion rates” between cultural and economic capital.

As we have seen in earlier discussions and sections, cultural capital is key to understanding what a society values (distinction) as well as the different ways that culture operates with class. Building off of this, a couple theorists adopt Bourdieusian ideas and employ them in interesting ways.

Annette Lareau’s (2011) book Unequal Childhoods describes two different logics of childrearing. The first is centered on what the author calls “concerted cultivation,” which is characterized by viewing children as a project to be cultivated. Parents who subscribe to this method of childrearing seek opportunities for growth and take an active approach in the formation and development of their child. They “invite” and encourage the child to interact within the adult world and often treat them as “equals.”

The second logic in childrearing is centered on the “accomplishment of natural growth,” which views childhood as a somewhat natural and organic process that requires little adult intervention. Parents who subscribe to this logic are less “hands on” and maintain a separation between the adult world and the child world. While Lareau is quick to point out that the dominant social institutions that children come in contact with (namely school), value “concerted cultivation” and stress opportunities for parents to further this plight, there is a clear class distinction on the use of each method.

Concerted cultivation was by and large something that middle-class parents subscribed to more frequently than lower- and working-class parents. Arguably, the resources that lend themselves well to concerted cultivation are more easily accessed by the middle-class families in the study than by the lower- and working-class families. To be clear, it was the opportunities for concerted cultivation that were more readily available, not necessarily the events themselves (the events often take an enormous amount of time on behalf of the family, even going so far as to become the center of the family’s social calendar). The interactions have many benefits, as children learn: ease in interacting with adults, viewing themselves as equals, developing their voice, larger vocabularies, negotiation skills, and time management skills. However, because their time is constantly regulated by adults, these children often have trouble managing “unmanaged time,” are often disconnected from family members, lack interactions with children of different age groups, often feel “bored” and/or exhausted, and develop a sense of entitlement. The skill sets developed by children under the “accomplishment of natural growth” are quite different.

Parents who are committed to the “accomplishment of natural growth” view their roles quite differently. For these families, often from working-class and poor backgrounds, the central concern is being able to meet children’s basic needs. Navigating children through the day and providing for their basic needs often takes an enormous amount of time and effort. These families often rely on a vast network of friend and kin relationships for resources (cars, bus passes, phone calls, clothing, etc.). As a result of the effort to provide for children (navigating the bus system, public aid, etc.), children often have close relationships with kin, are resourceful, create ties with children of different age groups, manage their own time, engage in creative activities, and have quite a bit of autonomy. At the same time, these children develop an emerging sense of constraint, yield to adult authority, and often have difficulty interacting within some social institutions (medical, school, etc.). While there is merit in the development of both skill sets, they are not equally valued by the dominant social institutions in society. Lareau notes that the skills learned through “natural growth,” while important, are rendered somewhat “invisible”, and these skills (creativity, respect for authority) are rarely valued or praised to the degree that the skills (negotiation, language, time management) learned under concerted cultivation are.

These ideas of concerted cultivation and natural growth are especially useful when trying to understand how class operates in relation to sport. In some cases, the parents are making huge personal sacrifices to give their children “values” (or cultural capital). Sport, commonly viewed as a positive area for socialization and growth, is just one of these cultural arenas that has to be “cultivated”.

Bourdieu, combined with Lareau and Prudence Carter’s ideas on the “culture of power” in school (discussed in another essay), can all be important to understanding the field of sport. They are important in understanding the forms of culture that are privileged, and the importance of different types of capital beyond economic forms.


Of the many areas of sociology that touch sport, none may be as developed or as shaped by the subfield as the study of gender. Sport, as a physical activity, has traditionally been a space where men prove their physical superiority over each other. To this day, sports continue to be male-dominated, male-identified, and male-centric spaces (Coakley and Pike 2009) that shape our ideas about masculinity and femininity. But much like race and class, sport is also a place where ideologies are contested in a much grander form.

The first theory that comes to mind as central to studying sport and gender is hegemonic masculinity. Hegemonic masculinity is a theoretical concept first proposed from a field study of social inequality in Australia, which provided data on interwoven hierarchies of gender (and class) that were active projects in gender construction (Connell 1982; Kessler et al. 1982). The most widely cited formulation, however, came from Connell (1987). According to that work, hegemonic masculinity is “understood as the pattern of practice that allowed men’s dominance over women to continue” (Connell and Messerschmidt 2005:832).

This meant that there were multiple forms of masculinity that existed in the hierarchy: some hegemonic, others subordinated. This wasn’t an exercise in statistics. It’s not as if the form of hegemonic masculinity that they were studying was practiced by the most people; in fact, it may have been a minority of people. What was significant was that other men had to position themselves in relation to this form of masculinity.

Sport may be the pinnacle of hegemonic masculine practice. Hegemonic masculinity, according to Connell and Messerschmidt (2005), could be achieved through culture, institutions, and persuasion. Some of the earliest adopters of the hegemonic masculinity framework were sociologists of sport. Michael Messner became renowned for his use of the concept in studying media representations of masculinity and their connection to violence and homophobia (Messner 1992; Messner and Sabo 1990). Messner (1993:725) argues that forms of softer or more sensitive masculinity are developing but do not necessarily contribute to the emancipation of women. Linked to this is Messner’s (2007) analysis of the changes in the public image of Arnold Schwarzenegger, which illustrates what he calls an “ascendant hybrid masculinity” combining toughness with tenderness in ways that work to obscure – rather than challenge – systems of power and inequality.


The amalgamation of the above points leads logically to theories of intersectionality. At different points, reference has been made to the intersections of race, class, and gender. But it truly may be a facile endeavor to try to discuss the relevance of any of those theories separately. Much of the research in the area of intersectionality tells us as much.

Davis (2008) argues that intersectionality has been a ‘buzzword’ of feminist theory ever since its inception. As Crenshaw (1991) writes, this is because an intersectional approach is crucial to addressing the experiences of ethnic minority women for two reasons: first, ethnic minority women’s experiences and political struggles have been largely neglected by mainstream feminist movements; and second, anti-racism discourses have focused too heavily on the experiences of men, rendering the experiences of women invisible.

An example which demonstrates the impact of structural intersectionality is found in domestic violence cases, where race and class formations make women of color’s experiences of rape, violence, and remediation ‘qualitatively different from [those] of white women’ (Crenshaw 1991: 1245). Intersectional ideas are also linked closely to Patricia Hill Collins’ (1990) matrices of domination, the paradigm that different forms of oppression are interconnected. Intersectional ideas can be inclusive of all of the above forms of inequality (race, class, and gender), and although intersectionality is considered a “buzzword” that risks losing meaning, it is important for my research, especially in understanding race, masculinity, and class.

Sociology of Sport and Its Influence

I hope I have illuminated many of the important works on race, class, and gender. In many instances I have already included the connections to sport. I would now like to mention some of the other main works in the sociology of sport that are critical to understanding inequality.

Much of Messner’s work on gender is essential; he has arguably done more than anyone in the area. For race, Dr. Harry Edwards’s classic studies are important for establishing sport as an arena in which to study inequalities. There are, of course, many race studies, such as C.L.R. James’s Beyond a Boundary, which looks at colonialism and sport (cricket). Hartmann has written on midnight basketball and its implications for neoliberal society. Brooks and May both looked at race and basketball, and their work would be considered an important contemporary contribution.

All of these readings have in common a challenging of received ideas about race, class, and gender. Sport, as one of the largest cultural institutions in the US, will continue to be “contested terrain” for these intersecting and overlapping subject areas, and will continue to challenge and reshape our understandings of social theory and global inequality.


Portable pneumatic drill – project report


In workshops and automotive shops there are frequent needs for tightening and loosening screws and for drilling, boring, and grinding operations.

Large and complicated parts cannot be machined well on an ordinary machine. An electric drilling machine consumes considerable power while its accuracy is poor; drilling a hole in a workpiece becomes a time-consuming process that also demands significant human effort.

By applying pneumatics, human effort is reduced, accuracy is improved, and time is saved. In this project, a pneumatic cylinder and piston operated by an air compressor provide the successive actions needed to perform the drilling operation.

The design and fabrication of the portable pneumatic drilling machine are carried out using the basic principles of pneumatics.

Chapter 1 Introduction

1.1 Introduction

Power tools must be fitted with guards and safety switches, as they are extremely hazardous when used improperly. Power tools are classified by their power source: electric, pneumatic, liquid fuel, hydraulic, and powder-actuated.

Figure 1.1 Different type of drilling machine

1.2 Pneumatic System

Figure 1.2 Pneumatic system

There is a constant supply of air in the atmosphere from which to produce compressed air. Compressed air can be generated by various sources such as a compressor or an air cylinder. Moreover, compressed air is not affected by distance, as it can easily be transmitted through pipes. A pressure relief valve then regulates the air supplied from the compressor to the system.

A pneumatic system is a system that uses compressed air to transmit and control energy. Pneumatic systems are used in controlling train doors, automatic production lines, mechanical clamps, etc.

1.3 Drilling Machine

Figure 1.3 Drilling Machine

Drilling machines are mainly used to originate through or blind straight cylindrical holes in solid rigid bodies and to enlarge pre-machined holes of different diameters.

Drill diameters typically range from about 1 mm to 40 mm, with varying hole lengths depending on the requirement.

Drilling can be performed in most materials, excepting very hard or very soft materials such as rubber, polythene, and rock.

1.4 The different types of drilling machines

1. Portable drilling machine (or) Hand drilling machine

2. Sensitive drilling machine (or) Bench drilling machine

3. Upright drilling machine

4. Radial drilling machine

5. Gang drilling machine

6. Multiple spindle drilling machine

7. Deep hole drilling machine

1.4.1 Pneumatic Drilling Machine System

The pneumatic drilling machine is a portable drill characterized by explosion-proof operation, large torque, high rotational speed, light weight, compact dimensions, high efficiency, a stable structure, and convenient maintenance. It is mainly used for drilling holes for exploring and discharging water and gas, and also for bolting tunnel sides and drilling blast holes in soft rock, coal, and half-coal layers.

Chapter-2 Literature Review

2.1 Literature Review

A. Karthik, R. Krishnaraj, Nunnakarthik, R. Kumaresan, S. Karthik, R. Murali, “Single Axis Semi Automatic Drilling Machine with PLC Control”, DOI: 10.15680/IJIRSET.2015.0403009 [1] presented a hydraulic cylinder used to drill a workpiece of a given size. In this model a solenoid valve controls the flow of the fluid, and a limit switch adjusts the height of the workpiece to be drilled. A programmable logic controller controls the whole drilling operation, so the model improves accuracy and saves time.

Manish Kale, Prof. D. A. Mahajan, Prof. (Dr.) S. Y. Gajjal, “A Review Paper on Development of SPM for Drilling and Riveting Operation”, International Journal of Emerging Technology and Advanced Engineering, Volume 5, Issue 4, April 2015 [2] presented a case study comparing the productivity of a component machined on a conventional radial drilling machine and on a special purpose machine (SPM) for drilling and tapping. In the case study, the SPM performed an 8-spindle multi-drilling operation (7 of Ø6.75 and one of Ø12), a linear tapping operation of Ø12, and an angular tapping operation of Ø5.1 on a TATA cylinder block. The paper carries out the following studies: 1. time saved in component handling (loading and unloading) using hydraulic clamping; 2. increase in productivity, both qualitative and quantitative; 3. less human intervention, indirectly reducing operator fatigue; 4. less rejection due to automatic controls; and 5. increased company profit.

Mohammad Javad Rahimdel, Seyed Hadi Hosienie, “The Reliability and Maintainability Analysis of Pneumatic System of Rotary Drilling Machines”, Springer, 07 November 2013 [3] showed through trend and serial correlation tests that the time-between-failures (TBF) data are iid, so the renewal process (RP) technique can be used for reliability modeling. The reliability of the pneumatic system was calculated using the best-fitted distribution. Data analysis and selection of the best-fit distributions were done with EasyFit 5.5 software, and the Kolmogorov-Smirnov (K-S) test was used to choose the best distributions for reliability analysis. The paper reports the top six fitted distributions along with the best-fitted one.

Prof. P. R. Sawant, Mr. R. A. Barawade, “Design and Development of SPM - A Case Study in Multi Drilling and Tapping Machine”, International Journal of Advanced Engineering Research and Studies, Vol. I, Issue II, January-March 2012, pp. 55-57 [4] presented a case study comparing the productivity of a component machined on a conventional radial drilling machine and on a special purpose machine (SPM) for drilling and tapping. In this case study, the SPM performed an 8-spindle multi-drilling operation (7 of Ø6.75 and one of Ø12), a linear tapping operation of Ø12, and an angular tapping operation of Ø5.1 on a TATA cylinder block. The paper carries out the following studies:

Time saved in component handling (loading and unloading) using hydraulic clamping,
Increase in productivity, both qualitative and quantitative,
Less human intervention, indirectly reducing operator fatigue,
Less rejection due to automatic controls, and
Increase in company profit.

A. Sivasubramaniam, “Design of Pneumatic Operated Drill Jig for Cylindrical Component”, IJSR - International Journal of Scientific Research, Volume 3, Issue 3, March 2014 [5] noted that the growth of the manufacturing industry and its need for increased productivity are greatly shaped by the nature of the industry, its work culture and, most importantly, the use of improvised techniques and systems. Increased productivity, reduced lead time, and high quality and precision can be achieved by improving the available systems and techniques. The paper deals with the design of a pneumatically operated drill jig which can be used universally for a specific drill size. The jig is designed especially for cylindrical components involving the drilling of holes of 6 mm and 12 mm diameter, and would greatly help increase productivity in mass production.

Ogundele, O. J., Osiyoku, D. A., Braimoh, J., and Yusuf, I., “Maintenance of an Air Compressor Used in Quarries”, Scholars Journal of Engineering and Technology (SJET), 2014; 2(4C):621-627 [6] described how the air supply is maintained throughout the system under operating conditions. The main motive of this work is the maintenance of the compressor on which the drilling operations depend.

2.2 Problem Definition

The literature survey covered different drilling systems such as the single-axis semi-automatic drilling machine, the SPM for drilling, the pneumatic system of rotary drilling machines, the multi-drilling and tapping machine, and the pneumatically operated drill jig for cylindrical components.
Some research papers addressed the design and analysis of different types of drilling machine systems.
Little of the proposed work, however, addressed a pneumatic drilling machine designed for human comfort.

2.3 Objectives

1. To study the details of the pneumatic drill machine and make it simpler to operate.

2. To make the machine easy for a worker to maintain.

3. To operate faster and with higher torque.

4. To use compressed air, giving low power consumption.

5. To keep operating and maintenance costs as low as possible.

6. To overcome the disadvantages of the electric drill.

Chapter 3 Analysis of Mechanical Component of Pneumatic Drilling Machine

3.1 Pneumatic Control Component

3.1.1 Pneumatic cylinder

A pneumatic cylinder is a device operated by compressed air: pneumatic input power is converted into mechanical output power by expanding the air down to atmospheric pressure.

a) Single acting cylinder

A single acting cylinder is one in which compressed air acts on only one side of the piston; the return stroke is usually made by a spring.

b) Double acting cylinders:

A double acting cylinder is one in which compressed air acts alternately on both sides of the piston. Direction and flow control valves are used with the double acting cylinder.
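A practical consequence of pressurizing either side is that the extend and retract strokes deliver different forces, because the piston rod reduces the working area on one side. A minimal sketch of that calculation, using the 40 mm bore, 25 mm rod, and 10 bar maximum pressure listed in this report's component specifications:

```python
import math

# Cylinder data taken from the component specifications in this report:
# bore D = 40 mm, piston rod d = 25 mm, maximum pressure p = 10 bar.
D = 0.040          # piston (bore) diameter, m
d = 0.025          # piston rod diameter, m
p = 10 * 1e5       # 10 bar expressed in Pa (N/m^2)

bore_area = math.pi * D**2 / 4              # full piston face (extend stroke)
annulus_area = math.pi * (D**2 - d**2) / 4  # rod side (retract stroke)

F_extend = p * bore_area     # ~1257 N
F_retract = p * annulus_area # ~766 N, smaller because the rod occupies area

print(round(F_extend), round(F_retract))
```

The retract force is roughly 40% lower here, which is why double acting cylinders are sized against the weaker stroke when the load acts in both directions.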

3.2 Generally Used Materials

Table 3.1 Cylinder Tube Material

Light Duty Medium Duty Heavy Duty

Plastic brass tube brass tube

Hard drawn Aluminum tube Aluminum Casting steel tube

Hard drawn Brass tube Brass, Bronze, Iron or Casting, welded steel tube

Table 3.2 End Cover Material

Light Duty Medium Duty Heavy Duty

Aluminum stock (Fabricated) Aluminum stock (Fabricated) Casting

Brass stock (Fabricated) Brass stock (Fabricated)

Aluminum Casting Aluminum, Brass, iron or steel casting

Table 3.3 Piston Material

Light Duty Medium Duty Heavy Duty

Aluminum Casting Brass Aluminum Forgings,

Aluminum Casting

Bronze (Fabricated) Bronze

Iron Casting Brass, Bronze, Iron or

Steel Casting

Table 3.4 Mount Material

Light Duty Medium Duty Heavy Duty

Aluminum Casting Aluminum, Brass and steel casting High tensile

Steel Casting

Light Alloy (Fabricated) High tensile

Steel Fabrication

Table 3.5 Piston Rod Material

Light Duty Medium Duty Heavy Duty

Mild Steel ground and polished Generally preferred chrome plated

Stainless Steel Ground and Polished Less scratch resistant

3.3 Valves

3.3.1 Solenoid Valve

The directional valve is one of the indispensable parts of a pneumatic system. Commonly known as a DCV (direction control valve), this valve is used to control the direction of air flow in the pneumatic system, which it does by changing the position of its internal movable part.

A solenoid is a device that converts electrical energy into linear motion and force. Solenoids are used to operate a mechanical movement which in turn operates the valve mechanism. Solenoid valves can be push type or pull type: in a push type valve the plunger is pushed when the solenoid is energized electrically, while in a pull type valve the plunger is pulled when the solenoid is energized.

3.3.2 Parts of a Solenoid Valve.

1. Coil:

The solenoid coil is made of copper wire. The layers of wire are separated by insulating layers, and the coil is covered with a varnish that is not affected by solvents, moisture, etc.

2. Frame:

The solenoid frame is made of laminated sheets and is magnetized when current passes through the coil; the magnetized frame attracts the metal plunger and makes it move. The frame has provisions for attaching the mounting, which is usually bolted or welded to the frame.

3. Solenoid Plunger:

The solenoid plunger is formed of steel laminations riveted together under high pressure so that there is no movement of the laminations with respect to one another. At the top of the plunger a pin hole is provided for making a connection to some device. The plunger is moved magnetically in one direction and is usually returned by spring action. In many applications it is necessary to use explosion-proof solenoids.

3.4 Solenoid Valve (or) Cut Off Valve

It is used to control the direction of flow of liquid in case of hydraulics and air in case of pneumatics.

3.4.1 Flow control valve

In any fluid power circuit, a flow control valve is used to control the speed of the actuator. Flow control is achieved by varying the area through which the air passes: when the area is increased, more air is sent to the actuator and its speed increases; when the quantity of air entering the actuator is reduced, its speed is reduced.
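The relation described above can be put numerically: piston speed is simply the delivered volumetric flow rate divided by the piston area. A short sketch, assuming the 40 mm bore from this report's cylinder specification and purely illustrative flow rates:

```python
import math

# Piston speed = volumetric flow rate / piston area.
# The 40 mm bore comes from this report's cylinder specification;
# the flow-rate values below are illustrative assumptions only.
D = 0.040                   # piston diameter, m
area = math.pi * D**2 / 4   # piston area, m^2

for q_lpm in (10, 20, 40):      # hypothetical flow settings, litres/min
    q = q_lpm / 1000 / 60       # convert L/min to m^3/s
    v = q / area                # resulting piston speed, m/s
    print(f"{q_lpm} L/min -> {v * 1000:.0f} mm/s")
```

Doubling the flow doubles the speed, which is exactly the lever the flow control valve gives the operator.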

3.4.2 Pressure Control Valve

The main function of the pressure control valve is to limit or control the pressure required in a pneumatic circuit. Depending upon the method of controlling, they are classified as:

1. Pressure relief valve

2. Pressure reducing valve

3.5 Drilling Head

3.5.1 Barrel

It is a hollow cylinder and forms part of the drilling head.

3.5.2 Shaft

The shaft is made of mild steel. It is a straight stepped rod supported by two bearings in the cylinder. The rod diameter is 15 mm over a length of 180 mm, and 13.5 mm over a length of 22 mm. The fan is fitted on the shaft through flanges: the fan is fixed to the flanges, and the flanges are fixed to the shaft through drilled holes.

3.5.3 Couplings

It is used to fasten the shaft with flanges and also transmit the motion.

3.5.4 Flanges

Its arrangement is made in such a way that the drilling hole and tool coincide with each other.

3.5.5 Vane

Rotation of the vane transmits motion to the shaft.

3.6 Hoses

The hoses used in this pneumatic system can withstand a maximum pressure of 10 bar. They are used to transmit the air flow from the compressor to the system.

3.6.1. Connectors

There are three connectors in our system. They are used to connect the various valves, such as the flow and direction control valves.

Chapter 4 CAD Modeling and Calculation of Pneumatic Drilling Machine

4.1 Pneumatic components and its specification

The pneumatic auto feed drilling machine consists of the following components:

1. Double acting pneumatic cylinder

2. Solenoid Valve

3. Flow control Valve

4. Connectors

5. Hoses

1. Double acting pneumatic cylinder

Technical Data

Stroke length: 80 mm

Piston diameter: 40 mm

Piston rod: 25 mm

Quantity: 2

Seals: Nitrile material

End cones: Grey cast iron

Maximum pressure: 10 bar

Media: Atmospheric Air

Temperature: 0-95 °C

Working pressure: 5 bar

2. Solenoid Valve

Technical data

Size: 0.6355 x 10^-2 m

Port size: G 0.6355 x 10^-2 m

Max pressure: 0-12 bar

Quantity: 1

3. Flow control Valve

Technical Data

Port size : 20 mm

Pressure: 0-8 bar

Media : Atmospheric Air

Quantity : 4

4. Connectors

Technical data

Max working pressure: 10 bar

Temperature: 0-115 °C

Fluid media: Air

Material: Brass

5. Hose Pipes

Technical data

Max pressure: 9 bar

Outer diameter: 10mm

Inner diameter: 5 mm

4.2 General machine Specifications

Drill unit

Chuck capacity: 0.6355 x 10^-2 m (6.355 mm)

Barrel diameter (ID): 50 mm = 50 x 10^-3 m

Clamping unit

Clamping: Auto clamping

Max Clamping Size: 110 mm = 0.11m

Pneumatic unit

Type of cylinder: Double acting cylinder

Type of valve: Flow control valve & solenoid valve & Direction control valve

Max air pressure: 9 bar

General unit

Size of machine (L x H): 0.7100 m x 0.7100 m

Weight: 15 kg

4.3 Design Calculations

Max pressure applied in the cylinder (p): 10 bar = 1.0 x 10^6 N/m^2

Area of cylinder (A): (3.14 x D^2) / 4 = (3.14 x 40^2) / 4

= 1256 mm^2 = 1.256 x 10^-3 m^2

Force exerted on the piston (F): pressure x area of cylinder

= 1.0 x 10^6 x 1.256 x 10^-3

= 1256 N = 1.256 kN

(for maximum pressure, not working pressure)
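As a quick cross-check of the design calculation, the report's own data (10 bar maximum pressure, 40 mm bore) give a piston force of about 1.26 kN when the units are kept consistent:

```python
import math

# Recompute Section 4.3 with consistent SI units:
# p = 10 bar = 1.0e6 N/m^2, bore D = 40 mm.
p = 10 * 1e5                        # pressure in Pa
D_mm = 40.0                         # bore diameter in mm
area_mm2 = math.pi * D_mm**2 / 4    # ~1256.6 mm^2 (the report rounds to 1256)
area_m2 = area_mm2 * 1e-6           # convert mm^2 to m^2

force_N = p * area_m2               # force in newtons
print(f"A = {area_mm2:.1f} mm^2, F = {force_N:.0f} N = {force_N / 1000:.3f} kN")
```

Note that the force comes out in newtons (about 1.26 kN), since the pascal times square metre is one newton: multiplying the pressure in N/m^2 by an area left in mm^2 would overstate the result by a factor of 10^6.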

4.4 CAD Tool-Solid Work 2015

SolidWorks 2015 is a 3D mechanical design system built with adaptive technology and solid modeling capabilities.

The SolidWorks 2015 software includes features for 3D modeling, information management, collaboration, and technical support with which you can:

Create 3D models and 2D drawings.
Create features, parts, and subassemblies.
Manage thousands of parts and huge assemblies.
Use third-party applications through an application programming interface (API).
Collaborate with multiple designers in the modeling process.
Link to web tools to access resources, share data, and communicate with colleagues.
Use the integrated design support system (DSS) for help as you work.
Achieve better accuracy through appropriate analysis and design.

4.5 Working with assemblies

Turn off the visibility of components to access the parts you need and update graphics faster.
Use design representations.
Turn the adaptivity of parts on and off as needed.
Assign different colors to components, selecting colors from the Color list on the Standard toolbar.
Use the browser to find components.

Fig.4.1 Detail view drawing of base of pneumatic drilling machine

The structure of the pneumatic drilling machine is divided into two parts: one along the X-axis, which carries the workpiece (a rectangular block), and another along the Y-axis for movement of the Z-axis, which carries the tool for hole cutting.

Fig.4.2 Detail view drawing of pneumatic drilling machine Structure

Fig. 4.1 and Fig. 4.2 show detailed drawings of the base and of the machine structure, respectively.

All components of the structure are created using part features, and all assemblies are created from these components (parts) by constraining their relative motion.

The part modeling environment is used to create the structure.

First, the geometry of the standard section pipe is created from its practical data, with the plate thickness and the amount of extruded material set using the extrude command in the feature operations.

A new sketch is then created on the extruded base component, drawing on the existing extruded feature to define the model width.

As shown in Figures 4.3 to 4.6, the Pneumatic Drilling Machine structure is presented in different orientations: isometric view, front view, top view, and side view.

Fig.4.3 Isometric view of Pneumatic Drilling Machine Structure

Fig.4.4 Front view of Pneumatic Drilling Machine Structure

Fig.4.5 Top view of Pneumatic Drilling Machine Structure

Fig.4.6 Side view of Pneumatic Drilling Machine Structure

The compressed air from the air compressor is used as the basic power source for this operation. One single acting and one double acting cylinder are used in this machine. Air from the compressor enters the flow control valve and then comes in contact with the direction control valve. Air enters the cask through one line, while the second line feeds air to the solenoid valve. When air enters cylinder 1, the pressure difference does work on the cylinder, which clamps the workpiece; when air enters the other cylinder, the pressure difference drives the drilling operation as the drilling head comes down and drills the workpiece. After this operation the cylinder releases the head with the help of the arm, and the drilling head returns to its original position.

4.7 Factors Determining the Choice Of Materials

The various factors which determine the choice of material are discussed below.

1. Properties:

The material selected must possess the necessary properties for the proposed application. The various requirements to be satisfied can include weight, surface finish, rigidity, ability to withstand chemical attack, service life, reliability, and maintainability. The following three types of material properties affect selection:

a. Physical

b. Mechanical

c. Manufacturing
The physical properties concerned are melting point, thermal conductivity, specific heat, coefficient of thermal expansion, specific gravity, electrical conductivity, magnetic properties, etc. The mechanical properties are strength under tensile, shear, bending, torsional and buckling loads, fatigue resistance, impact resistance, elastic limit, and endurance limit. The properties concerned from the manufacturing point of view are:

Castability,
Weldability,
Forgeability,
Surface properties.

2. Manufacturing Case:

Sometimes the demand for the lowest possible manufacturing cost, or for surface qualities obtainable only by applying suitable coating substances, may require the use of special materials.

3. Quality Required:

The quality required from the market point of view should be accurate and good enough to sell. Given the advanced technologies in the fields of drilling, forging, and casting, the quality of the raw material as well as of the finished product should be such that it can withstand tough competition in the market.

4.Availability of Material:

Some materials may be scarce or in short supply. It then becomes mandatory for the designer to use some other material, which may not be a perfect match for the intended design. The delivery of materials and the delivery date of the product should also be set so that the deal can be completed on time without obstacles.

5. Space Consideration:

Sometimes high strength materials have to be selected because the forces involved are high and there are space limitations.

6. Cost:

As in any other problem, the cost of material plays an important part in selection and should not be ignored. Sometimes factors like scrap utilization, appearance, and non-maintenance of the designed part are involved in the selection of proper materials. The cost should be optimum, so that it is helpful to the customer as well as the manufacturer.

Table 4.1 List of Materials

Sr. No. Description Qty Material

1 Double acting pneumatic cylinder 1 Aluminum

2 Solenoid valve 2 Aluminum

3 Flow control valve 1 Aluminum

4 Drill head 1 C.I.

5 Control unit 1 Electronic

6 Pneumatic driller 1 M.S.

7 PU Tubes 5 meter Polyurethane

8 Hose Collar 8 Brass

9 Reducer 8 Brass

10 Frame stand 1 M.S.

11 Fixed Plate 1 M.S.

12 Moving Plate 1 M.S.

13 Column Support 1 M.S.

Chapter 5 Detailed description of all components

5.1 Cylinder

It is used to generate linear motion in the whole equipment: linear motion for holding the workpiece and linear motion for feeding the drill into the workpiece. The detailed drawing follows.

Fig. 5.1 Cylinder

5.2 L- Frame

It is used to clamp the workpiece as well as to hold the cylinders and the entire drilling mechanism. The detailed drawing is shown in the figure.

Fig. 5.2 L-Frame

5.3 Connector

It is used to connect the pneumatic cylinder with the drill mechanism; hence it compels the drill to follow the rotating mechanism. Following is the drawing of the connector:

Fig. 5.3 Connector

5.4 Piston Rod

It transmits the force generated inside the cylinder as linear motion. Following is the drawing of the piston rod.

Fig. 5.4 Piston Rod

5.5 Drill Bit

It is most important part of drilling machine as it is used to drill the workpiece. The drawing of the drill bit is as shown below:

Fig. 5.5 Drill Bit

Chapter 6 Project Management

6.1 Project Planning and Scheduling

The design addresses the drawbacks and limitations of the electric drilling machine, chiefly its heavy weight. That weight and the associated human effort led us to develop this project, and the principles of pneumatics helped us create something innovative. We developed the conceptual design in the 7th semester and the working prototype in the 8th. At the beginning of the 8th semester we started collecting the components of the drill; finding components with the proper specifications took approximately three weeks. After that we planned the fabrication of the drilling machine. Since proper fabrication required industrial facilities, we sought an industry partner to support manufacturing the model, and then started fabricating the drill.

6.2 Project development Approach

The project development approach grew out of the limitations of the electric drill. Given those drawbacks, we first investigated the feasibility of a hydraulic drill, but it carried a tremendous amount of weight. Hence we broadened our planning to develop a drill based on pneumatics, which is light in weight. This led us to the present project approach.

6.3 Project Scheduling and planning

1 Design Approach

2 Enlargement Of Design

3 Feasibility Of Design

4 Analysis Of Design

5 Optimization Of Design

6 Selection Of Materials

7 Optimization Of Materials

8 Searching Industry For Fabrication

9 Implementation Of Manufacturing the Project

10 Final Preparation Of Report

11 Final Shape to Model

6.4 Risk Management

Risk is a very significant parameter to be considered in any project. A certain amount of risk is involved, as the project contains various factors such as electricity, human contact, and other unaccounted economic risks. The main aim of risk management is to reduce risk to the extent that the project remains profitable.

6.5 Risk Identification Analysis And Planning

Risk identification is the identification of all the risks involved before, during, and after making the project. For this project, the main risk involved is leakage of air, which we discovered during fabrication.

To reduce the risk, our team carried out a risk analysis based on calculations done both by hand and in software. The reason behind the problem was found out and successfully solved.

To reduce the risk, proper planning was carried out in order to check the project for quality and safety purposes.

Chapter 7 Cost Analysis

Table 7.1 Cost of Components

Sr. No. Description Qty Material Cost ( Rs.)

1 Double acting pneumatic cylinder 2 Aluminum 3040

2 Solenoid valve 1 Aluminum 600

3 Flow control valve 4 Aluminum 1000

4 Drill head 1 C.I. 800

5 Control unit 1 Electronic 200

6 Pneumatic driller 1 M.S. 50

7 Hose Pipes 2 Polureethene 250

8 Hose Collar 2 Brass 250

9 Reducer 4 Brass 100

10 Frame stand 1 M.S. 600

11 Fixed Plate 1 M.S. 110

12 Moving Plate 1 M.S. 250

13 Column Support 1 M.S. 250

Total Cost 7500
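The stated total can be verified by summing the rows of the cost table; a quick bill-of-materials check (figures copied from the table above, in Rs.):

```python
# Component costs from the cost table above (in Rs.).
costs = {
    "Double acting pneumatic cylinder": 3040,
    "Solenoid valve": 600,
    "Flow control valve": 1000,
    "Drill head": 800,
    "Control unit": 200,
    "Pneumatic driller": 50,
    "Hose pipes": 250,
    "Hose collar": 250,
    "Reducer": 100,
    "Frame stand": 600,
    "Fixed plate": 110,
    "Moving plate": 250,
    "Column support": 250,
}

total = sum(costs.values())
print(total)  # -> 7500, matching the stated total cost
```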

Chapter 8 Feasibility Analysis

Feasibility analysis considers whether the whole project is practically feasible in industry and beneficial in both the short and the long term. Our project started in the 7th semester with only a conceptual design; after studying all aspects of the project we applied the principles of pneumatics and, with redesign and proper analysis, produced a feasible design that will be helpful for industry as well as for further modifications in future projects. Given below are pictures of the real model, which is practically feasible; hence we include them under the feasibility analysis.

Fig. 8.1 Top View

Fig 8.2 Isometric View

Fig 8.3 Side View

Fig. 8.4 Control Mechanism

Fig. 8.5 Switch Mechanism

Fig. 8.6 Front View

Chapter 9 Limitations

As every project has its limitations, ours has two: leakage of air, and a drill depth limited to 10 mm because of the design considerations of the cylinder and drill head.

Chapter 10 Conclusion

The project we carried out makes an impressive contribution to small-scale industries and automobile maintenance shops. It is very useful for workers, allowing a number of operations to be carried out on a single machine. This project has also reduced the costs involved for the concern.

Bibliography & References


[1] A. Karthik, R. Krishnaraj, Nunnakarthik, R. Kumaresan, S. Karthik, R. Murali, “Single Axis Semi Automatic Drilling Machine with PLC Control”, DOI: 10.15680/IJIRSET.2015.0403009.

[2] Manish Kale, Prof. D. A. Mahajan, Prof. (Dr.) S. Y. Gajjal, “A Review Paper on Development of SPM for Drilling and Riveting Operation”, International Journal of Emerging Technology and Advanced Engineering, Volume 5, Issue 4, April 2015.

[3] Mohammad Javad Rahimdel, Seyed Hadi Hosienie, “The Reliability and Maintainability Analysis of Pneumatic System of Rotary Drilling Machines”, Springer, 07 November 2013.

[4] Prof. P. R. Sawant, Mr. R. A. Barawade, “Design and Development of SPM - A Case Study in Multi Drilling and Tapping Machine”, International Journal of Advanced Engineering Research and Studies, Vol. I, Issue II, January-March 2012, pp. 55-57.

[5] A. Sivasubramaniam, “Design of Pneumatic Operated Drill Jig for Cylindrical Component”, IJSR - International Journal of Scientific Research, Volume 3, Issue 3, March 2014.

[6] Ogundele, O. J., Osiyoku, D. A., Braimoh, J., and Yusuf, I., “Maintenance of an Air Compressor Used in Quarries”, Scholars Journal of Engineering and Technology (SJET), 2014; 2(4C):621-627.


Business accelerators and incubators

Executive Summary

As an entrepreneur candidate and an employee of a recently established startup, I have always been interested in how a startup survives its initial growth, perhaps the riskiest phase of a startup’s life cycle. I face business issues every day. Unfortunately, we are not working with an accelerator. Since we are a small, in fact a micro, company, I work as the accountant, the HR manager, and the sales manager, and we feel the lack of mentoring and financial support every day. Even though there are many grants and scholarships for startups, especially those that promote innovative products or services, the selection process is, in my experience, more difficult than the one used by accelerators.

Over the past decades a wide variety of incubation mechanisms have been introduced by policy makers, private investors, corporates, universities, research institutes, etc. to support and accelerate the creation of successful entrepreneurial companies (Pauwels et al., 2014). The subject I want to discuss is somewhat different from these incubation mechanisms. The most common names for this mechanism are startup accelerators or seed accelerators. A relatively new incubation model, seed accelerators emerged in the mid-2000s as a response to the shortcomings of previous-generation incubation models, which were primarily focused on providing office space and in-house business support services (Bruneel et al., 2012). I wanted to see and learn what the key performance indicators for these accelerators are. The accelerator phenomenon has been cited across the globe as a key contributor to the rate of business startup success (Dempwolf et al., 2016).

While the number of accelerators has been increasing rapidly, the roles and effectiveness of these programs are not well understood. Nonetheless, local governments and founders of such programs often cite as the motivation for their establishment and funding the desire to transform local economies through the creation of a startup technology cluster in the region (Fehder & Hochberg, 2014). In this study, I will provide a review of the research literature on the definition and characteristics of startup accelerators, how they differ from other incubation models, and their benefits for the overall startup ecosystem. While accelerators appear to be proliferating quickly, little is known about the value of these programs; how to define accelerator programs; the differences between accelerators, incubators, angel investors, and co-working environments; and the importance of the various aspects of these programs to the ultimate success of their graduates, the local entrepreneurship ecosystems, and the broader economy (Cohen & Hochberg, 2014).

In 2014 I had the chance to work as an intern in one of these startup accelerators, Eleven, which is based in Sofia, Bulgaria. At the end of my dissertation, I will try to share some of the knowledge I gained from my internship experience, in addition to the interview I conducted with Belizar …, business developer at Eleven.

Evolution of Incubators

The words “accelerator” and “incubator” are sometimes used interchangeably, creating confusion about the differences between the two. Both concepts were created in the US. An incubator can be thought of as a first office for startups. Incubators help entrepreneurs survive the most fragile phases of a startup’s life cycle: the start and early growth. They usually accept teams that have just started converting their ideas into a business model, and help these teams achieve their primary goals by providing co-working space and mentorship. Though it is not standard practice, in some rare cases they also provide seed investment in exchange for equity.

In its generic sense, the term incubator is used broadly for collaborative programmes that help people solve the problems associated with launching a startup. It covers a variety of organizations and initiatives that strive to help entrepreneurs develop business ideas from the start, through commercialization, to the eventual launch and independent operation of new business ventures. According to a paper by Almubartaki, Al-Karaghouli & Busler (2010), business incubation is a term describing a business development process used to grow successful, sustainable entrepreneurial ventures that will contribute to a healthy economy. The paper notes that a successful incubation process provides a supportive environment in which new ventures can develop and fulfil their growth potential, and gives them access to a wide range of business development resources and tailored services. Business incubators play a significant role in seeding and developing new ventures and in technology transfer across most areas and sectors of the economy (Almubartaki, Al-Karaghouli & Busler, 2010).

Many agree that the first business incubator was established by Joseph Mancuso in Batavia, New York, in 1959. Since the establishment of the first incubators the incubation model has evolved, and by 2006 there were approximately seven thousand incubators worldwide (Lewis et al., 2001). Incubators typically provide their companies with programs, services and space for varying lengths of time, based on company needs and the incubator’s graduation policies (Carvalho, 2016). The main purpose of a business incubator is to create a favourable business environment for startup firms, compensating for the financial, knowledge and networking resources they generally lack (European Commission, 2002). The startup firms in an incubator are generally provided with office space, shared equipment, administrative services and other business-related services (Bøllingtoft, 2012). With changing business needs, the organizational structure, operational sectors and value-added elements of business incubators have changed significantly.

Business incubators have proven to be an economic development tool for the communities they serve. According to the European Commission Enterprise Directorate-General’s Final Report on Benchmarking of Business Incubators from February 2002, business incubators have two main functions:

1. The provision of physical space is central to the incubator model. Standard good practices now exist with regard to the most appropriate configuration of incubator space.

2. The value added of incubator operations lies increasingly in the type and quality of business support services provided to clients and developing this aspect of European incubator operations should be a key priority in the future.

Birth of Accelerators

With the increasing tendency towards technology and the growing support for SMEs (small and medium-sized enterprises), a new window of opportunity opened for investors. With an exponentially increasing number of new players joining the startup ecosystem, venture capitalists needed to find a way to support, fund and invest in those companies.

There is no denying that small businesses play a massive role in any country’s economy. They have a very large impact on national GDP and they create countless jobs; they are the backbone of a country’s economy. With the technology available at our hands, hundreds of new companies are set up every day. People are eager to create and to provide for others, and innovative ideas are being turned into businesses every day. New sectors, new products and new services emerge, making people’s lives easier and contributing to a nation’s economy. However, there are also ideas that cannot be turned into a business, and businesses that have to shut down because of insufficient funds. This is why local enterprise funds, grants and non-profit support groups for SMEs have started to grow recently. Whether they are non-profit or not, we cannot deny that this is an excellent way to increase the wellbeing of society overall. One such phenomenon, called seed accelerators or startup accelerators, is aimed at helping startups at the very early stage of their business.

According to the research Accelerating Startups: The Seed Accelerator Phenomenon by Susan G. Cohen and Yael V. Hochberg, published in March 2014, the first accelerator, Y Combinator, was founded by Paul Graham in 2005 in Cambridge, Massachusetts, and soon moved to and established itself in Silicon Valley. The research states that in 2007, David Cohen and Brad Feld, two startup investors, set up Techstars in Boulder, Colorado, hoping to transform its startup ecosystem through the accelerator model. Since 2005 YC has funded 1,430 companies and almost 3,500 founders, and the total market cap of all YC companies is over $85B (Mañalac, 2017). Y Combinator was the first accelerator to provide a small amount of seed investment money in exchange for a minor equity stake in startups participating in a three-month program with networking and advice from experienced entrepreneurs (Kohler, 2016).

Accelerator             TechStars                            Y Combinator
Location                Boulder, Boston, New York, Seattle   Silicon Valley
Launched                2006                                 2005
Length of Program       3 months                             3 months
Batch Size              9-12 teams                           65 teams
Seed Funding per Team   $6k-$18k                             $11k-$20k
Equity Stake Required   6%                                   2-10%
Acceptance Rate         1%                                   3%

Table 1. Techstars and Y Combinator. Source: Accelerating Success: A Study of Seed Accelerators and their Defining Characteristics by Barrehag, Fornell, Larsson, Mardstrom, Westergard & Wrackefeldt, published in 2011.

After the birth of the startup accelerator phenomenon, the number of accelerators rapidly increased. Today, estimates of the number of accelerators range from 300+ to over 2,000, spanning six continents (Cohen & Hochberg, 2014). Even though accelerators share some similarities with their ancestors, incubators, the lack of mentoring and financial support in the incubation system makes startups lose money and time, which they could instead invest in building and improving their business rather than in trying to raise funds and work out how to take certain steps.

Launched in 2007, Seedcamp is considered by many to be the first “Y-Combinator Style” European accelerator (Brunet, Grof, & Izquierdo, The European Accele