Abstract
This study examines the relationship between neighborhood attributes and residents' satisfaction with them in order to evaluate overall neighborhood satisfaction. The concept of the neighborhood has been severely blurred, if not lost, as a result of the development practices of the last several decades, so the research must first settle on how to define a neighborhood. It then turns to the concept of satisfaction and to the meaning of the term at the neighborhood scale. Since neighborhood satisfaction refers to residents' overall evaluation of their neighborhood, and since the dimensions of satisfaction comprise the various aspects, characteristics, and features of the residential environment, the essay concludes by introducing the several factors that influence neighborhood satisfaction, organized into categories.
Keywords: neighborhood, satisfaction, neighborhood satisfaction, dimensions of neighborhood satisfaction
Introduction
Neighborhoods are the localities in which people live and an appropriate scale for analyzing local ways of living. They can have an enormous influence on our health, wellbeing, and quality of life (Hancock 1997; Barton 2000; Srinivasan, O'Fallon, and Dearry 2003; Barton, Grant and Guise 2003).
Urban neighborhoods were once thriving communities with a variety of residents. Although racial segregation was prevalent in the majority of neighborhoods, many communities offered economic diversity (Bright, 2000). In the industrial era they could be characterized as early settlements of quaint villages or, in some instances, attractive old suburbs of the cities. As cities grew and annexed these communities, they continued to thrive as homogeneous parts of the city, resulting in a habitat of diverse choices and opportunities. However, as the economy changed, they experienced decline and reduced attention. The phenomenon called suburbanization, and later "edge cities", made center cities less attractive, at least for living in the urban neighborhoods. Just as there were policies that created this situation, there were also efforts to sustain interest in neighborhoods. Despite revitalization efforts, however, the neighborhoods continue to be in distress. Their continued decline points to deficiencies in the approaches and programs (Vyankatesh, 2004: 22-23).
A good neighborhood is described as a healthy, quiet, widely accessible and safe community for its residents. Neighborhood satisfaction refers to residents' overall evaluation of their neighborhood, and researchers from many disciplines have examined it. A neighborhood is thus more than just a physical unit: one chooses to live in a housing unit after careful consideration of the many factors that comprise the surrounding environment. The desirability of a neighborhood is judged on factors such as location relative to jobs, shopping, and recreation; accessibility; availability of transportation; and "quality of life".
We aim to discover the factors that influence residents’ satisfaction with their neighborhoods. The basic question is as follows:
Which neighborhood elements influence satisfaction, and how do they do so in general?
Literature Review
Literature on Neighborhoods
Neighborhood Settings
Ebenezer Howard (1898) based his design of the Garden City on neighborhood units, relatively self-sufficient units that merged into the larger town. While Howard's idea focused on the suburbs, Clarence Perry (1929) attempted it in the city. His neighborhood unit was a self-contained residential area bounded by major streets, with shopping districts on the periphery and a community center and an elementary school at the center of the unit. Clarence Stein (1966) altered Perry's ideal concept in the design of Radburn. It kept an elementary school at the center, with park spaces flowing through the neighborhood, but it was larger than Perry's concept and introduced the residential cul-de-sac street design to eliminate through traffic.
After World War II, massive suburbs developed and the concept of the neighborhood as a basic unit of land development changed. Since 2000, New Urbanists have called for traditional neighborhood development (TND) and transit-oriented development (TOD) models. They propose a neighborhood unit with a center and a balanced mix of activities, and they give priority to the creation of public space.
Defining Neighborhood
The literature on neighborhoods defines neighborhood in many ways. While there is little broad agreement on the concept, few geographers would contradict the idea that "neighborhood is a function of the inter-relationships between people and the physical and social environments" (Knox & Pinch, 2000, p. 8). Brower (1996) explains that its form derives from a particular pattern of activities, the presence of a common visual motif, an area with continuous boundaries, or a network of often-traveled streets. Soja (1980, p. 211) coined the term sociospatial dialectic for the phenomenon whereby "people create and modify urban spaces while at the same time being conditioned in various ways by the spaces in which they live and work." Research seems to use multiple definitions of a neighborhood simultaneously, reflecting the fact that neighborhood is not a static concept but a dynamic one (Talen & Shah, 2007).
Park states that “Proximity and neighborly contact are the basis for the simplest and most elementary form of association which we have in the organization of city life. Local interests and associations breed local sentiment, and, under a system which makes residence the basis for participation in the government, the neighborhood becomes the basis of political control … it is the smallest local unit … The neighborhood exists without formal organization” (Park, 1925, p. 7).
Keller emphasizes boundaries, social character, unity or belonging, and local facility use. He writes: "The term neighbourhood … refers to distinctive areas into which larger spatial units may be subdivided such as gold coast and slums … middle class and working class areas. The distinctiveness of these areas stems from different sources whose independent contributions are difficult to assess: geographical boundaries, ethnic or cultural characteristics of the inhabitants, psychological unity among people who feel that they belong together, or concentrated use of an area's facilities for shopping, leisure and learning… Neighborhoods containing all four elements are very rare in modern cities … geographical and personal boundaries do not always coincide" (Keller, 1968, p. 87).
Wilkenson's definition of neighborhood rests on its place-oriented process, partial social relations, and shared-interest characteristics: "Community is not a place, but it is a place-orientated process. It is not the sum of social relationships in a population but it contributes to the wholeness of local social life. A community is a process of interrelated actions through which residents express their shared interest in the local society" (Wilkenson, 1989, p. 339). Kitagawa and Taeuber, by contrast, emphasize area history, name, local awareness, local organizations, and local business: "When community area boundaries were delimited… the objective was to define a set of sub-areas of the city each of which could be regarded as having a history of its own as a community, a name, an awareness on the part of its inhabitants of community interests, and a set of local businesses and organizations orientated to the local community" (Kitagawa and Taeuber, 1963, p. xiii).
Glass holds that physical and social characteristics together constitute a territorial group, which is defined as the neighborhood: "A neighbourhood is a distinct territorial group, distinct by virtue of the specific physical characteristics of the area and the specific social characteristics of the inhabitants" (Glass, 1948, p. 18).
Research commissions, as well as individual authors, have offered their own definitions of the neighborhood. The US National Research Commission on Neighborhoods and the US National Research Council give the following:
“A community consists of a population carrying on a collective life through a set of institutional arrangements. Common interests and norms of conduct are implied in this definition” (US National Research Commission on Neighborhoods, 1975, p. 2).
“In last analysis each neighborhood is what the inhabitants think it is. The only genuinely accurate delimitation of neighborhood is done by people who live there, work there, retire there, and take pride in themselves as well as their community” (US National Research Council, 1975, p. 2).
Forrest and Kearns (2001, p. 2126) examine the concept of neighborhood in an increasingly globalizing society and describe the impact of the information/technological age on it: "new virtuality in social networks and a greater fluidity and superficiality in social contact are further eroding the residual bonds of spatial proximity and kinship."
Different definitions serve different interests, so that the neighborhood may be seen as a source of place-identity, an element of urban form, or a unit of decision making. This codependence between the spatial and social aspects of neighborhood is arguably one of the main reasons why the concept is so difficult to define.
Categorizing Neighborhood
Blowers conceptualizes the neighborhood not as a static spatial entity but as existing along a continuum that yields five neighborhood types (Figure 1). Proceeding left to right along the continuum, additional characteristics or dimensions are cumulatively added, yielding more complex neighborhoods:
Figure 1 – The Neighborhood Continuum (Blowers 1973)
1. Arbitrary neighborhood: Blowers describes these neighborhoods as having "no integrating feature other than the space they occupy." These districts have few homogeneous qualities and exhibit low social interaction (Blowers, 1973, p. 55).
2. Physical neighborhood: Unlike the arbitrary neighborhood's ill-defined boundaries, the boundaries of physical neighborhoods are delineated by natural or built barriers such as major roads, railways, waterways, or large tracts of non-residential land use (e.g., industrial parks or airports). The inhabitants residing within these boundaries may share few characteristics in common; Blowers cautions that occupying the same physical area does not automatically imply a high degree of social interaction (Butler, 2008: 8).
3. Homogeneous neighborhood: The most familiar type in Blowers' typology; it has distinct spatial boundaries, and its residents share common demographic, social, or class characteristics.
4. Functional neighborhood: Blowers describes these neighborhoods thus: "functional areas are those within which activities such as shopping, education, worship, leisure, and recreation take place." Like any functional region in geography, they are organized around a central node, with the surrounding area linked to it through activities, service interchanges, and associations (Blowers, 1973, p. 59).
5. Community neighborhood: Blowers sees the community neighborhood as a "close-knit, socially homogeneous, territorially defined group engaging in primary contacts" (Blowers, 1973, p. 60). Chaskin similarly defines the neighborhood as "clearly a spatial construction denoting a geographical unit in which residents share proximity and the circumstances that come with it… communities are units in which some set of connections is concentrated, either social connections (as in kin, friend or acquaintance networks), functional connections (as in the production, consumption, and transfer of goods and services), cultural connections (as in religion, tradition, or ethnic identity), or circumstantial connections (as in economic status or lifestyle)" (Chaskin, 1997, p. 522). Blowers contends that the community neighborhood can be seen as a culmination of the preceding neighborhood types on the continuum, stating that "the distinctiveness of the geographical environment, the socio-economic homogeneity of the population, and the functional interaction that takes place will contribute to the cohesiveness of the community neighborhood" (Blowers, 1973, p. 61).
Some studies offer other classifications of neighborhoods. For instance, Ladd (1970), Lansing and Marans (1969), Lansing et al. (1970), Marans (1976), and Zehner (1971) introduce micro- and macro-neighborhoods based on walkability. They agree that a neighborhood should span a walkable distance, although the walkable distance considered has varied from a quarter-mile to one mile from center to edge (Calthorpe, 1993; Choi et al., 1994; Colabianchi et al., 2007; Congress for the New Urbanism, 2000; Hoehner et al., 2005; Hur & Chin, 1996; Jago, Baranowski, Zakeri, & Harris, 2005; Lund, 2003; Perry, 1939; Pikora et al., 2002; Stein, 1966; Talen & Shah, 2007; Western Australian Planning Commission, 2000). The micro-neighborhood is the area a resident can see from the front door, that is, the five or six homes nearest the house. Similarly, Appleyard (1981) used the term home territory. He examined residents' conceptions of personal territory on three streets with different levels of traffic hazard; residents drew their territorial boundaries at a maximum of a street block (between intersections, with approximately 6-10 buildings on each side) and at a minimum of their own apartment building. Research showed that the micro-neighborhood concerns social relationships among neighbors more than the physical environment.
In a slight adaptation of Suttles’ (1972) schema, we might say that the neighbourhood exists at three different scales (Table 1):
Table 1. Scales of Neighborhood
Scale | Predominant function | Mechanism(s)
Home area | Psycho-social benefits (for example, identity; belonging) | Familiarity; community
Locality | Residential activities; social status and position | Planning; service provision; housing market
Urban district or region | Landscape of social and economic opportunities | Employment connections; leisure interests; social networks
The smallest unit of neighbourhood, here referred to as the 'home area', is typically defined as an area of 5-10 minutes' walk from one's home. Here, we would expect the psycho-social purposes of neighbourhood to be strongest. As shown elsewhere (Kearns et al., 2000), the neighbourhood, in terms of the quality of environment and perceptions of co-residents, is an important element in the derivation of psycho-social benefits from the home. In terms of Brower's (1996) outline of the 'good neighbourhood', the home area can serve several functions, most notably relaxation and re-creation of self; making connections with others; fostering attachment and belonging; and demonstrating or reflecting one's own values.
Some neighbourhoods and localities (in addition to individuals and groups) can be seen to be subject to discrimination and social exclusion as places and communities (Madanipour et al., 1998; Turok et al., 1999).
Once the urban region (the third level of neighbourhood in Table 1) is viewed as a landscape of social and economic opportunities with which some people are better engaged than others (for example, by reasons of employment, leisure activities or family connections), then the individual's expectations of the home area can be better understood (Kearns & Parkinson, 2001: 2104-2105).
Not only have researchers described several categories of neighborhoods; different stratifications of neighborhood consumers have also been developed. Four distinct types of user potentially reap benefits from the consumption of neighbourhood: households, businesses, property owners and local government. Households consume neighbourhood through the act of occupying a residential unit and using the surrounding private and public spaces, thereby gaining some degree of satisfaction or quality of residential life. Businesses consume neighbourhood through the act of occupying a non-residential structure (store, office, factory), thereby gaining a certain flow of net revenues or profits associated with that venue. Property owners consume neighbourhood by extracting rents and/or capital gains from the land and buildings owned in that location. Local governments consume neighbourhood by extracting tax revenues, typically from owners based on the assessed values of residential and non-residential properties (Galster, 2001: 2113).
Literature on Satisfaction
Mesch and Manor (1998) define satisfaction as the evaluation of features of the physical and social environment.
Canter and Rees have argued that people interact with the environment at different levels, from the bedroom to the neighborhood to the entire city. In their model of housing satisfaction, Canter and Rees (1982) referred to these levels of environment as levels of environmental interaction and defined them as scales of the environment that have a hierarchical order. They specified different levels at which people may experience satisfaction, such as the house and the neighborhood, and argued that the experience of satisfaction is similar and yet distinct at different levels of the environment. Similarly, Oseland (1990) and Gifford (1997, p. 200) stressed that other responses, such as the experience of space and privacy, also vary across different rooms in a home. Oseland's study supported the hypothesis that users' conceptualization of space depends on the location of the space. Some models of residential satisfaction (Weidemann & Anderson, 1985; Francescato, Wiedemann, & Anderson, 1989) have also suggested that it is important to consider different levels of environment in the study of satisfaction.
Some studies have accordingly examined how residential satisfaction varies at different levels of the environment (Paris & Kangari, 2005; McCrea, Stimson, & Western, 2005). Most have examined residential satisfaction at two or three levels, namely the housing unit and the neighborhood. For example, McCrea et al. (2005) examined residential satisfaction at three levels: the housing unit, the neighborhood, and the wider metropolitan region. Although the manner in which levels of environment are defined in these studies has depended on the context of the research and the interest of the researcher, the most common levels have been the housing unit and the neighborhood (Amole, 2009: 867).
Discussion
Neighborhood Satisfaction
What is a good neighborhood? A common answer describes it as a healthy, quiet, widely accessible and safe community for its residents, wherever they may live, in the suburbs or in the city. However, Brower (1996) holds that a good neighborhood is not an ideal neighborhood but a place with minimal problems and defects. In practice, a neighborhood is defined by the psychology of its four types of consumers, households, businesses, property owners and local government, as described above. The boundaries drawn are often based on these and other factors such as history, politics, geography and economics.
Whether there is relative homogeneity in socioeconomic character, whether historic conditions such as annexations or the political boundaries of wards and councils apply, or whether the place is divided by natural geographic features, rail lines, streets, and the like, all count in deciding the 'goodness' of a neighborhood (Vyankatesh, 2004: 20).
Neighborhood satisfaction refers to residents' overall evaluation of their neighborhood. Researchers from many disciplines have examined it (Amerigo, 2002; Amerigo & Aragones, 1997; Carvalho et al., 1997; Francescato, 2002; Hur & Morrow-Jones, 2008; Lipsetz, 2001; Marans, 1976; Marans & Rodgers, 1975; Mesch & Manor, 1998; Weidemann & Anderson, 1985), using a variety of terms for it such as residential satisfaction, community satisfaction, or satisfaction with residential communities (Amerigo & Aragones, 1997; Cook, 1988; Lee, 2002; Lee et al., 2008; Marans & Rodgers, 1975; Miller et al., 1980; Zehner, 1971) (Hur, 2008a: 8).
High neighborhood satisfaction increases households’ sense of community and vice versa (Brower, 2003; Mesch & Manor, 1998). Studies often mention that residential and neighborhood satisfaction also influences people’s intentions to move (Brower, 2003; Droettboom, McAllister, Kaiser, & Butler, 1971; Kasl & Harburg, 1972; Lee, Oropesa, & Kanan, 1994; Nathanson, Newman, Moen, & Hiltabiddle, 1976; Newman & Duncan, 1979; Quigley & Weinberg, 1977). High satisfaction among residents encourages them to stay on and induces others to move in, and low satisfaction with the neighborhood environment urges current residents to move out. Marans and Rodgers (1975) and Marans and Spreckelmeyer (1981) find that the relationship between neighborhood satisfaction, decisions to move, and quality of life is a sequential process, with neighborhood satisfaction predicting mobility and mobility affecting quality of life (Hur, 2008b: 620).
Francescato et al. (1989) noted that "the construct of residential satisfaction can be conceived as a complex, multidimensional, global appraisal combining cognitive, affective, and conative facets, thus fulfilling the criteria for defining it as an attitude" (p. 189).
Dimensions of Neighborhood Satisfaction
Dimensions of satisfaction are similar at the different levels of the environment. The term “dimensions of satisfaction” refers to the aspects, characteristics, and features of the residential environment (such as design aspects, social characteristics, facilities provided, or management issues) to which the users respond in relation to satisfaction (Francescato, 2002). This is important because it would inform researchers about the important dimensions and relevant research questions at different levels of the environment.
A neighborhood is thus more than just a physical unit. One chooses to live in a housing unit after careful consideration of the many factors that comprise the surrounding environment. The desirability of a neighborhood is judged on factors such as location relative to jobs, shopping, and recreation; accessibility; availability of transportation; and "quality of life", however ambiguous that term may be, depicted in countless expressions of public and private services: sewer, water, police, schools, neighbors, entertainment facilities, etc. (Ahlbrandt and Brophy, 1975). Availability of housing of a desirable kind is yet another factor influencing the choice of neighborhood, as are desired lot sizes and architectural styles. These livability features hold a key to the future viability of a neighborhood (Vyankatesh, 2004: 22).
Residents of neighborhoods where most homeowners are satisfied focus on different aspects of their neighborhoods than residents of neighborhoods where most are dissatisfied; we therefore hypothesize that the two neighborhood groups differ in the features that affect neighborhood satisfaction.
The findings of neighborhood satisfaction research are sometimes contradictory because of the compound nature of “satisfaction.”
Since neighborhood characteristics vary, there are spatial differences in satisfaction across areas. Length of residence, amount of social interaction, satisfaction with traffic, and satisfaction with appearance or aesthetics are also important variables in neighborhood satisfaction. The complex characteristics of neighborhood satisfaction addressed in our research are thus the following:
Where Residents Live
Research has found that different circumstances affect neighborhood satisfaction depending on where residents live (Cook, 1988; Hur & Morrow-Jones, 2008; Zehner, 1971). For example, Zehner (1971) examined residents' neighborhood satisfaction in new towns and less planned areas: new town residents were more likely to mention attributes of the larger area, the physical factors, while residents of less planned towns focused on micro-residential features, with emphasis on the social characteristics of the neighborhood (Hur, 2008a: 17).
Socio-Demographic Characteristics
A number of studies indicate the importance of sociodemographic characteristics for neighborhood satisfaction. They have found positive influences of longer tenure in the neighborhood (Bardo, 1984; Galster, 1987; Lipsetz, 2001; Potter & Cantarero, 2006; Speare, 1974) and of homeownership (Lipsetz, 2001). Young, educated, and wealthy urban residents were found to be more satisfied than others (Miller et al., 1980). St. John (1984a, 1984b, 1987) found no evidence of racial differences in neighborhood evaluation, but Morrow-Jones, Wenning, and Li (2005) found that satisfaction with a community's racial homogeneity is another predictor of residential satisfaction.
Social Factors in Neighborhood
Social and psychological ties to a place, such as having friends or family living nearby, are an important social factor in neighborhood satisfaction (Brower, 2003; Lipsetz, 2001; Speare, 1974). Brower (2003) finds that having friends and relatives nearby increases neighborhood satisfaction; Lipsetz (2000), on the other hand, finds that it has a largely negative effect on urbanites' satisfaction and no effect on suburbanites'.
The findings agree that residents were satisfied when they considered their neighbors friendly, trusting, and supportive. Reported satisfaction was higher when residents talked to their neighbors often and supported each other formally and informally, especially among residents who had lived in the neighborhood longer (Potter & Cantarero, 2006).
In contrast to these positive social-interaction factors, factors that decrease neighborhood satisfaction include the crime rate and social incivilities such as harassing neighbors, teenagers hanging out, noise, fighting, and arguing.
Physical Factors in Neighborhood
I. Physical environmental characteristics
Planners can shape a neighborhood's physical features directly, and policy can act on physical features effectively. Yet although planners stress the importance of physical characteristics, residents consider social factors more important in judging a neighborhood (Lansing & Marans, 1969).
Research often finds physical characteristics a stronger influence on neighborhood satisfaction than social or economic characteristics (Sirgy & Cornwell, 2002). Neotraditional and New Urbanist approaches focus on physical features as a medium to decrease dependence on the automobile, foster pedestrian activity, and provide opportunities for interaction among residents (Marans & Rodgers, 1975; Rapoport, 1987).
Research has considered several physical environmental features. Some relate directly to neighborhood satisfaction, and others connect to factors that may link to it. Hur (2008a) categorizes physical environmental characteristics into three types:
1. Physical disorder (incivilities):
Physical disorder promotes fear of crime, makes people want to leave the area, and diminishes residents' overall neighborhood satisfaction. Physical incivilities can be grouped into three kinds:
• Fixed-feature elements (such as a vacant house or dilapidated building): fixed-feature elements "change rarely and slowly" (Rapoport, 1982, p. 88). Individual housing and the building lot are fixed-feature elements of the neighborhood.
• Semi-fixed-feature elements (such as graffiti and broken features on public property): semi-fixed-feature elements "can, and do, change fairly quickly and easily" (p. 89) and, Rapoport says, "become particularly important in environmental meaning…where they tend to communicate more than fixed-feature elements" (p. 89).
• Non-fixed (movable) elements (such as litter and abandoned cars): Rapoport (1982) also suggested non-fixed-feature elements, which include people and their nonverbal behaviors (p. 96).
2. Defensible space features:
"Defensible Space" is a program that "restructures the physical layout of communities to allow residents to control the areas around their homes" (U.S. Department of Housing and Urban Development, 1996, p. 9). It fosters territoriality, natural surveillance, a safe image, and a protected milieu:
• Fostering territoriality: Territoriality involves territorial symbols such as yard barriers (G. Brown et al., 2004; Perkins et al., 1993), block watch signs, security alarm stickers, and evidence of dogs (Perkins et al., 1993). Although these may reduce crime and fear of crime, research has not examined their connection to residents' neighborhood satisfaction. Litter and graffiti, which are also incivilities, affect image and milieu.
• Natural surveillance: Natural surveillance involves windows facing the street and places to sit outside (front porches). These provide eyes on the street (B. Brown et al., 1998; MacDonald & Gifford, 1989; Perkins et al., 1992, 1993), give residents opportunities for informal contact with neighbors that helps form local ties (Bothwell, 1998; B. Brown et al., 1998; Plas & Lewis, 1996), and send non-verbal messages of monitoring (Easterling, 1991; Taylor & Brower, 1985). Research has reported that streets less visible from neighboring houses had more crime (G. Brown et al., 2004; Perkins et al., 1993), indicating the importance of surveillance in the neighborhood. Despite its significance, Bothwell et al. (1998) was the only study to examine natural surveillance as an influence on neighborhood satisfaction. It showed how public housing residents in Diggs Town became known to each other, restored a sense of belonging, and built strong neighborhood satisfaction via front porches.
• A safe image: The safe image conveys an impression of a safe and invulnerable neighborhood. If the image is negative, "the project will be stigmatized and its residents castigated and victimized" (Newman, 1972, p. 102).
• A protected milieu: A safe milieu is a neighborhood situated in the middle of a wider crime-free area, which is thus insulated from the outside world by a moat of safety (Burke, 2005, p.202).
3. Built or natural characteristics:
The third type of physical environmental feature is the degree to which a place looks built or natural. Studies have measured residential density, land use, and vegetation. Lansing et al. (1970) was the only study to look at density-related characteristics (e.g., frequency of hearing neighbors, privacy in the yard from neighbors) in relation to neighborhood satisfaction, but those elements were more social than physical and thus may capture physical density only indirectly. Lee et al. (2008) found that residents' neighborhood satisfaction was associated with natural landscape structure: tree patches in the neighborhood that were less fragmented, less isolated, and well connected positively influenced satisfaction. Some research has looked at associations among multiple attributes. Ellis et al. (2006) examined relationships between land use, vegetation, and neighborhood satisfaction: while the amount of nearby retail land use correlated negatively with neighborhood satisfaction, the amount of trees moderated that negative effect (Hur, 2008a: 19-22).
II. Perceived and evaluative physical environmental characteristics
One set of studies identifies physical appearance as the most important factor in neighborhood satisfaction and quality of life (Kaplan, 1985; Langdon, 1988, 1997; Sirgy & Cornwell, 2002). Nasar's (1988) survey of residents and visitors found that their visual preferences related to five likable features: naturalness, upkeep/civilities, openness, historic significance, and order. People liked the visual quality of areas that had those attributes and disliked the visual quality of areas that did not. Newly arrived residents name physical appearance as the most important factor in residential satisfaction, but long-time residents point to stress factors (e.g., tension with neighbors, the income level of the neighborhood, inability to communicate with others, racial discrimination, crime) as most important (Potter & Cantarero, 2006).
Emotional and temporal dimensions of the environmental experience
These are recognized as a component of the people-environment relationship and therefore of residential satisfaction. Residential satisfaction is indeed strongly associated with one's attachment to the living space.
Conclusion
Several studies have constructed comprehensive models of residential satisfaction. The complex attributes of neighborhoods can be categorized into seven types, each with several characteristics. These are the main features that should be studied, measured, and rated to estimate residents' satisfaction with their neighborhoods.
We must note that each group of satisfaction dimensions should be rated separately by each of the four types of neighborhood consumers mentioned above. The aggregated ranking will indicate the neighborhood's satisfaction status; a sketch of one possible aggregation follows Table 2.
As the result of this essay, we introduce a classification of the satisfaction dimensions. It can serve as a comprehensive basis for evaluating almost all of the features that influence residential satisfaction at the neighborhood scale.
The seven types of neighborhood attributes and satisfaction dimensions are presented in Table 2:
Table 2. Complex Attributes of Neighborhoods
(The original table's three columns, satisfaction dimension, assessment factors, and sub-factors, are rendered here as numbered dimensions with bulleted factors; sub-factors follow a colon.)
1. Spatial characteristics
• Proximity characteristics: access to major destinations of employment (both distance and transport infrastructure)
• Local facility use: local interests; open spaces; access to recreational opportunities; entertainment, shopping, etc.
• Mass-void
• Neighborhood boundaries
• Unity
• Pedestrian access to stores
• Place-oriented design process
• Legibility
2. Physical characteristics
• Structural characteristics of the residential and non-residential buildings: type; scale; materials; design; state of repair; density; landscaping, etc.
• Infrastructural characteristics: roads; sidewalks; streetscaping; utility services, etc.
• Traffic
• Aesthetics/appearance: naturalness; upkeep/civilities; openness; historic significance; order; color
• Density of housing
• Building type: apartment; villa, etc.
• Physical disorder (incivilities): fixed-feature elements (such as a vacant house or dilapidated building); semi-fixed-feature elements (such as graffiti and broken features on public property); non-fixed (movable) elements
• Defensible space features: fostering territoriality (such as block watch signs, security alarm stickers, and evidence of dogs); natural surveillance (such as windows facing the street and places to sit outside)
• Built or natural characteristics: residential density; land use; vegetation
3. Environmental characteristics
• Degree of land, topographical features, views, etc.
• Pollution: air; water; noise
• Cleanliness
• Climatic design: architecture; wind tunnels; sunny/too hot
4. Sentimental characteristics
• Place identification: historical significance of buildings or district, etc.
• Length of residence
• Proximity to problem areas
• Name/area pride
• Local awareness
• Living space: new towns; less planned areas
• Cognition: place identity; sense of place; sense of belonging to place
5. Social characteristics
• Local friend and kin networks
• Degree of interhousehold familiarity
• Type and quality of interpersonal associations
• Residents' perceived commonality: participation in locally based voluntary associations
• Strength of socialization and social control forces
• Social support
• Racial homogeneity
• Neighborhood cohesion
• Collective life: interaction with communities; interaction through favors; interaction through social activity; amount of social interaction
• Territorial group
• Common interests
• Participation: informal social participation and participation in formal neighborhood organizations
• Common conduct
• Family/friends nearby
• Friendly, trusting, and supportive neighbors
• Crime rate; teenagers hanging out; noise; fighting/arguing
6. Demographic-economic characteristics
• Age distribution
• Family composition
• Ethnic and religious types
• Tenure period/home ownership; ratio of owners to renters
• Wealthy/poor; income
• Gender; marital status
• Cultural characteristics
• Age: young; old; children under 18, etc.
• Education: educated; uneducated; education composition
• Occupation: local business workers/retired
7. Management-political characteristics
• The quality of safety forces, public schools, public administration, parks and recreation, etc.
• Residents' influence in local affairs through spatially rooted channels or elected representatives
• Local government service
• Local associations
• Political control
• Local organizations
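To make the rating scheme described above concrete, the following is a minimal sketch, not part of any study cited here, of how Table 2's seven dimensions might be scored by the four consumer types and aggregated. The dimension and consumer names follow the text; the rating scale, weights, and example scores are hypothetical assumptions.

```python
# Hypothetical aggregation of Table 2 ratings: each of the four consumer
# types rates each of the seven satisfaction dimensions (here on a 1-5
# scale), and a weighted mean yields an overall neighborhood score.

DIMENSIONS = [
    "spatial", "physical", "environmental", "sentimental",
    "social", "demographic-economic", "management-political",
]
CONSUMERS = ["households", "businesses", "property owners", "local government"]

def neighborhood_score(ratings, weights=None):
    """ratings: {consumer: {dimension: 1-5 score}}; weights default to equal."""
    weights = weights or {c: 1.0 for c in ratings}
    total_weight = sum(weights[c] for c in ratings)
    # Average each consumer's ratings across the dimensions, then
    # combine the consumer averages by their weights.
    per_consumer = {c: sum(s.values()) / len(s) for c, s in ratings.items()}
    return sum(weights[c] * per_consumer[c] for c in ratings) / total_weight

# Example with made-up ratings: every dimension rated 4 by households,
# 3 by businesses, 5 by property owners, 4 by local government.
example = {c: {d: r for d in DIMENSIONS}
           for c, r in zip(CONSUMERS, [4, 3, 5, 4])}
print(round(neighborhood_score(example), 2))  # -> 4.0 with equal weights
```

Equal weights are only a starting point; a study could instead estimate the weights empirically, for example by regressing overall reported satisfaction on the dimension ratings.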
References
Amole, Dolapo, 2009, Residential Satisfaction and Levels of Environment in Students’ Residences, Environment and Behavior, Volume 41, No. 6, P 867.
Barton, Hugh, 2000, Sustainable Communities: The Potential for Eco-Neighbourhoods, Earthscan Publications Ltd.
Blowers, A. (1973). The neighbourhood: exploration of a concept, Open Univ. Urban Dev. Unit 7, Pp 49-90.
Bright, Elise M., 2000, Reviving America’s Forgotten Neighborhoods: An Investigation of Inner City Revitalization Efforts. New York: Garland Publishing, Inc.
Brower, Sidney. (1996). Good neighborhoods: Study of in-town and suburban residential environments. Westport, CT: Praeger Publishers.
Butler, Kevin A., 2008, A Covariance Structural Analysis of a Conceptual Neighborhood Model, a dissertation for the degree of Doctor of Philosophy submitted to Kent State University, p. 8.
Canter, D, & Rees, K.A. (1982). Multivariate model of housing satisfaction. International Review of Applied Psychology, 32, Pp 185-208.
Chaskin, Robert J., 1997, Perspectives on Neighborhood and Community: A Review of the Literature, The Social Service Review, Vol. 71, No. 4, pp. 521-547.
Churchman, A. (1999, May). Disentangling the concept of density. Journal of Planning Literature, 13(4), Pp 389-411.
Ellis, C. D., Lee, S. W., & Kweon, B. S. (2006). Retail land use, neighborhood satisfaction and the urban forest: An investigation into the moderating and mediating effects of trees and shrubs. Landscape and Urban Planning, 74, Pp 70-78.
Fleury-Bahi, Ghozlane & Félonneau, Line, 2008, Processes of Place Identification and Residential Satisfaction, Environment and Behavior, Volume 40, No.5, pp 669-682.
Forrest, R., & Kearns, A. (2004). Who Cares About Neighbourhood? Paper presented at the Community, Neighbourhood, Responsibility conference. Retrieved from http://www.neighbourhoodcentre.org.uk.
Forrest, Ray & Kearns, Ade, 2001, Social Cohesion, Social Capital and the Neighbourhood, Urban Studies, Vol. 38, No. 12, pp.2125–2143.
Galster, George, 2001, On the Nature of Neighbourhood, Urban Studies, Vol. 38, No. 12, Pp 2113.
Gifford, R. (1997). Environmental psychology: Principles and practices. Boston: Allyn and Bacon, p. 200
Hur, Misun & Morrow-Jones, Hazel, 2008, Factors That Influence Residents’ Satisfaction with Neighborhoods, Environment and Behavior, Volume 40, No. 5, Pp 620.
Hur, Misun, 2008a, Neighborhood Satisfaction, Physical and Perceived Characteristics, a dissertation for the degree of Doctor of Philosophy submitted to Ohio State University, pp. 8, 17, 19-22.
Johnson, Philip, 2008, Comparative Analysis of Open-Air and Traditional Neighborhood Commercial Centers, a dissertation for the degree of Master of Community Planning submitted to the University of Cincinnati.
Kaplan, R. (1985). Nature at the doorstep: Residential satisfaction and the nearby environment. Journal of Architectural and Planning Research, 2, Pp 115-127.
Kearns, Ade & Parkinson, Michael, 2001, The Significance of Neighbourhood, Urban Studies, Vol. 38, No. 12, pp. 2103–2110.
Keller, Suzanne, 1968, The Urban Neighborhood: A Sociological Perspective, Random House, p. 87
Ladd, F. C. (1970). Black youths view their environment: Neighborhood maps. Environment and Behavior, 2, Pp 74-99.
Lansing, J. B., Marans, R. W., & Zehner, R. B. (1970). Planned residential environments. Ann Arbor, Michigan: Institute for Social Research, The University of Michigan.
Lee, B. A., Oropesa, R. S., & Kanan, J. W. (1994). Neighborhood context and residential mobility. Demography, 31, Pp 249-270.
Mesch, G. S., & Manor, O. (1998). Social ties, environmental perception, and local attachment. Environment and Behavior, 30, Pp 504-519.
Morrow-Jones, H.,Wenning, M. V., & Li,Y. (2005). Differences in neighborhood satisfaction between African American and White homeowners. Paper presented at the Association of Collegiate Schools of Planning (ACSP46), Kansas City, MO.
Nasar, J. L. (1988). Perception and evaluation of residential street scenes. In J. L. Nasar (Ed.), Environmental aesthetics: Theory, research, and applications (pp. 275- 289). New York: Cambridge University Press.
Newman, O. (1972). Defensible space: Crime prevention through urban design. New York: The Macmillan Company.
Oseland, N. A. (1990). An evaluation of space in new homes. Proceedings of the IAPS Conference Ankara, Turkey, Pp 322-331.
Park, Robert E. & Burgess, Ernest W., 1925, The City: Suggestions for Investigation of Human Behavior in the Urban Environment, Chicago: University of Chicago Press.
Potter, J., & Cantarero, R. (2006). How does increasing population and diversity affect resident satisfaction? A small community case study. Environment and Behavior, 38, Pp 605-625.
Rapoport, A. (1982). The meaning of the built environment: a nonverbal communication approach. Beverly Hills: Sage Publications.
Sizemore, Steve, 2004, Urban Eco-villages as an Alternative Model to Revitalizing Urban Neighborhoods: The Eco-village Approach of the Seminary Square/Price Hill Eco-village of Cincinnati, Ohio, a dissertation for the degree of Master of Community Planning submitted to the University of Cincinnati.
Soja, E. (1980). The socio-spatial dialectic. Annals of the Association of American Geographers, 70, Pp 207-225.
Talen, E., & Shah, S. (2007). Neighborhood evaluation using GIS: An exploratory study. Environment and Behavior, 39(5), Pp 583-615.
Vyankatesh, Terdalkar Sunil, 2004, Revitalizing Urban Neighborhoods: A Realistic Approach to Develop Strategies, a dissertation for Master of Community Planning submitted to the University of Cincinnati, pp. 20-23.
Wilkinson, Derek, 2007, The Multidimensional Nature of Social Cohesion: Psychological Sense of Community, Attraction, and Neighboring, Springer Science+Business Media, pp. 214–229.
Zehner, R. B. (1971, November). Neighborhood and community satisfaction in new towns and less planned suburbs. Journal of the American Institute of Planners (AIP Journal), Pp 379-385.
Vernacular Architecture
01.1 Background
How aware are you of the built environment that you live in? Have you ever come across a building that is rather ordinary but fascinating, with a story behind it? Have you ever wondered why people build the way they do, why they choose one material over others, or even why a building faces in a particular direction?
Fig 1- Palmyra House Nandgaon, India (Style-Contemporary; Principles-Vernacular)
In answering these questions we need to look at communities, their identities and their traditions over time, and this in essence is what is called "vernacular architecture".
The purest definition of vernacular architecture is simple: it is architecture without architects. It is the pure response to a particular person's or society's building needs, and it fulfils these needs because it is crafted by the individual and the society it serves. In addition, the building methods are tested through trial and error by the society in which they are built until, over time, they near perfection, tailored to the climatic, aesthetic, functional, and sociological needs of that society. Because the person constructing the structure tends to be the person who will use it, the architecture is perfectly tailored to that individual's particular wants and needs.
Much of the assimilation of the vernacular architecture that we see today in India comes from the trading countries. India has many different cultures and has seen rapid economic growth over the past few decades, growth that not only transforms people's lives but also changes the everyday environments in which they live. People in the nation face a dual challenge daily: modernization on the one hand and preservation of their heritage, including the built heritage, on the other. This gives us multiple perspectives on vernacular environments and the pure heritage of the country.
Fig 2-A modern adaptation of brick façade along with the contemporary design of the building. https://www.archdaily.com/530844/emerging-practices-in-india-anagram-architects
Gairole House, Gurgaon, Haryana, India
"Vernacular buildings" across the globe provide instructive examples of sustainable solutions to building problems. Yet these solutions are assumed to be inapplicable to modern buildings. Despite some views to the contrary, there continues to be a tendency to consider innovative building technology the hallmark of modern architecture, because tradition is commonly viewed as the antonym of modernity. The problem is addressed by practical exercises and fieldwork studies in the application of vernacular traditions to current problems.
The humanistic desire to be culturally connected to one's surroundings is reflected in a harmonious architecture, a typology which can be identified with a specific region. This sociologic facet of architecture is present in a material, a color scheme, an architectural genre, a spatial language or form that carries through the urban framework. The way human settlements are structured in modernity has been vastly unsystematic; current architecture exists on a singular basis, unfocused on the connectivity of a community as a whole.
Fig 3-Traditional jail screens, Rajasthan, India
Vernacular architecture adheres to basic green architectural principles of energy efficiency and utilizing materials and resources in close proximity to the site. These structures capitalize on the native knowledge of how buildings can be effectively designed as well as how to take advantage of local materials and resources. Even in an age where materials are available well beyond our region, it is essential to take into account the embodied energy lost in the transportation of these goods to the construction site.
Fig 4- Anagram Architects, Brick screen wall: SAHRDC building, Delhi, India
The effectiveness of climate responsive architecture is evident over the course of its life, in lessened costs of utilities and maintenance. A poorly designed structure which doesn’t consider environmental or vernacular factors can ultimately cost the occupant – in addition to the environment – more in resources than a properly designed building. For instance, a structure with large windows on the south façade in a hot, arid climate would lose most of its air conditioning efforts to the pervading sun, ultimately increasing the cost of energy. By applying vernacular strategies to modern design, a structure can ideally achieve net zero energy use, and be a wholly self-sufficient building.
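To put rough numbers on the example above, here is a minimal back-of-the-envelope sketch; all values are assumed for illustration only. Solar gain through glazing is commonly estimated as Q = A x SHGC x I (glass area times solar heat gain coefficient times irradiance), and an air conditioner removes that heat at an electricity cost of roughly Q divided by its coefficient of performance.

```python
# Illustrative (assumed) values for a large, unshaded south-facing window
# in a hot, arid climate.
area_m2 = 12.0           # glazed area
shgc = 0.6               # solar heat gain coefficient of clear double glazing
irradiance_w_m2 = 600.0  # average daytime irradiance on the glass
hours_per_day = 8.0      # hours of strong sun
cop = 3.0                # air-conditioner coefficient of performance

solar_gain_kwh = area_m2 * shgc * irradiance_w_m2 * hours_per_day / 1000.0
cooling_kwh = solar_gain_kwh / cop  # electricity needed to remove the gain
print(f"Solar gain: {solar_gain_kwh:.1f} kWh/day; "
      f"extra cooling electricity: {cooling_kwh:.1f} kWh/day")
# -> about 34.6 kWh/day of gain, 11.5 kWh/day of cooling electricity.
# Vernacular moves such as shading, deep reveals, or smaller openings
# shrink the area or irradiance term and cut this load directly.
```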
01.2 NEED FOR STUDY
Buildings use twice the energy of cars and trucks, consuming 30% of the world's total energy and 16% of its water; by 2050 these shares could go beyond 40%. They emit 3008 tons of carbon, carbon being the main cause of global warming.
In India, a quarter of the energy consumed goes into making and operating buildings, and almost half of the materials we extract from the ground go into the construction of buildings, roads, and other projects. Buildings are therefore a very large cause of the environmental problems we face today. It is thus important to demonstrate again that good, comfortable, sustainable buildings can play a major role in improving our environment, keep pace with modern designs, and even outperform them.
The form and structure of the built environment are strongly shaped by factors such as local architecture and climate. We therefore need to study built forms in relation to their environment.
India has a wide variety of climates and a constant need to develop architecture that supports the environment. We as architects need to study modern designs as well as the functioning of the built form in its local climatic and cultural context.
Vernacular architecture, the simplest form of addressing human needs, is seemingly forgotten in modern architecture. But the amalgamation of the two can certainly contribute to a more efficient built form.
However, due to recent rises in energy costs, the trend has sensibly swung the other way. Architects are embracing regionalism and cultural building traditions, given that these structures have proven to be energy efficient and altogether sustainable. In this time of rapid technological advancement and urbanization, there is still much to be learned from the traditional knowledge of vernacular construction. These low-tech methods of creating housing perfectly adapted to its local area are brilliant precisely because they rest on principles that prevailing architects too often ignore. Hence, this subject deserves study if future architects are to be sensitive to both the built form and the environment.
01.3 AIM
This study aims to explore the balance between contemporary architectural practice and vernacular architectural techniques. The work hinges on such ideas and practices as ecological design, modular and incremental design, standardization, and flexible and temporal concepts in the design of spaces. The blurred edges between the traditional and modern technical aspects of building design, as addressed by both vernacular builders and modern architects, are explored.
OBJECTIVE-
The above aim has been divided among the following objectives-
• Study of vernacular architecture in modern context.
• Study of parameters that make a building efficient.
• To explore new approaches towards traditional techniques.
• Study of the built environment following this concept.
• To explore approaches to achieving "form follows energy".
01.4 FUTURE SCOPE
As noted in the Background, climate-responsive design pays for itself over a building's life and can bring a structure toward net-zero energy use.
Hence, the need to study this approach is becoming ever more relevant in modern times.
01.5 HYPOTHESIS
Fusion of the vernacular and contemporary architecture will help in the design of buildings which are more sustainable and connect to the cultural values of people.
01.6 METHODOLOGY
01.7 QUESTIONNAIRES
• Is vernacular architecture actually sustainable in today’s context in terms of durability and performance?
• How has vernacular architecture influenced the urban architecture of India?
• Which is more loved by locals living in cities, as compared with locals living in rural areas: local architecture or modern architecture?
• Will passive design techniques drawn from vernacular architecture help reduce the environmental crisis caused by increasing pollution and other threats?
• Modern architecture has evolved from the use of concrete to steel, glass, and other modern materials. Why was the sustainability of local materials compromised along the way, leading to the common assumption that vernacular architecture is merely village architecture?
02.1 Introduction
The discussion and debate about the value of vernacular traditions in architecture and in the formation of settlements is, in today's world, no longer polarized.
India undoubtedly has a great architectural heritage, which conjures images of the Taj Mahal, Fatehpur Sikri, South Indian temples and the forts of Rajasthan. But what represents modern architecture in India?
India is a country of long history and deep-rooted traditions. Here history is not a fossilized past but a living tradition. The very existence of tradition is proof in itself of its shared acceptance over changed time and circumstance, and thus of its continuum.
This spirit of adaptation and assimilation continues to be an integral aspect of Indian architecture in the post-independence era as well. Post-Independence India voluntarily embraced modernism as a political statement, inviting the world-renowned modern architect Le Corbusier to design a capital city for a young and free nation with a democratic power structure.
Despite a strong continuum of classical architecture from Indian traditions, these new interventions gained currency and became preferred models for emulation by architects of the following generation. Not only Corbusier but also Louis Kahn, Frank Lloyd Wright and Buckminster Fuller had their stints in India, and Indian masters trained and apprenticed overseas under international masters, carrying the legacy forward.
Figure 1 Terracotta Façade –A traditional material used to create a modern design for a façade https://in.pinterest.com/pin/356910339198958537/
02.2 Vernacular architecture
02.2.1 Definition
Vernacular architecture is an architectural style that is designed based on local needs, availability of construction materials and reflecting local traditions. Originally, vernacular architecture did not use formally-schooled architects, but relied on the design skills and tradition of local builders.
Figure 2 A Traditional Kerala house https://in.pinterest.com/pin/538672805410302086/
From the late 19th century onward, many professional architects explored this architectural style and worked with its elements, among them Le Corbusier, Frank Gehry and Laurie Baker.
Vernacular architecture can also be defined as the "architecture of the people", with its ethnic, regional and local dialects. It is a conscious style of architecture developed by local builders through practical knowledge and experience gained over time. Hence, vernacular architecture is the architecture of the people, by the people, for the people.
02.2.2 Influences on the vernacular
Vernacular architecture is influenced by a great range of different aspects of human behavior and environment, leading to differing building forms for almost every different context; even neighboring villages may have subtly different approaches to the construction and use of their dwellings, even if they at first appear the same. Despite these variations, every building is subject to the same laws of physics, and hence will demonstrate significant similarities in structural forms.
Climate
One of the most significant influences on vernacular architecture is the macro climate of the area in which the building is constructed. Buildings in cold climates invariably have high thermal mass or significant amounts of insulation. They are usually sealed in order to prevent heat loss, and openings such as windows tend to be small or non-existent. Buildings in warm climates, by contrast, tend to be constructed of lighter materials and to allow significant cross-ventilation through openings in the fabric of the building.
Buildings for a continental climate must be able to cope with significant variations in temperature, and may even be altered by their occupants according to the seasons.
Buildings take different forms depending on precipitation levels in the region – leading to dwellings on stilts in many regions with frequent flooding or rainy monsoon seasons. Flat roofs are rare in areas with high levels of precipitation. Similarly, areas with high winds will lead to specialized buildings able to cope with them, and buildings will be oriented to present minimal area to the direction of prevailing winds.
Climatic influences on vernacular architecture are substantial and can be extremely complex. Mediterranean vernacular, and that of much of the Middle East, often includes a courtyard with a fountain or pond; air cooled by water mist and evaporation is drawn through the building by the natural ventilation set up by the building form. Similarly, Northern African vernacular often has very high thermal mass and small windows to keep the occupants cool, and in many cases also includes chimneys, not for fires but to draw air through the internal spaces. Such specializations are not designed, but learned by trial and error over generations of building construction, often existing long before the scientific theories which explain why they work.
Culture
The way of life of building occupants, and the way they use their shelters, is of great influence on building forms. The size of family units, who shares which spaces, how food is prepared and eaten, how people interact and many other cultural considerations will affect the layout and size of dwellings.
For example, in the city of Ahmedabad the dense fabric of the city is divided into pols: close-knit neighborhoods developed on the basis of community and cohesion. Traditionally the pols are characterized by intricately carved timber-framed buildings built around courtyards, with narrow winding streets that ensure a comfortable environment within Ahmedabad's hot arid climate. The design of these settlements also included stepped wells and ponds to create a cooler microclimate, making them a fine example of ecological sustainability shaped by cultural influences.
Figure 3 Mud house, Gujarat: traditional mirror work done on the elevation of the hut https://in.pinterest.com/pin/439875088574491684/
Culture also has a great influence on the appearance of vernacular buildings, as occupants often decorate buildings in accordance with local customs and beliefs.
For example, Warli art, which tells stories through simple forms such as circles, triangles and squares, serves both as decoration and as a cultural tradition.
02.2.3 The Indian vernacular architecture
India is a country of great cultural and geographical diversity. Encompassing distinct zones such as the great Thar desert of Rajasthan, the Himalayan mountains, the Indo-Gangetic Plains, the Ganga delta, the tropical coastal region along the Arabian Sea and the Bay of Bengal, the Deccan plateau and the Rann of Kutch, each region has its own cultural identity and its own distinctive architectural forms and construction techniques, which have evolved over the centuries as a response to its environmental and cultural setting. A simple dwelling unit takes many distinct forms depending on the climate, the materials available, and the social and cultural needs of the community.
Indian vernacular architecture is the informal, functional architecture of structures designed without formal schooling, and its work reflects the rich diversity of India's climate, locally available building materials, and intricate variations in local social customs and craftsmanship. It has been estimated that worldwide close to 90% of all building is vernacular, meaning that it serves the daily use of ordinary, local people and is built by local craftsmen. The term vernacular architecture in general refers to informal building produced through traditional building methods by local builders, without the services of a professional architect. It is the most widespread form of building.
Indian vernacular architecture has evolved over time through the skillful craftsmanship of the local people. Despite the diversity, this architecture can be broadly divided into three categories.
• Kuccha
• Pukka
• Semi-pukka
"Vernacular traditions are a dynamic and creative process through which people, as active agents, interpret past knowledge and experience to face the challenges and demands of the present. Tradition is an active process of transmission, interpretation, negotiation and adaptation of vernacular knowledge, skills and experience."
– Asquith and Vellinga (2006)
Figure: Vellore house, Chennai, India
The architecture that has evolved over the centuries may be defined as the “architecture without architects”
1. KUCCHA BUILDINGS
They are the simplest and most honest form of building, constructed from whatever materials are available. The practical limitations of the available building material dictate the specific form. The advantages of kuccha construction are that materials are cheap and easily available and relatively little labor is required. It can be said that kuccha architecture is not built for posterity but with a certain lifespan in mind, after which it will be renewed.
According to Dawson and Cooper (1998), the beauty of kuccha architecture lies in the practice of developing practical and pragmatic solutions to use local materials to counter the environment in the most economically effective manner.
For example, in the North East bamboo is used to combat a damp, mild climate, while in Rajasthan and Kutch mud, sunbaked bricks and other locally available materials are used to mould structures; in the Himalayas stone and sunken structures protect against the harsh cold, while in the south thatch and coconut palm are used to create pitched roofs that confront a fierce monsoon.
MATERIALS: Mud, grass, bamboo, thatch or sticks, stone, lime
TECHNIQUE OF CONSTRUCTION: These houses are constructed with earth or soil as the primary material; mud is used for plastering the walls.
Figure: House dwellings in the Himalayas with sunken construction and stone used as insulating material to block winds during harsh winters, Himachal Pradesh
2. PUKKA BUILDINGS
The architectural expression of pukka buildings is often determined by the establishments or art forms developed by the community, such as Warli paintings. Pukka buildings are generally built with permanence in mind. Often using locally available materials, pukka architecture has evolved to produce architectural typologies that are again region-specific.
MATERIALS: Stone, brick, clay, etc.
TECHNIQUE OF CONSTRUCTION: These houses are built as masonry structures of brick or stone, depending on the material locally available in the region. The manual labor required is much higher than for kachcha houses.
3. SEMI PUKKA BUILDINGS
A combination of the kachcha and pukka styles forms the semi-pukka. It has evolved as villagers have acquired the resources to add elements built of the durable materials characteristic of a pukka house, and its architecture evolves organically with the needs and resources of the local people of the region. The characteristic feature of semi-pukka houses is that their walls are made from pukka materials such as brick in cement or lime mortar, stone, or clay tile, while the roof is built in the kachcha way using thatch, bamboo, etc. as the principal materials. Construction employs less manual labor than for pukka houses. A typical combination is thatch roofing over mud adobe walls with lime plaster.
02.2.4 Climate responsive architecture
The climate of India spans a wide range across its terrain. Five zones can be identified in India on the basis of climate: cold, hot and dry, composite, temperate, and warm and humid.
Figure 4 Climate zones of India
Source- http://high-performancebuildings.org/climate-zone.php#;
These zones can be further narrowed down to three on the basis of the passive techniques used and the architectural styles of different regions (a summary sketch in code follows the list below).
1. HOT AND DRY
2. WARM AND HUMID
3. COLD
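Purely as an illustration (none of this code comes from the source text), the zone-to-strategy guidance of this section could be encoded as a small Python lookup table. The strategy lists below are abridged paraphrases of the recommendations discussed later; they are not an exhaustive or authoritative specification.

```python
# Illustrative mapping of the three simplified Indian climate zones to a few
# passive design strategies summarized from this section (not a standard).
PASSIVE_STRATEGIES = {
    "hot and dry": [
        "dense clustering for mutual shading",
        "courtyards and waterbodies for evaporative cooling",
        "massive flat roofs; minimal, well-shaded glazing",
    ],
    "warm and humid": [
        "spread-out built form for cross-ventilation",
        "shaded verandahs, balconies and porches",
        "light colors; moisture-resistant finishes",
    ],
    "cold": [
        "high thermal mass or insulation",
        "small, sealed openings to limit heat loss",
        "sunken or sheltered construction",
    ],
}

def strategies_for(zone):
    """Return the passive strategies recorded for a climate zone name."""
    return PASSIVE_STRATEGIES.get(zone.lower().strip(), [])

if __name__ == "__main__":
    for s in strategies_for("Hot and Dry"):
        print("-", s)
```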
• HOT AND DRY
The hot and dry zones of India include Ahmedabad, Rajasthan, Madhya Pradesh and Maharashtra.
A hot and dry climate is characterized by a mean monthly maximum temperature above 30 ºC. The region in this climate is usually flat with sandy or rocky ground conditions.
In this climate, it is imperative to control solar radiation and movement of hot winds. The building design criteria should, thus, provide appropriate shading, reduce exposed area, and increase thermal capacity.
Design considerations for buildings in the hot and dry climate:
The hot and dry climate is characterized by very high radiation levels and ambient temperatures, accompanied by low relative humidity. Therefore, it is desirable to keep the heat out of the building, and if possible, increase the humidity level. The design objectives accordingly are:
(A) Resist heat gain by:
• Decreasing the exposed surface
• Increasing the thermal resistance
• Increasing the thermal capacity
• Increasing the buffer spaces
• Decreasing the air-exchange rate during daytime
• Increasing the shading
(B) Promote heat loss by:
• Ventilation of appliances
• Increasing the air exchange rate during cooler parts of the day or night-time
• Evaporative cooling (e.g. roof surface evaporative cooling)
• Earth coupling (e.g. earth-air pipe system)
Figure 5 Jodhpur city: closely stacked houses to prevent heat gain and provide shade Source: http://www.traveldglobe.com/destination/jodhpur
(1) Site
(a) Planning: An indigenous planning layout was followed for palaces and simple small dwellings, as seen in Shahjahanabad, Jaisalmer and many other Indian cities. This dense clustering layout ensured that buildings were not exposed to the outer sun; it limits solar gain, keeps hot winds from entering the premises, and allows cooler air to circulate within the buildings.
Figure 6 Hot and dry region settlement https://www.slideshare.net/sumiran46muz/hot-and-dry-climate-65931347
(b) Waterbodies: Waterbodies such as ponds and lakes not only act as heat sinks but can also be used for evaporative cooling. Hot air blowing over water is cooled and can then be allowed to enter the building. Fountains and water cascades in the vicinity of a building aid this process.
Figure 7 Amber Fort, Rajasthan, India: a garden is positioned amidst the lake to provide a cooler microclimate for outdoor sitting.
Source-https://commons.wikimedia.org/wiki/File:Maota_Lake.JPG
Figure 8 Earth berming technique: Evaporative cooling through water feature Source-http://mnre.gov.in/solar-energy/ch5.pdf
(c) Street width and orientation: Streets are narrow so that they cause mutual shading of buildings. They need to be oriented in the north-south direction to block solar radiation.
Figure 9 Design techniques in Hot and dry regions Source-http://mnre.gov.in/solar-energy/ch5.pdf
(d) Open spaces and built form: Open spaces such as courtyards and atria are beneficial as they promote ventilation. In addition, they can be provided with ponds and fountains for evaporative cooling.
Courtyards act as heat sinks during the day and radiate the heat back to the ambient at night. The size of the courtyards should be such that the mid-morning and the hot afternoon sun are avoided. Earth-coupled building (e.g. earth berming) can help lower the temperature and also deflect hot summer winds.
Figure 10 Courtyard planning of Hot and dry region Source-http://mnre.gov.in/solar-energy/ch5.pdf
(2) Orientation and planform
An east-west orientation (i.e. longer axis along the east-west direction) should be preferred. This is because south- and north-facing walls are easier to shade than east and west walls.
It may be noted that during summer, it is the north wall which gets significant exposure to solar radiation in most parts of India, leading to very high temperatures in north-west rooms.
For example, in Jodhpur, rooms facing north-west can attain a maximum temperature exceeding 38 ºC. Hence, shading of the north wall is imperative.
The surface-to-volume (S/V) ratio should be kept as low as possible to reduce heat gains; a small worked example in code follows below.
Cross-ventilation must be ensured at night as ambient temperatures during this period are low.
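To make the S/V guidance concrete, here is a minimal sketch comparing the surface-to-volume ratio of a compact form and an elongated form of equal volume. The dimensions are made-up example values, not figures from the source; the point is only that the compact form exposes less envelope per unit of enclosed space and therefore gains less heat.

```python
def surface_to_volume(length, width, height):
    """S/V ratio of a rectangular box: total envelope area (walls, roof,
    floor) divided by enclosed volume. Units cancel, so any consistent
    unit works."""
    surface = 2 * (length * width + length * height + width * height)
    volume = length * width * height
    return surface / volume

# Two hypothetical buildings of equal volume (1000 m^3):
compact = surface_to_volume(10, 10, 10)    # cube
elongated = surface_to_volume(40, 5, 5)    # long, thin block
print(f"compact S/V:   {compact:.2f}")     # 0.60
print(f"elongated S/V: {elongated:.2f}")   # 0.85 -> more exposed surface
```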
(3) Building envelope
(a) Roof: The diurnal range of temperature being large, the ambient night temperatures are about 10 ºC lower than the daytime values and are accompanied by cool breezes. Hence, flat roofs may be considered in this climate as they can be used for sleeping at night in summer as well as for daytime activities in winter.
Figure 11 Flat roof for reverse heat gain during night Source-http://mnre.gov.in/solar-energy/ch5.pdf
The material of the roof should be massive; a reinforced cement concrete (RCC) slab is preferred to asbestos cement (AC) sheet roof. External insulation in the form of mud phuska with inverted earthen pots is also suitable. A false ceiling in rooms having exposed roofs can help in reducing the discomfort level.
Evaporative cooling of the roof surface and night-time radiative cooling can also be employed. In case the former is used, it is better to use a roof having high thermal transmittance (a high U-value roof rather than one with lower U-value). The larger the roof area, the better is the cooling effect.
The maximum requirement of water for a place like Jodhpur is about 14.0 kg per day per square meter of roof area cooled; a back-of-the-envelope estimate based on this figure appears below. Spraying of water is preferable to an open roof pond system. One may also consider using a vaulted roof, since it provides a larger surface area for heat loss than a flat roof.
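Using the figure quoted above (about 14 kg of water per square meter of cooled roof per day for a place like Jodhpur), the daily water demand can be estimated with a one-line calculation; the 120 m² roof area below is a hypothetical example value, not a figure from the source.

```python
WATER_PER_M2_PER_DAY_KG = 14.0  # peak requirement quoted for Jodhpur

def daily_water_for_roof_cooling(roof_area_m2):
    """Peak daily water (kg, roughly equal to litres) needed to
    evaporatively cool a roof of the given area."""
    return roof_area_m2 * WATER_PER_M2_PER_DAY_KG

# Hypothetical 120 m^2 dwelling roof:
print(daily_water_for_roof_cooling(120.0))  # 1680.0 kg (~1.7 m^3) per day
```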
(b) Walls: In multi-storeyed buildings, walls and glazing account for most of the heat gain. It is estimated that they contribute about 80% of the annual cooling load of such buildings. So, the control of heat gain through the walls by shading is an important consideration in building design.
(c) Fenestration: In hot and dry climates, minimizing the window area (in terms of glazing) can definitely lead to lower indoor temperatures. It is found that providing a glazing size of 10% of the floor area gives better performance than 20% (a rough comparison is sketched in code below). More windows should be provided on the north facade of the building than on the east, west and south, as it receives less radiation during the year. All openings should be protected from the sun by external shading devices such as chajjas and fins.
Moveable shading devices such as curtains and venetian blinds can also be used. Openings are preferred at higher levels (ventilators) as they help in venting hot air. Since daytime temperatures are high during summer, the windows should be kept closed to keep the hot air out and opened during night-time to admit cooler air.
Figure 12 Louvers for providing shade and diffused lighting
http://www.nzdl.org
The use of 'jaalis' (lattice work) made of wood, stone or RCC may be considered, as they allow ventilation while blocking solar radiation.
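As a rough illustration of the 10% versus 20% glazing guidance above, the sketch below compares solar gain through windows at the two glazing fractions. The floor area, daily irradiation and solar heat gain coefficient are assumed placeholder values, not figures from the source; only the proportional conclusion (double the glazing, double the gain) is the point.

```python
def window_solar_gain_kwh(floor_area_m2, glazing_fraction,
                          solar_kwh_per_m2=5.0, shgc=0.7):
    """Rough daily solar gain through glazing, in kWh.
    glazing_fraction: window area as a fraction of floor area.
    solar_kwh_per_m2: assumed daily irradiation on the glazing.
    shgc: assumed solar heat gain coefficient of the glass."""
    window_area = floor_area_m2 * glazing_fraction
    return window_area * solar_kwh_per_m2 * shgc

floor = 100.0  # hypothetical floor area, m^2
print(window_solar_gain_kwh(floor, 0.10))  # 35.0 kWh/day
print(window_solar_gain_kwh(floor, 0.20))  # 70.0 kWh/day -> double the gain
```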
(d) Color and texture: Change of color is a cheap and effective technique for lowering indoor temperatures. Colors having low absorptivity should be used to paint the external surface. Darker shades should be avoided for surfaces exposed to direct solar radiation. The surface of the roof can be of white broken glazed tiles (china mosaic flooring). The surface of the wall should preferably be textured to facilitate self-shading.
Remarks: As the winters in this region are uncomfortably cold, windows should be designed such that they encourage direct gain during this period. Deciduous trees can be used to shade the building during summer and admit sunlight during winter. There is a general tendency to think that well-insulated and very thick walls give a good thermal performance. This is true only if the glazing is kept to a minimum and windows are well-shaded, as is found in traditional architecture.
However, in the case of non-conditioned buildings, a combination of insulated walls and a high percentage of glazing will lead to very uncomfortable indoor conditions. This is because the building will act like a greenhouse or oven: the insulated walls prevent the radiation admitted through the windows from escaping back to the environment. Indoor plants can be provided near the window, as they help in evaporative cooling and in absorbing solar radiation. Evaporative cooling and earth-air pipe systems can be used effectively in this climate. Desert coolers are extensively used in this climate, and if properly sized, they can alleviate discomfort by as much as 90%.
• WARM AND HUMID
The warm and humid climate is characterized by high temperatures accompanied by very high humidity, leading to discomfort. Thus, cross-ventilation is both desirable and essential. Protection from direct solar radiation should also be ensured by shading.
The main objectives of building design in this zone should be:
(A) Resist heat gain by:
• Decreasing exposed surface area
• Increasing thermal resistance
• Increasing buffer spaces
• Increasing shading
• Increasing reflectivity
(B) Promote heat loss by:
• Ventilation of appliances
• Increasing air exchange rate (ventilation) throughout the day
• Decreasing humidity levels
The general recommendations for building design in the warm and humid climate are as follows:
(1) Site
(a) Landform: The consideration of landform is immaterial for a flat site. However, if there are slopes and depressions, the building should be located on the windward side or crest to take advantage of cool breezes.
(b) Waterbodies: Since humidity is high in these regions, water bodies are not essential.
(c) Open spaces and built form: Buildings should be spread out with large open spaces for unrestricted air movement. In cities, buildings on stilts can promote ventilation and cause cooling at the ground level.
(d) Street width and orientation: Major streets should be oriented parallel to, or within 30º of, the prevailing wind direction during the summer months to encourage ventilation in warm and humid regions (this rule of thumb is sketched in code below). A north-south direction is ideal from the point of view of blocking solar radiation. The width of the streets should be such that the intense solar radiation during late morning and early afternoon is avoided in summer.
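The 30º rule of thumb above reduces to a simple angular check. The sketch below is an illustration of ours, not from the source: it treats street and wind directions as compass bearings and accounts for the fact that a street runs both ways, so orientations 180º apart are equivalent.

```python
def within_30_deg_of_wind(street_bearing, wind_bearing):
    """True if a street's axis lies within 30 degrees of the prevailing
    wind. Bearings in degrees clockwise from north; a street axis
    repeats every 180 degrees."""
    diff = abs(street_bearing - wind_bearing) % 180.0
    diff = min(diff, 180.0 - diff)
    return diff <= 30.0

print(within_30_deg_of_wind(10.0, 350.0))  # True  (20 degrees apart)
print(within_30_deg_of_wind(90.0, 350.0))  # False (80 degrees apart)
```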
(2) Orientation and planform
Since the temperatures are not excessive, free plans can be evolved as long as the house is under protective shade. An unobstructed air path through the interiors is important. The buildings could be long and narrow to allow cross-ventilation; for example, a singly loaded corridor plan (i.e. rooms on one side only) can be adopted instead of a doubly loaded one. Heat- and moisture-producing areas must be ventilated and separated from the rest of the structure. Since temperatures in the shade are not very high, semi-open spaces such as balconies, verandahs and porches can be used advantageously for daytime activities. Such spaces also give protection from rainfall. In multistoreyed buildings a central courtyard can be provided with vents at higher levels to draw away the rising hot air.
(3) Building envelope
(a) Roof: In addition to providing shelter from rain and heat, the form of the roof should be planned to promote air flow. Vents at the roof top effectively induce ventilation and draw hot air out. As diurnal temperature variation is low, insulation does not provide any additional benefit for a normal reinforced cement concrete (RCC) roof in a non-conditioned building.
However, very thin roofs having low thermal mass, such as asbestos cement (AC) sheet roofing, do require insulation, as they tend to rapidly radiate heat into the interiors during the daytime.
Figure: Padmanabhapuram Palace
A double roof with a ventilated space in between can also be used to promote air flow.
(b) Walls: As with roofs, the walls must also be designed to promote air flow. Baffle walls, both inside and outside the building, can help to divert the flow of wind inside. They should be protected from the heavy rainfall prevalent in such areas. If adequately sheltered, exposed brick walls and mud-plastered walls work very well, absorbing humidity and helping the building to breathe. Again, as for roofs, insulation does not significantly improve the performance of a non-conditioned building.
(c) Fenestration: Cross-ventilation is important in warm and humid regions. All doors and windows are preferably kept open for maximum ventilation for most of the year. They must be provided with venetian blinds or louvers to shelter the rooms from sun and rain, as well as to control air movement.
Openings of a comparatively smaller size can be placed on the windward side, while the corresponding openings on the leeward side may be bigger, facilitating a plume effect for natural ventilation. The openings should be shaded by external overhangs. Outlets at higher levels serve to vent hot air. A few examples illustrating how the air movement within a room can be better distributed are shown in the figures below.
(d) Color and texture: The walls should be painted with light pastel shades or whitewashed, while the surface of the roof can be of broken glazed tile (china mosaic flooring). Both techniques help reflect sunlight back to the ambient and hence reduce the heat gain of the building. The use of appropriate colors and surface finishes is a cheap and very effective technique for lowering indoor temperatures. It is worth mentioning that the surface finish should be protected from, and resistant to, the effects of moisture, which can otherwise lead to mould growth and the decay of building elements.
Remarks: Ceiling fans are effective in reducing the level of discomfort in this type of climate. Desiccant cooling techniques can also be employed as they reduce the humidity level. Careful water proofing and drainage of water are essential considerations of building design due to heavy rainfall. In case of air-conditioned buildings, dehumidification plays a significant role in the design of the plant.
Figure 13 Traditional Kerala house
Parameters for sustainability in the warm and humid climate:
Ecological site planning: The house is generally designed in response to the ecology (the backwaters, plantations, etc.), allowing the building to blend effortlessly into the landscape of coconut, palm and mango trees. The house is divided into quarters according to Vastu Shastra; it is generally considered desirable to build the house in the south-west corner of the north-west quadrant. The south-east corner is reserved for cremation purposes, while the north-east corner has a bathing pool.
Local materials: The building is made from locally available stone and timber, with terracotta tiles for the roof.
Physical response to climate: The plan is generally square or rectangular in response to the hot and humid climate. The central courtyard and the deep verandahs around the structure ensure cross-ventilation. The south-west orientation of the house prevents harsh sun rays from penetrating it. Sloping roofs are designed to combat the heavy monsoon of the region; the overhanging roofs with projecting eaves provide shade and shield the walls from rain.
Embodied energy: The building uses materials like stone and timber, which are reservoirs of embodied energy and have the potential to be recycled or reused.
Socio-economic adaptability: Toilets have been integrated into the design of the house, and RCC (reinforced cement concrete) has been introduced to build houses with larger spans.
Thomas D’Arcy McGee – Canadian Figure
Thomas D'Arcy McGee is a historical figure who, as Charles Macnab states, "was the first political leader in Canada to be assassinated." Historian Alexander Brady describes McGee as "[having] a unique place among the Canadian statesman of his time." Library and Archives Canada records that McGee "was born in Carlingford, Ireland, the son of James McGee, and Dorcas Catherine Morgan." It was during his childhood that McGee's abilities became known outside the family; as author T.P. Slattery explains, "a hedge schoolmaster, Michael Donnelly, helped him along with his books and fertilized his dreams." Asked about McGee as a student, Donnelly called him the "brightest scholar [he] ever taught." McGee did not live in Ireland his entire life, as "in 1842 McGee left Ireland and travelled to North America." Slattery details the trip: "McGee left Ireland with his sister Dorcas to go and live in Providence Rhode Island." Once in America, McGee became a publisher; Charles Macnab states that "he was publishing his New York Nation at New York, Boston, and Philadelphia, and shipping it to Ireland, assuming a nationalist leadership as best he could over the remnants of Ireland." He spent years in America before, "in the spring of 1857 McGee moved to Montreal, at the invitation of leaders of that city's Irish community who expected him to promote their interest." McGee then entered Canadian political life: "he was elected to the Legislative assembly in December of 1857… He joined the cabinet of John Sandfield MacDonald in 1862, and chaired that year's Intercolonial Railway conference at Quebec City." That career ended tragically; as author James Powell states, "Thomas D'Arcy McGee, the much revered Canadian statesman, and orator, died by an assassin's bullet on April 7, 1868 entering his boarding house on Sparks Street." The news travelled quickly; in a letter, Lady Agnes MacDonald, the prime minister's wife, wrote: "McGee is murdered… lying in the street… shot through the head." McGee was a great public speaker and a highly intelligent figure who holds a place in Canadian history as the victim of one of the country's first political assassinations.
When thinking about notable Canadian historical figures, one name that factors into the story of Canada is Thomas D'Arcy McGee. As historian T.P. Slattery explains, "Thomas D'Arcy McGee was born on Wednesday, April 13, 1825, in Carlingford Ireland on the Rosstrevor coast." He was raised by his parents, "James McGee and Dorcas Catherine Morgan." Carlingford was only McGee's first residence: "When D'Arcy was eight, the family moved south to Wexford." It was tragically during this time that "[McGee's] mother was the victim of an accident and died on August 22, 1833. This was a heavy blow." McGee was thus an Irish-born citizen who at a young age lost a central figure in his life.
McGee's mother had a lasting influence on his ideological beliefs. One such influence was nationalism; as Alexander Brady states, "she was a woman who cherished the memory of her father's espousal of the national cause and preserved all his national enthusiasms which she sedulously fed to her son." McGee also developed his knowledge of Irish literature from his mother; as Brady states, "She was interested in all of the old Irish myths and traditions and poetry, and these she related to her [son]." McGee's nationalist ideology was thus instilled at a young age, and it led him to become "an ardent idealist for the nationality of his country."
Thomas D'Arcy McGee was highly intelligent from a young age. T.P. Slattery writes, "A hedge schoolmaster, named Michael Donnelly, helped him along with his books." Donnelly was a mentor figure to McGee, helping him with his schooling; asked about McGee as an academic, Donnelly replied that McGee was "the brightest scholar I ever taught." McGee was also a great public speaker. As Slattery writes, "In Wexford, D'Arcy had a boyish moment of triumph when he gave a speech before the Juvenile Temperance society, and Father Matthew, who happened to be there, reached over and tousled his hair." Alexander Brady adds that McGee "delivered before the society a spell-binding oration, on which he received the hearty congratulations," and notes, "This was [McGee's] first public speech." McGee was a unique individual whose high intelligence and gift for public speaking were both noticeable at a young age.
Not a lifelong resident of Ireland, Thomas D'Arcy McGee moved on to another chapter in his life. Several reasons led to his departure, one being that "McGee's father had married again, and the stepmother was not popular with the children." Another involved the economic realities of Ireland, as explained by Alexander Brady: "The economic structure of Irish society was diseased. Approximately seven million were vainly endeavouring to wring a lean subsistence from the land, and hundreds of thousands were on the verge of famine." With that in mind, Robin Burns explains on biographi.ca that "D'Arcy McGee left for North America in 1842, one of almost 93,000 Irishmen who crossed the Atlantic that year." McGee set out from Ireland "on April 7th," when he "was not yet seventeen… with his sister Dorcas to go and live with their aunt, in Providence Rhode Island." He had just begun a new chapter in his life.
Thomas D'Arcy McGee arrived in America with "few material possessions beyond the clothes on his back." One of the first things he did upon landing was deliver a speech; as T.P. Slattery explains, "he was on his feet speaking at an Irish assembly." It was in this speech that McGee stated his feelings about British rule in Ireland: "the sufferings which the people of that unhappy country have endured at the hands of a heartless, bigoted despotic government are well known… Her people are born slaves, and bred in slavery from the cradle; they know not what freedom is." This message condemning British rule over the Irish made an impact, leading McGee into a new profession in the United States as a writer when he "joined the staff of the Boston Pilot."
Thomas D'Arcy McGee had just moved to America, and "within weeks he was a journalist with the Boston Pilot, the largest Irish Catholic paper in the [United] [States]." In his new role, McGee was described as "the pilot's traveling agent, [who] for the next two years travelled through New England collecting overdue accounts and new subscribers." Through these trips, McGee became connected with a group known as the "young Ireland Militants in Dublin." One key figure in that circle was "Daniel O'Connell [who] held to a non violent political philosophy, but in 1843 he followed a change in strategy when he allowed some of the young militants who had joined the association after 1841 to plan and manage a series of rallies of hundreds of thousands across Ireland to hear him." Another member of the group was "a young Ireland moderate Gavan Duffy who was the publisher of the Nation." It was through the Nation that McGee became connected to the Young Irelanders, as Gavan Duffy took an interest in him. Duffy had long admired McGee; as Charles Macnab writes, "Duffy had been impressed enough with young McGee to have engaged with him almost immediately to write a volume for Duffy's library of Ireland series." Hereward Senior details Duffy's interest in McGee's ability: "the talents of D'Arcy McGee were recognized by Duffy, editor of the Nation who invited McGee to join its staff and McGee subsequently became part of the 'Young Ireland' group." In a short time, McGee had gone from arriving in the United States to public recognition for his ability as a writer.
Outside of his professional life, Thomas D'Arcy McGee made a deeply personal decision. As historian David Wilson explains, "On Tuesday, 13 July he married Mary Theresa Caffrey, whom he met at an art exhibition in Dublin." The two connected over "their love in romantic poems, and letters show that she cared deeply about him." McGee's constant travel, however, took a toll on the marriage; as Wilson states, "They were torn apart by exile and continually uprooted as McGee moved from Dublin to New York, Boston, Buffalo, back to New York… When McGee was on the road Mary experienced periods of intense loneliness; when he was at home she often had to deal with his heavy drinking." The family suffered tragedy as well, for "of their five children, only two survived into adulthood." Yet through all the tension and tragedy there remained a bond within the family, as Wilson writes: "there was great affection and tenderness within the family, as McGee's letters to his children attest. Mary continued to write of 'my darling Thomas', until the end of her days." Beyond his work as a writer, McGee had a personal life and a family for whom he evidently cared.
McGee's final departure from Ireland stemmed from events he witnessed in the country. These unfolded in 1847, when "the Irish confederation was frustrated in the general election, and a radical faction developed calling for armed action." As explained by historian Hereward Senior, "The young Irelanders were converted to the idea of a barricade revolution carried out by a civilized militia. They conspired to re-enact the French revolution on Irish soil. These young Irelanders were more attracted by the romance of revolution than by the republican form of government." McGee took part in this movement; as Alexander Brady explains, "he [consulted] the Irish revolutionists in Edinburgh, and Glasgow and enrolled four hundred volunteers." His involvement ended when "he was arrested for sedition on the eve of his first wedding anniversary," though the charges were dismissed the next day. This led McGee to leave Ireland for good: "with a sad heart [McGee] boarded a brig at the mouth of the Foyle and sailed for the United States… In America he began at the age of twenty three a new life destined to plead for causes to prove more successful than the Irish independence." This was the end of Thomas D'Arcy McGee's life in Ireland.
Upon returning to the United States, Thomas D'Arcy McGee moved into a different chapter of his life. One of the papers he published was the New York Nation; as Charles Macnab explains, McGee "was publishing his New York Nation at New York, Boston, and Philadelphia." McGee also made sure the paper reached Ireland; Macnab writes that "McGee shipped the paper to Ireland assuming a nationalist voice as best he could over the remnants of Young Ireland and the future political and cultural directions of the Irish world." In this paper McGee was willing to take a radical approach, and as David Wilson explains, one of his targets was the Catholic Church. Wilson writes, "the reference [McGee] [makes] to 'priestly preachers of cowardice' was pivotal; the catholic church had transformed heroic Celtic warriors into abject slaves. 'The present generation of Irish Priests,' he wrote, 'have systematically squeezed the spirit of resistance out of the hearts of the people.'" In response, the Church condemned McGee through Bishop John Hughes, who described McGee's writings as having transferred the "odium of oppression" from the British government to the Catholic clergy. Hughes demanded that "unless the Nation shall purify its tone… let every diocese, every parish, every catholic door be shut against it." The eventual fate of the Nation is explained by T.P. Slattery: "The McGees were just in time to witness the collapse of his New York Nation… He moved on to Boston planning to sail back to Ireland."
Thomas D'Arcy McGee did not end up returning to Ireland; as T.P. Slattery writes, "McGee postponed his return to Ireland and remained with his young family in Boston. There he picked up a few fees lecturing." As explained in the Quebec History encyclopedia, in 1850 McGee founded the American Celt in Boston, and in 1852 he moved to Buffalo, where he published the American Celt for five years. The purpose of the Celt, as Slattery explains, was to promote "aid for the ancient missionary schools; encourage the Irish industrial enterprise, develop literature, and revive the music of Ireland." Its intended audience was "Irish workers who were irritated by the unexciting views of the Boston Pilot, and took for granted that McGee would be more to their taste as a rebel." While in America McGee was also an author, publishing multiple works about the Irish people, among them "A history of Irish settlers in North America (1851) to demonstrate that the Irish had made significant contribution to the history of North America," as well as "A history of the attempts to establish protestant reformation in Ireland (1853), the Catholic history of North America (1855), and the life of Rt. Rev Edward Maginn (1857)." In the same year his last book appeared, a new chapter in McGee's life opened: "In 1857 he moved from Buffalo to Montreal, Lower Canada at the invitation of some Irish Canadians." McGee was now moving to his third country.
While in Canada, Thomas D'Arcy McGee continued writing. As Hereward Senior notes, "Upon his arrival in Montreal McGee started to publish the New Era." McGee's new paper was significant to Canadian history, as "a series of editorials and speeches by D'Arcy McGee had become historic. They constitute the evidence that McGee was the first of all the fathers of confederation to advocate a federal basis for a new nation." What Slattery implies is that McGee was the first major endorser of the formation of what would become Canada. Slattery continues: "It began unnoticed in an article of June 27 called 'Queries for Canadian Constituencies,' with an acute analysis of some of the practical issues. This led the way to three important editorials… written on August 4, 6, and 8, 1857." McGee's writings in the New Era led to the next major step in his life: "In December 1857 D'Arcy McGee was one of three members elected to represent Montreal in the Legislative Assembly. He had been nominated by the St Patrick's society of Montreal."
Regarding the content of the New Era editorials, T.P. Slattery states, "The first editorial stressed the need for union as distinct from uniformity. The second was on the role of the French language, and the third, was on confederation." In the first, McGee argued that "Uniform currency was needed; so were a widespread banking and credit system, the establishment of courts of last resort and an organized postal system," observing that "one is much more certain of his letters from San Francisco." The second concerned Quebec and the French language, a theme McGee returned to in an editorial of April 6, 1858, "urging parliament to adopt the proposals for federation which were to be introduced by Alexander Galt… 'we are in Canada two nations, and must mutually respect each other. Our political union must, to this end, be made more explicit if we are to continue for the most general purposes as a united people.'" The third editorial declared that "'the federation of feeling must precede the federation of fact'. That epigram not only exposed the weakness of previous unions; it expressed McGee's passion to arouse such a spirit, so a new people could come together in the north." On McGee's overall political philosophy, Slattery states, "[McGee] was a devoted student of Edmund Burke for theory, and of Daniel O'Connell for practice. His studies sharpened by his intelligence, and corrected as he matured through his sharper experiences." With his political ideology in the open, McGee had his springboard for a start in Canadian politics: in December 1857 he was elected to the Legislative Assembly of the Province of Canada.
Thomas D'Arcy McGee had entered a new profession: politics. As the Quebec History encyclopedia states, in 1858 McGee "was elected as an Irish Roman Catholic to the Legislative assembly of Canada for Montreal west," a constituency he represented until 1867, and he was later re-elected to the House of Commons of the new dominion. He sat with the Reform government of George Brown in 1858. As Alexander Brady explains his reasoning for supporting Brown, "McGee was won by Brown's frank, fearless character. Moreover, he believed that the Irish catholics could subscribe with little reservation to the reform leader's principles." Among the principles McGee shared with Brown was "a hostility to the intolerant Toryism of the old school," along with faith in the extension of popular suffrage, economy in public expenditure, and the reduction of taxes. When the parliamentary session began in March 1858, "From the outset McGee hurried into the leading debates and attacked the corruptions as the government party was descried, with all the weapons of wit and searching sarcasm." What McGee was known for in his early years in government was what he had been great at his whole life; as Brady relates, a reporter from the Globe wrote that McGee "was undoubtedly the most finished orator in the house," possessing a power of impressing an audience "which can only be accounted for by attributing to those who possess it some magnetic influence not common to everyone." McGee may have moved from writer to politician, but his childhood gift for public speaking had stayed with him.
Life for Thomas D'Arcy McGee in Brown's party was not always smooth. As explained by David Wilson, "the reform party began to alarm its French Canadian wing. Sensing an opportunity, the liberal Conservatives moved a non confidence resolution against the government." In the ensuing debate, "all the leading figures in government defended its record- all of them except McGee, who was getting drunk with friends when he was scheduled to speak. His erratic behaviour was symptomatic of deeper disillusionment with the reform party." With McGee's behaviour in question, the party leadership agreed that "a new reform government must abandon the Intercolonial railway, and that there would be no place for McGee in the new cabinet." McGee's political positions also alienated him; as Wilson explains, McGee was a loose cannon whose stance on separate schools alienated both the Clear Grits and the Rouges, and for the members of the Reform party he had become a liability. This was the beginning of the end, for "McGee felt that he had been stabbed in the back by his own colleagues." Feeling alienated by his own party, McGee "transferred his allegiance to the conservatives, where he became minister of agriculture in the MacDonald Government of 1864." McGee had thus crossed the political aisle, embracing a new party.
As a member of John A. Macdonald's party, McGee's status increased. As explained by Library and Archives Canada:
In 1864 McGee had helped to organize the Canadian visit, a diplomatic goodwill tour of the Maritimes that served as a prelude to the first confederation conference. During this tour, McGee delivered many speeches in support of union and lived up to his reputation as the most talented politician of the era. He was a delegate to the Charlottetown conference and the Quebec conference. In 1865 he delivered two speeches on the union of the provinces, which were subsequently bound and published.
McGee's moments at the two conferences are described by T.P. Slattery, who writes that during the Quebec conference, "McGee speaking with an ease of manner moved an amendment. He proposed that the provision be added to the provincial power over education… Andrew Archibald MacDonald, sitting at the far end of the table to McGee's left seconded the amendment." Explaining the logic behind his amendment, McGee spoke of "saving the rights and privileges which the protestant or catholic minority in both Canadas may possess as to their denomination schools when the constitutional act goes into operation." As for the Charlottetown conference, David Wilson explains that "his principal contribution to the Charlottetown conference lay not in the formal proceedings but in the whirl of social events that surround the meetings- the dinner parties and luncheons, and the grand ball at the government house." The effect McGee had on these meetings was noticeable: as historians of confederation have pointed out, these events were important in creating a climate of camaraderie and allowing new friendships to form. At a liquid lunch on board the Victoria, "McGee's wit sparkled brightly as the wine," and the mood was so euphoric that the delegates proclaimed the banns of matrimony among the provinces. Though Wilson describes McGee's part in the Charlottetown conference as "a secondary and often marginal role in the negotiation between Canada and the Maritimes," no other Canadian politician knew the Maritimes better than McGee. He served, in effect, as an advocate to the Maritime colonies, with the goal of convincing them to join Confederation.
The goal McGee had worked toward was finally accomplished, yet as Alexander Brady states, "In November 1866, the delegation of ministers appointed to represent Canada at the final drafting of the federal constitution sailed for England. McGee was not a member of that party." His role in government began to decline; as Hereward Senior explains, "John A. MacDonald found it more convenient to draw the representative of the Irish Catholic community from the maritimes." With that reality in mind, McGee "prepared to run in his old constituency in Montreal west." It was there that he faced a new foe.
The Fenian movement is explained by author Fran Reddy:
The Irish Fenian Brotherhood movement spurred along the idea of union amongst the British North American colonies, due to increasing skirmishes along the border as the Fenians tried to move in from the United States to capture British North American colonies, believing that they could hold these as ransom to bargain for Ireland's independence from British rule.
The Fenians are relevant to Thomas D'Arcy McGee because he had made an enemy of them: "in 1866 he condemned with vehemence the Irish American Fenians who invaded Canada; and in doing so he incurred the enmity of the Fenian Organization of the United States." This played a role in the election McGee was trying to win, as "In Montreal the Fenians were able to find allies amongst the personal and political enemies of McGee." The movement affected his campaign directly: "At the opening of the election campaign, McGee wrote to John A. MacDonald that he had decided not to go to Toronto, as it would provide the 'Grit Fenians' with an opportunity to offer him insults." The attempt to stop McGee from being elected failed, as "McGee won by a slight majority in Montreal west," regaining his old seat. Still, the Fenians' hatred shaped McGee's remaining time in government; as Library and Archives Canada explains, "Thomas D'Arcy McGee was seen as a traitor by the very Irish Community that he sought to defend, and by 1867 [McGee] expressed a desire to leave politics."
However, Thomas D'Arcy McGee would not get his wish of leaving the political scene. Alexander Brady describes the final moments of his life: "[McGee] spoke at midnight. Shortly after one on the morning of the 7th the debate closed. The members commented generally on McGee's speech; some thought it was the most effective that they had ever heard him deliver." As the evening concluded there was a light mood, since "on the morrow he would return to Montreal, where his wife and daughters were within a few days to celebrate his forty third birthday." McGee ended the evening by walking to his lodging on Sparks Street: "As he entered a slight figure glided up and at close range fired a bullet into his head. His assassin dashed away in the night, but left tell tale steps in the snow later to assist in his conviction." News of McGee's death spread quickly across Canada. One person who received it was Lady Agnes MacDonald, the prime minister's wife: "The answer came up clear and hard through the cold moonlit morning: 'McGee is murdered… lying in the street… shot through the head.'" The scene was described by a witness, Dr. Donald McGillivray: "about half past two I was called and told that D'Arcy McGee had been shot at the door of his boarding house. I went at once. I found his body lying on its back on the sidewalk." Thomas D'Arcy McGee's life had come to an end.
The search for McGee's killer led authorities to a man named Patrick James Whelan, "who was convicted and hanged for the crime." As Slattery explains, "The police moved fast. Within twenty hours of the murder they had James Whelan in handcuffs. In Whelan's pocket they found a revolver fully loaded. One of its chambers appeared to have been recently discharged." There was further evidence against Whelan; as Charles Macnab explains, "Minutes before his execution, Patrick James Whelan admitted that he was present when McGee was shot." Also presented at trial was Whelan's shadowing of McGee's campaign; as Hereward Senior writes, "his presence in Prescott during McGee's campaign there, his return to Montreal when McGee returned, and his taking up employment in Ottawa when McGee took his seat in parliament all suggest he was stalking McGee."
The main theory at trial was that Whelan was a Fenian, which would make sense given that the Fenians were McGee's chief enemy. However, as Senior notes, "Whelan insisted he wasn't a Fenian." Whelan was instead identified with a group called "the Ribbonmen"; however, he "was unquestionably under the influence of Fenian propaganda and engaged in clandestine work on their behalf." There was a controversial moment in the trial, as T.P. Slattery explains: "The prisoner had come back from court and was telling what had happened. James Whelan did not say 'he shot McGee like a dog' but that Turner had sworn he heard Whelan say, 'he'd shoot McGee like a dog.' The prisoner asserts that his words have been twisted." The trial nonetheless ended in a guilty verdict: "Whelan maintained his innocence throughout his trial and was never proven to be a Fenian. Nonetheless he was convicted of murder and hanged before more than 5000 onlookers on February 11th 1869."
Whelan's burial was spare; as Charles Macnab states, "The body was not handed over for a proper catholic burial. Instead it was buried in a shallow grave in the jail yard. There was fear of a massive fenian demonstration at Whelan's funeral." McGee's stature as a public figure, by contrast, was made evident by the attendance at his own funeral. As T.P. Slattery notes, "The population of the city was then one hundred thousand, but there was so many visitors for D'Arcy McGee's funeral that the population had practically doubled." Among the attendees were "Newspaper reporters who estimated the number marching and gathered along the long route wrote that a hundred thousand people participated in the demonstration of mourning." On McGee's legacy, Alexander Brady states, "such material bases of union must fail to hold together different sects and races inhabiting the dominion, unless Canadians cherish what McGee passionately advanced, the spirit of toleration and goodwill, as the best expression of Canadian nationality." David Wilson offers perhaps the best summary of who McGee was: "For the myth makers, here was the ideal symbol of the Celtic contribution to Canadian nationality- an Irish catholic Canadian who became the youngest of the fathers of confederation, who was widely regarded as an inspirational and visionary Canadian nationalist and who articulated the concept of unity in diversity a century before it became the dominant motif of Canadian identity." Thomas D'Arcy McGee was a very important public figure in Canadian history who met a tragic and unfortunate demise by assassination.
Thomas D'Arcy McGee was an Irish citizen born in Carlingford, Ireland. He moved at a young age, and during that time he endured the tragic loss of his mother, who died in an accident. McGee's Irish nationalist ideology was inspired by his mother, and it played a major role in his life. He was highly intelligent; while living in Wexford, the man who helped him in his studies called him "the brightest scholar I ever taught." In his teenage years McGee moved to the United States, where over the next few years he published multiple papers that caught the eye of an Irish nationalist organization. During this period he also began his family, marrying in Dublin. His time in Ireland ended when he was nearly arrested, a threat serious enough to send him back to America, where he moved between New York and Boston publishing papers with a pro-Ireland outlook. These papers led to the next chapter of his life: his move to Canada, specifically Montreal. There McGee founded a new paper, the New Era, in which he promoted what became known as Confederation. This led him into politics in Montreal as a member of George Brown's Reform party. Within the Reform party McGee proved a loose cannon whose views split the party, and he was also known for heavy drinking. Alienated, he joined the governing party under the leadership of John A. Macdonald. McGee played a role in Canadian Confederation, attending both the Quebec and Charlottetown conferences that led to the formation of Canada, though he was left off the delegation that delivered the confederation documents to London. He ran again for his old seat and was attacked by a faction of Irish nationalists known as the Fenians. McGee won the seat, but on April 7, 1868 he was murdered by a man named Patrick Whelan, who was convicted of the crime and hanged. McGee remains one of Canadian history's great public speakers, having swayed audiences throughout his life, and an important figure in the formation of Canada who tragically met his demise in a political assassination.
Bibliography
Powell, James. "The Hanging of Patrick Whelan." Today in Ottawa's History. August 22, 2014. Accessed November 28, 2018. https://todayinottawashistory.wordpress.com/2014/08/22/the-last-drop/.
Archives Canada. "Thomas D'Arcy McGee (April 13, 1825 – April 7, 1868)." Library and Archives Canada. April 22, 2016. Accessed November 28, 2018. https://www.bac-lac.gc.ca/eng/discover/politics-government/canadian-confederation/Pages/thomas-darcy-mcgee.aspx.
Block, Niko, and Robin Burns. "Thomas D'Arcy McGee." The Canadian Encyclopedia. April 22, 2013. Accessed November 28, 2018. https://www.thecanadianencyclopedia.ca/en/article/thomas-darcy-mcgee.
Burns, Robin B. "Biography – McGEE, THOMAS D'ARCY – Volume IX (1861-1870)." Dictionary of Canadian Biography. 1976. Accessed November 28, 2018. http://www.biographi.ca/en/bio/mcgee_thomas_d_arcy_9E.html.
Bélanger, Claude. "Quebec History." Economic History of Canada. January 2005. Accessed November 28, 2018. http://faculty.marianopolis.edu/c.belanger/QuebecHistory/encyclopedia/ThomasDArcyMcGee-HistoryofCanada.htm.
Reddy, Fran. "The Fenians & Thomas D'Arcy McGee: Irish Influence in Canadian Confederation." The Wild Geese. June 30, 2014. Accessed November 29, 2018. http://thewildgeese.irish/profiles/blogs/the-fenians-thomas-d-arcy-mcgee-irish-influence-in-canadian.
Archives Canada. "Daily Life: Shelter – Inuit – Explore the Communities – The Kids' Site of Canadian Settlement." Library and Archives Canada. May 02, 2005. Accessed November 28, 2018. https://www.collectionscanada.gc.ca/confederation/023001-4000.52-e.html.
Senior, Hereward. The Fenians and Canada. Toronto, Ontario: The Macmillan Company of Canada Limited, 1978.
Macnab, Charles. Understanding the Thomas D'Arcy McGee Assassination: A Legal and Historical Analysis. Ottawa, Ontario: Stonecrusher Press, 2013.
Brady, Alexander. Thomas D'Arcy McGee. Toronto, Ontario: The Macmillan Company of Canada Limited, 1925.
Slattery, T.P. The Assassination of D'Arcy McGee. Garden City, New York: Doubleday & Company, Inc., 1968.
Wilson, David A. Thomas D'Arcy McGee: Volume I. Passion, Reason, and Politics, 1825–1857. Montreal, Quebec: McGill-Queen's University Press, 2008.
Wilson, David A. Thomas D'Arcy McGee: Volume II. The Extreme Moderate, 1857–1868. Montreal, Quebec: McGill-Queen's University Press, 2011.
Slattery, T.P. They Got To Find Me Guilty Yet. Garden City, New York: Doubleday & Company Inc., 1972.
Hadrian’s works
Architecture that has withstood the test of time gives us an insight into the culture and values of civilizations from the past. Ancient Roman architecture is widely regarded as some of the most evocative and prominent work of its kind because the emperors who ruled used building designs to convey their strength and enrich the pride of their people. Hadrian was not a man of war like the emperors who preceded him. Instead, he dedicated his time to fortifying his nation’s infrastructure and politicking his way into the hearts of provinces far beyond the walls of Rome. I fell in love with the story of Hadrian for two reasons: his architectural contributions have withstood the test of time, and even though he is so well studied, there is so much about his life we do not know. This research paper will zero in on the life of the Roman emperor Hadrian and how his upbringing and experiences influenced his architectural works. Hadrian struggled during his reign, and within his own mind, because his enthusiasm for Classical Greek culture was fused with the Roman pride his mentors had instilled in him. A description and discussion of the architectural works of Hadrian’s that I have found most interesting will illustrate this fusion even more.
Publius Aelius Hadrianus was born in Italica, Spain, on the 24th of January in the year 76 A.D. He was born to a family proud to be among the original Roman colonists of a province considered one of Rome’s prized possessions: the land offered gold, silver, and olive oil of higher quality than that of Italy. Additionally, Hadrian was born during a period when Italica dominated the Roman literary scene. The city also boasted being the birthplace of Hadrian’s predecessor, mentor, and guardian, Trajan. Hadrian’s upbringing in Italica gave him a very particular perspective on Rome’s rule of expansive territory, as well as on the artistic and intellectual qualities of Roman tradition. While growing up, his “gaze would fall upon statues of Alexander, of the great Augustus, and on other works of art, which…were all of the highest quality.” He developed a sense of pride in being Roman, and this would translate into his future actions as emperor and architect.
Hadrian was strong in both mind and body. He was tall and handsome, and kept in shape through his love of hunting. In the words of H. A. L. Fisher, Hadrian was also “the universal genius.” He was a poet, singer, sculptor, and lover of the classics, so he became known to many of his peers as a Greekling. The synergy of Greek and Roman ideals within Hadrian let him approach his nation’s opportunities and struggles from multiple angles, which is also why he would become such a successful emperor. By the time he came to power, “Hadrian had seen more of the Roman dominion than any former emperor had done at the time of his accession. He knew not only Spain, but France and Germany, the Danube lands, Asia Minor, the Levant and Mesopotamia, and thus had a personal acquaintance with the imperial patrimony that no one else in Rome could rival.”
During his reign as emperor, Hadrian aligned himself with a military policy that was controversial at the time but inspired by his upbringing in the province of Italica. He believed the provinces should be guarded by a locally recruited military, while his Roman legions stayed in a single region for decades; his goal was to give provincial residents a personal interest in protecting themselves. The only Roman descendants who aided in the protection of the provinces were part of the corps d’elite, the best of the best, and were sent only to train the recruits. During his reign, however, Hadrian lost two full legions. The thinning of his military meant he would rely heavily on recruited provincial men as well as on physical barriers, the most famous of which was located in Britain: Hadrian’s Wall.
Hadrian’s arrival in Britain was a spark that ignited a fire of progress and development. During the second century much of London was destroyed by fire, and when the city was rebuilt to an area of about 325 acres, it became by far the largest settlement in Rome’s northern territory. Britons have historically valued the countryside more than city life, as evidenced by their plain cities and attractive gardens, and for this reason many of the other cities rebuilt by the Romans ended up shrinking rather than expanding: the inhabitants simply wanted to live amid the beauty of nature, and moved out of their towns in ever greater numbers as the countryside was developed. The most significant and long-lasting accomplishment of Rome’s rebuilding of its English territory was the design and completion of Hadrian’s Wall.
Hadrian foresaw a symbiotic relationship that he and the British territory could share. It was based on his need for manpower, of which Britain had plenty to offer; in return, Hadrian would fortify the territory and protect it from the northern savages. His past experience protecting territory usually presented him with an expansive section of land to keep account of, but since Britain was surrounded by water in most directions, his first inspiration was to build a wall. Looking back on his struggle in the Rhine-Danube region, Hadrian knew that if a military force were compromised, a stronghold built for retreat would only lead the men to their deaths. His strategic mind led him to believe that mobility was crucial to remaining tactically offensive, so a system of fortifications spread out to increase the area of control and communication was his ideal option.
Hadrian’s Wall began near the River Tyne and stretched all the way to the Solway. It was not meant to be manned at every point along its length, but rather to act as a system that would channel the traffic of his enemies. “Because its course was plotted from one natural advantage to the next, the wall seems to have chosen the most difficult route across the English countryside.” It climbs steep crags and clings to dangerous ridges. Enemy forces would not only have to deal with a man-made wall in their path; in many cases they found themselves faced with natural features that made traversing the wall even more difficult, not to mention the ditch on the north side of the wall, twenty-seven feet wide and nine feet deep. “The gateways allowed the passage of troops for operations to the north and were points where civilian traffic between north and south could be controlled.” The wall was built of mortared masonry up to the River Irthing, where limestone was no longer available locally; from there it continued in turf. Gates were built along the wall roughly every Roman mile (about 1.5 km), and behind each gate a reinforced guard tower housed the patrol.
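As a rough check on scale, the gate spacing quoted above implies on the order of eighty gates, assuming a total wall length of about 80 Roman miles (roughly 117 km), a commonly cited figure that does not appear in the text itself:

```latex
% Approximate gate count along the wall, assuming:
%   spacing = 1 Roman mile (~1.5 km) between gates (from the text)
%   length  = ~80 Roman miles (~117 km) (outside assumption)
N \approx \frac{117\ \text{km}}{1.5\ \text{km}} \approx 78\ \text{gates}
```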
Another reason Hadrian’s wall is such an astonishing feat is that the entire project was done by hand. Roman legionaries would spend time completing a pre-specified length of the wall and then allow the next legion to come along and continue where they left off. Unlike most Roman architecture, the stones used to build the wall were small, about eight inches in width and nine inches in length. Historians attribute the use of small stones to the work required to get them to the wall: every stone had to be carried on the backs of men or animals across a distance of eight miles, all the way from a quarry in Cumberland. Then, without the aid of pulleys or ropes, legionaries placed each stone one by one.
As time went on, the wall was rebuilt and fortified by Hadrian’s successors and became a permanent fixture in the British provincial landscape, far more than just a military structure. Romanesque townships were built along the wall near the guard forts, fully equipped with bath houses, temples, and even full marketplaces.
In the modern world we do not see Hadrian’s Wall as it was during the height of Roman rule, though it is clear that influential proprietors of the wall over time tried their best to maintain the “symbolism and materiality of the Roman remains.” Over the years of the wall’s existence, man and weather have torn it down so that its stones could be used to build churches, roads, and farmhouses. Experienced architects have worked to rebuild the wall over time, and John Clayton is responsible for one of the most significant rebuilds: he purchased a long stretch of farms along the central portion of the wall and used the original stones that had fallen over time to reconstruct it. Clayton also moved many of the inhabitants and communities near the wall to locations further away so as to increase the wall’s visibility.
It is refreshing to know, though, that modern-day Roman enthusiasts can see a virtually untouched portion of the wall between Chollerford and Greenhead known as “Britain’s Wall Country,” “an unspoiled region of open fields, moors and lakes in the county of Northumberland.” Chesters, a site about a half mile west of Chollerford, is home to one of the best excavated wall forts, with remains of towers, gates, steam rooms, cold baths, the commandant’s house, and chambers where soldiers relaxed. The best preserved wall fort in all of Europe to date is located at Housesteads in the same region. The fort is in the shape of a rectangle with rounded edges, and “along its grid of streets are foundations marking the commandant’s house, administrative buildings, workshops, granaries, barracks, hospitals” and more. One of the most Romanesque features of the fort is the presence of latrines, complete with wooden seats, running water, and a flushing system to carry waste away; Britain would not see such luxuries again until the 19th century, when Roman standards were finally equaled. Modern museums along the wall feature many artifacts from the original dwellers and attract tourists from all around the world.
When Hadrian was ten years old, his father passed away. Ancient documentation lends us virtually no details about his mother, but a father figure would have been the most important presence in Hadrian’s upbringing. Fortunately for him, two men played that role in his life. The first was Acilius Attianus, with whom Hadrian would spend the next five years and have his first introduction to the capital city; Attianus also introduced Hadrian to his first formal education. He would return home to Italica for a year or two, only to be summoned back by his other guardian, Trajan.
In order to truly understand the character and reasoning behind more of Hadrian’s architectural works, one must look closely at the influence his cousin, mentor and guardian, Trajan, had on him. From an objective point of view, Trajan paved the way for Hadrian by becoming the first emperor to ever be born outside of Italy, and proved to the people of Rome that “loyalty and ability were of more importance than birth.” Trajan also moved young Hadrian from place to place whenever he saw his perspective become too narrow or close-minded.
At the age of forty, and prior to becoming emperor, Trajan developed relationships with men like Domitian and his predecessor Nerva; the latter would eventually adopt him as his own heir. His status allowed him to usher Hadrian into political positions that gave him the opportunity to interact with powerful people and make a positive impression. Trajan led both Hadrian and Rome by positive example. Moderation and justice were at the forefront of all his decision making, as exemplified by his declaration that no honest man was to be put to death or disfranchised without trial. Trajan brought Hadrian along with him to fight the Dacian wars, and it was here that Hadrian learned how the Roman army was organized and led. He witnessed Trajan tearing “up his own clothes to supply dressing for the wounded when the supply of bandages ran out.” During the outbreak of the second Dacian war, Trajan granted Hadrian the gift of serving as a commanding officer. After Hadrian proved his worth to his guardian and to Rome, Trajan granted him a gift of even greater importance: a diamond ring originally owned by Trajan’s predecessor Nerva, which symbolized that Hadrian would be his successor.
At the age of forty-two, Hadrian for the first time showed Rome that he was an innovator and a man who marched to the beat of his own drum: he wore a beard. In the later days of the Roman Republic, beards had gone out of style; in fact, no emperor prior to Hadrian had worn one. Some historians credit his beard to wanting to look like a philosopher, while others think he wore it to hide a scar running from his chin to the left corner of his mouth. The more likely reason is that Hadrian realized there was no point in carrying on with the custom without reason: during his lifetime, shaving was practically torture for men, because they had access neither to soap nor to steel. Hadrian’s reintroduction of the beard among the Romans would also foreshadow his eventual distaste for all things Roman.
Hadrian adopted Trajan’s sense of modesty and moderation. He did not accept titles bestowed upon him immediately, and would accept one only when he felt he had truly earned it. One of the best examples is the set of titles he chose to have printed on Roman currency during his reign. Historical records from the period documenting Hadrian’s reign incorporate each and every title he was ever given. “But on the emperor’s own coins the full official titulature occurs only in the first year. After that, first imperator was dropped, then even Caesar. Up to the year 123, he is pontifex maximus…holder of the tribunician power…For the next five years his coins proclaim him simply as Hadrianus Augustus.”
As if paying homage to Augustus, the founder of the empire whose title he had come to honor, Hadrian set off to see that the infrastructure of his Roman state was intact and fortified under his direction. After five years of travel to improve cities such as Corinth and Mantinea and the province of Sicily, Hadrian returned to Rome. He had laid down excellent groundwork for his governmental policy, so he finally had time to improve the infrastructure of his nation’s capital. He would soon realize his visions for structures like the Temple of Venus and his most significant architectural accomplishment of all: the Pantheon.
Rome’s Pantheon was originally built by Marcus Vipsanius Agrippa. After it was destroyed by fire in the year 80, Hadrian had it completely redesigned and reconstructed. “The very character of the Pantheon suggests that Hadrian himself was its architect…an impassioned admirer of Greek culture and art and daring innovator in the field of Roman architecture, could have conceived this union of a great pedimental porch in the Greek manner and of a vast circular hall, a masterpiece of architecture typically Roman in its treatment of curvilinear space, and roofed with the largest dome ever seen.” In keeping with his inherent modesty, he decided not to put his own name on the façade of the building; instead he gave credit to the original designer, inscribing it with M. Agrippa. Though there is no hard proof that Hadrian was its only designer, it is reasonable to believe that his mind, infused with Roman and Greek culture, could have conceived this design, one of the most renowned structural feats in human history. The most significant difference between typical Roman and Greek architecture was the importance of height: the Romans believed in reaching for the heavens with their architecture, and the bigger and more grandiose a building or monument was, the better.
It is unusual that we do not find much ancient documentation on the building despite its historical importance. In fact, the only written report from the time is from Dio Cassius, who thought the building had been constructed by its original designer, M. Agrippa, and referred to it as a temple of many gods. “A rectangular forecourt to the north provided the traditional approach, its long colonnades making the brick rotunda, so conspicuous today, appear less dominating; a tall, octastyle pedimented porch on a high podium with marble steps also created the impression of a traditional Roman temple.” The building’s southern exposure revealed to an onlooker the Baths of Agrippa; to the east lay the Saepta Julia, and to the west the Baths of Nero.
The Pantheon is essentially composed of a columned porch and a cylindrical space, called a cella, covered by a dome. Some would argue that the cella is the most essential aspect of the Pantheon, while the porch is present only to give the building a façade. “Between these is a transitional rectangular structure, which contains a pair of large niches flanking the bronze doors. These niches probably housed the statues of Augustus and Agrippa and provided a pious and political association with the original Pantheon.” Once inside, a worshipper would find himself in a magnificently large space illuminated only by a large oculus centered in the ceiling. The walls of the chamber are punctuated with eight deep recesses alternating between semicircular and rectangular in shape. At the south end of the interior is the most elaborate recess, complete with a barrel-vaulted entrance. “The six simple recesses are screened off from the chamber by pairs of marble columns, while aediculae (small pedimented shrines) raised on tall podia project in front of the curving wall between the recesses.” Encircling the entire room just above the recesses is an elaborate, classically styled entablature. The upper portion of the dome was decorated as well, but what remains is mostly from an 18th-century restoration. “The original decoration of the upper zone was a row of closely spaced, thin porphyry pilasters on a continuous white marble plinth.” The floor is decorated in a checkerboard pattern of squares and circles within squares; the tiles are made of porphyry, marbles, and granites, while the circles are made of gilt bronze.
The Pantheon was built almost entirely of concrete, save the porch, which was constructed of marble. From the outside, the domed section would appear to an onlooker to be made of brick, but this is not the case: the bricks in this section are only a veneer, a thin decorative layer. The simple lime mortar popular during the period was made by combining sand, quicklime, and water; when the water evaporated, the concrete set. The Roman concrete used in the construction of the Pantheon, made with pozzolana, acted much like modern Portland cement and would set even under water. Hadrian designed the Pantheon’s dome to be 43.3 meters in diameter, which is also the exact height of the interior room. A cross section of the rotunda reveals that it was based on the dimensions of a perfect circle, and that is what makes the interior space seem so majestic. The sheer size of the dome was never replicated or surpassed until the adoption of steel and other modern reinforcements. What made Hadrian’s dome possible was his use of concentric rings laid down one after another over a wooden framework to create the basic shape of the dome during construction. The rings applied pressure to one another, stabilizing the structure. The lower portion of the dome was thick and made of heavy concrete and bricks, while the upper portion was built thin and used pumice to keep it lightweight.
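To make the rotunda’s geometry concrete, the two dimensions quoted above already determine the inscribed sphere; the following is simply a restatement of those figures:

```latex
% Dome diameter d = interior height h = 43.3 m (from the text), so
r = \frac{d}{2} = \frac{43.3\ \text{m}}{2} \approx 21.65\ \text{m}
% A sphere of radius 21.65 m therefore fits exactly inside the
% rotunda, which is why a cross section reads as a perfect circle.
```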
The exact purpose of the front porch is unknown and, as mentioned before, it may have been added only to give the building a façade. “It consists of a pedimented roof, supported by no less than sixteen monolithic columns, eight of grey Egyptian granite across the front, three on either flank, and two behind them on each side.” By adding this colonnade Hadrian proved that he saw past what temples had originally been used for. Traditionally, the temple cella would never be entered by the public, and so architects honed and focused their craft on the exterior elements of the temple. Hadrian effectively anticipated the Christian church by several centuries in the design of his “House of Many Gods.”
The Pantheon embodies everything Hadrian was as a person during the early portion of his ruling. It was very much a fusion of Greek and Roman principles that mirrored Hadrian’s inner character. He shared grand Roman pride with the people he served, and they would forever see the Pantheon as a symbol of that pride. However, as Hadrian matured as a ruler, saw more of the world, and returned to Rome for short periods at a time, there was a monumental shift in his opinions of his own capital.
Not unlike Trajan, there was another man who played an integral role in Hadrian’s life. His name was Antinous, and although not many specifics are known about his life and relationship with Hadrian, we do know that he was from Bithynia. The two met there when, critics believe, Antinous was eighteen. “To say that he was ‘like a son’ to Hadrian is to put a charitable slant on their rapport. It was customary for a Roman emperor to assume the airs, if not the divine status, of the Olympian god Jupiter.” Though it was never explicitly stated or denied, it is widely believed that Hadrian and Antinous were more than friends; they were lovers.
As a part of Hadrian’s entourage, Antinous naturally went on all of the quests that led him to see the world. It was on one of these expeditions, along the Nile River, that Antinous lost his life, forever plaguing the mind of the now devastated emperor. Some say Antinous was murdered by his shipmates, while others speculate that Hadrian may have sacrificed him in a rite of the Egyptian mystery cults as a way to gain immortality. Nevertheless, Hadrian went on to express his admiration for the boy to the world at large, ordering the production of his image in full-scale statues, busts, and miniature images on coins and other items. “Full lips, slightly pouting; a fetching cascade of curls around his soft yet squared-off face; somewhat pigeon-breasted, but winningly athletic, his backside making an S-curve that begs to be stroked… one could rhapsodise further, but it is more telling to stress the sheer quantity of production.”
The most fascinating explanation I have encountered for Hadrian’s mass production of Antinous’ image is that of classical religious revival. “Hadrian knew about the Christians, whom he regarded as harmless idiots; he waged war against the Jews, who challenged his authority.” He presented Antinous as Dionysos, as Pan, and as a second Apollo. Each of these guises is intricately portrayed on images of Antinous in order to instill Hadrian’s personal views in the people he ruled.
Today the image of Antinous survives even in Western culture. What we perceive as beauty in both men and women has remained remarkably stable for millennia: the symmetrical features and calibrated proportions that Antinous embodied so thoroughly. Across other world cultures the same holds true; even populations largely secluded from the Western world perceive beauty much as we do. As one inspects the image of Antinous methodically, one can only deduce that Hadrian was a man of fine taste.
After a stint in Africa, Hadrian returned to Rome for a short period of time, but felt as if he belonged there no more. “In Rome he hated the court etiquette, at the same time as he insisted on it: the wearing of the toga, the formal greetings, the ceremonies, the endless pressure of business.” So he left for Athens, and felt at home there. His distaste for the capital of his country foreshadowed his political decline and eventual downfall, but his positive contributions to Roman society and historic architecture were far from over.
While in Athens, Hadrian had the opportunity to express his inner Greekling more strongly than ever. He could talk the talk and walk the walk so well in Athens that he undertook the last round of initiation at Eleusis. The Panhellenic council offered him a place to continue leading the people who so dearly looked up to him. Though the council did not have formal political power, it unified the public because it was the only body that could declare a new territory truly Hellenic. While serving the council, and being referred to as Panhellenios, Hadrian constantly immersed himself in the local culture and enjoyed watching the best athletes in Greece perform at the Panhellenic games. The Athenians even granted Hadrian the title of “Olympian.”
At this point in Hadrian’s reign, he seems to have forgotten the lessons of moderation and justice taught to him by Trajan. He was once an emperor reluctant to accept praise from his people, but in Athens he did just the opposite: he designed and ordered the building of a new city called Hadrianopolis. As a testament to his distaste for Rome, a statue of Hadrian was erected at Olympia. The statue is adorned with a lorica, or breastplate, engraved with symbols that depict the character of its wearer. “Hadrian’s lorica shows Athene, flanked by her owl and her snake, being crowned by two graces, and standing atop the Wolf of Rome which suckles Romulus and Remus.” Clearly Hadrian believed deep down that Athens was a city superior to Rome, and the sight of the statue would surely leave a bitter taste in the mouth of any Roman who traveled to Olympia and gazed upon it.
Even after all Hadrian had done for the welfare and protection of Rome, he failed his people in one great respect. He began his rule as an outsider and remained one, because he spent so little time in the capital city. Near the end of his days, the tension placed him under so much stress that he became a tyrant, showing no mercy to anyone who crossed him as their leader. On one hand, the senate understood that he had outsmarted them, and the Italian members were fully aware that they were outnumbered by provincially born citizens. They had additional reason to dislike him, because he had intentionally spent Roman resources to benefit the provinces he visited. On the other hand, “he had given them a fine new city, purged of old abuses, enriched and embellished with magnificent buildings…He had given them cleaner airier houses.” In the eyes of the Romans, though, Hadrian had crossed a line. It was no secret that he had come to shy away from Rome and that he preferred Athens. Fortunately for him, he had seen this end to his reign coming: eight years earlier, he had begun building a villa at Tivoli, the classical Tibur, so that he would be able to spend the end of his days in his own version of paradise.
The most extensive architectural work of Hadrian’s is without a doubt his Villa near Tivoli, built at the base of the town on a plain about 18 miles from Rome. Critics argue over why Hadrian chose this spot: he had an entire empire to choose from, and places like the town of Tivoli itself offered fantastic views as well as better weather. Though Hadrian’s choice of location is criticized from a picturesque point of view, he chose it for more logical reasons. For one, he built his villa on the healthiest spot of land he could find, located on the breezy lowlands of the Apennines, within reach of wind from the west, and protected by hills. The plain was naturally uneven, but the architect leveled it by excavating obstacles in some places and paving others; all eight to ten square miles were eventually completely level, partly natural and partly poured masonry. Another reason Hadrian may have chosen the location is that the land belonged to his wife Sabina, although she played a very negligible part in his life. For all logical purposes, Hadrian chose the spot because he could so easily make the land into anything he wanted with little effort.
Not unlike Versailles, Hadrian’s Villa imposes a formal order through a system of axes, so that nature is dominated by geometry. The architecture is composed of spaces both enclosed and unenclosed. The entire site was built on and around the north, west, and south sides of a giant mound, in some cases cutting well below ground level. A large multistoried wall surmounted the mound and contained cubicles that housed guards and slaves. As has been well established, Hadrian’s architectural mind drew from both Greek and Roman styles; his villa likewise illustrates a fusion of organic and man-made principles. “At Tivoli, it occurs, as it does perhaps even more powerfully on the arcades which form the face of the Palatine hill above the roman forum, that the scale of natural formations and of man-made structures coincides, so that the hills become in a sense man-made, and the structures take on the quality of natural formation.” For the representation of Canopus, a recreation of a resort near Alexandria, Hadrian designed a system of subterranean passages within a ravine to symbolize the River Styx. Hadrian truly felt that he held control of the world in his hands, and saw no bounds to what his works could be or represent.
A modern tourist would enter the villa through an area in the north moving toward the Poikele, yet Hadrian had intended his visitors to enter from an area between the Canopus and the Poikele so as to force them to walk under the huge mound walls filled with servants. The entrance into the Villa illustrates Hadrian’s juxtaposition of circles and squares that would be a recurring geometric theme in the rest of its architecture and layout. Canopus lies to the right of the entrance, with the Poikele to the left, and further on, two baths were in view. Although a further descriptive tour would help immensely in painting the picture of Hadrian’s Villa to the reader, it would take far too many words, so I am going to focus on only a few of the features that I find fascinating about the structure.
There is a space in Hadrian’s Villa known as a cryptoporticus. At its center was a raised pool, about the size and shape of an average American swimming pool. Because the pool was raised, it seemed to hang in the middle of the court, while the double portico that surrounded it gave the structure a heavier feel. The Hall of Doric Pillars beside it is neither Roman nor Greek in design, and feels as though Hadrian was experimenting with an architectural style all his own. The large field at the top of the hill is perfectly level up to the point where it drops off, supported by the Hundred Chambers above a vast valley. It is rectangular in shape with concave ends, and once again we find a pool at its center; around it, what used to be a hippodrome has been recreated as a garden.
The sculptures found at Hadrian’s Villa are so numerous that it is nearly impossible to study ancient sculpture without mention of the monument. Hadrian furnished his villa not only with all the luxuries Rome had to offer, but with all the best artwork. Egyptian figures and sculptures of his friends and family have been found in the ruins. Since each new excavation of the grounds reveals new artifacts, museums around the world display its works. Two statues of Antinous have been found in the ruins: one clearly of Greek design, while the other emanates Egyptian symbolism. Hadrian also had a fondness for portraits, and many were found in the ruins as well; he even went on to change Roman law and popularized self-portraits in the homes of the Roman nobility and upper class.
My overall goal with this paper was to dive headfirst into Hadrian’s life and, hopefully, to see why he built the things he did. Personally, seeing Rome through the eyes of Hadrian has given me a newfound appreciation for what inspires architects to design the things they do. All of Hadrian’s works mentioned in this document reveal both his inner and outer struggles as emperor and, more importantly, have influenced the decisions of architects long after his time. Like the emperors before him, Hadrian used architecture to make a statement about Roman strength and the everlasting objective of emulating the gods. Hadrian’s title set him at the head of the Roman military, and his strategic and tactical sense is demonstrated in his design of the wall in Britain. He was not an emperor set on conquering as much land as possible, but on fortifying the land he already ruled. I set out to illustrate two sides of Hadrian that were prominent in his works: his love for classical Greek culture and the Roman pride he was brought up with. We saw these two aspects outlined in his designs of the Pantheon and his Villa. The two designs also show how, at the beginning of his reign and directly after the influence of Trajan, Hadrian was still true to his Roman origin; by the end of his term, he had almost completely disregarded the culture of his capital city and fully embraced his Hellenistic tendencies. The Pantheon and the Villa compare and contrast his Greco-Roman outlook within their own designs. What captivates me even more about Hadrian is that there are still so many mysteries about his life to uncover. Fortunately for us, he left behind artifacts and even entire monuments for us to interpret and to imagine what life in ancient Rome would have been like.
What were Prisoner of War camps like during the Civil War?
What were prisoner of war camps like during the Civil War? What were the conditions, and how did they affect the prisoners?
During the Civil War, prisoner of war camps were used to hold enemy soldiers captured outside of their territory; those camps were overcrowded, disease-ridden, and in terrible condition. The statistics behind the prisoner of war camps have been compiled from multiple sources and records. In the four years of the Civil War, more than 150 POW camps were established in the North and South combined (“Prisons”). That number of camps may seem large, but it clearly was not enough, given the issues with overcrowding. Though the exact number of deaths is not certain, records state that 347,000 men died in the camps in total, 127,000 from the Union and 220,000 from the Confederacy (“Prisons”). Of the men who died in the Civil War, more than half were prisoners of war. The camps should never have rivaled the battlefield in deadliness, yet men in the camps were usually left to die, suffering mental trauma and health complications as severe as, if not worse than, those of the soldiers fighting the war.
Belle Isle offers an example of a prison that valued extraneous things over its prisoners. From 1862 to 1865, Belle Isle held prisoners in Virginia under terrible conditions, according to the poet Walt Whitman. The prisoners endured biting cold, filth, hunger, loss of hope, and despair (“Civil…Prison”). Belle Isle had an iron factory and a hospital on the island, yet barracks were never built (Zombek); the prisoners had only small tents to protect them from the elements. The lack of shelter shows how low the prisoners’ needs ranked next to the hospital and iron factory. As an open-air stockade, Belle Isle was also extremely difficult to escape (Zombek).
The disregard for prisoners’ safety and protection from the elements at Elmira was just as egregious. Elmira prison opened in July of 1864. It was known for a terrible death rate of 25% and for holding 12,123 men when the regulated capacity was 4,000 (“Civil…Prison”). The urgent need for medical supplies was ignored by the capital (“Elmira”). When winter came to Elmira, the prisoners’ clothing was taken, and when Southerners were sent packages, anything that was not grey was burned (“Elmira”). The mistreatment of prisoners was intentional at Elmira, as at other prisons. Even after this glimpse at some prisons and the overall statistics of the camps, the following is still quite shocking: Andersonville, a Confederate prisoner of war camp, is painted as the worst one in history.
Prisoners at Andersonville were so malnourished they looked like walking bones; they began to lose hope and turned to their Lord. In Andersonville, shelter, or the lack thereof, was another issue. Prisoners had to make do with twigs and blankets due to inflation in lumber prices (“Civil…Deadliest”), an illustration of how the price of every material added up and contributed to the conditions. Within 14 months, 13,000 of the 45,000 prisoners died. The prison was low on beef, cornmeal, and bacon rations, meaning the prisoners lacked vitamin C; as a result, most got scurvy (“Civil…Prison”). With the guards turning a blind eye, prisoners had to fend for themselves. Some took this lack of authority too far, and those were the “Andersonville Raiders”: they stole food, attacked their fellow prisoners, and looted their shelters (Serena). Andersonville in particular made people turn violent and caused them to lose faith in humanity. A 15-foot-high stockade guarded the camp, though the true threat was a line drawn 19 feet inside the stockade to keep prisoners away from the walls; if a prisoner was caught crossing it, he would be shot and killed (Serena). This technique was unnecessary and a waste of resources. Beyond the conditions, there was the location of Andersonville itself. A swamp ran through the camp, and with little access to running water or toilets, prisoners used the swamp, polluting the water and making it even less drinkable (Serena). In the process of building Andersonville prison, slave labor was used to build the stockade and trenches (Davis); the camps abused their power not only to harm prisoners but to exploit slaves. As the numbers of prisoners swelled, they began having trouble finding space to sleep (Davis). With capacity climbing and disgusting conditions, the camp was a breeding ground for disease. Andersonville was assumed to be the optimal position for a POW camp because of the food nearby; the only problem was that farmers did not wish to sell crops to the Confederacy (“Myths”). This is just another example of how Andersonville would have fared better if given more assistance.
Was any justice ever served to the men who ran the camp?
James Duncan and Henry Wirz were both officers at Andersonville; after the prison closed, they were both charged with war crimes (Davis). Wirz’s two-month trial started in August 1865 and included 160 witnesses. Wirz had not shown any particular distaste toward prisoners and served as a scapegoat for many of the allegations; he was charged with murder and with harming the lives and health of Union soldiers (“Henry”). As a commander, Henry Wirz witnessed all the mistreatment in Andersonville, making him liable for the thousands of prisoners who died, and he was executed (“Henry”). Unlike Wirz, Duncan was lucky: after his trial he was sentenced to 15 years, and after spending a year at Fort Pulaski he escaped (Davis). Duncan was never truly punished for his actions.
With the logistics of Andersonville established, it is worth understanding the arguments of both the Union and the Confederacy. Why were prisoners treated so poorly when the necessary supplies existed? The North had access to a surplus of medical supplies, food, and other resources, meaning it could have treated the prisoners better (“Prisons”); it had no reasoning beyond wanting to save resources and torment Confederate soldiers. In the North, prisoners were simply left shelterless, without protection from the elements (Macreverie). In contrast, the South did not intend to have such poor conditions. In Andersonville, for example, the prisoners and guards were fed the same rations (Macreverie). The South struggled more with food than the North; those tending the fields had no shoes and only a handful of cornmeal or a few peanuts (“Prisons”). The prisoners went unfed due to a lack of preparation, not policy. Both sides tried to reduce the reasons for neglect in the camps to food shortages and vengeance, and both ran the camps differently while facing the same problem: shortages of supplies (“Myth”). The South arguably tried its best, though its best was not good enough; the North had the luxury of choosing how it treated prisoners, and it chose wrongly.
During the Civil War, prisoner of war camps held enemy soldiers captured outside of their territory, and those camps were overcrowded, disease-ridden, and in terrible condition. It is safe to say Andersonville was a memorable prison, but for all the wrong reasons. Neither side’s justification for its actions holds up well, and the statistics involving the camps are genuinely shocking. All in all, prisoner of war camps were unsafe and had terrible conditions, but they served their purpose of holding captured soldiers from the opposing side during the war.
Formation of Magmatic-Hydrothermal Ore Deposits
Introduction:
Magmatic-hydrothermal ore deposits provide the main source of many trace elements, such as Cu, Ag, Au, Sn, Mo, and W. These elements are concentrated in tectonic settings by fluid-dominated magmatic intrusions in Earth’s upper crust, along convergent plate margins where volcanic arcs are created. Vapor and hypersaline liquid are the two forms of magmatic fluid important to the ore deposits. The term ‘fluid’ as used here means a non-silicate, aqueous liquid or vapor; hypersaline liquid, also known as brine, has a salinity of >50 wt%. The salinities of magmatic environments that can form ore deposits span a substantial range, from a very low 0.2-0.5 wt% to the hypersaline >50 wt%. The salinity of a fluid was long thought to be one of the main controls on which elements formed under specific conditions; however, recent developments support a newer view that is discussed later. There are multiple types of ore deposits, such as skarn, epithermal (high and low sulfidation), porphyry, and pluton-related veins. However, two types of ore deposit, porphyry and epithermal, produce the greatest abundance of trace elements around the world (Hedenquist and Lowenstern, 1994).
Porphyries, one type of ore deposit, occur adjacent to or hosted by intrusions, typically develop in hypersaline fluid, and are associated with Cu ± Mo ± Au, Mo, W, or Sn. Another type of ore deposit, which occurs either above the parent intrusion or distant from the magmatic source, is known as epithermal and relates to Au-Cu, Ag-Pb, and Au (Ag, Pb-Zn). The term epithermal rightly refers to ore deposits formed at low temperatures of <300 °C and at shallow depths of 1-2 km (Hedenquist and Lowenstern, 1994). Epithermal ore deposits can be further separated into two types, high sulfidation and low sulfidation, which are shown in Figure 1. High sulfidation epithermal deposits form above the parent intrusion, near the surface, from oxidized, highly acidic fluids. These systems are rich in SO2- and HCl-rich vapor that gets absorbed into near-surface waters, causing argillic alteration (kaolinite, pyrophyllite, etc.); the highly acidic waters are then progressively neutralized by the host rock. Low sulfidation deposits also occur near the surface, but away from the source rock, as seen in Figure 1, and are dominated by meteoric waters. The fluids are reduced, with a neutral pH and with CO2, H2S, and NaCl as the main fluid species. The main difference between the two epithermal fluids is how much they have equilibrated with their host rocks before ore deposition (White and Hedenquist, 1995). In addition to the two main types of ore-forming deposits, there are certain environments where they tend to occur.
There are three important recurring ore-forming environments around the globe that produce these trace elements. The first is the deep crust, where gold deposits form by mixing and phase separation among aquo-carbonic fluids. The second is granite-related Sn-W veins, where the interaction of hot magmatic vapor and hypersaline magmatic liquid with cool, surface-derived meteoric water represents a widespread mechanism for ore mineral precipitation by fluid mixing in the upper crust. The third is porphyry-epithermal Cu-Mo-Au systems, where the varying density and degree of miscibility of saline fluids between surface and magmatic conditions point to the role of fluid phase separation in ore-metal fractionation and mineral precipitation (Heinrich et al., 2007).
Figure 1 (Hedenquist and Lowenstern, 1994)
PORPHYRY-EPITHERMAL Cu-Mo-Au
The formation of magmatic-hydrothermal ore deposits is a complicated but geologically brief process that passes through numerous phases. A general depiction can be seen in Figure 2, showing the different components involved in the system. Hydrothermal ore deposits are initiated by the ‘generation of hydrous silicate magmas, followed by their crystallization, the separation of volatile-rich magmatic fluids, and finally, the precipitation of ore minerals in veins or replacement deposits’ (Audetat, Gunther, and Heinrich, 1998). Porphyry magma chambers have been dated using individual zircon grains. Since the magma reservoirs in which porphyry deposits form occur in the upper crust, they are found to have a maximum life span of <1 Ma. The porphyry stocks struggle to remain ‘at the temperature of mineralization (>350 °C) for more than even a few tens of thousands of years, even with massive heat advection by magmatic fluids’ (Quadt et al., 2011). The hosted zircons analyzed contain significantly different ages, ranging over a span of millions of years, indicating multiple pulses of porphyry emplacement and mineralization. Diffusive equilibration between magmatic fluids and altered rocks occurs even faster than mineralization. Thermal constraints suggest that the porphyries and their constituent ore fluids underwent the ore-forming process in multiple spurts of as little as 100 years each. The methods behind this are discussed later (Quadt et al., 2011).
Figure 2. Illustration of an ore-forming magmatic-hydrothermal system, emphasizing the scale and transient nature of hybrid magma with variable mantle (black) and crustal (gray) components. Interacting processes operate at different time scales, depending on the rate of melt generation in the mantle, the variable rate of heat loss controlled by ambient temperature gradients, and the exsolution of hydrothermal fluids and their focused flow through a vein network, where Cu, Au, or Mo are enriched 100-fold to 1000-fold compared to magmas and crustal rocks (combining Dilles, 1987; Hedenquist and Lowenstern, 1994; Hill et al., 2002; Richards, 2003). (Quadt et al., 2011)
Chemical and temperature gradients are important because the selective dissolution and re-precipitation of minerals enriches rare elements to form ore deposits. Most ore deposits form in the upper crust, where the advection of magma and hot fluids into cooler rocks creates rather steep temperature gradients. Temporary steep gradients in pressure, density, and miscibility, which arise as brittle deformation of the rocks forms vertical vein networks, show the physical properties of miscible fluids to be of equal importance. H2O-CO2-NaCl controls the composition of crustal fluids, causing variations in physical properties that in turn affect the chemical stability of dissolved species (Heinrich, 2007).
Evidence from fluid inclusions points to the interaction of multiple fluids in volcanic arcs through both fluid mixing and fluid phase separation. These fluid inclusions provide insight into the substantial role the geothermal gradient plays in the formation of these ore deposits and why they occur only under certain environmental conditions. Salinity was thought to be the primary control on which elements were precipitated, but it is now argued that vapor and sulfur play a key role, especially for Cu-Au deposits. Because the evidence suggests one factor may bear greater significance than the other, both are discussed and compared. The addition of sulfur causes Cu and Au to prefer the vapor phase: Figure 3 shows that vapor/liquid concentration ratios can surpass 1, allowing these elements to shift more easily into the vapor phase, where they can then be transported.
Figure 3 (left). Experimental data for the partitioning of a range of elements between NaCl-H2O-dominated vapor and hypersaline liquid, plotted as a function of the density ratio of the two phases coexisting at variable pressures (modified from Pokrovski et al. 2005; see also Liebscher 2007, Figs. 13, 14). As required by theory, the fractionation constant of all elements approaches 1 as the two phases become identical at the critical point for all conditions and bulk fluid compositions. Chloride-complexed elements, including Na, Fe, and Zn but also Cu and Ag, are enriched to similar degrees in the saline liquid, according to these experiments in S-free fluid systems. Hydroxy-complexed elements, including As, Si, Sb, and Au, reach relatively higher concentrations in the vapor phase, but never exceed their concentration in the liquid (m_vapor/m_liquid < 1). Preliminary data by Pokrovski et al. (2006a,b) and Nagaseki and Hayashi (2006) show that the addition of sulfur as an additional complexing ligand increases the concentration ratios for Cu and Au in favor of the vapor (arrows); in near-neutral pH systems (short arrows) the increase is minor, but in acid and sulfur-rich fluids (long arrows) the fractionation constant reaches ~1 or more, explaining the fractionation of Cu and Au into the vapor phase as observed in natural fluid inclusions. (Heinrich, 2007)
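For clarity, the fractionation constant plotted in Figure 3 can be written out explicitly; the notation below simply restates the ratio described in the caption, with m denoting concentration (molality):

```latex
% Vapor/liquid fractionation (partition) constant for element i,
% as plotted in Figure 3:
K_i^{v/l} = \frac{m_i^{\mathrm{vapor}}}{m_i^{\mathrm{liquid}}}
% K -> 1 as the two phases converge at the critical point;
% K > 1 means element i is enriched in the vapor, as sulfur
% complexation can produce for Cu and Au.
```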
VAPOR AND HYPERSALINE LIQUIDS
The solubility of ore minerals increases as water vapor density increases with the transient pressure rise along the liquid-vapor equilibrium curve. The nature of this occurrence suggests ‘that increasing hydration of aqueous volatile species is a key chemical factor determining vapor transport of metals and other solute compounds’ (Heinrich et al., 2007). The high salinity of hypersaline fluid systems allows vapor and liquid to coexist beyond water’s supercritical point. Increasing water vapor density, accompanied by an increase in temperature, leads to higher metal concentrations as an inherent result of the increased solubility of the minerals in vapor. ‘[Observed] metal transport in volcanic fumaroles, and even higher ore-metal concentrations in vapor inclusions from magmatic-hydrothermal ore deposits’ (Heinrich et al., 2007), has led to research aimed at quantifying vapor transport. Fractionation is of key importance because elements behave differently when partitioning between coexisting vapor and hypersaline liquid. Certain elements such as ‘Cu, Au, As, and B partition into the low-density vapor phase while other ore metals including Fe, Zn, and Pb preferentially enter the hypersaline liquid’ (Heinrich et al., 2007). In short, vapor is now known to contain higher concentrations of ore metals than any other known geological fluid.
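One common way to express the density control described above treats dissolution in the vapor as hydration by an effective number n of water molecules. The form below is an illustrative sketch of that idea, not an equation taken from the papers cited here:

```latex
% Hydration model for metal solubility in vapor (illustrative):
%   MX(solid) + n H2O(vapor) <=> MX \cdot n H2O(vapor)
% which, at fixed temperature, gives approximately
\log m_{\mathrm{MX}} \;\approx\; n \log \rho_{\mathrm{H_2O}} + b
% where rho is water vapor density and b absorbs the equilibrium
% constant; higher vapor density thus means higher metal solubility.
```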
SULFUR CONTRIBUTION TO VAPOR
‘Sulfur is a major component in volcanic fluids and magmatic-hydrothermal ores including porphyry-copper, skarn, and polymetallic vein deposits, where it is enriched to a greater degree than any of the ore metals themselves’ (Seo, Guillong, and Heinrich, 2009). Sulfur is necessary for the precipitation of sulfur-bearing minerals such as pyrite and anhydrite, and sulfide is an essential ligand in metal-transporting fluids, increasing the solubility of Cu and Au. Introducing sulfur to Cu and Au in the vapor phase can also make them relatively volatile. Sulfur changes the conditions under which Cu and Au enter the vapor phase, as seen in Figure 3, and sheds light on why it is possible for Cu and Au to partition into low-density magmatic vapor (Heinrich et al., 2007). In short, sulfur makes it easier for Cu and Au in particular to enter the vapor phase, where they can be transported more readily, making sulfur a key to the high concentrations of ore metals in the vapor phase.
Methods and Results:
ZIRCON DATING USING LA-ICP-MS AND ID-TIMS
Figure 4 (above, left). Rock slab from Bajo de la Alumbrera, showing early andesite porphyry (P2, left part of picture and xenolith in lower right corner) that solidified before becoming intensely veined and pervasively mineralized by hydrothermal magnetite + quartz with disseminated chalcopyrite and gold. After this first pulse of hydrothermal mineralization, a dacite porphyry intruded along an irregular subvertical contact (EP3, right part of picture), before both rocks were cut by a second generation of quartz veins (diagonal toward lower right). (Quadt et al., 2011)
Figure 5 (above, right). A: Concordia diagram with isotope dilution-thermal ionization mass spectrometry (ID-TIMS) results from the first (red ellipses, P2) and second (blue ellipses, EP3) Cu-Au mineralizing porphyry of Bajo de la Alumbrera. B, C: For comparison, published laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) analyses and their interpreted mean ages and uncertainties on the same age scale (replotted from Harris et al., 2004, 2008; LP3 is petrographically indistinguishable from EP3, but also cuts the second phase of ore veins). All errors are ± 2σ. MSWD = mean square of weighted deviates. (Quadt et al., 2011)
‘Porphyry Cu ± Mo ± Au deposits form by hydrothermal metal enrichment from fluids that immediately follow the emplacement of porphyritic stocks and dikes at 2-8 km depth’ (Quadt et al., 2011). Samples were taken from two porphyry Cu-Au deposits, the first from Bajo de la Alumbrera, a volcanic complex located in northwestern Argentina. Uranium-lead LA-ICP-MS (laser ablation-inductively coupled plasma-mass spectrometry) and ID-TIMS (isotope dilution-thermal ionization mass spectrometry) analyses were performed on zircons from the samples to determine concordant ages of single crystals from two mineralizing porphyry intrusions. The LA-ICP-MS data were obtained previously and are represented in Figure 5, B and C. ID-TIMS analyzed samples from the two intrusions. One sample, BLA-P2, is quartz-magnetite(-K-feldspar-biotite) altered P2 porphyry, while the other, BLA-EP3, was taken 5 m from the EP3 contact to exclude contamination. BLA-EP3 ‘truncates the first generation of hydrothermal quartz-magnetite veinlets associated with P2, and is in turn cut by a second generation of quartz veins’ (Quadt et al., 2011). The results were compared with the previously existing data, and the P2 porphyry grain ages range from 7.772 ± 0.135 Ma to 7.212 ± 0.027 Ma. The maximum age for subvolcanic intrusion, solidification, and first hydrothermal veining of P2 is as late as 7.216 ± 0.018 Ma (P2-11 is the most precise of the young group), when the zircons crystallized from the parent magma. The EP3 porphyry truncated these veins and provided concordant single-grain ages ranging from 7.126 ± 0.016 Ma to 7.164 ± 0.057 Ma. It is concluded that the two intrusions are separated in age by 0.090 ± 0.034 Ma; with these data, it can be said that the two porphyries intruded within a period of 0.124 m.y. of each other.
Figure 6. Concordia diagrams with isotope dilution-thermal ionization mass spectrometry (ID-TIMS) results from three porphyries (A: KM10, KM2, 5091-400; B: KM5; C: D310) bracketing two main pulses of Cu-Au mineralization at Bingham Canyon (Utah, USA); Re-Os (molybdenite) data are from Chesley and Ruiz (1997). (Quadt et al., 2011)
Figure 6. Concordia diagrams with isotope dilution-thermal ionization mass spectrometry (ID-TIMS) results from three porphyries (A: KM10, KM2, 5091-400; B: KM5; C: D310) bracketing two main pulses of Cu-Au mineralization at Bingham Canyon (Utah, USA); Re-Os (molybdenite) data are from Chesley and Ruiz (1997). (Quadt et al., 2011)
The second set of samples was taken from Bingham Canyon in Utah, USA, and came from pre-ore, syn-ore, and post-ore porphyry intrusions. All three porphyry intrusions were dated using ID-TIMS analysis and yielded the results seen in Figure 6. It was found that two Cu-Au mineralization pulses occurred. The first is associated with a quartz monzonite porphyry that existed prior to the mineralization of the Cu-Au in the porphyry. A second pulse of Cu-Au is known to have occurred because it cuts through the latite porphyry and truncates the first veins. Thirty-one concordant ages were obtained collectively from the three intrusions, and the most precisely dated grains show that all the porphyries overlap in an age range of 38.10-37.78 Ma. A single outlying grain of younger age is present in the oldest intrusion and is attributed to residual Pb loss. Interpretation of the three porphyries and the two Cu-Au pulses indicates a window of 0.32 m.y. for their occurrence. All three intrusions also contain significantly older concordant grains, dated as far back as 40.5 Ma, implying a minimum lifetime of the magmatic reservoir of 0.80-2 million years. (Quadt et al., 2011) Errors in the analyzed zircon grains can be minimized if crystals that have undergone Pb loss are avoided or removed by chemical abrasion. The lifetime of the mineralization of a single porphyry matters for alternative physical models of magmatic-hydrothermal ore deposits, which are expected to be constrained to a lifetime of less than 100 k.y. Comparison of the porphyry intrusions at both sites provided substantial evidence of the relatively short lifespan of their formation: at both sites, the two consecutive pulses occurred less than 1 m.y. apart, 0.09 m.y. and 0.32 m.y. respectively.
FLUID INCLUSIONS: Sn-W VEINS
Mineral deposits of Sn-W are commonly formed by the mixing of magmatic fluids with external fluids along the contact zones of granitic intrusions (Heinrich, 2007). Tin precipitation was shown to be driven by the mixing of hot magmatic brine with cooler meteoric water by using LA-ICP-MS to measure fluid inclusions trapped before, during, and after the deposition of cassiterite (SnO2) (Audetat, Gunther, and Heinrich, 1998). The fluid inclusions that formed in minerals during ore formation recorded temperatures between 500 and 900 °C at several kilometers depth. The inclusions typically range in size from 5 to 50 micrometers. To demonstrate the importance of fluid-fluid interaction in the formation of magmatic-hydrothermal ore deposits, the Yankee Lode was analyzed. The Yankee Lode is a magmatic-hydrothermal vein deposit located in eastern Australia and is part of the Mole Granite intrusion. The vein consists primarily of quartz and cassiterite that is well preserved in open cavities. Two quartz crystals were analyzed; both show the same pattern of hydrothermal growth and precipitation, represented by successive zones of inclusions, as seen in Figure 7.
Fig. 7 (A) Longitudinal section through a quartz crystal from the Yankee Lode Sn deposit, showing numerous trails of pseudosecondary fluid inclusions and three growth zones recording the precipitation of ilmenite, cassiterite, and muscovite onto former crystal surfaces. The fluid inclusions shown in the right part of the figure represent four different stages in the evolution from a magmatic fluid toward a meteoric water-dominated system. Thtot corresponds to the final homogenization temperature. (Audetat, Gunther, and Heinrich, 1998)
There are indications of boiling fluid throughout the entire history of the quartz precipitation, given the presence of both low-density vapor inclusions and high-density brine inclusions. Apparent salinities of both inclusion types were determined by microthermometric measurements, and ‘Pressure for each trapping stage was derived by fitting NaCl equiv values and homogenization temperatures (Thtot) of each fluid pair into the NaCl-H2O model system’ (Audetat, Gunther, and Heinrich, 1998). These data show that three pulses of extremely hot fluid were injected into the system before cool-water mixing began, each accompanied by a temporary increase in pressure. The pressure increases are noted, along with some of the analyzed fluid inclusions, in Figure 7. In this system tin is the main precipitating ore-forming element, as represented in Figure 8. The initial Sn concentration of 20 wt% starts to drop drastically at the onset of cassiterite precipitation; by stage 23, represented in Figure 8C, only 5% of the initial concentration of Sn remains. At the same stage, the non-precipitating elements show that the fluid mixture still contains 35% magmatic fluid, indicating that the combined chemical and thermal (cooling) effects of fluid mixing caused the precipitation of cassiterite. Three pulses of magmatic fluid entered the vein before cassiterite began to form, but mixing with cool meteoric groundwater, and with it Sn precipitation, did not begin until the third pulse. This demonstrates that fluid-fluid mixing is critical to ore precipitation (Audetat, Gunther, and Heinrich, 1998).
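The dilution logic in this paragraph can be made explicit with a small calculation. Conservative (non-precipitating) elements track how much magmatic fluid remains in the mixture, so any extra drop in Sn beyond that dilution must reflect cassiterite precipitation. This is a minimal sketch under one reading of the quoted numbers (35% magmatic fluid remaining, Sn at 5% of its initial value at stage 23); the function and variable names are ours.

def fraction_precipitated(conservative_fraction, remaining_fraction):
    """Fraction of a metal removed by mineral precipitation, given the
    fraction of magmatic fluid left in the mixture (tracked by
    conservative elements) and the metal's observed remaining fraction
    relative to its initial magmatic concentration."""
    expected_if_only_diluted = conservative_fraction
    return 1.0 - remaining_fraction / expected_if_only_diluted

# Stage 23 of the Yankee Lode record, as read from the text:
# ~35% magmatic fluid remains, Sn is down to ~5% of its initial value.
f = fraction_precipitated(0.35, 0.05)
print(f"~{f:.0%} of the Sn was removed as cassiterite")  # ~86%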
Another process operates in this system alongside the precipitation of Sn: the magmatic vapor phase selectively transports copper and boron into the liquid mixture, as represented in Figure 8D. Boron’s first marked reduction occurs at stage 25 in Figure 8D, exactly where tourmaline begins to precipitate. Note that the concentration of B remained near its original magmatic value in stages 23 and 24, while the non-precipitating elements simultaneously underwent substantial dilution. B also decreased in stages 26 and 27 relative to its initial value, but not as much as would be expected given the continual growth of tourmaline extracting B from the fluid. Copper follows the same trend as boron, retaining its original magmatic value in stages 23 and 24, indicating an excess of these two elements. The vapor inclusions, relative to the coexisting brine inclusions, were found to be selectively enriched in Cu and B. The excess is therefore explained by condensation of magmatic vapor into the mixing liquids, as Cu and B prefer to partition into the vapor phase rather than the saline liquid, unlike the other elements. It has been suggested that Cu is stabilized in a sulfur-enriched vapor phase, as opposed to metals that are stabilized in brine by chloro-complexes. Gold (Au) is thought to behave similarly to Cu, which could explain why it is selectively coupled with Cu and As in high-sulfidation epithermal deposits (Audetat, Gunther, and Heinrich, 1998).
Fig. 8. (Left) Evolution of pressure, temperature, and chemical composition of the ore-forming fluid, plotted on a relative time scale recorded by the growing quartz crystal. (A) Variation in temperature and pressure, calculated from microthermometric data. Hot, magmatic fluid was introduced into the vein system in three distinct pulses before it started to mix with cooler meteoric groundwater. (B) Concentrations of non-precipitating major and minor elements in the liquid-dominant fluid phase, interpreted to reflect progressive groundwater dilution to extreme values. (C) A sharp drop in Sn concentration is controlled by the precipitation of cassiterite. (D) B and Cu concentrations reflect not only mineral precipitation (tourmaline) but also the selective enrichment of the brine-groundwater mixture by vapor-phase transport. (Audetat, Gunther, and Heinrich, 1998)
Fig. 9 (Right) Partitioning of 17 elements between magmatic vapor and coexisting brine, calculated from analyses of four vapor and nine brine inclusions in two ‘boiling assemblages.’ At both pressure and temperature conditions recorded in these assemblages, Cu and B strongly fractionate into the magmatic vapor phase. (Audetat, Gunther, and Heinrich, 1998)
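The element partitioning summarized in Figure 9 reduces to a vapor/brine partition coefficient for each element: the concentration ratio between coexisting vapor and brine inclusions. The sketch below shows the basic bookkeeping; the concentrations are placeholders for illustration, not the measured values behind Figure 9.

# Vapor/brine partition coefficient: D = c_vapor / c_brine.
# D > 1 means the element fractionates into the magmatic vapor.
def partition_coefficient(c_vapor, c_brine):
    return c_vapor / c_brine

# Hypothetical concentrations (ppm) for illustration only.
assemblage = {"Cu": (9000.0, 3000.0), "B": (2400.0, 800.0),
              "Na": (30000.0, 120000.0), "Pb": (150.0, 1500.0)}
for element, (c_v, c_b) in assemblage.items():
    d = partition_coefficient(c_v, c_b)
    side = "vapor" if d > 1 else "brine"
    print(f"{element}: D = {d:.2f} -> prefers {side}")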
SALT PRECIPITATION
Fluids released from the upper crustal plutons associated with magmatic-hydrothermal systems are usually saline, and phase separation splits them into very low-salinity vapors and high-salinity brines, as discussed earlier. Salt precipitation can have a major impact on the permeability of a system and on ore formation along the liquid-vapor-halite curve, favoring the precipitation of some ore minerals over others. Halite-bearing fluid inclusions from porphyry deposits were analyzed using microthermometry, revealing that many inclusions homogenize by halite dissolution (Lecumberri-Sanchez et al., 2015).
Based on the hypothesis, formed from the examination of fluid inclusions, that halite saturation is widespread in magmatic-hydrothermal fluids, further data were collected and studied. Roughly 11,000 fluid inclusions from 57 different porphyry systems were screened to identify halite-bearing inclusions, of which there were about 6,000 in the data set. These inclusions were then subdivided by mode of homogenization, either by vapor-bubble disappearance or by halite dissolution, and 52 of the 57 porphyry systems (about 91%) contained inclusions that homogenize by halite dissolution. The pressure at homogenization was then calculated from the PVTX (pressure-volume-temperature-composition) properties of H2O-NaCl, and the pressures at fluid-inclusion homogenization were found to exceed 300 MPa. If significant fluid-inclusion migration (several millimeters) had occurred, water loss could have produced salinity and density changes; this explanation is rejected, however, because migration of no more than a few micrometers is common. With no evidence of migration, the more plausible explanation is heterogeneous entrapment of halite, reflected in highly variable homogenization temperatures (a spread on the order of 100 °C), meaning halite saturation is thought to occur at the time of trapping. The coexistence of vapor inclusions with brine inclusions that homogenize by halite dissolution is a result of halite saturation along the liquid-vapor-halite curve. Halite trapped on the surface of another growing mineral has also been observed, meaning that ‘heterogeneous entrapment of solid halite inside FIs is a natural consequence of halite saturation’ (Lecumberri-Sanchez et al., 2015).
Figure 10. Left: Pressure-salinity projection of the H2O-NaCl phase diagram at 400 °C (Driesner and Heinrich, 2007), showing a potential mechanism for copper sulfide mineralization via halite (H) saturation. Destruction of the liquid (L) phase results in partitioning of H2O to the vapor (V), and of Cu and Fe to the solid phase. The right side shows the same process schematically. (Lecumberri-Sanchez et al., 2015)
Halite saturation usually occurs along the liquid-vapor-halite curve, where vapor, liquid, and halite all coexist. Because halite precipitates at shallow crustal levels, other ore minerals are able to precipitate out of the liquid there. The Na-, Cl-, Fe-, and Cu-rich liquid + vapor assemblage traverses the phase boundary into the more stable vapor + halite field, as seen in Figure 10. Once this boundary is crossed, the liquid fraction decreases and the Cu-Fe sulfides (±Au) that were dissolved in the liquid begin to precipitate. It can be concluded that salt saturation acts as a precipitation mechanism in magmatic-hydrothermal fluids. This allows the rapidly ascending vapor phase to transport sulfur and gold upward; however, the mechanism is limited by the availability of reduced sulfur. The disproportionation of SO2 occurs at around the same temperatures as halite saturation, which provides the needed sulfur. This indicates that salinity is not the only key component in the formation of magmatic-hydrothermal deposits; sulfur is of equal if not greater importance (Lecumberri-Sanchez et al., 2015).
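Read against Figure 10, the mechanism can be condensed into one schematic reaction; this is our shorthand for the process described above, not notation from Lecumberri-Sanchez et al. (2015):

$$\text{liquid (Na-Cl-Fe-Cu-rich brine)} \;\rightarrow\; \text{vapor (H}_2\text{O-rich)} \;+\; \text{halite (NaCl)} \;+\; \text{Cu-Fe sulfides}\;(\pm \text{Au})$$

Destroying the liquid forces each component into the phase that will take it: water to the vapor, sodium and chloride to halite, and the chalcophile metals to the sulfide solids.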
SULFUR in a Porphyry Cu-Au-Mo System
In order to better understand the role sulfur plays in high-temperature metal segregation by fluid phase separation, two porphyry Cu-Au-Mo deposits were examined along with two granite-related Sn-W veins and barren miarolitic cavities. The fluid inclusion assemblages underwent microthermometric analysis to measure salinities. No modification after entrapment had occurred, and the homogenization temperatures of the brine inclusions ranged from about 323 to 492 °C. This spread of temperatures (on the order of 100 °C) indicates heterogeneous entrapment, signifying halite saturation at the time of fluid-inclusion trapping. LA-ICP-MS was used to measure absolute element concentrations, with Na as an internal standard. The results were combined with the microthermometry data to estimate the P-T conditions of brine + vapor entrapment (Seo et al., 2009).
Sulfur quantification in fluid inclusions was done using two different ICP-MS instruments, a sector-field MS and a quadrupole MS, on homogeneous inclusions with similar salinities (42.4 ± 1.2 wt% NaCl equiv). The size of the inclusions being analyzed can limit the ability to detect sulfur. The quantification shows that the dominant components of the coexisting brine-vapor inclusions are NaCl, KCl, FeCl2, Cu, and S. The concentrations of Cu and S are very similar and follow the same trend, as seen in Figure 11, when normalized to Na (the dominant cation). Figure 11 shows the correlation of S/Na with Cu/Na, with a slope of 1 on a mass basis, corresponding to a molar ratio of about 2:1 S:Cu (equal mass concentrations of S and Cu imply roughly two moles of S per mole of Cu, since the atomic mass of Cu, about 63.5, is roughly twice that of S, about 32.1). Figure 12 represents the fractionation behavior of the elements, some preferring the brine and some the vapor. The elements are normalized to Pb, which prefers the brine, and the figure shows that Au, Cu, and S are clearly correlated in their partitioning into the vapor. Figures 11 and 12 also indicate the significance of the environment in which the samples formed. In the Sn-W samples, the absolute concentrations of Cu and S are higher in the vapor, whereas the porphyry Cu-Mo-Au samples show Cu and S enrichment in the vapor phase relative to the salt components but absolute concentrations in the vapor that are lower than in the brine. Overall, the combined fluid phases in the porphyry Cu-Mo-Au deposits are much richer in S, Cu, and Au than the Sn-W mineralizing fluids. The importance of sulfur and chloride as complexing agents in the two fluid phases can be represented by exchange equilibria:
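The equations themselves did not survive in this text; the schematic reactions below are our illustration of the relationships described in the next paragraph (Cu stabilized by sulfur complexes in the vapor; K, Na, and Fe held as chloride complexes in the brine) and should not be read as the exact notation of Seo et al. (2009):

$$\text{CuCl}_{\text{(brine)}} + \text{H}_2\text{S}_{\text{(vapor)}} \rightleftharpoons \text{CuHS}_{\text{(vapor)}} + \text{HCl}_{\text{(vapor)}} \quad (1)$$
$$\text{KCl}_{\text{(vapor)}} \rightleftharpoons \text{KCl}_{\text{(brine)}} \quad (2)$$
$$\text{NaCl}_{\text{(vapor)}} \rightleftharpoons \text{NaCl}_{\text{(brine)}} \quad (3)$$
$$\text{FeCl}_{2\,\text{(vapor)}} \rightleftharpoons \text{FeCl}_{2\,\text{(brine)}} \quad (4)$$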
Exchange equilibrium (1) shows the preferred shift toward Cu-S complexes in the vapor, while (2)-(4) show K, Na, and Fe stabilized as chloride complexes in the brine. The main point is that Cu prefers to be stabilized in the vapor when S is present (Seo et al., 2009).
This means that salinity is not the main factor controlling the formation of Cu deposits. S is now known to be important, since the ‘efficiency of copper extraction from the magma is determined by the sulfur concentration in the exsolving fluids’ (Seo et al., 2009). Magmatic sulfide melt inclusions have been observed and may have formed at the time of fluid saturation in the magma. On cooling, copper is precipitated out of brine and vapor as chalcopyrite (CuFeS2) and/or bornite (Cu5FeS4), with the Cu- and S-enriched vapor phase making the greatest contribution (Seo et al., 2009).
Figure 11 (next page). Concentrations of sulfur and copper in natural magmatic-hydrothermal fluid inclusions. Co-genetic pairs of vapor + brine inclusions (‘boiling assemblages’) in high-temperature hydrothermal veins from porphyry Cu-Au-Mo deposits (orange to red symbols), granite-related Sn-W deposits (blue-green), and a barren granitoid (black-gray) are shown. All vapor (a) and brine (b) inclusions have sulfur concentrations equal to copper or contain an excess of sulfur (the S:Cu = 1:1 line approximates a 2:1 molar ratio). Element ratios (c), which are not influenced by uncertainties introduced by analytical calibration (Heinrich et al., 2003), show an even tighter correlation along and to the right of the molar 2:1 line, with Cu/Na as well as S/Na systematically higher in the vapor inclusions (open symbols) than in the brine inclusions (full symbols). Averages of 3-14 single fluid inclusions in each assemblage from single healed fractures are plotted, with error bars of one standard deviation. Scale bars in the inclusion micrographs represent 50 µm. (Seo et al., 2009)
Figure 12 (above). Partitioning of elements between co-genetic vapor and brine inclusions. Fluid analyses including sulfur and gold are normalized to Pb, which is most strongly enriched in the saline brine (Seward, 1984). S, Cu, Au, As, and sometimes Mo preferentially fractionate into the vapor relative to the main chloride salts of Pb, Fe, Cs, K, and Na. A close correlation between the degrees of vapor fractionation of S and Cu, and generally also Au, indicates preferential sulfur complexation of these metals in the vapor. The two boxes distinguish assemblages in which absolute concentrations of Cu and S are higher or lower in the vapor compared with the brine. This grouping correlates with geological environment, i.e., the redox state and pH of the source magmas and the exsolving fluids. (Seo et al., 2009)
Conclusion:
Throughout many years of research, multiple types of analysis have been performed, including LA-ICP-MS, sector-field MS, microthermometry, quadrupole MS, and ID-TIMS. Zircon crystals were dated to provide ages of the magmatic systems in which the ore deposits formed, and the dating showed that multiple pulses can occur within the same system less than 1 m.y. apart. Fluid inclusions have been examined in great detail to bring further insight into these magmatic pulses. The pulses are critical to fluid-fluid mixing, which in turn controls the precipitation of Sn, as cassiterite, in Sn-W veins. There are, however, multiple environments in which deposits form, and porphyry-epithermal Cu-Au-Mo deposits precipitate a different suite of elements. Vapor-liquid fractionation between coexisting brine and vapor in the porphyry-epithermal system is due to the increased transport of Cu and Au in sulfur-enriched acidic magmatic-hydrothermal vapors (Pokrovski et al., 2008).
The formation of magmatic-hydrothermal ore deposits was once thought to depend mainly on the salinity of the fluid, whether hypersaline liquid or vapor. Salinity can be used to recognize an element’s preferred fluid: Cu and Au prefer low-salinity vapors over the coexisting hypersaline fluid, while elements such as Pb and Fe prefer hypersaline conditions (Williams-Jones and Heinrich, 2005). Salinity can also serve as a precipitation mechanism for Cu and Au carried in the vapor phase; however, it has been discovered that reduced sulfur must be present. Fluid phase separation is critical for Cu and Au to partition into the vapor phase, aided by sulfur-enriched acidic magmatic-hydrothermal vapors. Sulfur is in turn essential for metal transport in fluids, increasing the solubility of Cu and Au. The low-salinity, Cu-Au-Mo-rich vapor phase is the greatest contributor to Cu-Au deposits (Pokrovski et al., 2008).
References:
Hedenquist, Jeffrey W., and Jacob B. Lowenstern. “The role of magmas in the formation of hydrothermal ore deposits.” Nature 370.6490 (1994): 519-527.
Audetat, Andreas, Detlef Günther, and Christoph A. Heinrich. “Formation of a magmatic-hydrothermal ore deposit: Insights with LA-ICP-MS analysis of fluid inclusions.” Science 279.5359 (1998): 2091-2094.
Heinrich, Christoph A. “Fluid-fluid interactions in magmatic-hydrothermal ore formation.” Reviews in Mineralogy and Geochemistry 65.1 (2007): 363-387.
Seo, Jung Hun, Marcel Guillong, and Christoph A. Heinrich. “The role of sulfur in the formation of magmatic-hydrothermal copper-gold deposits.” Earth and Planetary Science Letters 282.1 (2009): 323-328.
Von Quadt, Albrecht, et al. “Zircon crystallization and the lifetimes of ore-forming magmatic-hydrothermal systems.” Geology 39.8 (2011): 731-734.
White, Noel C., and Jeffrey W. Hedenquist. “Epithermal gold deposits: styles, characteristics and exploration.” SEG newsletter 23.1 (1995): 9-13.
Lecumberri-Sanchez, Pilar, et al. “Salt Precipitation In Magmatic-Hydrothermal Systems Associated With Upper Crustal Plutons.” Geology 43.12 (2015): 1063-1066. Environment Complete. Web. 20 Apr. 2016.
Pokrovski, Gleb S., Anastassia Yu Borisova, and Jean-Claude Harrichoury. “The effect of sulfur on vapor-liquid fractionation of metals in hydrothermal systems.” Earth and Planetary Science Letters 266.3 (2008): 345-362.
Williams-Jones, Anthony E., and Christoph A. Heinrich. “100th Anniversary special paper: vapor transport of metals and the formation of magmatic-hydrothermal ore deposits.” Economic Geology 100.7 (2005): 1287-1312.
Simmons, Stuart F., and Kevin L. Brown. “Gold in magmatic hydrothermal solutions and the rapid formation of a giant ore deposit.” Science 314.5797 (2006): 288-291.
Improving agricultural productivity (focus on Tanzania)
Abstract:
Agriculture is the mainstay of Tanzania’s economy. The sector accounts for 26.8% of GDP and about 80% of the workforce. However, only a quarter of the 44 million hectares of land in Tanzania is used for agriculture. The biggest contributors to Tanzania’s low agricultural productivity are the lack of response to changing weather patterns, the lack of a consistent farming system, and the lack of awareness of different farming systems. Therefore, in this meta-analysis, the possibility of improving agricultural productivity was examined by evaluating the effectiveness of GM crops, assisted by either nitrogen fertilizers or legumes for biological nitrogen fixation. Original studies for inclusion in this meta-analysis were identified through keyword searches in relevant literature databanks such as Deerfield Academy’s Ebscohost Database, Google Scholar, and Google. After an evaluation of many studies, GM crops could be a solution under several conditions: companies like Monsanto are willing either to allow farmers to save and exchange seeds without penalty or, as the WEMA project claims, to continuously supply these seed varieties as requested by farmers; scientists perform a study that is transferable from one area to another, in terms of the different agronomic and environmental choices necessary to implement either an increase in fertilizer use or legume biological nitrogen fixation; farmers are educated about and receptive to GM technology, nitrogen fertilizer, and legume biological nitrogen fixation, including the effectiveness and efficiency of all three systems; and commercial banks, the government, and donors are willing to sponsor the increase in fertilizer use or subsidize the costs.
Introduction
Agriculture is the mainstay of Tanzania’s economy. The sector accounts for 26.8% of GDP and about 80% of the workforce. However, only a quarter of the 44 million hectares of land in Tanzania is used for agriculture. Even of the quarter that is used, much is damaged by soil erosion, low soil productivity, and land degradation. This is a result of several agricultural and economic problems, including poor access to improved seeds, limited modern technologies, dependence on rain-fed agriculture, lack of education on updated farming techniques, limited funding by the government, and limited availability of fertilizers. Tanzanian agriculture is characterized primarily by small-scale subsistence farming: approximately 85 percent of the arable land is used by smallholders cultivating between 0.2 ha and 2.0 ha. Tanzania devotes about 87% of its farmland to food crops, mainly banana, cassava, cereals, pulses, and sweet potatoes. The other 13% is used for cash crops, including cashew, coffee, pyrethrum, sugar, tea, and tobacco. Tanzania’s food crop production yields are estimated to be only 20-30% of potential yields: average food crop productivity stands at about 1.7 tons/ha, far below the potential of about 3.5 to 4 tons/ha.
The biggest contributors to Tanzania’s low agricultural productivity are the dependence on rain-fed agriculture, the lack of a consistent farming system, and the lack of awareness of different farming systems. Because of this, many studies have been done to promote either the more traditional approach of chemical fertilizer use, the genetic approach of GM crops, or the more sustainable approach of using legumes for nitrogen fixation. I will be evaluating these three methods in this study. Both chemical fertilizers and legumes are currently being used by mostly uneducated Tanzanian farmers, but at a very low level.
This study focuses on each farming system in relation to maize especially, because maize is the most preferred staple food and cash crop in Tanzania. Maize is grown in all agro-ecological zones in the country. Over two million hectares of maize are planted per year, with average yields of between 1.2 and 1.6 tonnes per hectare. Maize accounts for 31 percent of total food production and constitutes more than 75 percent of cereal consumption in the country. About 85 percent of Tanzania’s population depends on it as an income-generating commodity. It is estimated that annual per capita consumption of maize in Tanzania is over 115 kg; national consumption is projected to be three to four million tonnes per year.
A GM trial officially started last October in the Dodoma region, a semi-arid area in the central part of the country. Tanzania took a long time to approve this trial because of the strict liability clause in its Environment Management Biosafety Regulations, which stated that scientists, donors, and partners funding research would be held accountable in the event of any damage that might occur during or after research on GMO crops. However, the clause was revised, and the trial began. It sets out to demonstrate whether a drought-tolerant GM white maize hybrid developed by the Water Efficient Maize for Africa (WEMA) project can be grown effectively in the country. Because of Tanzania’s dependence on rain-fed agriculture, this initiative could provide hope for increasing the agricultural productivity of not only maize but other food and cash crops. The project is funded by the U.S. Agency for International Development, the Bill and Melinda Gates Foundation, and the Howard G. Buffett Foundation. The gene comes from a common soil bacterium and was developed by Monsanto, a sustainable agriculture company that develops seeds and systems to help farmers improve on-farm productivity and grow more nutritious food while conserving natural resources, under the WEMA project. The GM seeds are affordable for farmers who work relatively small plots of land. The corn is expected to increase yields by 25% during moderate drought.
Nitrogen fertilizers (NFs) are the conventional method, so they have the most recognition, but also the most controversy. NFs have boosted the amount of food that farms can produce, and the number of people farmers can feed, by meeting crop demands for nitrogen and increasing yield. The annual growth rate of nitrogen fertilizer use in the world is 1.3%. Of the overall increase in demand of 6 million tons of nitrogen between 2012 and 2016, 60 percent would be in Asia, 19 percent in America, 13 percent in Europe, 7 percent in Africa, and 1 percent in Oceania. However, NFs have been linked to numerous environmental hazards, including marine eutrophication, global warming, groundwater contamination, soil imbalance, and stratospheric ozone destruction. In particular, in Sub-Saharan Africa, including Tanzania, nitrate runoff and leaching, mainly from commercial farms, have led to excessive eutrophication of fresh waters and threatened various fish species. However, this reflects farmers’ lack of understanding of how much fertilizer to use on a plot rather than Tanzanian farmers having too much access to fertilizers. There are also health effects: infants who ingest water with high nitrate levels can develop gastrointestinal swelling and irritation, diarrhea, and protein digestion problems. Nitrogen leaches into groundwater as nitrate, which has been linked with blue-baby syndrome in infants, adverse birth outcomes, and various cancers. Economically speaking, nitrogen fertilizers have become a huge cost in agriculture.
Legume nitrogen fixation provides a sustainable alternative to costly and environmentally unfriendly nitrogen fertilizer for small-scale farms. Biological nitrogen fixation is the process that changes inert N2 into biologically useful NH3 in plants. Perennial and forage legumes, such as alfalfa, sweet clover, true clovers, and vetches, may fix 250-500 pounds of nitrogen per acre. In a study that compared the environmental, energetic, and economic factors of organic and conventional farming systems, the crop yields and economics of legume-based organic systems compared with conventional systems varied with the type of crop, region, and growing conditions; however, the environmental benefits attributable to reduced chemical inputs, less soil erosion, water conservation, and improved soil organic matter were consistently greater in organic systems using legumes. There are, however, many factors that need to be in place for legumes to be the best option, including choosing the best growing system, growing conditions, and non-fixing crops to grow with them.
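Since the fixation figures above are quoted in pounds per acre while the Tanzanian fertilizer statistics later in this essay are in kilograms per hectare, a quick conversion keeps the numbers comparable (1 lb/acre is roughly 1.121 kg/ha). The function below is a small sketch of ours:

def lb_per_acre_to_kg_per_ha(x):
    """Convert pounds per acre to kilograms per hectare.
    1 lb = 0.4536 kg and 1 acre = 0.4047 ha, a factor of ~1.121."""
    return x * 0.4536 / 0.4047

low, high = 250, 500  # legume N fixation range quoted above (lb/acre)
print(f"{lb_per_acre_to_kg_per_ha(low):.0f} to "
      f"{lb_per_acre_to_kg_per_ha(high):.0f} kg N/ha")  # ~280 to ~560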
The reason I wanted to study agriculture in Tanzania, particularly, is because of my love for the country after spending two weeks there learning about sustainable development and sustainable agriculture. Understanding the impact that agriculture has on the people and the economy is very inspirational to me, and connects to my passion in easing global hunger.
The purpose of this study is to provide a solution to Tanzania’s long-standing fight to improve agricultural yields, with heavy consideration of drought tolerance, by evaluating GMO crops assisted by either increased nitrogen fertilizer use or legumes for biological nitrogen fixation. Each approach comes with many obstacles and challenges, but also rewards if done properly. I believe the reason none of these methods has taken dominance is the lack of proper implementation, maintenance, and funding. Therefore, the study will also address those concerns for each method, and discuss a plan to follow if GM crops are used with either legumes or nitrogen fertilizers or both.
Methods and Materials
Original studies for inclusion in this meta-analysis were identified through keyword searches in relevant literature databanks such as Google, Google Scholar, and Deerfield Academy’s Ebscohost Database. I searched combinations of keywords related to agriculture in Tanzania, GM technology, chemical fertilizer use in Tanzania, and legume nitrogen fixation. Concrete keywords related to agriculture in Tanzania were “agriculture in Tanzania,” “problems affecting agriculture in Tanzania,” and “farm yields in Tanzania.” Concrete keywords related to GM technology were “GM crops,” “GM trial in Tanzania,” “impact of GM crops,” “drought tolerant maize,” “herbicide tolerant,” and “insect resistant.” Concrete keywords related to chemical fertilizer use in Tanzania included “fertilizer assessment in Tanzania,” “fertilizers costly in Tanzania,” “environmental impacts of fertilizer use in Tanzania,” and “economic impacts of fertilizers in Tanzania.” Concrete keywords searched for legume nitrogen fixation were “legume nitrogen fixation,” “improving yields with legumes,” “best legumes for nitrogen fixation,” and “economic impact of legume nitrogen fixation.” The search was completed by February 2017.
Most of the publications found through Google were news articles, academic journal articles, and website pages, while Google Scholar and Deerfield Academy’s Ebscohost Database yielded book chapters, conference papers, working papers, academic journal articles, and reports in institutional series. Articles published in academic journals had all passed through a peer-review process. Some of the working papers and reports are published by research institutes or government organizations, while others are NGO publications.
Each published work had to meet certain criteria to be included.
If it is a news article, it had to be from a credible news source such as the Guardian, the New York Times, or the Washington Post.
If it is from an academic journal, it had to be from a credible organization, institution, or university such as the World Bank, the UN, or Wellesley College.
The study had to be an empirical investigation of the economic, health, or environmental impacts of GM crops (in particular GM maize), legume nitrogen fixation, or chemical fertilizers, with a focus on Tanzania.
The study had to report the impacts of GM crops, legume nitrogen fixation, or chemical fertilizers, with a focus on Tanzania, in terms of one or more of the following outcome variables: yield, farmer profits, and environmental, economic, and health advantages and disadvantages.
Results and Discussion
Problems with maize production
According to the African Agricultural Technology Foundation, in a policy brief detailing the WEMA project, despite the importance of maize as the main staple crop, average yields in farmers’ fields fall well short of the estimated potential yields of 4-5 metric tonnes per hectare. While farmers are keen to increase maize productivity, their efforts are hampered by a wide range of constraints. The Foundation has identified three reasons for the low productivity of maize, which can be applied to any crop in Tanzania that grows in a semi-arid region:
Inadequate use of inputs such as fertiliser, improved maize seed, and crop protection chemicals. The inputs are either not available or too expensive for farmers to afford.
Inadequate access to information and extension services. Many farmers continue to grow unsuitable varieties because they have no access to information about improved maize technologies, due to low levels of interaction with extension services.
Drought, which is a major threat to maize production in many parts of Tanzania. Maize production can be a risky and unreliable business because of erratic rainfall and the high susceptibility of maize to drought. The performance of local drought-tolerant cultivars is poor, and maize losses can be as high as 50 percent due to drought-related stress.
These constraints highlight exactly what the problem is with increasing productivity. Without addressing these three constraints for all crops and farmers, Tanzania’s agricultural productivity cannot increase.
GM Crop Evaluation
Transgenic plants are those that have been genetically modified using recombinant DNA technology. Scientists have turned to this method for many reasons, including engineering resistance to abiotic stresses, such as drought, extreme temperatures, or salinity, or to biotic stresses, such as insects and pathogens, that would normally be detrimental to plant growth or survival. In 2007, for the twelfth consecutive year, the global area of biotech crops planted continued to increase, with a growth rate of 12% across 23 countries. As of 2010, 14 million farmers from 25 countries, including 16 developing countries, grow GM crops.
Right now, South Africa is the only African country that has fully implemented GMO crops, including HT/Bt/HT-Bt cotton, HT/Bt/HT-Bt maize, and HT soybean, which are some of Tanzania’s major food and cash crops. South Africa gained an income of US$156 million from largely switching to biotech crops between 1998 and 2006. A study published in 2005 by Marnus Gouse, a researcher in the Department of Agricultural Economics, Extension and Rural Development at the University of Pretoria, South Africa, involved 368 small and resource-poor farmers and 33 commercial farmers, the latter divided into irrigated and dry-land maize production systems. The data indicated that under irrigated conditions Bt maize resulted in an 11% higher yield, a cost savings on insecticides of US$18/ha (equivalent to a 60% cost reduction), and an increased income of US$117/hectare. Under rain-fed conditions, Bt maize resulted in an 11% higher yield, a cost saving on insecticides of US$7/ha (equivalent to a 60% cost reduction), and an increased income of US$35/hectare.
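Per-hectare income figures of this kind decompose into extra grain revenue plus input savings. The sketch below shows that bookkeeping; the baseline yield and maize price are hypothetical values of ours, chosen only to be consistent with the reported US$117/ha, and are not from the Gouse study.

def bt_income_gain(base_yield_t_ha, price_usd_t, yield_gain, spray_savings):
    """Per-hectare income gain from a higher-yielding variety:
    extra grain revenue plus insecticide cost savings."""
    return base_yield_t_ha * yield_gain * price_usd_t + spray_savings

# Hypothetical: 6 t/ha irrigated baseline, US$150/t maize,
# 11% yield gain, US$18/ha insecticide savings.
print(f"US${bt_income_gain(6.0, 150.0, 0.11, 18.0):.0f}/ha")  # ~US$117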
Richard Sitole, chairperson of the Hlabisa District Farmers’ Union in KwaZulu-Natal, South Africa, said 250 emergent subsistence farmers of his union planted Bt maize on their smallholdings, averaging 2.5 hectares, for the first time in 2002. His own yield increased by 25%, from 80 bags for conventional maize to 100 bags, earning him an additional income of US$300 as of November 2007. He said, “I challenge those who oppose GM crops for emergent farmers to stand up and deny my fellow farmers and me the benefit of earning this extra income and more than sufficient food for our families.”
Because South Africa has the necessary resources, funding, and experience with biotech crops, it can thrive in both the international public and private sectors, and can improve its technology just as the other 23 countries can. It is therefore up to South Africa especially to share this knowledge with farmers in other African countries, and Tanzania in particular, so that Tanzanian farmers can advance agriculture just as South Africa has, if this proves to be the best route to take.
NGO Opposition to GM Crops
Genetically modified crops have been opposed for several years by non-governmental organizations. Because they are not for profit, NGOs have gained a good deal of social trust, and so people listen to them. Much of the NGO opposition has come from European-based organizations such as Greenpeace International and Friends of the Earth International, joined in the anti-GMO campaign by many U.S.- and Canadian-based organizations. Notice that these are all rich countries, which have influence over poorer countries. This kind of influence is harmful to countries that do not have the research capacity or experience with GMOs, such as Tanzania. People from Europe and North America are understandably less attracted to GMOs because their farming is already very productive. But in poor countries, where as many as 60 percent of all people are poor farmers, this technology could bring real benefits. Farmers in poor countries rely almost entirely on food crops, not on crops for animal feed or industrial use as in the U.S., so today’s opposition to GMO foods is specifically damaging to those poor farmers. It becomes more shameful still when anti-GMO campaigners from rich countries intentionally hide from developing-country citizens the published conclusions of their own national science academies back home, which continue to show that no convincing evidence has yet been found of new risks to human health or the environment from this technology.
Therefore, if GMOs were to be implemented in Tanzania, farmers would have to be trained and taught about the many benefits of GMOs. This training should be provided by the organizations supplying the GMO seeds, such as Monsanto. Without this training, GMO crops could fail just like other methods, for lack of knowledge and maintenance.
Importance of Seed-Saving
More than 90% of seeds sown by farmers are saved on their own farms. Saving and exchanging seeds is important to Tanzanian farmers, and farmers in general, for several reasons. According to the Permaculture Research Institute, saving seeds matters because the big corporations that farmers buy from are only interested in the most profitable hybrids and ‘species’ of plants; this decreases biodiversity by condensing the market and discontinuing many crop varieties. When farmers save seeds with good genes and strong traits, the likelihood of better quality increases, as does the crops’ ability to adapt to their environment. Over generations, the crops also develop stronger resistance to pests. However, if GMO seeds provided by Monsanto were the sole practice in Tanzania, farmers could not save or exchange their seeds. As explained on the Monsanto website, “When farmers purchase a patented seed variety, they sign an agreement that they will not save and replant seeds produced from the seed they buy from us.” Therefore, unless USAID, the Bill and Melinda Gates Foundation, and other organizations plan to support the costs of buying seeds on a regular basis, farmers will not be able to maintain their farms if they cannot afford to buy GMO seeds. Tanzanian farmers would be put at risk if this system were implemented without any financial support, and if they were to save or replant seeds, they could face trial. Seeds, however, are deeply important to Tanzanians. Joseph Hella, a professor at Sokoine University of Agriculture in Morogoro, Tanzania, insisted in the documentary Seeds of Freedom in Tanzania that “any effort to improve farming in Tanzania depends primarily on how we can improve farmers’ own indigenous seeds.” The practice of GMO crops does not take this into account. Janet Maro, director of Sustainable Agriculture Tanzania, said, “These seeds are our inheritance, and we will pass them on to our children and grandchildren. These too are quality seed and a pride for Tanzania. But the law does not protect these seeds.”
However, if the drought-tolerant white maize trial works, WEMA claims that farmers can choose to save the seeds for replanting. But as with all hybrid maize seed, production is heavily reduced when harvested grain is replanted. Also, to make the improved seeds affordable, the new varieties will be licensed to the African Agricultural Technology Foundation (AATF) and distributed through local seed suppliers on a royalty-free basis. According to Oliver Balch, a freelance writer specialising in the role of business in society, if companies like Monsanto end up monopolizing the seed industry, African farmers fear becoming locked into cycles of financial obligation and losing control over local systems of food production, because unlike traditional seeds, new drought-tolerant seeds have to be purchased annually.
Lack of accessibility
The biggest problems Tanzania faces in adopting drought-tolerant GM seeds are unavailability and unaffordability. A study, Drought tolerant maize for farmer adaptation to drought in sub-Saharan Africa: Determinants of adoption in eastern and southern Africa, examined six African countries to identify the setbacks to using drought-tolerant seeds. On a figure representing these setbacks, seed availability and seed price were the biggest concerns for Tanzanian smallholder farmers. High seed price was a commonly mentioned constraint in Malawi, Tanzania, and Uganda. Because many Tanzanian and Malawian farmers grow local maize, the switch to DT maize would entail a substantial increase in seed cost. Another observation in the study was that, compared with younger households, older households were more likely to grow local maize, which could reflect the unwillingness of older farmers to give up familiar production practices. Households with more educated members were more likely to grow DT maize and less likely to grow local maize, which supports the point that general education, and education on GM crops, should be the primary goal before implementing any method in Tanzania. For example, some Tanzanian farmers were unwilling to try DT maize varieties because they perceived them as low yielding, late maturing, and labor increasing. Educated people are more likely to process information about new technologies quickly and effectively.
According to the study, a few things need to be in place if DT maize is to thrive. First, the seed supply to local markets must be adequate to allow farmers to buy, experiment with, and learn about DT maize. Second, to make seed more accessible to farmers with limited cash or credit (another major barrier), seed companies and agro-dealers should consider selling DT maize seed in affordable micro-packs. Finally, enhanced adoption depends on enhanced awareness, which could be achieved through demonstration plots, field days, and the distribution of print and electronic promotional materials.
According to the Third World Network and the African Centre for Biodiversity (ACB), the WEMA project is set to shift the focus and ownership of maize breeding, seed production, and marketing almost exclusively into the private sector, in the process forcing small-scale farmers in Sub-Saharan Africa into the adoption of hybrid maize varieties and their accompanying synthetic fertilizers. Gareth Jones, ACB’s senior researcher, says that Monsanto and the rest of the biotechnology industry are using this largely unproven technology to weaken biosafety legislation on the continent and expose Africa to GM crops generally. With Tanzania’s unpredictable weather, and with seeds incapable of growing without certain inputs like fertilizers, purchasing seeds annually becomes a burden and reduces farmers’ flexibility in their farming decisions. Jones also says that the costly inputs and the very diverse agro-ecological systems in Sub-Saharan Africa mean the WEMA project will only benefit a select number of small-scale farmers, with evidently no consideration for the majority, who will be abandoned. The argument about seed costs and the monopoly of big seed companies arises again as Jones notes that the costs and technical requirements of hybrid seed production are presently beyond the reach of most African seed companies, and a focus on this market will inevitably lead to industry concentration, as has happened elsewhere, enabling the big multinational agrochemical seed companies to dominate.
Lack of progress in drought-tolerance
The United States is an example to consider when evaluating GM crops, because after more than 17 years of field trials, only one GM drought-tolerant maize has been released. In fact, according to Gareth Jones, independent analysis has shown that, under moderate drought conditions, the particular maize variety that has been released only increased maize productivity by 1% annually, which is equivalent to the improvements gained in conventional maize breeding.
Monsanto’s petition to the USDA cites results from two growing seasons of field trials in several locations in the United States and Chile that faced varying levels of water availability. Company scientists measured drought through the amount of moisture in the soil and compared the crop’s growth response with that of conventional commercial corn varieties grown in the regions where the tests were performed. Monsanto reported a reduction in the losses expected under moderate drought of about 6 percent compared with non-GE commercial corn varieties, although there was considerable variability in these results. That means farmers using Monsanto’s cspB corn could see a 10 percent loss of yield rather than a typical 15 percent loss under moderate drought, an increase of about 8 bushels per acre based on a typical 160-bushel non-drought yield. However, the USDA asserts that cspB corn is effective primarily under moderate, not severe, drought conditions, so there is no real benefit under extreme drought. Because cspB corn is not beneficial under severe drought conditions, it would not be effective in semi-arid regions of Tanzania like the drought-stricken Dodoma region.
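The bushel figure follows directly from the quoted percentages; a minimal sketch of the arithmetic (variable names ours):

baseline = 160.0                      # bushels/acre, typical non-drought yield
conventional = baseline * (1 - 0.15)  # 15% loss in moderate drought
cspb = baseline * (1 - 0.10)          # 10% loss with cspB corn
print(conventional, cspb, cspb - conventional)  # 136.0 144.0 8.0

The 5-percentage-point difference in losses on a 160-bushel baseline is the roughly 8 bushels per acre cited above.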
Former Environment Secretary Owen Paterson accused the EU and Greenpeace of condemning millions of people in developing countries to starvation and death by their stubborn refusal to accept the benefits of genetically modified crops. In response, Esther Bett, a farmer from Eldoret in Kenya, said last week: “It seems that farmers in America can only make a living from GM crops if they have big farms, covering hundreds of hectares, and lots of machinery. But we can feed hundreds of families off the same area of land using our own seed and techniques, and many different crops. Our model is clearly more efficient and productive. Mr Paterson is wrong to pretend that these GM crops will help us at all.” Million Belay, coordinator of the Alliance for Food Sovereignty in Africa, highlights that “Paterson refers to the use of GM cotton in India. But he fails to mention that GM cotton has been widely blamed for an epidemic of suicides among Indian farmers, plunged into debt from high seed and pesticide costs, and failing crops.”
He also declared that “the only way to ensure real food security is to support farmers to revive their seed diversity and healthy soil ecology.”
Legume Biological Nitrogen Fixation vs. Nitrogen Fertilizers
The sustainable practice of intercropping nitrogen-fixing legumes with cash and food crops comes with both pros and cons. For farmers who cannot afford nitrogen fertilizer, biological nitrogen fixation (BNF) could be a key solution for sourcing nitrogen for crops. BNF can be a major source of nitrogen in agriculture when symbiotic N2-fixing systems are used, but the nitrogen contributions from nonsymbiotic microorganisms are relatively minor and therefore require nitrogen fertilizer supplementation. The amount of nitrogen input is reported to be as high as 360 kg N ha-1. Legumes serve many purposes, including as primary sources of food, fuel, and fertilizer, and to enrich soil, preserve moisture, and prevent soil erosion. According to a review, Biological nitrogen fixation and socioeconomic factors for legume production in sub-Saharan Africa, which surveys past and ongoing interventions in Rhizobium inoculation in the farming systems of Sub-Saharan Africa, the high cost of fertilizers in Africa and the limited market infrastructure for farm inputs have directed current research and extension efforts toward integrated nutrient management, in which legumes play a crucial role. Research on the use of Rhizobium inoculants for the production of grain legumes showed it to be a cheaper and usually more effective agronomic practice for ensuring adequate N nutrition of legumes, compared with the application of N fertilizer.
Tanzania’s total fertilizer consumption was less than 9 kilograms (kg) of fertilizer nutrient per hectare of arable land in 2009/10, compared with 27 kg in Malawi and 53 kg in South Africa, and even that represented a substantial increase from the average of 5.5 kg/ha used four years earlier. Eighty-two percent of Tanzanian farmers do not use fertilizer, mainly because they lack knowledge of its benefits, face rising fertilizer costs, and do not know how to access credit facilities. Although commercial banks in the country claim to support agriculture, many farmers continue to face hurdles in accessing financing for agricultural activities, including purchasing fertilizer. The lack of high-yield seed varieties, and the low level of fertilizer use with either traditional or improved seeds, is a major contributor to low productivity in Tanzania and thus to the wide gap between potential and observed yields.
Many believe that nitrogen fertilizers are mostly responsible for eutrophication and the threat to fish species. However, Robert Howarth, a biogeochemist, ecosystem scientist, active researcher, and professor at Cornell University, says that the real culprits in countries like Tanzania are insufficient treatment of water from industries, erosion from infrastructure construction, runoff of feed and food waste from municipal and industrial areas, atmospheric nitrogen deposition, and nutrient leaching. In fact, the average nitrogen balance in Tanzania in 2000 was as low as -32 kg N ha-1 yr-1, similar to many other Sub-Saharan countries.
However, if Tanzania is to continue using nitrogen fertilizers, nitrogen agronomic use efficiency needs to be improved. Nitrogen agronomic use efficiency is defined as the yield gain per unit amount of nitrogen applied, when plots with and without nitrogen are compared. Right now, efficiency in smallholder farmers’ fields is still low because of poor agronomic practices, including blanket fertilizer recommendations, fertilizer application rates too low to have a significant effect, and unbalanced fertilization. Recent interventions in Sub-Saharan Africa involving fertility management showed that nitrogen agronomic use efficiency could be doubled when good agronomic practices are adopted. The dilemma in SSA, including Tanzania, is that farming is mainly practiced by resource-disadvantaged smallholder farmers who cannot afford most inputs at actual market prices.
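The definition just given translates directly into a formula. This is a minimal sketch with made-up plot numbers for illustration; the function and variable names are ours.

def n_agronomic_efficiency(yield_fertilized, yield_control, n_applied):
    """Agronomic N use efficiency: extra grain produced per unit of
    nitrogen applied (kg grain per kg N), comparing fertilized and
    unfertilized plots."""
    return (yield_fertilized - yield_control) / n_applied

# Hypothetical plot pair: 2,500 vs 1,700 kg/ha of maize at 50 kg N/ha.
print(n_agronomic_efficiency(2500.0, 1700.0, 50.0), "kg grain per kg N")  # 16.0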
In a study called Narrowing Maize Yield Gaps Under Rain-fed Conditions in Tanzania: Effect of Small Nitrogen Dose, the authors evaluated the potential of small amounts of nitrogen fertilizer as a measure to reduce the maize yield gap under rain-fed conditions. In the experiment, grain yields were similar in all water-stressed treatments regardless of nitrogen dose, suggesting that water stress imposed after the critical growth stage has no significant effect on final grain yield. The explanation offered is that within 45-50 days after sowing the plant should have accumulated the biomass required for grain formation and filling, so water stress occurring afterwards has no effect on yield. For resource-poor farmers, low doses of nitrogen fertilizer applied after crop establishment may make a substantial contribution to food security compared with non-fertilized crop production. This approach can work well in environments with low seasonal rains, because the yield gain is higher than when high nitrogen quantities are applied in a water-scarce environment. The study’s conclusion highlights a limitation: the yield-gap-narrowing strategy was evaluated at plot scale, and further study is needed to investigate the response to small nitrogen doses as a strategy for bridging maize yield gaps across multiple fields and many seasons, especially under farmer management.
Conclusion:
To increase agricultural productivity there are many factors to consider; drought tolerance is just one of them. Semi-arid regions in Tanzania pose a serious problem for agriculture that depends on rainfall. Drought-tolerant GM crops could be part of the answer, but a lot of work still needs to be done. Implementing these drought-tolerant seed varieties can only be a solution if:
The WEMA project for GM white maize is successful
Companies like Monsanto are willing either to allow farmers to save and exchange seeds without penalty or, as the WEMA project claims, to continuously supply these seed varieties as requested by farmers. This ensures that farmers retain the flexibility to control their crop production.
Scientists perform a study that is transferable from one area to another, in terms of the different agronomic and environmental choices necessary to implement either an increase in fertilizer use or legume biological nitrogen fixation.
Farmers are educated about and receptive to GM technology, nitrogen fertilizer, and legume biological nitrogen fixation, including the effectiveness and efficiency of all three systems.
Commercial banks, the government, and donors are willing to sponsor the increase in fertilizer use or subsidize the costs.
Working with hazard group 2 organisms within a containment level 2 laboratory
There are many aspects that must be reviewed before entering the laboratory, and many regulations that need to be followed to ensure not just your own safety but the safety of the workers around you. Inhalation is one hazard that can occur in the laboratory. Many procedures involve the breaking of fluids containing organisms and the scattering of tiny droplets named aerosols. Some of these droplets fall and contaminate hands and benches, while others are very small and dry out immediately. The organisms contained within such aerosols are named droplet nuclei; they are airborne and move about in small air currents. If inhaled, they pose a potential risk of infection, so it is important that nothing is inhaled within the laboratory.
Ingestion of organisms is another hazard in the laboratory. There are many ways organisms may be introduced into the mouth, such as mouth pipetting (direct ingestion), or fingers contaminated by handling spilled cultures or by aerosols, which can transfer micro-organisms to the mouth directly or indirectly through eating, nail biting, licking labels, and so on. Injection is another hazard: infectious materials may be injected via broken culture containers, glass Pasteur pipettes, or other broken glass or sharp objects. Through the skin and eyes, small abrasions or cuts that may not be visible to the naked eye may allow microbes to enter the body, and splashes of bacterial culture into the eye can result in infection.
This laboratory consisted of working with hazard group 2 organisms within a containment level 2 laboratory. The hazard group assigned to an organism indicates how dangerous it could be. Hazard group 2 organisms can cause human disease and may be a hazard to employees, although they are unlikely to spread to the community and effective prophylaxis or treatment is usually available; examples include Salmonella typhimurium, Clostridium tetani and Escherichia coli.
Within containment level 2 laboratories there are many health and safety procedures to follow; examples of those set for a containment level 2 laboratory include:
• Protective eye equipment is necessary within the laboratory apart from when using microscopes
• There must be specified disinfection procedures in place
• Bench surfaces must be impervious to water, easy to clean and resistant to acids, alkalis, solvents and disinfectants.
• Laboratory procedures that give rise to infectious aerosols must be conducted in a microbiological safety cabinet, isolator or be otherwise suitably contained.
• When contamination is suspected, hands should immediately be decontaminated after handling infective materials and before leaving the laboratory.
• Laboratory coats, which should be side or back fastening, should be worn and removed when leaving the laboratory.
In this laboratory, Glitter Bug was applied to the hands, which were then analysed under the light box. Glitter Bug is a hand lotion that fluoresces under UV light: when the hands are placed under the lamp, it glows in the places where germs, invisible to the naked eye, are located.
Loffler's methylene blue is a simple stain that was used here to stain Saccharomyces cerevisiae. It is used for the analysis and understanding of cell morphology; it is a cationic dye which stains the cell blue and can also be used to stain gram-negative bacteria.
Results
Below are the results gathered from the Glitter Bug test before washing our hands. The blue areas indicate where the lotion was most fluorescent under the light.
Introduction
Gram Stain
In microbiology, one of the most common stains is the Gram stain, used to observe the differentiation between microbiological organisms. It is a differential stain which distinguishes gram-positive from gram-negative bacteria: gram-positive bacteria stain purple/blue and gram-negative bacteria stain red/pink. Alongside this colour differentiation, variations in arrangement, cell wall and cell shape can also be observed.
The Gram stain has many advantages: it is very straightforward to perform, it is cost effective, and it is one of the quickest methods used to determine and classify bacteria.
The Gram stain is used to provide essential information regarding the type of organisms present, directly from growth on culture plates or from clinical specimens. The stain is also used in screening sputum specimens to assess their acceptability for bacterial culture, and it can reveal the causative organisms of bacterial pneumonia. Alternatively, the Gram stain can be used to identify the presence of microorganisms in sterile body fluids such as synovial fluid, cerebrospinal fluid and pleural fluid.
Spore stain
An endospore stain is also a differential stain, used to visualize bacterial endospores. The production of endospores is an essential characteristic of some bacteria, enabling them to become resistant to many detrimental environments such as extreme heat, radiation and chemical exposure. Spores contain storage materials and possess a relatively thick wall. Because this thick wall cannot be penetrated by normal stains, either heat must be applied to allow the stain to penetrate the spore, or the stain must be left for a longer period to allow penetration. The identification of endospores is very important in clinical microbiology when analysing a patient's body fluid or tissue, as there are very few spore-forming groups. The two extensively pathogenic spore-forming groups are Bacillus and Clostridium, which together cause a variety of lethal diseases such as tetanus, anthrax and botulism.
The Bacillus, Geobacillus and Clostridium species all form endospores, which develop within the vegetative cell. These spores are resistant to drying and exist to ensure survival. They develop under unfavourable conditions and remain metabolically dormant and inactive until conditions become favourable for germination, whereupon they return to the vegetative state.
The Schaeffer-Fulton method is a technique designed to isolate endospores through staining. The malachite green stain is water-soluble and has a small affinity for cellular material, so the vegetative cells can be decolourised with water. Safranin is then applied to counterstain any cells which have been decolourised. As a result, the vegetative cells appear pink and the endospores green.
1. The bacteria used in this laboratory were Salmonella poona and Bacillus cereus; Salmonella poona is gram negative and Bacillus cereus is gram positive, and both are rod-shaped cells. Another bacterium with the same rod shape as Salmonella poona and Bacillus cereus is Klebsiella pneumoniae, which belongs to the genus Klebsiella and the species K. pneumoniae. A further example is Acinetobacter baumannii, which belongs to the genus Acinetobacter and the species A. baumannii.
2. The loop is sterilised in the Bunsen burner flame by placing the circular portion of the loop into the cooler (blue) part of the flame and moving it up into the hottest part until it glows cherry red. If the loop is placed into the hottest part of the flame first, the material on the loop (including bacteria) might spurt out as an aerosol and some bacteria may not be destroyed. Once the loop is cherry red it has been sterilised by incineration through dry heat and is ready for immediate use. If the loop is then laid down or touched against anything it will need to be sterilised again; loops should never be laid on benches.
3. There are many possible problems that could affect a slide smear. For example, excessive heat during fixation can alter the cell morphology, making the cells much easier to decolourise. Another problem is a low concentration of crystal violet, which results in stained cells that are easily decolourised. A third possible problem is excessive washing between the steps, as crystal violet can wash out with water when exposed for too long. The last possibility is excessive counterstaining: because the counterstain is a basic dye, overexposure to it can replace the crystal violet-iodine complex within gram-positive cells.
4. Hand hygiene is a necessity in the laboratory. It is the first line of defence and is considered the most crucial procedure for preventing the spread of hospital-acquired infection.
The appropriate hand washing technique consists of the following steps:
• Wet hands with warm running water
• Enough soap must be applied to cover all surfaces
• Thoroughly wash all parts of the hands and fingers up to the wrist, rubbing the hands together for at least 15 seconds
• Hands should then be rinsed under running water and dried thoroughly with paper towels
• Paper towels should be used to turn off taps before discarding the towels in the waste bin.
1. An example of a gram-positive bacterium is Propionibacterium propionicus, which belongs to the genus Propionibacterium and the species P. propionicus.
An example of a gram-negative bacterium is Yersinia enterocolitica, which belongs to the genus Yersinia and the species Y. enterocolitica.
2. The Gram stain has the ability to differentiate between gram-positive and gram-negative bacteria. Gram-positive bacteria possess a thick layer of peptidoglycan in their cell walls with a low lipid content, giving small pores; these pores close as the alcohol dehydrates the cell wall proteins, so the CV-I complex is retained within the cells, which remain blue/purple. Gram-negative bacteria, however, possess a thinner peptidoglycan wall and a high volume of lipid in their cell walls, producing large pores that remain open when acetone-alcohol is added. The CV-I complex is then lost through these large pores and the gram-negative bacteria appear colourless. Once the counterstain is applied, the cells turn pink, because the counterstain enters the cells through the large pores in the wall.
3. There are many problems which can arise during the production of a bacterial smear. These include using a dirty slide which is greasy or coated with dirt and dust; this produces unreliable results, because the smear containing the desired microbes may wash off the slide during the staining process, or the bacterial suspension may not spread out evenly when placed on the microscope slide. Another possible problem is a smear that is too thick, which puts too many cells on the slide so that the penetration of the microscope light through the smear is poor. If the smear is too thin, however, finding the bacterial cells is time-consuming.
Germination is also a complex process and is normally triggered by the presence of nutrients (although high temperatures are also sometimes required to break the dormancy of the spore). The events during germination include:
• Swelling of the spore
• Rupture or absorption of spore coat(s)
• Loss of resistance to environmental stresses
• Release of the spore components
• Return to metabolically active state
Outgrowth of the spore occurs when the protoplast emerges from the remains of the spore coats and develops into a vegetative bacterial cell.
Introduction
The human body and the environment both contain a vast number and variety of bacteria living in mixed populations, such as in the gut and soil. Bacteria mixed within such varied populations must be separated into pure culture in order to investigate and identify each bacterium. Obtaining a pure culture requires that the number of organisms present be decreased until single, isolated colonies are obtained. This can be accomplished through the streak plate technique or through liquid culture dilutions on a spread plate.
The streak plate technique is used to check the purity of cultures that must be maintained over long periods of time. Contamination by other microbes can be detected through regular sampling and streaking. The technique serves several purposes; for example, an expert practitioner may begin a new maintained culture by selecting a suitable isolated colony of an identifiable species with a sterile loop and then growing those cells in a nutrient broth.
When bacteria in a mixed population are streaked onto a general-purpose medium, for example nutrient agar, single, isolated colonies are produced; however, colony morphology does not provide an immediate, reliable means of identification. In practice, microbiologists use differential and selective media in the early stages of separation and provisional identification of bacteria before subculturing the organisms to a suitable general-purpose medium. The identity of the subcultured organisms can then be confirmed using a range of suitable tests.
Selective and differential media are used for the isolation and identification of particular organisms. A variety of selective and differential media are used in medical diagnostics, water pollution laboratories, and food and dairy laboratories.
Differential media normally contain a substrate that can be broken down (metabolised) by bacterial enzymes. The effects of the enzyme can then be observed visually in the medium. Differential media may contain a carbohydrate, for example glucose or lactose, as the substrate.
Selective media contain one or more antimicrobial chemicals, such as salts, dyes or antibiotics. These antimicrobial chemicals select for the specific bacteria of interest while inhibiting the growth and development of other, unwanted organisms.
Cysteine lactose electrolyte deficient (CLED) agar is a differential culture medium used in the isolation of gut and urinary pathogens, including Salmonella, Escherichia coli and Proteus species. CLED agar also sustains the growth and development of a variety of contaminants such as diphtheroids, lactobacilli and micrococci.
CLED can be used to differentiate between naturally occurring gut organisms, e.g. E. coli, and gut pathogens, e.g. Salmonella poona, in a sample of faeces. There are many advantages to using CLED agar for urine culture: it gives good discrimination of gram-negative bacteria through lactose fermentation and the appearance of the colonies, it is very cost effective, and it inhibits the swarming of Proteus spp., which are frequently involved in urinary tract infections.
CLED also contains lactose as a substrate and a dye named bromothymol blue which demonstrates changes in pH. The pH of CLED plates is neutral, so the plates are pale green in colour. Bacteria such as E. coli that produce the enzyme β-galactosidase break down lactose by fermentation to produce a mixture of lactic and formic acid, making the pH acidic. The colonies and medium then turn yellow, indicating a lactose-positive organism. Lactose-negative bacteria cannot ferment lactose because they do not produce β-galactosidase, and they form pale colonies on CLED.
MacConkey agar (MAC) is a selective medium: the bile salts and crystal violet it contains inhibit most gram-positive organisms, while lactose provides a source of fermentable carbohydrate. MAC is designed to isolate enterics and differentiate them based on their ability to ferment lactose. Neutral red is a pH indicator that turns red at a pH below 6.8 and is colourless at any pH greater than 6.8.
Organisms that ferment lactose, and thereby produce an acidic environment, appear pink because the neutral red turns red. Bile salts may also precipitate out of the medium surrounding the growth of fermenters because of the change in pH. Non-fermenters produce colourless colonies. On MacConkey agar the substrate is lactose, which is fermented by lactose-positive bacteria, e.g. E. coli, to lactic and formic acid, making the medium acidic; the neutral red then changes colour and colonies of E. coli appear violet-red. Lactose-negative bacteria produce pale colonies, so MAC can be used to select out and differentiate between naturally occurring gut organisms and gut pathogens.
1. Figure 13 shows that the majority of the colonies in laboratory 3 had an entire colony edge and a flat elevation. The streak plate method obtains single colonies by first streaking a portion of the agar plate with an inoculum and then streaking successive areas of the plate to dilute the original inoculum, so that single colony-forming units (CFUs) give rise to isolated colonies.
2. Potential problems that could lead to unsuccessful plates or slants include placing the loop in the inner blue flame when sterilising and then, without giving it time to cool, placing it directly onto the plate, killing all the bacteria on the plate. Another problem is insufficient flaming between the quadrants, leaving the loop unsterile and leading to contamination by other organisms.
3. A bacterial cell is a microscopic single-celled organism which thrives in diverse environments. A bacterial colony is a discrete accumulation of a significantly large number of bacteria, usually arising as a clone of a single organism or of a small number of organisms.
4. Refer to Figures 15 and 16
5. CLED is a solid medium used in the isolation of gut and urinary pathogens including Salmonella, Escherichia coli and Proteus species. It contains lactose as a substrate and a dye called bromothymol blue which indicates changes in pH. Prior to inoculation, plates of CLED are pale green in colour because their pH is neutral. Bacteria such as E. coli that produce the enzyme β-galactosidase break down lactose through fermentation to produce a mixture of lactic and formic acid, so that the pH becomes acidic and the colonies and medium turn yellow (lactose positive). Lactose-negative bacteria, e.g. Salmonella poona, are unable to ferment lactose because they cannot produce β-galactosidase, and they usually produce pale colonies on CLED. CLED can therefore be used to differentiate between naturally occurring gut organisms and gut pathogens.
6. From the results, we can conclude that the bacterium that fermented lactose was Escherichia coli and the non-fermenting bacterium was Salmonella poona.
7. Mannitol salt agar (MSA) is used as a selective and differential medium for isolating and identifying Staphylococcus aureus from clinical and non-clinical specimens. It contains the carbohydrate mannitol, 7.5% sodium chloride and the pH indicator phenol red. Phenol red is yellow below pH 6.8, red at pH 7.4 to 8.4 and pink above 8.4. The sodium chloride makes this medium selective for staphylococci, since most bacteria cannot survive such levels of salinity.
The pathogenic species of Staphylococcus ferment mannitol and thus produce acid, which turns the pH indicator yellow. Non-pathogenic staphylococcal species grow, but no colour change is produced.
The formation of yellow halos surrounding the bacterial growth is presumptive evidence that the organism is a pathogenic Staphylococcus. Significant growth that produces no colour change is presumptive evidence of a non-pathogenic Staphylococcus. Staphylococci that do not ferment mannitol produce a purple or red halo around their colonies.
A viable count
A viable count is a method for estimating the number of bacterial cells in a specific volume of a sample. The method relies on the bacteria growing into colonies on a nutrient medium; the colonies become visible to the naked eye and can then be counted. For accurate results the total number of colonies must be between 30 and 300. Fewer than 30 indicates results that are not statistically valid and are unreliable; more than 300 colonies often indicates overlapping colonies and imprecision in the count. To establish an appropriate final figure for the total colony count, several dilutions are normally cultured. The viable count method is used by microbiologists when examining bacterial contamination of food and water to ensure that they are suitable for human consumption.
Serial Dilution
A serial dilution is a series of consecutive dilutions used to reduce a dense culture of cells to a more workable concentration. With each dilution the concentration of bacteria is reduced by a set factor, and by calculating the total dilution over the entire series the number of initial bacteria can be calculated. After dilution of the sample, the number of visible bacteria is estimated using a surface plate count, either the spread plate technique or the pour plate technique. Once incubated, the colonies are counted and an average is calculated. The number of viable bacteria per ml or per gram of the original sample is then calculated, on the principle that one visible colony is the direct result of the growth of one single organism. Nonetheless, bacteria are capable of clumping together, and a colony can therefore be produced from a clump. For that reason counts are expressed as colony-forming units (cfu) per ml or per gram, which is also why such counts are estimations.
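To make the arithmetic concrete, the short sketch below (in Python; the function name and the example figures are illustrative assumptions, not taken from this laboratory) applies the standard plate-count formula described above:

# Standard plate-count formula:
#   cfu per ml = colonies counted / (dilution factor x volume plated in ml)
# The 30-300 rule described in the text is enforced before a count is accepted.
def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    if not 30 <= colonies <= 300:
        raise ValueError("count outside the statistically valid 30-300 range")
    return colonies / (dilution_factor * volume_plated_ml)

# Example: 150 colonies on the 10^-6 dilution plate, 0.1 ml spread:
print(f"{cfu_per_ml(150, 1e-6, 0.1):.1e} cfu/ml")  # 1.5e+09 cfu/ml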
Spread plate technique
The spread plate technique is used for viable plate counts, where the total number of colony-forming units on a single plate is counted. The technique is useful in microbiology in many ways; for example, it can be used to calculate the concentration of cells in the tube from which the sample was originally plated. It is also routinely used in enrichment, selection and screening experiments. There are some disadvantages to the technique, however; for instance, crowding of the bacterial colonies can make enumeration much more challenging.
Pour plate technique
The pour plate method is used to count the colony-forming bacteria present in a liquid sample. It has many advantages; for example, it allows the growth and quantification of microaerophiles, as there is little oxygen at the surface of the agar, and identification of anaerobes, aerobes or facultative aerobes is much easier, as all of them are able to grow within the medium. However, there are a few disadvantages to the pour plate technique; for example, the temperature of the medium needs to be tightly regulated. If the medium is too warm the microorganisms will die, and if it is too cold the agar will clump together, which can sometimes be mistaken for colonies.
Introduction
In microbiology, understanding the characteristics that bacteria possess is critical. To build a full picture of these characteristics, bacteria undergo simple tests named primary tests, which can establish whether the cells are gram positive or gram negative, whether they are rods or cocci in shape, and whether the bacteria are catalase positive or catalase negative.
The catalase test is a primary test used to detect catalase enzymes through the decomposition of hydrogen peroxide, releasing oxygen and water as demonstrated by the equation below:
2 H₂O₂ → 2 H₂O + O₂
Hydrogen peroxide is produced by various bacteria as an oxidative product of the aerobic breakdown of sugars. However, it is highly toxic to bacteria and can lead to cell death. The catalase test serves many purposes, such as differentiating between the morphologically similar Enterococcus or Streptococcus (catalase negative) and Staphylococcus (catalase positive). The test is also valuable in differentiating between aerobic and obligate anaerobic bacteria, and it can be used as an aid in the identification of Enterobacteriaceae.
The oxidase test is another biochemical primary test, used to determine whether bacteria produce cytochrome c oxidase, an enzyme of the bacterial electron transport chain.
Oxidase-positive bacteria possess cytochrome oxidase or indophenol oxidase, both of which catalyse the transport of electrons from donor compounds such as NADH to electron acceptors, usually oxygen. If present, cytochrome c oxidase oxidises the reagent (tetramethyl-p-phenylenediamine) to indophenols, producing a purple colour as the end product. When the enzyme is not present, the reagent remains reduced and is colourless.
Organisms which are oxidase positive include Pseudomonas, Vibrio, Brucella, Pasteurella and Kingella. Organisms which are oxidase negative include Acinetobacter, Staphylococci, Streptococci and all Enterobacteriaceae.
Primary tests are helpful in establishing the initial characteristics bacteria possess. However, more advanced methods may be used to finalise identification to the level of genus and species, to enable treatment for patients and to enable appropriate action to prevent further transmission of infection. Laboratories today rely on rapid identification kits which analyse the biochemical properties of bacteria; this is known as biotyping.
Rapid identification kits are used for the identification and differentiation of different bacteria. The ID32E is commonly used to identify members of the Enterobacteriaceae; the IDSTAPH kit is used to identify members of the Staphylococcaceae, while the IDSTREP strip is used to identify the Streptococcaceae. The kits consist of wells containing dried substrates such as sugars or amino acids, which are reconstituted by the addition of a saline suspension of bacteria. The results are then read against a computer profile linked to identification software, from which the genera and species can be analysed and differentiated.
The genocide of Darfur
How would you feel to be without a home, family, and basic needs? What about having to struggle everyday just to live your life? If that is not bad enough, imagine being in a constant state of danger. The genocide of Darfur is rooted in decades of conflict and has lasting effects on the community that have resulted in an unstable environment. “The Sudanese armed forces and Sudanese government-backed militia known as Janjaweed have been fighting two rebel groups in Darfur, the Sudanese Liberation Army/Movement (SLA/SLM) and the Justice and Equality Movement (JEM),”(www.international.ucla).
The first civil war ended in 1972 but fighting broke out again in 1983, and this is what really set the stage for the genocide. The genocide itself escalated and is credited as starting in February of 2003; it is considered the first genocide of the 21st century. "The terrible genocide began after rebels, led mainly by non-Arab Muslim sedentary tribes, including the Fur and Zaghawa, from the region, rose against the government."(www.jww). "This genocide is the current mass slaughter and rape of Darfuri men, women, and children in western Sudan." (www.worldwithoutgenocide). Unrest and violence continue today. The group carrying out the genocide is the Janjaweed. They have destroyed those in Darfur by "burning villages, looting economic resources, polluting water sources, and murdering, raping, and torturing civilians."(www.worldwithoutgenocide).
Believe it or not, this genocide is still going on today. As a result, Darfur now faces very great long-term challenges and will never be the same. There are millions of displaced people who depend on refugee camps. At this point, however, the camps are not so much a source of refuge as a danger in themselves, owing to severe overcrowding (3). It is often unsafe for anyone to leave the camps: women who would normally go in search of firewood cannot do so anymore, because they may be attacked and raped by the Janjaweed militias (www.hmd). The statistics of this genocide show how bad it really is. Since it began in 2003, it has produced over 360,000 Darfuri refugees in Chad, caused the deaths of over 400,000 people, and affected 3 million people in some way (www.jjw). On top of that, more than 2.8 million people have been displaced (www.worldwithoutgenocide). In one survey, 61% of the respondents had witnessed the killing of one of their family members. In addition, 400 of Darfur's villages have been wiped out and completely destroyed (www.borgenproject). To show that this is a real problem, here is a personal experience. "Agnes Oswaha grew up as part of the ethnic Christian minority in Sudan's volatile capital of Khartoum. In 1998, Agnes immigrated to the United States, specifically to Seattle. She has now become an outspoken advocate for action against the atrocities occurring in Darfur" (www.holocaustcenterseattle). Agnes has used her struggles to inspire others. She is a prime example that you can make something good out of something so devastating and wrong.
There are many help groups that are working to inform people about this problem. The two that I am going to highlight are the Darfur Women Action Group (DWAG) and the Save Darfur Coalition. The first group, the Darfur Women Action Group (DWAG), is an anti-atrocities nonprofit organization led by women. They envision a world with justice for all, equal rights, and respect for human dignity. They provide the people of Darfur with access to tools that allow them to oppose violence. This group also addresses massive human rights abuses in their societies and works with others to prevent future atrocities, all while promoting global peace. Along with that, they ask us to speak out and spread the word. Their ultimate goal is to bring this horrific situation to the attention of the world so as to end it for good (www.darfurwomenaction). The next help group is the Save Darfur Coalition. They have helped develop strategies and advocated for diplomacy to encourage peace. They have also helped secure the deployment of peacekeeping forces in Darfur. Because of them, there have been billions of dollars in U.S. funding for humanitarian support. Violence against women has been used as a weapon of genocide, and because of them, awareness of this issue in Congress has grown (www.pbs).
As Americans, we can do many things to stop this. First, we must put aside domestic politics and help people even if they are not part of our country. The growing genocide in Darfur is not a partisan issue but one that stretches across a wide variety of constituencies, or bodies of voters and supporters. Some of these include religious, human rights, humanitarian, medical, and legal communities. All of these, and others, are advocating a forceful worldwide response to the crisis (www.wagingpeace).
The genocide of Darfur is atrocious. It is rooted in decades of conflict and has lasting effects on the community of Darfur. This conflict has resulted in an unstable environment for all those who belong to the country of Sudan and has made ordinary people live in fear every day. Millions of people are affected, more than 2.8 million have been displaced, and 400,000 innocent people have been killed, all because of the actions of the Janjaweed. This genocide is an overall horrendous thing that is actually going on in the world around us. There is much that can be done to help, but can we, in the good situations that we are in, take time out of our own lives to think about those who really need our help? Do we care enough to spend time and money on people we don't even know? If we choose to do so, we could make a huge difference in the lives of people. Even though they might live across the world from us and lead very different lives, they are very similar to us in many ways.
Status of income groups and housing indicators
1. Introduction
Buying a house is often the biggest transaction a family makes in its lifetime. Furthermore, the economic, social, and physical properties of neighborhoods have short-term and long-term impacts on residents' physical and psychological status (Ellen et al., 1997). Accordingly, inadequate housing brings many health risks and can afflict adults, as well as children, with a variety of mental and physical disorders (Bratt, 2000; Kreiger & Higgens, 2002). Unstable housing conditions, moreover, lead to stress and thus have manifold negative impacts on people's education and professions (Rothstein, 2000). Despite the importance of housing in human life, the provision of adequate and affordable housing for all people is one of the persistent problems of human society, since almost half of the world's population lives in poverty and about 600 to 800 million people reside in substandard houses (Datta & Jones, 2001). Despite poor housing in developing countries, there are no organizations and institutions to supply services and organize institutional development so as to strengthen the different classes of society (Anzorena, 1993; Arrossi et al., 1994). For example, 15% of people in Lagos, 51% in Delhi, 75% in Nairobi, and 85% in Lahore live in substandard housing. It has been estimated that thousands of low-income residents have no access to clean piped water and are thus forced to use infected or substandard water (Hardoy, Mitlin & Satterthwaite, 2001). For instance, 33% of people in Bangkok and 5 million in Kolkata do not have access to clean water, and 95% of people in Khartoum live without a sewage system. According to a report by the World Health Organization, the probability of death among children living in substandard settlements is 40 to 50 percent higher than among children in Europe and North America (Benton-Short and Short, 2008). This is because where they live lacks security and essential infrastructure and facilities such as water, electricity and sewage; in addition, they are vulnerable to numerous risks (Brunn, Williams and Ziegler, 2003). In 2005, about 30 environmental disasters led to a death toll of almost 90 thousand people, a majority of them from poor countries and low-income groups (Chafe, 2006).
Planning in the housing sector in Iran lacks an efficient statistical system. Given the paradoxes, gaps and inconsistencies in the data and statistics from the housing sector, reaching a comprehensive and clear plan to address the problems of this sector is almost impossible. The lack of integrity among the organizations responsible for collecting and arranging housing index information (the Statistical Center of Iran, the Central Bank, the Ministry of Housing and Urban Development, municipalities, etc.) must be considered a serious problem. Aiming to evaluate the status of income groups and housing indicators, such as the average substructure (floor) area, the average income level, etc., in the existing deciles, the present study has therefore estimated housing demand and evaluated the financial power of low-income groups in the city of Isfahan, so that the results can be applied in accurate planning of housing for the low-income groups of the city of Isfahan.
2. Theoretical framework
Housing is the smallest component of human settlements and a concrete representation of development. According to Williams (2000), cities embracing social justice are those with a greater share of high-density housing that provide services and facilities. Rappaport (1969) maintains that culture and human understanding of the universe, together with ways of life, have played a crucial role in housing and its spatial divisions. In Le Corbusier's view, a house must respond to both the physical and the spiritual needs of people (Yagi, 1987). Housing is the basic environment of the family: a safe place to rest away from the routines of work and school, and a private space. According to Fletcher, home is a paradoxical ground of both tenderness and violence. Gaston Bachelard, in The Poetics of Space, called home an "atmosphere of happiness", wherein rest, self-discovery, relaxation and maternity become important. According to Short (2006), housing is the nodal point of all dualities and paradoxes. Housing and housing planning have been analyzed from different perspectives. Development and growth pole theory treats acute housing problems as transitional, part of development programs (Shefa'at, 2006). On the contrary, dependency theory and counter-urbanization theories have recognized inequality and the one-sided distribution of product from the margin to the center as the main reason for housing deterioration (Athari, 2003). From the economic viewpoint of the market, housing issues should be left to the market mechanism (Dojkam, 1994), housing needs in the market system should be provided by the private sector (Seifaldini, 1994), and the government should avoid spending funds on low-income housing (Chadwick, 1987). The urban management approach, a very important orientation from the point of view of political economy, holds that wider social and economic contexts play a role in the formation of urban residential patterns. One of the most important parts of urban planning is the planning of housing development; economic factors such as the cost of living, employment bases and instability of income play a very important role in housing planning. Beside the economic factor, architectural style is the most determining factor in housing planning. Regional indigenous languages, stylistic trends, weather, geography, local customs, and other factors influence the development of housing planning and housing design. The five characteristics of housing are: the type of building, style, density, the size of the project, and location (Sendich, 2006). Housing planning should be designed so that, in addition to adequate housing, basic ecological variables are also included (Inanloo, 2001). Governments often plan housing at three levels, national, regional and municipal, so as to employ it as a technique for solving the housing problems of their citizens (Ziari & Dehghan, 2003). The fundamental goal of housing planning at the national level is to balance housing supply and demand with regard to its position in macroeconomics (Sadeghi, 2003). In regional housing planning, supply and demand are evaluated at the regional level and the aim is to balance them.
The difference between housing planning at the regional and national levels is that at the regional level the relation between housing and macroeconomics is not considered; the emphasis is instead on the economic potentials within the regions (Zebardast, 2003). Local housing planning is conducted at three scales: town, city and urban area. Housing planning can be approached in two ways. The first approach distributes the goals and credits of national and regional plans among the smaller geographic units of region, city or town. The second approach investigates the housing status at the local level, estimates the land needed for future housing development, and differentiates the land appropriately (Tofigh, 2003). Another approach is concerned with low-income housing and presents three kinds of programs: 1. programs that provide subsidies for rental housing, either individually or in complexes; 2. tax credits that result in the production of low-rent housing units; 3. supportive programs for affordable housing for the lower classes (Mills et al., 2006). Such policies come with tools such as tax deductibility, long-term loans, insurance and so on. The UN addresses the housing of those in need through the Commission on Human Settlements in the form of the Habitat Program. In 1986, the UN codified the Global Shelter Strategy for the Homeless to the year 2000 with an empowerment approach. In 1992, Habitat II addressed the security of the right to housing, particularly for low-income groups. In 2001, at the special session of the UN General Assembly in New York, the need to address urban poverty and homelessness received serious consideration.
3. Methodology
This study is a piece of basic-applied research adopting a descriptive-analytical methodology. The geographic scope of the research is the political-administrative area of the city of Isfahan in 2014. The variables of the research are the income deciles, developments in housing quantity, land and housing prices, the housing finance system, the position of housing in the expenditure basket of low-income households, the Gini coefficient of housing costs, effective demand for housing across the income deciles in terms of substructure area, and the housing access index. The statistics come from the Statistical Center of Iran and the household cost/income survey of the city of Isfahan. The methods used include the statistical technique of population deciles, while the financial power of the income groups was estimated using the indirect method.
4. Results
4.1. Changes in the quantity of housing in Isfahan
According to the statistics, the population and the city grew substantially in the decade from 1996 to 2006. The average population growth rate over the years 1996 to 2006 was 1.37%, the growth of households was 52.3% and the growth of housing was 7.3% (Table 1).
According to the 2011 census, the population was about 1,908,968 people residing in 602,198 households, hence an average household size of 3.17 individuals. However, the number of housing units available in that year for the 602,198 households of the city of Isfahan fell short of the standard, leaving a housing shortage of 0.193 percent. With the growth in the number of housing units in 2011, however, it can be stated that the 215,000-unit Mehr Housing Project has had a large impact on the housing stock of the city of Isfahan (see Table 1).
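These figures can be checked directly (a minimal sketch in Python, using only the census values quoted above):

# Verifying the 2011 row of Table 1 from the census figures.
population, households, units = 1_908_968, 602_198, 601_035
print(round(population / households, 2))             # 3.17 persons per household
print(households - units)                            # 1163 units short
print(round((households - units) / units * 100, 3))  # 0.193 percent shortage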
Table (1): Changes in the quantity of housing in the city of Isfahan compared to its population changes (period 1996-2011)
Period | Population | Number of households | Household size | Typical residential units | Proportion of households to residential units | Population growth rate (%) | Urbanization rate (%) | Housing shortage (units) | Shortage as % of existing units
1996 | 1310659 | 326581 | 4.03 | 325225 | 1.239 | 1.51 | 74 | 1356 | 0.416
2006 | 1642996 | 466760 | 3.52 | 458852 | 7.541 | 1.37 | 83 | 7908 | 1.723
2011 | 1908968 | 602198 | 3.17 | 601035 | 5.264 | 1.3 | 85 | 1163 | 0.193
Source: the researcher’s calculations, 2014
4.2. Evaluation of the price changes of land and housing in the city of Isfahan
The value of properties (land, housing, and rent prices) is one of the main factors determining the quality and quantity of people's housing. When housing and land become a playground for capital (as in today's Iran), the tendency to own private housing increases, and this leads to an increase in demand. Considering the instability and risks of certain other investment areas (such as manufacturing and agriculture), investing in the housing sector has always been regarded as safer; this has driven prices up and widened the gap between effective demand and potential demand.
During the evaluation, we encounter huge fluctuation in the value of housing lands, both dilapidated and new, and an increase in the housing rent prices, as has been presented in detail in Table 2.
Table (2): Changes in prices of land, housing, and rents, 2003-2013
Year | Dilapidated residential building, price per square meter (thousand rials) | Annual growth (%) | Residential unit, price per square meter (thousand rials) | Annual growth (%) | Rent per square meter (thousand rials) | Annual growth (%)
2003 | 2632 | – | 3007 | – | 13034 | –
2004 | 3562 | 35.3 | 3373 | 10.8 | 13177 | 1.08
2005 | 4035 | 11.7 | 4251 | 20.6 | 14582 | 9.6
2006 | 3839 | -5.1 | 4702 | 9.5 | 16313 | 10.6
2007 | 5706 | 32.7 | 8181 | 42.5 | 20975 | 22.2
2008 | 7278 | 21.5 | 8485 | 3.5 | 24600 | 17.5
2009 | 4929 | -47.6 | 8211 | -3.3 | 25195 | 18.2
2010 | 4612 | -6.8 | 8676 | 5.3 | 28333 | 11.07
2011 | 4978 | 7.3 | 9549 | 9.1 | 30075 | 5.7
2012 | 6332 | 10.2 | 12385 | 22.8 | 35809 | 16.01
2013 | 8571 | 26.1 | 16624 | 25.4 | 45261 | 20.8
Source: Statistical Center of Iran, Statistical Yearbook of Isfahan Province, 2003-2004; and author’s calculations, 2014
Considering the price per square meter of housing units in the city of Isfahan, we see that this market fluctuated greatly between 2004 and 2006 and fell immediately in 2007. But the greatest fluctuation came in 2009, when the price of dilapidated units dropped by 47.6%; the decrease continued into the following year before prices began to rise again. The outset of the Mehr Housing Project, the economic downturn in the investing countries (foreign participation sharply declined), and political issues can be assumed to be the main reasons for the price drop in the years 2009 and 2010.
However, during the period the price of housing units per square meter showed greater stability: except in 2009, when a 3.3% negative growth was experienced, prices did not decline again. One of the characteristics of the housing market during the research period was the rise in prices. Evaluation of the inflation indexes, the prices of production factors, and housing prices shows that the housing market in Isfahan has largely been free of speculative intervention, and the bulk of the increase in housing prices resulted from rising prices of production factors and overall inflation.
An important point is that the gap between rent prices and housing prices increased continuously over the years under study.
Figure 1: Changes in the prices of housing, land and rent during the period 2003-2013
Source: Statistical Center of Iran and author’s calculations, 2014
4.3. Study of the changes in the production of housing in the city of Isfahan
The increase in land prices, land being one of the most important components of housing production, has reduced the production of housing on the one hand and increased the building density factor on the other. The cost of housing construction per square meter in the city of Isfahan is another indicator that can be applied objectively in analyzing access to adequate housing. The cost of housing naturally increases over time, but the pace of that increase is what matters. At the beginning of the period under study, the cost of building one square meter of housing in the city of Isfahan was 430,000 IRR, while at the end of the period it was 3,102,000 IRR. Although changes in housing investment are affected in the short term by changes in demand-side factors, such as housing prices and the granting of loans, long-term factors such as land prices, construction costs and inflation also affect them. The cost of housing production had an upward trend throughout the period, but the pace of this growth varied. Much of the increase in construction cost was due to the rising prices of land, materials and labor; some of it reflected declining productivity.
Table 3: Changes in the cost of housing construction per square meter, 2001-2012 (one thousand IRR)
Index | 2001 | 2006 | 2008 | 2009 | 2010 | 2011 | 2012
Cost of one square meter of construction | 430 | 1308 | 1542 | 2030 | 2441 | 2582 | 3102
Growth compared to the previous year (%) | – | 204 | 17.8 | 31.6 | 20.2 | 5.7 | 20.1
Source: Statistical Center of Iran, the Central Bank of the Islamic Republic of Iran, 2001-2012 and author’s calculations
Chart 2: Changes in the cost of housing construction per square meter 2001-2012 (percent)
4.4. Evaluation of housing finance system in the city of Isfahan
4.4.1. Household savings
In the area of private housing, the greatest and most reliable source of housing finance is a household's savings. Savings are the part of disposable income that is not spent on family consumption and is instead set aside. In this study, savings were obtained as the difference between household income and expenditure (Table 4).
Table (4): The average cost, income, and savings of an urban household over the years 2006-2012 (IRR)
Year | Cost | Income | Savings
2006 | 69059825 | 57289929 | -11769896
2008 | 88508931 | 74529939 | -13978992
2009 | 101319582 | 87730581 | -13589001
2010 | 114495202 | 93390015 | -21105187
2011 | 137279114 | 109217181 | -28061933
2012 | 157761405 | 145924872 | -11836533
Source: Statistical Yearbook of Isfahan Province and author’s calculations, 2014
During the study period, household savings were negative for most years. Accordingly, assuming the consumption pattern of households remains constant, household savings cannot be a source of funding for housing. It is possible, however, that by changing consumption patterns and lifestyles and thereby increasing savings, they could become such a source.
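As a minimal check of Table 4's construction, assuming savings are simply income minus cost (a sketch in Python, using the 2006 row):

# Savings = income - cost; a negative value means households dissave.
cost, income = 69_059_825, 57_289_929
print(income - cost)  # -11769896 IRR, matching Table 4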
4.4.2. Bank credits
Credit and banking facilities involve financing or guaranteeing the obligations of applicants based on the interest-free banking law. The methods of extending banking facilities include: loan, civic management, legal management, direct investment, partnership, forwards, installment sales, hire-purchase, and contract of farm letting.
Table (5): The amount of grants by the banks of Isfahan Province to the private sector, by major economic sector, 2001-2012
Index | 2001 | 2006 | 2008 | 2009 | 2010 | 2011 | 2012
Number of facilities and bank credits | 170128 | 384418 | 262110 | 160060 | 308439 | 567426 | 328695
Percentage allocated to housing per year | 28.67 | 35.21 | 16.5 | 9.49 | 12.18 | 21.45 | 12
Source: Statistical Yearbook of Isfahan Province and author’s calculations, 2014
In total, an annual average of about 20 percent of bank credits has been allocated to the housing sector, which is a substantial amount. Therefore, considering the financial resources and the rate of absorption of bank deposits by the banks of the province, 20% of those deposits can be considered a source of investment in the housing sector.
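The roughly 20 percent figure can be reproduced from the yearly housing shares in Table 5 (a minimal sketch in Python):

# Average yearly share of bank credits allocated to housing (Table 5).
shares = [28.67, 35.21, 16.5, 9.49, 12.18, 21.45, 12]
print(round(sum(shares) / len(shares), 1))  # 19.4, i.e. roughly 20 percent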
4.4.3. Government development credits
Government development credits are the budget allotted annually, based on annual budget rules, for implementing development plans and for expanding current expenditure on the government's economic and social plans, both nationally and provincially. This budget is divided into three categories: general, economic and social.
Table (6): Government credits as divided by budget seasons (2006-2012)
Year | Sum | General affairs credit | % | Social affairs credit | % | Economic affairs credit | %
2006 | 8928397 | 6947286 | 77.8 | 556603 | 6.2 | 1424508 | 15.9
2008 | 5461355 | 1175093 | 21.5 | 1250856 | 22.9 | 3035406 | 55.5
2009 | 3177898 | 1866020 | 58.7 | 544190 | 17.1 | 767688 | 24.1
2010 | 3996923 | 2074964 | 51.9 | 700944 | 17.5 | 1221015 | 30.5
2011 | 4031126 | 2338934 | 58.01 | 605371 | 15.01 | 1086821 | 26.9
2012 | 2888659 | 2304195 | 79.7 | 161633 | 5.5 | 422831 | 14.6
Source: Statistical Yearbook of Isfahan Province and the author’s calculations, 2014
Table (7): Share of credits of the housing sector in social affairs program 2006-2012 (million rials)
Year | Sum of credits | Economic sector credit | % of total | Housing sector credit | % of economic credits | % of total credits
2006 | 8928397 | 1424508 | 15.9 | 513921 | 36.07 | 5.7
2008 | 5461355 | 3035406 | 55.5 | 675548 | 22.2 | 12.3
2009 | 3177898 | 767688 | 24.1 | 207621 | 27.04 | 6.5
2010 | 3996923 | 1221015 | 30.5 | 419433 | 34.35 | 10.4
2011 | 4031126 | 1086821 | 26.9 | 419154 | 38.5 | 10.3
2012 | 2888659 | 422831 | 14.6 | 126148 | 29.8 | 4.4
Source: Statistical Yearbook of Isfahan Province and the author’s calculations, 2014
Considering the results, it can be argued that the credits of the housing sector have varied over the years, ranging from 5 to 12 percent of the entire budget, which is a significant figure in its own right.
Of course, this may reflect the respective roles of government and the private sector in housing investment, although in recent years the government's role has become more serious with the emergence of plans such as Mehr Housing, the retrofitting plan, and the renovation of distressed areas. (Note that since 2006 the economic sector has always received the highest credits, and within the economic sector the housing and the urban and rural sectors received the greatest amounts.)
4.4.4. Determining the position of housing in the expenditure basket of low-income households of Isfahan city
To investigate housing costs in each of the income groups, the study households were first ordered in descending order of income for the years 2005-2011. The households were then divided into ten equal groups (deciles). In the next step, based on the data on housing costs and the entire food and non-food expenditure of each household, the housing costs and total food and non-food expenditure of each income decile were calculated for the different years. Finally, the mean income, housing costs and total food and non-food costs of households in each decile were investigated. In order to present authentic and realistic analyses, all variables were deflated by the ratio of the price index of the city of Isfahan to the fixed prices of the year 2005.
Table (8): Mean housing cost of low-income household of Isfahan City, 2005-2011.
Year | Mean cost | Growth compared to previous year (%)
2005 | 14124382 | –
2006 | 17433765 | 18.9
2007 | 21248483 | 17.9
2008 | 26704381 | 20.4
2009 | 26390670 | -1.18
2010 | 28886146 | 8.6
2011 | 33238345 | 13.09
Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011.
As displayed in Table 8, housing costs in Isfahan city followed an increasing trend (except in 2009). The highest increase in housing costs took place in 2008 (20.4%) and the lowest increase occurred in 2010 (8.6%).
In this regard, it should be noted that the government's policies to establish stability, regulate market prices and prevent undue increases have been very effective, such that the rate of price increase relative to the previous year has usually stayed within the same range. However, to present precise estimates of the housing costs of low-income groups in Isfahan city, the results were analyzed by income decile (Table 9).
Table (9): Variation in the mean housing cost of urban households of Isfahan city, 2007-2011
Period | Decile 1 | Decile 2 | Decile 3 | Decile 4 | Decile 5 | Decile 6 | Decile 7 | Decile 8 | Decile 9 | Decile 10
2007-2011 | 25.14 | 28.2 | 28.9 | 25.7 | 32.5 | 33.68 | 33.3 | 35.58 | 34.33 | 36.1
Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011
As can be seen, the enforcement of various economic policies in the housing sector and the balance between housing supply and demand in Isfahan city during the study period were such that the average growth in housing costs for high-income households was higher than for low-income households. The highest increase belonged to the 10th decile (36.1%) and the lowest to the 4th decile (25.7%).
In the following section, to present precise results, the share of housing costs in the total costs of urban households is analyzed by income decile (Table 10).
Table (10): Share of housing costs in the total costs of urban households of Isfahan, 2007-2011.
Period | Decile 1 | Decile 2 | Decile 3 | Decile 4 | Decile 5 | Decile 6 | Decile 7 | Decile 8 | Decile 9 | Decile 10
2007-2011 | 47.4 | 46.49 | 44.15 | 42.2 | 40.38 | 38.6 | 35.56 | 34.33 | 33.9 | 31.5
Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011
According to the results, the share of housing costs in the total costs of high-income households (31.5%) is lower than that of low-income households (47.4%). Put otherwise, Isfahan urban households in the lower deciles of society spend a greater proportion of their total (food and non-food) expenditure on housing costs, while households of the upper deciles devote a smaller proportion of their expenditure to housing.
Diagram (3): Share of housing costs in the total costs of urban households in Isfahan city within the framework of income Deciles 2007-2011
4.4.5. Estimation of Gini coefficient of housing costs of households of Isfahan city
This index is usually used to investigate class differences and income distribution among the society's deciles. The closer its value is to 1, the more unequal the distribution; the closer it is to zero, the more equal the distribution.
Here, the Abunoori equation is used to calculate the Gini coefficient, where 'y' stands for the upper limit of the expenditure groups, f(y) is the relative cumulative frequency of households with expenditure up to 'y', and 'u' stands for the regression error. Table 11 presents the values of the Gini coefficient for the housing costs of households of Isfahan city.
Table (11): Gini coefficient of housing costs of urban households in Isfahan city
Year 2003 2004 2005 2006 2007 2008 2009 2010 2011
Gini coefficient 0.323 0.306 0.294 0.309 0.317 0.343 0.370 0.400 0.405
Source: Author’s calculations based on Plan for costs and income of urban households of the Province 2003-2011.
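Because the Abunoori regression itself is not reproduced above, the sketch below instead uses the standard trapezoidal approximation of the Lorenz curve to show how a Gini coefficient like those in Table 11 (and the Lorenz ordinates behind Diagram 4) can be computed from grouped decile data; the decile cost figures in the example are placeholders, not the study's data.

def gini_from_decile_costs(costs):
    """Gini coefficient from ten equally sized groups, ordered low to high."""
    total = sum(costs)
    cum_share = 0.0  # cumulative share of housing costs (the Lorenz ordinate)
    area = 0.0       # area under the Lorenz curve, accumulated by trapezoids
    for c in costs:
        prev = cum_share
        cum_share += c / total
        area += (prev + cum_share) / 2 * 0.1  # each decile spans 10% of households
    return 1 - 2 * area  # Gini = 1 - 2 * (area under the Lorenz curve)

# Placeholder decile housing costs (Decile 1 .. Decile 10):
example = [3, 4, 5, 6, 7, 8, 10, 13, 18, 26]
print(round(gini_from_decile_costs(example), 3))  # -> 0.358 for this example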
During the first years of the study period, the Gini coefficient for housing costs of urban households declined, pointing to a more equal distribution of housing costs among urban households. This decrease per se is no ground for optimism about the housing costs of lower income groups: unless it is accompanied by a more equal distribution of household income, it reflects an increase in the share of housing in household budgets and hence greater pressure on them. From 2006 onward the coefficient has been rising and the gap has been widening. The Lorenz curve, obtained in this study by plotting the cumulative frequency of households against the cumulative percentage of housing costs, was used to further illustrate the degree of inequality: the farther the Lorenz curve lies from the line of equal distribution, the greater the inequality.
Diagram (4): Lorenz curve of the gap between housing costs in income Deciles of urban households in 2011.
4.4.6. Effective housing demand in income Deciles in terms of substructure area in Isfahan city
The following equation was used to estimate the effective demand of housing units in income Deciles of Isfahan urban households.
In the above equation:
Q stands for the amount of effective demand per square meter
CH represents housing costs of household
Bu stands for the floor area (substructure) of the household's housing unit
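The equation itself is not reproduced in this copy of the text. One reading consistent with the variable definitions and with the square-meter units of Table 12 is sketched below, with the price per square meter proxied by the city-wide ratio of housing cost to floor area; both the formula and the figures are assumptions for illustration, not the study's verified method.

def effective_demand_sqm(ch_decile, ch_city_mean, bu_city_mean):
    """Square meters that a decile's housing budget (CH) can command.

    ch_decile    -- mean housing cost of the decile (rials)
    ch_city_mean -- mean housing cost across all households (rials)
    bu_city_mean -- mean floor area (Bu) across all households (square meters)
    """
    price_per_sqm = ch_city_mean / bu_city_mean  # proxy price, an assumption
    return ch_decile / price_per_sqm

# Placeholder figures, not the study's data:
print(effective_demand_sqm(ch_decile=9_000_000,
                           ch_city_mean=30_000_000,
                           bu_city_mean=75))  # -> 22.5 square meters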
Table 12 presents in detail the average amount of effective demand in different years in each of the income groups in Isfahan city.
Table 12. Amount of effective demand in different years in each income Decile of Isfahan city (square meter)
Decile 2003 2004 2005 2006 2007 2008 2009 2010 2011
Decile 1 4.6 7 4.5 6 5 4 2 1.3 1.9
Decile 2 7.5 12 8 7.8 5.5 4.5 3.7 2.4 2.5
Decile 3 9.4 14 8.5 9 5.6 5.6 5.4 3 3.8
Decile 4 11 15.5 10 10.1 6.6 7 6 5 4.8
Decile 5 13 17 12.4 12 7.8 8 8 4.9 5.5
Decile 6 15 21 13 14 10.5 9 9 5.6 6.7
Decile 7 17 26.2 14 17 12 10.1 11 1.7 6.4
Decile 8 21 28 9 33 14.8 10.6 17 8 7.4
Decile 9 26 31 23 39 26.2 13 19 12 9
Decile 10 49 45 41 45 31 23 25.6 24 17
Source: Author’s measurements based on the Plan for Costs and Income of Urban Households of the province, 2003-2011
Combining the variation in the income of Isfahan households with the variation in urban housing prices yields the changes in effective demand across the city's income Deciles. As displayed in Table 12, effective demand across income Deciles indicates a wide gap in housing affordability between high-income and low-income households of Isfahan city.
A more important point is that, based on the results, effective demand for housing units decreased in all income Deciles in the final years of the study period. Moreover, with respect to the effective demand of low-income groups, while Deciles 1 to 4 could afford roughly 4 to 11 square meters of housing in 2003, these figures fell to 1.9 to 5 square meters in 2011.
Diagram (5): Amount of effective demand in the 4 lowest-income Deciles of Isfahan city
Diagram (6): Sum of effective housing demand of income Deciles of Isfahan city, 2003-2011
4.4.7. Housing accessibility index in different income groups of Isfahan
The housing accessibility index is obtained by dividing the price of one unit of the good by the consumer's income over a given time unit; it shows how many such time periods the consumer has to work to obtain one unit of the intended commodity. Since annual consumer income is used here, and assuming income is distributed equally across all days of the year, the accessibility index shows how many days of a household's income are needed to buy one square meter of a housing unit.
Accordingly, households in the upper income Deciles can own housing by saving their income for fewer days than households in the lower Deciles. Of course, given the annual increase in the price of each square meter of housing, the number of days of income that must be saved to buy one square meter increased in all income Deciles. The results show that in 2003 a household in the lowest income Decile could afford one square meter of housing by saving its entire income for 75 days, whereas by late 2011 this number had risen to 206 days.
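A minimal sketch of the index as defined above follows; the price and income figures are illustrative assumptions only, chosen so that the result reproduces the 75-day figure cited for Decile 1 in 2003.

def days_per_square_meter(price_per_sqm, annual_income, days_per_year=365):
    """Days of full household income that must be saved to buy one square meter."""
    daily_income = annual_income / days_per_year
    return price_per_sqm / daily_income

print(round(days_per_square_meter(price_per_sqm=3_000_000,
                                  annual_income=14_600_000)))  # -> 75 days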
Table (13): Number of days of household income that must be saved to purchase one square meter of housing in Isfahan city, by income Decile and year.
Decile 2003 2004 2005 2006 2007 2008 2009 2010 2011 Mean of period
Decile 1 75 65 81 94 119 122 135 175 206 107.2
Decile 2 68 58 52 77 80 93 103 143 129 80.3
Decile 3 63 53 42 56 63 81 96 121 104 67.9
Decile 4 58 47 37 43 55 67 82 101 88 57.7
Decile 5 50 41 32 40 48 52 63 80 76 50.4
Decile 6 49 36 28 34 41 45 50 73 66 42.2
Decile 7 43 30 25 30 34 40 43 65 57 39.9
Decile 8 34 25 19 27 31 34 33 55 48 30.6
Decile 9 29 19 16 22 22 28 28 34 39 23.7
Decile 10 10 13 13 14 15 17 17 19 23 13.7
Source: Author’s calculations based on Plan for Costs and Income of Urban Households of Isfahan, 2003-2011.
Besides, while in 2003 the highest-income Decile of Isfahan urban society needed to save 10 days' income to afford one square meter, this number rose to 23 days by the end of the period. The important point is the wide gap between high-income and low-income Deciles in the number of waiting (saving) days needed to obtain one square meter, reported as 10.5 times between the 1st and 10th Deciles. This trend indicates growing hardship and inability in the provision of housing.
Diagram (7): Number of days of income that must be saved to buy one square meter of housing unit.
As can be seen, on average, households in the three upper income Deciles of Isfahan can buy one square meter by saving less than one month of their income, while households in the three lower Deciles have to save around 65 days of income to purchase one square meter.
Besides, the housing accessibility index has also been determined by income group per year; Table 13 shows the trend continuing from 2003 to 2011.
In this regard, assuming that one-third of the income of households in each income group is saved for obtaining housing, the number of years required to obtain an average housing unit (75 square meters) in Isfahan city for each income group is as follows:
Table 14. Number of years of household income needed (waiting period) to buy a housing unit (75 square meters on average), by income Decile, 2003-2011.
Decile 2003 2004 2005 2006 2007 2008 2009 2010 2011
Decile 1 48.4 33 36 58 70 75 84 91 103
Decile 2 30 21 30 38 48 50 71 85 97
Decile 3 24 18.6 26.2 29 43 42.3 63.6 64 87
Decile 4 20.2 16.5 21.8 23.4 36 35.5 50.5 58.6 68.6
Decile 5 17.2 15 17.1 20 29.6 30.7 35.4 48 53
Decile 6 15 12 16.5 18.5 26 27.8 31 40.7 47.4
Decile 7 14 10.7 15 16 22.3 23.6 25.6 34 35
Decile 8 11.1 10 14 13.3 18 21 22.2 27.8 29.6
Decile 9 9.8 8.6 10.1 9.3 15 16.1 17.5 21 24
Decile 10 5.5 5 6 5 7.5 9.5 9.2 9.2 13
Source: Author’s calculations based on Plan for Costs and Income of Urban Households of Isfahan, 2003-2011.
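As a cross-check of the waiting-period arithmetic behind Table 14, the following sketch applies the stated assumptions (one-third of income saved, an average unit of 75 square meters): if d days of full income buy one square meter, a 75-square-meter unit at a one-third savings rate takes 75 * d * 3 days of income.

def waiting_years(days_per_sqm, unit_sqm=75, savings_rate=1/3):
    """Years of saving needed for a whole unit, given days of income per square meter."""
    return unit_sqm * days_per_sqm / savings_rate / 365

# Decile 1 needed 75 days of full income per square meter in 2003 (Table 13):
print(round(waiting_years(75), 1))  # -> 46.2 years, close to Table 14's 48.4;
# the small gap presumably reflects rounding in the source tables.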
In sum, investigation of the housing accessibility index in Isfahan city shows a substantial difference among the city's income groups in affording housing. Not only do the low-income groups have to wait longer than other income groups to obtain housing, but the housing accessibility index has also worsened for them over time. For instance, whereas in 2003 the households of the first Decile could afford housing after about 48 years of saving, this number rose to 103 years in 2011, which means this group can never afford housing under current circumstances. Moreover, comparison of the beginning and end of the period shows that the waiting period for obtaining housing increased roughly two- to threefold for all Deciles, albeit with fluctuations along the way.
4.5. Conclusion
Generally, the formation of economic and spatial duality and disparity in cities and regions commenced after the industrial revolution in Europe and was gradually consolidated with the spread of modernity to peripheral countries. In Iran, this trend began in the early 20th century; the increased urbanization that followed led to the accumulation of problems such as poverty, homelessness, and poor housing in Iran's cities. Faced with restrictions on horizontal development, Isfahan city, as one of the provincial capitals, came to concentrate a large population of low-income groups. The geographical distribution of low-income groups in Isfahan is such that most of them reside in dilapidated areas, including old quarters with or without historical monuments as well as neighborhoods of informal settlement.
The findings demonstrate that at the beginning of the study period the cost of constructing one square meter in Isfahan city was 430,000 rials, whereas by the end of the period this amount had risen to 3,102,000 rials. The greatest increase in housing costs occurred in 2008 (20.4%), and the lowest in 2010 (8.6%). Mean housing costs of the upper Deciles grew more than those of the lower Deciles, with the greatest increase in the 10th Decile (36.1%) and the lowest in the 1st and 4th Deciles (25.14% and 25.7%). The Gini coefficient of housing costs for urban households decreased until 2005 and has followed an increasing trend since 2006, with the gap widening. Investigation of effective demand among the city's income Deciles demonstrated a wide gap in the ability to obtain housing between upper and lower income groups. With respect to the effective demand of low-income groups, while in 2003 Deciles 1 to 4 could afford 4 to 11 square meters of housing, these figures decreased to 1.9 to 5 square meters in 2011. Based on the housing accessibility index, the lowest income Decile could afford one square meter of housing in 2003 by saving 75 days of income; by the end of the study period (2011), this number had risen to 206 days. Moreover, while in 2003 the households of the highest-income Decile needed to save 10 days of income to afford one square meter, this figure reached 23 days by 2011. The important point is the gap between high-income and low-income Deciles in the expected saving days for obtaining one square meter, reported as 10.5 times between the 1st and 10th Deciles. Furthermore, while in 2003 the households of the 1st Decile could afford their required housing after about 48 years of saving, this number rose to 103 years in 2011, meaning it is practically impossible for these households to afford housing.
Fracture union
Management of trauma has always been one of the surgical domains in which oral and maxillofacial surgeons have been engaged over the years. The mandibular body is a parabola-shaped curved bone composed of external and internal cortical layers surrounding a central core of cancellous bone. The goals of treatment are to restore proper function by ensuring union of the fractured segments and re-establishing pre-injury strength, to restore any contour defect that might arise as a result of the injury, and to prevent infection at the fracture site. Since the time of Hippocrates, it has been advocated that immobilisation of fractures to some degree or another is advantageous to their eventual union. The type and extent of immobility vary with the form of treatment and may play an essential part in the overall result. In ordinary fractures, a certain amount of time is required before bone healing can be expected to occur, and this time may vary according to age, species, breed, bone involved, level of the fracture, and associated soft tissue injury.
Delayed union, by definition, is present when an adequate period has elapsed since the initial injury without bone union being achieved, taking into account the above variables. The fact that a bone is delayed in its union does not mean that it will become a nonunion. Classically, the stated causes of delayed union are problems such as poor reduction, inadequate immobilisation, distraction, loss of blood supply, and infection. Inadequate reduction of a fracture, regardless of its cause, may be a prime reason for delayed union or nonunion, as it usually leads to instability or poor immobilisation. A poor reduction may also be caused by interposition of soft tissues in the fracture area, which may delay healing.
Nonunion is defined as the cessation of all reparative processes of healing without bony union. Since all of the factors discussed under delayed union usually occur to a more severe degree in nonunion, the differentiation between delayed union and nonunion is often based on radiographic criteria and time. In humans, failure to show any progressive change in the radiographic appearance for at least three months after the period during which regular fracture union would be expected to have occurred is evidence of nonunion. For the diagnosis of fracture nonunion, a minimum of nine months must have elapsed since the initial injury, with no signs of healing over the final three months. Malunion is defined as healing of the bones in an abnormal position; malunions can be classified as functional or nonfunctional. Functional malunions are usually those with small deviations from the normal axes that do not incapacitate the patient. There are several classification systems for nonunions, but they are most commonly divided into two categories: hypervascular nonunion and avascular nonunion. In hypervascular nonunions, also known as hypertrophic nonunions, the fracture ends are vascular and capable of biological activity; there is evidence of callus formation around the fracture site, thought to be a response to excessive micromotion at the fracture site. Avascular nonunions, also known as atrophic nonunions, are caused by avascularity, or inadequate blood supply of the fracture ends; there is no or minimal callus formation, and the fracture line remains visible. This type of nonunion requires biological enhancement in addition to adequate immobilisation to heal.
Treatment of mandibular fractures aims at achieving bony union, correct occlusion, preservation of inferior alveolar and mental nerve function, prevention of malunion, and optimal cosmesis. Rigid plate and screw fixation has the advantage of allowing the patient to return to function without the need for 4–6 weeks of intermaxillary fixation (IMF), but the success of rigid fixation depends upon accurate reduction. When adapting a plate along Champy's line of osteosynthesis in the symphysis region, even with an arch bar applied to the teeth to secure proper occlusion, the bone fragments may still overlap or ride over bony prominences, leaving gaps. To achieve bone contact for healing, various devices and methods have been described to hold the fracture segments together: towel clamps, modified towel clamps, Synthes reduction forceps, orthodontic brackets, Allis forceps, manual reduction, elastic internal traction reduction, bone-holding forceps, the tension wire method, and vacuum splints; without such aids there is often a residual gap and an inability to fix the fracture with a miniplate intraoperatively. Proper alignment and reduction are essential for mastication, speech, and a normal range of oral motion.
Compression during plate fixation has been shown to aid the stability and healing of a fracture site; the primary mechanism is thought to be increased contact of the bony surfaces. Reduction forceps can hold large segments of bone together to increase surface contact while plate fixation is performed. An additional benefit of using reduction forceps is that a single operating surgeon can plate body fractures, because the forceps hold the fracture reduced while the plates and screws are placed. Reduction gaps of more than 1 mm between fracture segments result in secondary healing, with callus formation, and increase the risk of nonunion irrespective of the fixation method. Direct bone contact between the fracture segments promotes primary bone healing, which leads to earlier bone regrowth and stability across the fracture site. Clinical experience shows that fractures that are not adequately reduced are at higher risk for malunion, delayed union, nonunion, and infection, leading to further patient morbidity.
Studies by Choi et al. using silicone mandibular models have established the optimum position of the modified towel clamp–type reduction forceps relative to symphyseal and parasymphyseal fractures. Fractured models were reduced at three different horizontal levels: midway down the vertical height of the mandible, 5 mm above midway, and 5 mm below midway. In addition, engagement holes were tested at distances of 10, 12, 14, and 16 mm from the fracture line. The models were heated to 130°C for 100 minutes and then cooled to room temperature, and the stress patterns were evaluated using a polariscope. Optimal stress patterns (defined as those distributed over the entire fracture site) were noted when the reduction forceps were placed at the midway level or 5 mm below it, and at least 12 mm from the fracture line for symphyseal or parasymphyseal fractures and at least 16 mm for mandibular body fractures.
Shinohara et al. in 2006 used two modified reduction forceps for symphyseal and parasymphyseal fractures: one was applied at the inferior border and the other in the subapical zone of the anterior mandible, in order to reduce the lingual cortical bone sufficiently. In other clinical studies, reduction was achieved using one clamp or forceps in the anterior and posterior regions of the mandible.
One study describes two monocortical holes drilled 10 mm from the fracture line on each side (Žerdoner and Žajdela, 1998). A second study describes monocortical holes at approximately 12 mm from the fracture line, at midway down the vertical height of the mandible (Kluszynski et al., 2007). A third study describes either monocortical or bicortical holes depending on difficulties, which are not described in detail; in that study, a distance of 5-8 mm from the fracture at the inferior margin of the mandible was chosen (Rogers and Sargent, 2000).
Taglialatela Scafati et al. (2004) used elastic rubber bands stretched between screws placed across both sides of the fractured parts to reduce mandibular and orbito-maxillary fractures. Orthodontic rubber bands and two self-tapping monocortical titanium screws, 2 mm in diameter and 9-13 mm in length, were used. The heads of the screws protruded about 5 mm, and their axes had to be perpendicular to the fracture line. Elastic internal traction (EIT) is similar in concept to other intraoperative methods of reduction used in orthopaedic or maxillofacial surgery, such as the tension band technique or the tension wire method (TWM), in that it utilises rubber bands tightened between monocortical screws placed onto the fracture fragments.
Vikas and Terrence Lowe, in their 2009 technical note on modification of the elastic internal traction method for temporary inter-fragment reduction prior to internal fixation, described a simple and effective modification of the elastic internal traction method previously described by Scafati et al. The modification utilizes 2 mm AO monocortical screws and elastomeric orthodontic chain (EOC) instead of elastic bands, with 9–12 mm monocortical screws strategically placed to a depth of 4–5 mm, approximately 7 mm on either side of the fracture.
Based on studies by Smith et al. in 1933, a series of 10 x 1 cm 'turns' of the elastic should resist a displacing force of approximately 30-40 Newtons.
Degala and Gupta, (2010) used comparable techniques for symphyseal, parasymphyseal and body fractures. Titanium screws with 2 mm diameter and 8 mm length were tightened at a distance of 10-20 mm from the line of fracture, and around 2 mm screw length remained above the bone to engage a 24 G wire loop. However, before applying this technique, they used IMF.
Rogers and Sargent in 2000 modified a standard towel clamp by bending its two ends approximately 10 degrees outward to prevent disengagement from the bone. Kallela et al. in 1996 modified a standard AO reduction forceps by shortening the teeth and cutting notches at the ends so that it would grasp tightly in the drill holes. Shinohara et al. in 2006 used two modified reduction forceps: one was positioned at the inferior border and the other in the neutral subapical zone.
Choi et al. in 2005 included two treatment groups (reduction forceps and IMF group) and used a scale of 1 to 3 to assess the accuracy of anatomic reduction in the radiographic image. A score of 1 indicated a poorly reduced fracture which required a second operation, while a score of 2 indicated a slight displacement but an acceptable occlusion. A score of 3 indicated a precise reduction. The reduction forceps group had a higher number of accurate anatomic alignments of the fractures than the IMF group.
New reduction forceps were developed by Choi et al. (2001, 2005) for mandibular angle fractures based on the unique anatomy of the oblique line and body; one end of the forceps was designed for positioning in the fragment medial to the oblique line, and the other end was placed in the distal fragment below the oblique line. The reduction-compression forceps of Scolozzi and Jaques (2008) was designed similarly to standard orthopaedic atraumatic grasping forceps.
Žerdoner and Žajdela in 1998 used a combination of self-cutting screws and a repositioning forceps with butterfly-shaped prongs: first, two screws are fastened on each side of the fracture line, and then the repositioning forceps is placed over the heads of the screws.
The use of reduction forceps has been known for many years in general trauma surgery, orthopaedic surgery, and plastic surgery. In OMF surgery, the dental occlusion was traditionally used to perform and check reduction of mandibular fractures. Notwithstanding this historical background, reduction forceps can be used in mandibular fractures as in any other fracture, as long as there is sufficient space and the fracture surface permits stable placement and withstands the forces created by such forceps.
Dimitroulis concluded that the use of IMF for the management of angle fractures of the mandible is unnecessary, provided there is a skilled assistant present to help manually reduce the fracture site for plating.
Other fracture reduction methods, such as traction wire or elastic tension on screws, are simple to use in the area of anterior mandibular fractures. These methods may cause a gap at the lingual side of the fracture as a result of the force exerted on the protruding screws (Ellis & Tharanon 1992, Cillo & Ellis 2007). This lingual gap can occur with reduction forceps as well, but because they grab inside the bone, positioning them at least 8-10 mm from the fracture site should prevent it (Žerdoner & Žajdela 1998, Rogers & Sargent 2000, Kluszynski et al. 2015). Choi et al. (2003) even suggested that the tips of repositioning forceps should be placed at least 12 mm from each side of the fracture line in symphyseal and parasymphyseal fractures; in mandibular body fractures, an adequate stress pattern at the lingual side was found at least 16 mm from the fracture line.
Traditional wiring is a potential source of 'needlestick' injury in the contaminated environment of the oral cavity and represents a health risk to surgeons and assistants. Conventional elastic or rubber rings may be difficult to place, and large numbers often need to be applied to prevent displacement of the fragments from the wafer. Such elastic exerts a pull of approximately 250-500 g per 'turn' depending on its specification (De Genova et al., 1985), and multiple 'turns' around anchorage points increase the firmness of retention. It is resilient, and even if displaced by stretching it tends to return the segments to their correct location in the splint or wafer, whereas wire ties, once pulled or inadequately tightened, become passive and allow free movement. Elastomeric chain is relatively expensive, but its ease of use and the rapidity and flexibility with which it can be applied and retrieved save valuable operating time. It can be cold-sterilised if desired and is designed to retain its physical properties within the oral environment. On removal, unlike wires and elastic rings, which easily break or tear and may be difficult to retrieve from the mouth or wound, it can be recovered in one strip and, as an additional check, its holes can be counted. The force exerted by elastic modules is known to decrease over time (Wong, 1976), and the strength decays by 17-70% over the first 24 h (Hershey & Reynolds, 1975; Brooks & Hershey, 1976), depending on the precise material and format of the chain and whether it has been pre-stretched (Young & Sandrik, 1979; Brantley et al., 1979).
The symphysis, parasymphysis, and mandibular body can be differentiated from other regions of the mandible by a ridge of compact cortical bone (the alveolar ridge) located on the cranial aspect that allows for tooth-bearing. This horizontally oriented tooth-bearing portion then becomes vertically oriented to form its articulation with the cranium; the change in orientation occurs at the mandibular angle, beyond which the mandible continues as the ramus, coronoid process, and condyle. Along the entire course of the mandible are muscle attachments that place dynamic internal forces on the mandible. These muscles can be divided into two primary groups: the muscles of mastication and the suprahyoid muscles. The muscles of mastication include the medial and lateral pterygoids, the temporalis, and the masseter; together these muscles aid in chewing by generating forces along the posterior aspects of the mandible (angle, ramus, coronoid process).
Furthermore, two of the muscles of mastication, the medial pterygoid and masseter, combine to form the pterygomasseteric sling, which attaches at the mandibular angle. Conversely, the suprahyoid group (digastric, stylohyoid, mylohyoid, and geniohyoid) functions, in part, to depress the anterior mandible by applying forces to the mandibular symphysis, parasymphysis, and a portion of the body. Together, these muscle attachments place dynamic vectors of force on the mandible that, when the bone is in continuity, allow for proper mandibular function but, when it is in discontinuity, as occurs with mandible fractures, can potentially disrupt adequate fracture healing. Literature examining the relationship between the timing of surgery and subsequent outcomes has demonstrated no difference in infectious or nonunion complications between treatment within three days of injury and treatment afterwards, but did find that complications due to technical errors increased after this time. The authors therefore commented that if surgery is to commence three or more days after the injury, a technically accurate operation is necessary to overcome factors such as tissue oedema and inflammation. In cases where a delay in treatment is necessary, consideration should be given to temporary closed fixation to reduce fracture mobility and patient pain.
Treating mandibular fractures involves providing the optimal environment for bony healing to occur: adequate blood supply, immobilisation, and proper alignment of the fracture segments. Plate length is generally chosen to allow the placement of more than one screw on either side of the fracture to neutralise the dynamic forces acting on the mandible. In ideal conditions, three screws are placed on either side of the fracture segments to guard against inadequate stabilisation, with screws placed at least several millimetres from the fracture site. Proper plate thickness is determined by the forces required to stabilise the fractured bone segments. Options for stabilisation can be divided into load-sharing fixation and load-bearing fixation. Champy identified regions of the mandible that require only monocortical plates for stable fixation along the symphysis, parasymphysis, and angle; these regions have subsequently been called Champy's lines of tension, with the superior portion of the lines also referred to as the tension band of the mandible.
In a 2002 study, George Dimitroulis proposed post-reduction orthopantomogram scoring criteria. These radiographs were assessed using a score of 1 to 3. A score of 3 was given for radiologic evidence of an accurate anatomic reduction of the fracture site; a score of 2 was assigned to reduced fractures that were slightly displaced but had a satisfactory occlusion; and the lowest score of 1 was for poorly reduced fractures that required a second operation to correct the poor alignment and unacceptable occlusion.
The assessment of fracture healing is becoming more and more critical because of the new approaches used in traumatology. Fracture healing is a complex biological process that follows specific regenerative patterns and involves changes in the expression of several thousand genes. Although there is still much to be learned before the pathways of bone regeneration are fully understood, the overall course of both the anatomical and biochemical events has been thoroughly investigated, providing a general understanding of how fracture healing occurs. Following the initial trauma, bone heals by either direct intramembranous healing or indirect fracture healing, which consists of both intramembranous and endochondral bone formation. The most common pathway is indirect healing, since direct bone healing requires an anatomical reduction and rigidly stable conditions, commonly obtained only by open reduction and internal fixation. However, when such conditions are achieved, the direct healing cascade allows the bone to regenerate anatomical lamellar bone and Haversian systems immediately, without any remodelling steps. It is helpful to think of the bone healing process in a stepwise fashion, even though in reality there is considerable overlap among the different stages. In general, the process can be divided into an initial haematoma formation step, followed by inflammation, proliferation and differentiation, and eventually ossification and remodelling. Shortly after a fracture occurs, the vascular injury to the periosteum, endosteum, and surrounding soft tissue causes hypoperfusion in the adjacent area. The coagulation cascade is activated, leading to the formation of a haematoma rich in platelets and macrophages. Cytokines from these macrophages initiate an inflammatory response, including increased blood flow and vascular permeability at the fracture site. Mechanical and molecular signals dictate what happens subsequently. Fracture healing can occur either through direct intramembranous healing or, more commonly, through indirect or secondary healing. The significant difference between these two pathways is that direct healing requires absolute stability and a lack of interfragmentary motion, whereas in secondary healing the presence of interfragmentary motion at the fracture site creates relative stability. In secondary healing, this mechanical stimulation, in addition to the activity of the inflammatory molecules, leads to the formation of a fracture callus followed by woven bone, which is eventually remodelled to lamellar bone. At the molecular level, the secretion of numerous cytokines and proinflammatory factors coordinates these complex pathways. Tumour necrosis factor-α (TNF-α), interleukin-1 (IL-1), IL-6, IL-11, and IL-18 are responsible for the initial inflammatory response. Revascularisation, an essential component of bone healing, is achieved through different molecular pathways requiring either angiopoietin or vascular endothelial growth factors (VEGF); VEGF's importance in the process of bone repair has been shown in many studies involving animal models. As the collagen matrix is invaded by blood vessels, mineralisation of the soft callus occurs through the activity of osteoblasts, resulting in hard callus, which is remodelled into lamellar bone. Inhibition of angiogenesis in rats with closed femoral fractures completely prevented healing and resulted in atrophic non-unions.
If the gap between the bone ends is less than 0.01 mm and interfragmentary strain is less than 2%, the fracture unites by so-called contact healing. Under these conditions, cutting cones form at the ends of the osteons closest to the fracture site. The tips of the cutting cones consist of osteoclasts, which cross the fracture line and generate longitudinal cavities at a rate of 50–100 μm/day. The primary bone structure is then gradually replaced by longitudinal revascularised osteons carrying osteoprogenitor cells, which differentiate into osteoblasts and produce lamellar bone on each surface of the gap. This lamellar bone, however, is laid down perpendicular to the long axis and is mechanically weak. This initial process takes approximately three to eight weeks, after which a secondary remodelling resembling the contact-healing cascade with cutting cones takes place. Although not as extensive as endochondral remodelling, this phase is necessary to fully restore the anatomical and biomechanical properties of the bone. Direct bone healing was first described on radiographs after complete anatomical repositioning and stable fixation; its features are a lack of callus formation and the disappearance of the fracture lines. Danis (1949) described this as soudure autogène (autogenous welding). Callus-free, direct bone healing requires what is often called "stability by interfragmentary compression" (Steinemann, 1983).
Contact healing of the bone means healing of the fracture line after stable anatomical repositioning, with perfect interfragmentary contact and without the possibility of any cellular or vascular ingrowth. Cutting cones can cross this interface from one fragment to the other by remodelling the Haversian canals; Haversian remodelling is the primary mechanism for restoration of the internal architecture of compact bone. Contact healing takes place over the whole fracture line after perfect anatomical reduction, osteosynthesis, and mechanical rest, and is seen only directly beneath the miniplate. Gap healing takes place in stable or "quiet" gaps with a width greater than the 200-μm osteonal diameter. Ingrowth of vessels and mesenchymal cells starts after surgery; osteoblasts deposit osteoid on the fragment ends without osteoclastic resorption, and the gaps are filled exclusively with primarily formed, transversely oriented lamellar bone. Replacement is usually completed within 4 to 6 weeks. In the second stage, the transversely oriented bone lamellae are replaced by axially orientated osteons, a process referred to as Haversian remodelling; after ten weeks the fracture is replaced by newly reconstructed cortical bone. Gap healing is seen, for example, on the inner side of the mandible after miniplate osteosynthesis, far from the plate, and it plays a vital role in direct bone healing. Gaps are far more extensive than contact areas; contact areas, on the other hand, are essential for stabilisation by interfragmentary friction and protect the gaps against deformation.
Ultrasound is unable to penetrate cortical bone, but there is evidence that it can detect callus formation before radiographic changes are visible. Moed conducted a larger prospective study which showed that ultrasound findings at six and nine weeks have a 97% positive predictive value (95% CI: 0.9-1) and 100% sensitivity in determining fracture healing in patients with acute tibial fractures treated with locked intramedullary nailing [52]. Time to the determination of healing was also shorter using ultrasound (6.5 weeks) compared with a nineteen-week average for radiographic assessment (P < 0.001). Ultrasound has additional advantages over other imaging modalities, including lower cost, no ionising radiation exposure, and being noninvasive. However, its use and the interpretation of its findings are thought to be highly dependent on the operator's expertise, and thick layers of soft tissue can obscure an adequate view of the bone. CT scans have shown some advantages over radiographs in the early detection of fracture healing in radius fractures; a limitation of CT is the beam-hardening artefact from internal and external fixation. One group concluded that, when used to evaluate hindfoot arthrodeses, plain radiographs may be misleading, that CT provides a more accurate assessment of healing, and that they had devised a new system to quantitate the fusion mass. In seven cases MDCT led to operative treatment where the treatment plan based on X-ray had been undecided. Bhattacharyya et al. examined the evaluation of tibial fracture union by CT scan and determined an ICC of 0.89, which indicates excellent agreement. These studies suggest that CT has high inter-observer reliability, better than that of plain radiography. According to other authors, the inter-observer reliability of MDCT is not higher than that of conventional radiographs for determining non-union; however, MDCT did lead to a more invasive approach in equivocal cases. MDCT provides superior diagnostic accuracy to panoramic radiography and has been shown to characterise mandibular fracture locations with greater certainty. Because of its high soft-tissue contrast, MDCT may reveal the relation of a bone fragment to adjacent muscle and the presence of foreign bodies in traumatic injury, so in cases of severe soft-tissue injury an MDCT is mandatory. A 33% CT fusion-ratio threshold could accurately discriminate between clinical stability and instability. By 36 weeks, healing was essentially complete according to both modalities, although there were still small gaps in the callus detectable on computed tomography but not on plain films; the authors concluded that computed tomography may be of value in the evaluation of long-bone fractures in cases where clinical examination and plain radiographs fail to give adequate information on the status of healing. A 2007 study used PET with fluoride ion in the assessment of bone healing in rats with femur fractures. Fluoride ion deposits in regions of bone with high osteoblastic activity and a high rate of turnover, such as endosteal and periosteal surfaces. The authors concluded that fluoride-ion PET could potentially play an essential part in the assessment of fracture healing, given its ability to quantitatively monitor metabolic activity and provide an objective evaluation of fracture repair.
18F-fluoride PET imaging, an indicator of osteoblastic activity in vivo, can identify fracture nonunions at an early time point and may have a role in the longitudinal assessment of fracture healing. PET scans using 18F-FDG, by contrast, were not helpful in differentiating metabolic activity between successful and delayed bone healing. Moghaddam et al. conducted a prospective cohort study to assess changes in the serum concentrations of several serologic markers in normal and delayed fracture healing. They were able to show significantly lower levels of tartrate-resistant acid phosphatase 5b (TRACP 5b) and C-terminal cross-linking telopeptide of type I collagen (CTX) in patients who developed non-unions compared with patients with normal healing. TRACP 5b is a direct marker of osteoclastic activity and bone resorption, while CTX is an indirect measure of osteoclastic activity reflecting collagen degradation. Secretion of many of the cytokines and biologic markers is also influenced by other factors; for example, systemic levels of TGF-β were found to vary with smoking status, age, gender, diabetes mellitus, and chronic alcohol abuse at different time points. On plain radiography, it is difficult to distinguish between desired callus formation and pseudoarthrosis; therefore CT is an essential objective diagnostic tool for determining healing status. Computed tomography (CT) is superior to plain radiography in the assessment of union and in visualising the fracture in the presence of abundant callus or an overlying cast. Studies have tested the accuracy and efficacy of computed tomography in the assessment of fracture union in clinical settings: Bhattacharyya et al. showed that computed tomography has 100% sensitivity for detecting nonunion, although it is limited by a low specificity of 62%; three of the 35 patients in that study were misdiagnosed as tibial nonunion on CT findings but were found to be healed when the fracture was visualised during surgical intervention. Seventy-seven studies involved the use of clinical criteria to define fracture union. The most common clinical standards were the absence of pain or tenderness during weight-bearing (49%), the lack of pain or tenderness on palpation or physical examination (39%), and the ability to bear weight. The most common radiographic definitions of fracture healing in studies using plain radiographs were bridging of the fracture site by callus, trabeculae, or bone (53%); bridging of the fracture site at three cortices (27%); and obliteration of the fracture line or cortical continuity (18%). The criteria most commonly reported for radiographic assessment of union varied with the location of the fracture. Two studies did not involve the use of plain radiographs to assess fracture healing: in the study in which computed tomography was used, union was defined as bridging of >25% of the cross-sectional area at the fracture site, and in the study in which ultrasound was used, union was defined as the complete disappearance of the intramedullary nail on ultrasound imaging at six weeks, or its progressive disappearance with the formation of periosteal callus between six and nine weeks following treatment.
Plain radiography is the most common way in which fracture union is assessed, and a substantial number of studies have defined fracture union by radiographic parameters alone. Hammer et al. combined cortical continuity, the loss of a visible fracture line, and callus size in a scale for assessing fracture healing radiographically, but found that conventional radiographic examination correlated poorly with fracture stability and could not conclusively determine the state of union. In animal models, cortical continuity is a good predictor of fracture torsional strength, whereas callus area is not. Moreover, clinicians cannot reliably determine the strength of a healing fracture from a single set of radiographs and are unable to rank radiographs of healing fractures in order of strength. We therefore rely heavily on a radiographic method without proven validity for predicting bone strength in the assessment of fracture union.
Computed tomography eliminates the problem of overlapping structures and, through axial sections, allows bone bridging to be evaluated directly. In fractures treated with external fixators, CT can determine the increasing amount of callus formation, which indicates favourable fracture healing. In one study, CT was correlated with fractionmetry in the assessment of the healing of tibial shaft fractures: the amount of callus was serially quantified and correlated with fractionmetry. After axial imaging, two equal slices at two points of the fracture were analysed 1, 6, 12, and 18 weeks after stabilisation. The principal fracture line was selected for longitudinal measurement because maximum callus formation was expected at that level, using a rectangular region of interest within 200-2000 and 700-2000 HU. The callus was measured automatically after marking the area of interest, and multiple measurements after repositioning the limb were performed to evaluate the short-term precision of the method. New callus formation indicated stability of fracture healing on CT after 12 weeks. Although the amount of callus is only an indirect indicator of fracture union, CT was able to assess fracture stability: ROC analysis showed that an increase of more than 50% in callus formation after 12 weeks indicated stability with a sensitivity of 100% and a specificity of 83%.
One hundred and twenty-three studies proved to be eligible. Union was defined by a combination of clinical and radiographic criteria in 62% of the reviews, by radiographic criteria only in 37%, and by clinical criteria alone in just 1%. Twelve different approaches were used to define fracture union clinically, the most common being the absence of pain or tenderness at the fracture site during weight-bearing. In studies involving plain radiographs, eleven different approaches were used to define fracture union, the most common criterion being bridging of the fracture site.
Several factors predispose a patient to nonunion, including mechanical instability, loss of blood supply, and infection. Bone production has been estimated to occur within 15 weeks after osteotomy; complete bone healing may take 3–6 months or even longer. The reliability of conventional radiographs for determining fracture healing has been questioned in previous studies. CT has been used to monitor bone production and fracture healing, and its advantages over conventional radiography in early fracture healing have been reported. To avoid stairstep artefacts in CT, isotropic or near-isotropic resolution is necessary, which has become attainable with the introduction of MDCT scanners; experimental studies have shown that MDCT reduces stairstep artefacts in multiplanar reconstruction compared with single-detector CT. From these data, the authors reconstructed thin axial slices with 50% overlap to yield near-isotropic voxels (almost identical lengths along the x, y, and z axes) for further processing. This allows 2D and 3D reconstructions with a resolution similar to the source images, forming the basis of good-quality multiplanar reconstructions (MPRs). MPRs were reconstructed from contiguous axial slices 1.5 to 3 mm thick, depending on the anatomic region, orthogonal to the fracture or arthrodesis plane. Fusion of osseous structures was scored with a semiquantitative approach for both techniques (MDCT, digital radiography) as complete (c), partial (p), or no bone bridging (0). The definitions of fusion were as follows: complete, bone bridges with no gap; partial, some bone bridges with gaps between them; and no bridging, no osseous bridges. Two musculoskeletal radiologists assessed all MDCT examinations and digital radiographs in a consensus interpretation.
Conventional tomography has been used for many years for the evaluation of the postoperative spine after posterior spinal arthrodesis. Thin-section tomography correlated well with surgery in the diagnosis of pseudarthrosis after fusions for scoliosis and was superior to anteroposterior, lateral, and oblique radiography. However, conventional tomography also suffers from certain disadvantages. The standard linear movement is mechanically easy to produce but gives rise to rather thick tomographic sections and a short blurring path (the length of the tomographic section); if thinner sections are required, more complex movements are needed. Because conventional tomography does not entirely blur out all distracting structures, the inherent lack of sharpness of the conventional tomographic image can make the assessment of bone bridges problematic, and thinner sections in particular suffer from greater background blur. In dental radiology the technique, called orthopantomography, is still widely used, although for practical reasons other conventional tomographic methods have mostly been replaced by CT, and the commercial availability of conventional tomography scanners has decreased substantially.
CT eliminates the blurring problem of conventional tomography and increases the perceptibility of fracture healing. MDCT has the advantage that the X-ray beam passes through the whole volume of the object in a short time and, when isotropic or near-isotropic resolution is used, permits volumetric imaging with the reconstruction of arbitrary MPRs. The CT technique also has an essential impact on the severity of artefacts, with high milliampere-second and high peak-kilovoltage settings reducing them. With MDCT and low pitches, a high tube current is achieved, which is the basis for good-quality MPRs. With 16-MDCT scanners, the trend is first to reconstruct an overlapping secondary raw data set and then to obtain MPRs of axial, coronal, or arbitrarily angulated sections with a predefined section width. Bone bridges are high-contrast objects and are reliably detected on 1.5- to 3-mm-thick MPRs, depending on the anatomic region, with thicker MPRs preferable for the lumbar spine and somewhat thinner MPRs superior for the hand region.
The use of computed tomography (CT) scanning technology improves anatomical visualisation by offering three-dimensional reconstructions of bony architecture and has contributed to the assessment of healing in certain fractures. However, CT scans and plain radiographs detect mineralised bone formation, which is the late manifestation of the fracture healing process.
Moreover, CT scans demonstrate low specificity in the diagnosis of fracture nonunions in long bones.
MRI has not been useful in evaluating delayed fracture healing in the long bones. Scintigraphic studies with 99mTc-labeled compounds have also been used to assess carpal bones; however, multiple studies have demonstrated no significant differences in tracer uptake between tibia fractures that usually heal and those that form nonunions.
In our study, 48 patients were divided equally into two groups, group A (study group) and group B (control group), based on the reduction method, to compare the accuracy of reduction and bone healing of mandible fractures using elastic-guided reduction versus bone reduction forceps. Both groups were evaluated with respect to sex; type of mandible fracture (confined or non-confined); intermaxillary fixation method; type of reduction method used; postoperative OPG scores; CT assessment scores after 6 weeks for the lingual and buccal cortices and the medullary bone; fusion percentage calculated from the CT scan; and the development of any late postoperative complication.
Based on sex, fracture type, intermaxillary fixation method, late postoperative complications, postoperative OPG assessment scores, CT assessment scores, and fusion percentage, the differences between the groups were not significant (P > 0.05). However, with respect to confined versus non-confined fractures, the results were significant (P = 0.011), which supports the use of bone-holding forceps for non-confined fractures.
Biological development
Biological Beginnings:
Each human cell has a nucleus which contains chromosomes made up of deoxyribonucleic acid, or DNA. DNA contains the genetic information, or genes, that is used to make a human being. All typical cells in a human body have 46 chromosomes arranged in 23 pairs, with the exception of the egg and sperm. During cell reproduction, or mitosis, the cell's nucleus duplicates itself and the cell divides, forming two new cells. Meiosis is a different type of cell division in which eggs and sperm, or gametes, are formed. During meiosis, a cell duplicates its chromosomes but then divides twice, resulting in cells with 23 unpaired chromosomes. During fertilization, an egg and sperm combine to form a single cell, the zygote, with information from both the mother and the father.
The combination of the unpaired chromosomes leads to variability in the population because no two people are exactly alike, even in the case of identical twins. A person’s genetic make-up is called their genotype; this is the basis for who you are on a cellular level. A person’s phenotype is what a person’s observable characteristics are. Each genotype can lead to a variety of phenotypes. There are dominant and recessive genes contained in the genetic material that we acquire. For example, brown eyes are dominant over blue eyes, so if the genetic code is available for both, brown eyes will prevail.
Abnormalities can also be linked to the chromosomes and genes inherited from one's parents; some examples are Down syndrome, cystic fibrosis, and spina bifida. These occur when chromosomes or genes are missing, mutated, or damaged.
Genetically, I received my height and brown eyes from my mother, and my brown hair from both my parents. As far as I know, I don’t have any abnormalities linked to my chromosomes or genes that were passed down during my conception.
Prenatal/Post-partum:
The prenatal stage starts at conception, lasts approximately 266 days, and consists of three different periods: germinal, embryonic and fetal. This is an amazingly complex time that allows a single cell composed of information from both the mother and the father to create a new human being.
The first period of the prenatal stage occurs in the first two weeks after conception and is called the germinal period. During this time the zygote (or fertilized egg) begins its cell divisions, through mitosis, from a single cell to a blastocyst, which will eventually develop into the embryo and placenta. The germinal period ends when the blastocyst implants into the uterine wall.
The second period of prenatal development, which occurs in weeks two through eight after conception, is called the embryonic period. During this time, the blastocyst from the first stage develops into the embryo. Within the embryo, three layers of cells form: the endoderm, which will develop into the digestive and respiratory systems; the ectoderm, which will become the nervous system, sensory receptors, and skin parts; and the mesoderm, which will become the circulatory system, bones, muscles, excretory system, and reproductive system. Organs also begin to form in this stage. During this stage, the embryo's development is very susceptible to outside influences from the mother, such as alcohol consumption and cigarette usage.
The fetal period is the final period of the prenatal stage which lasts from two months post conception until birth. It is the longest period of the prenatal stage. During this period, continued growth and development occur. At approximately 26 weeks post conception, the fetus would be considered viable, or able to survive outside the mother’s womb. If birth would occur at 26 weeks, the baby would most likely need help breathing at this point because the lungs are not fully mature, but all organ systems are developed and can function outside of mom.
Brain development during the prenatal period is also very complex and, if you think about it, an amazing thing. When a baby is born, it has approximately 100 billion neurons that handle processing information. There are four phases of brain development during the prenatal period: formation of the neural tube, neurogenesis, neural migration, and neural connectivity.
During the prenatal period, a wide variety of tests can be performed to monitor the development of the fetus. The extent to which testing is used depends on the doctor's recommendations as well as the mother's age, health, and potential genetic risk factors. One common test is the ultrasound, a non-invasive test used to monitor the growth of the fetus, look at structural development, and determine the sex of the baby. Other available tests, which are more invasive and riskier for both the fetus and the mother, include chorionic villus sampling, amniocentesis, fetal MRI, and maternal blood screening.
The mother's womb is designed to protect the fetus during development. However, if a mother doesn't take care of herself, it can have a negative impact on the developing fetus. A woman should avoid alcohol, nicotine, caffeine, drugs, and other teratogens, as well as x-rays and certain environmental pollutants, during the pregnancy. She should also have good nutrition, as the fetus relies solely on the mother for its nutrients during development. Along with good nutrition, extra vitamins are recommended during pregnancy, the main one being folic acid. Emotional health is also very important: higher degrees of anxiety and stress can be harmful to the fetus and have long-term effects on the child.
The birth of a child marks the transition from the prenatal to the post-partum stage, which lasts approximately six weeks, or until a mother's body is back to its pre-pregnancy state. During this time a woman may be sleep deprived due to the demands of the baby and of caring for any other family members. There are also hormonal changes that a woman experiences as the uterus returns to its normal size. Emotional adjustments occur during this stage as well. It is common for women to experience the post-partum blues, in which they feel depressed. These feelings can come and go and usually disappear within a couple of weeks. If major depression persists beyond this time, it is referred to as postpartum depression, and it is important for a woman to get treatment to protect herself and her baby.
My prenatal development and delivery were fairly uneventful for my mother. The only complication during her pregnancy was low iron levels, which would cause her to pass out. Once she started on iron pills, this problem was eliminated. Since her pregnancy was in the early 1970s, it wasn't common for any testing or ultrasounds to occur unless there were major complications. As my mom said, you get pregnant and have a baby. After I was born, my mom said that she had no complications from post-partum depression or the baby blues.
Infancy:
Infancy is the period of time between birth and two years of age. During this time, extraordinary growth and development occur, following a cephalocaudal pattern (top down) and a proximodistal pattern (center of the body to the extremities). A baby can see before it speaks, move its arms before its fingers, and so on. An infant's height increases by approximately 40 percent by the age of 1. By the age of 2, a child is nearly one-fifth of the weight and half the height they will be as an adult. Infants require a great deal of sleep, averaging 12.8 hours a day in this period. The sleep an infant gets can have an impact on their cognitive functions later in life, such as improved executive function (good sleep) or language delays (poor sleep).
Proper nutrition during this period is also imperative for infant development. Breast feeding an infant exclusively during the first six months of life provides many benefits to both the infant and the mother, including appropriate weight gain for the infant and a reduced risk of ovarian cancer for the mother. However, both breast feeding and bottle feeding are appropriate options for the baby. As the infant gets older, appropriate amounts of fruits and vegetables are important for development, as is limiting junk food.
Motor skill development is thought to follow the dynamic systems theory, in which the infant assembles skills based on perceptions and actions. For example, if an infant wants a toy, he needs to learn how to reach for that toy to grasp it. An infant is born with reflexes, which are required for adapting to the environment before anything is learned, such as the rooting and sucking reflexes for eating. Some of these reflexes are specific to this age; others, such as blinking, are permanent throughout life. Gross motor skills are the next major skills an infant develops. These involve the large muscle groups and include holding their head up, sitting, standing, and pulling themselves up on furniture. In the first year of life, motor skills give the infant increasing independence, while the second year is key to honing the skills they have learned. Fine motor skills develop after gross motor skills. These include activities such as grasping a spoon and picking up food off of a high-chair tray.
An infant's senses are not fully developed during the prenatal period. Visual acuity comparable to an adult's occurs by about 6 months of age. A fetus can hear in the womb but is unable to distinguish loudness and pitch, abilities that develop during infancy. Other senses, such as taste and smell, are present at birth, but preferences develop throughout infancy.
Jean Piaget's theory of cognitive development is one that is widely used. This theory stresses that children construct their own understanding of their surroundings instead of information just being given to them. The first stage of Piaget's theory is the sensorimotor stage, in which infants use their senses to coordinate with the motor skills they are developing. Some research suggests that Piaget's theories may need to be modified. For example, Elizabeth Spelke endorses a core knowledge approach, in which infants are born with innate knowledge systems that allow them to navigate the world into which they are born.
Language development also begins during this stage, and all infants follow a similar pattern. The earliest sounds, crying, cooing, and babbling, are all forms of language. First words are usually spoken by about 13 months, with children usually speaking two-word sentences by about two years. Language skills can be influenced by both biological and environmental factors in the infant.
An infant displays emotion very early in life. In the first six months you can see surprise, joy, anger, sadness, and fear. Later in infancy, you will also see jealousy, empathy, embarrassment, pride, shame, and guilt. These later emotions require thought, which is why they don't develop until after the age of 1. Crying can indicate three different states in an infant: the basic cry, typically related to hunger; the anger cry; and the pain cry. A baby's smile can also mean different things, such as a reflexive smile or a social smile. Fear is an emotion seen early in a baby's life; often-discussed examples are "stranger danger" and separation protest.
There are three classifications of temperament in a child that were proposed by Chess and Thomas: the easy child, the difficult child, and the slow-to-warm-up child. These temperaments can be influenced by biology, gender, culture, and parenting styles. Other personality traits developed in this period include trust, a sense of self, and independence. Erik Erikson's first stage of development, trust vs. mistrust, occurs within the first year of life. The concept of trust vs. mistrust is seen throughout the development of a person and is not limited to this age group. The second year of life corresponds to Erikson's stage of autonomy vs. shame and doubt. As infants develop their skills, they need to be able to do so independently, or feelings of shame and doubt develop. The development of autonomy during infancy and the toddler years can lead to greater autonomy during the adolescent years.
Social interaction occurs with infants as early as 2 months of age, when they learn to recognize the facial expressions of their caregivers. They show interest in other infants as early as 6 months of age, but this interest increases greatly as they reach their second birthday. Locomotion plays a big part in this interaction, allowing the child to independently explore their surroundings and others who may be around them. There are several theories of attachment. Freud believed attachment is based on oral fulfillment, typically through the mother who feeds the infant. Harlow, based on his experiments with wire surrogate monkeys, concluded that attachment is based on comfort. Erikson's view goes back to the trust vs. mistrust stage discussed earlier.
As a new baby is brought into a family, the dynamic of the household changes. There is a rebalancing of social, parental, and career responsibilities. The freedom that existed prior to the baby is no longer there. Parents need to decide whether a parent stays home to take care of the child or the child is placed into a daycare setting. Parental leave allows a parent to stay home with the child for a period of time after the birth, but then requires the child to be placed in some type of child care setting. Unfortunately, the quality of child care varies greatly; typically, the higher the quality, the higher the price tag. Parents need to be advocates for their child and monitor the quality of care the child is receiving, regardless of the setting. Research has shown little difference in outcomes between children placed in child care and those cared for by a full-time parent.
As an infant, I was a bottle-fed baby. My mother was able to be home with me full time, so I was not exposed to outside childcare settings. Unfortunately for my parents, I was very colicky until I was about 6 weeks old, which was very stressful for them as they adjusted to life as a family with a new baby. After the colic ended, I was a very happy, easy baby when I wasn't sick. I developed febrile seizures at about 7 months of age, and they lasted until about age 2, when I was put on phenobarbital to control them. I talked and walked at a very young age (~9 months). I was very trusting of everyone and had no attachment issues. I was happy to play by myself if no one was around, but if company was over, my parents said I always wanted to be in the middle of the action; I was especially fond of adult interactions.
Early childhood:
The next developmental stage is early childhood, which lasts from around the ages of 3 to 5. During this stage, height and weight gains slow compared with infancy, but a child still grows about 2 ½ inches and gains 5-7 pounds per year. The brain continues to develop by combining its maturation with external experiences. The overall size of the brain doesn't increase dramatically during this or subsequent periods, but the local patterns within the brain do change. The most rapid growth occurs in the prefrontal cortex, which is key in planning and organization as well as paying attention to new tasks. The growth during this phase is caused by an increase in the number and size of dendrites as well as continuing myelination.
Gross motor skills continue to increase, with children being able to walk easily as well as beginning to hop, skip, and climb. Fine motor skills continue to improve as well, with children being able to build towers of blocks, do puzzles, or write their name.
Nutrition is an important aspect of early childhood. Obesity is a growing health problem at this age: children are being fed diets that are high in fat and low in nutritional value, and they are eating out more than they have historically. Parents need to focus on better nutrition and more exercise for their children. Childhood obesity has a strong correlation with obesity later in life.
Piaget's preoperational stage, which lasts from age 2 to 7, is the second stage in his theory of development. During this stage, children begin to represent things with words, images, and drawings. They are egocentric and hold magical beliefs. This stage is divided into the symbolic function substage (age 2-4) and the intuitive thought substage (age 4-7). In the symbolic function substage, the child is able to scribble designs that represent objects and can engage in pretend play; they are limited in this substage by egocentrism and animism. In the intuitive thought substage, the child begins to use primitive reasoning and is curious. During this time, memory increases, as does attention span.
Language develops greatly during this phase. A child goes from two-word utterances, to multiple-word combinations, to complex sentences. They begin to understand the phonology and morphology of language, and they start to apply the rules of syntax and semantics. The foundation for literacy also begins during this stage; using books with preschoolers provides a solid foundation on which later success can be built.
There are many early childhood education options available to parents. One option is the child-centered kindergarten, which focuses on the whole child. The Montessori approach allows children more freedom to explore, with the teacher acting as a facilitator rather than an instructor. There are also government-funded programs, such as Project Head Start, available to low-income families to give their children the experience they need before starting elementary school.
Erikson's stage of development for early childhood is initiative vs. guilt. In this stage the child has begun to develop an understanding of who they are, but also begins to discover who they will become. Usually children of this age describe themselves in concrete terms, but some also begin to use logic and emotional descriptors. Children also begin to perceive others in terms of psychological traits. During this stage, children become more aware of their own emotions, come to understand others' emotions and how they relate to them, and begin to regulate their emotions.
Moral development also begins during this stage. Freud describes the child developing the superego, the moral element of personality, during this stage. Piaget said children go through two distinct stages of moral reasoning: 1) heteronomous morality and 2) autonomous morality. In the first, the child thinks that rules are unchangeable and judges an action by its consequence, not its intention. The autonomous thinker considers the intention as well as the consequence.
Gender identity and roles begin to play a factor during this stage. Social influences on gender roles provide a basis for how children think. This can come through imitation of what they see their parents doing or through observation of what they see around them. Parental and peer influences on modeled behavior are apparent. Group size, age, interaction in same-sex groups, and gender composition are all important aspects of peer relations and influences.
Parenting styles vary widely. Diana Baumrind, as described in our book, identifies four parenting styles: authoritarian, authoritative, neglectful, and indulgent. She shows a correlation between the different parenting styles and behaviors in children.
Play is important in the child's cognitive and socioemotional development. Play has been considered the child's work by both Piaget and Vygotsky, as it allows a child to learn new skills in a relaxed way. Make-believe play is an excellent way for children to increase their cognitive ability, including creative thought. There are many ways a child can play, including sensorimotor and practice play, pretense/symbolic play, constructive play, and games. Screen time is becoming more of a concern in today's world: screens can be good for teaching, but can also be distracting or disruptive if screen time is not limited.
As a young child, I was very curious about things and loved to play pretend. I attended preschool for two years, which aided in my cognitive development. My parents said I was able to read and do age-advanced puzzles by the time I was 3. I was able to regulate my emotions and understand the emotions of others. My parents utilized an authoritarian style of discipline when I was younger; being the first child, they wanted their kids to be perfect. This relaxed as my siblings came along and as we got older.
Middle/late childhood:
During this period, children maintain slow, consistent physical growth. They grow 2-3 inches per year until about age 11 and gain about 5-7 pounds per year. Growth of the skeletal and muscular systems is the main contributor to the weight gain.
Brain volume stabilizes by the end of this stage, but changes in its structures continue to occur. During this stage there is synaptic pruning, in which areas of the brain that are used less frequently lose connections, while other areas increase their number of connections. This increase is seen in the prefrontal cortex, which orchestrates the function of many other brain regions.
Both gross and fine motor skills continue to be refined. Children are able to ride a bike, swim, and skip rope; they can tie their shoes, hammer a nail, use a pencil, and reverse numbers less often. Boys usually outperform girls in gross motor skills, while girls outperform boys in fine motor skills. Exercise continues to be an area of concern at this age, as children are not getting the exercise they need. Studies have shown that aerobic exercise helps not only with weight, but also with attention, memory, thinking and behavior, and creativity.
Obesity is a continued health concern for this age group, leading to medical problems such as hypertension, diabetes, and elevated cholesterol levels. Cancer is the second leading cause of death for children in this age group; the most common childhood cancer is leukemia.
Disabilities are often discovered during this time as many don’t show up until a child is in a school setting. There are learning disabilities, such as dyslexia, dysgraphia and dyscalculia; attention deficit hyperactivity disorder (ADHD), and autism spectrum disorders, such as autistic disorder and Asperger syndrome. Schools today are better equipped to handle children with these disabilities to help them receive the education they need.
This stage of development, as described by Piaget's cognitive development theory, is the concrete operational stage. The child in this stage can reason logically, as long as the reasoning can be applied to concrete examples. In addition, they can utilize conservation, classification, seriation, and transitivity.
Long-term memory increases during this stage, in part in relation to the knowledge a child has of a particular subject. Children are able to think more critically and creatively during this period, and their metacognition increases. Along with the abilities already mentioned, self-control, working memory, and flexibility are all indicators of school readiness and success.
Changes occur during this stage in how a child's mental vocabulary is organized. Children begin to improve their logical reasoning and analytical abilities. They also develop more metalinguistic awareness, or knowledge about language. Reading foundations are important during this stage. Two approaches currently being explored are the whole-language approach and the phonics approach: the whole-language approach teaches children to recognize whole words or sentences, while the phonics approach teaches children to translate written symbols into sounds.
During this stage, children begin to better understand themselves and are able to describe themselves using psychological characteristics and in reference to social groups. High self-esteem and a strong self-concept are important for this age group; low self-esteem has been correlated with obesity, depression, anxiety, and other problems.
Erikson's fourth stage of development, industry vs. inferiority, appears in this stage. Industry refers to work: children want to know how things are made and how they work. Parents who dismiss this interest can create a sense of inferiority in their children.
Emotional development during this stage involves the child becoming more self-regulated in their reactions. They understand what leads up to an emotional reaction, can hide negative reactions, and can demonstrate genuine empathy. They are also learning coping strategies to deal with stress. Moral development also continues during this stage, as described by Kohlberg's six stages of moral development.
Gender stereotypes are prevalent in this developmental phase. They revolve around the physical, cognitive, and socioemotional development of a child.
At this stage of life, parents are usually less involved with their children, although they continue to be an important part of their development. They become more of a manager, helping the child learn the rights and wrongs of their behaviors. If there is a secure attachment between parent and child, the stress and anxiety involved in this phase are lessened.
Friendships are important during this stage of a child's life. Friends are typically similar to the child in terms of age, sex, and attitudes toward school. School brings new obligations for children. As with the younger age group, there are different approaches to schooling at this stage: a constructivist approach focuses on the learner and has individuals construct their own knowledge, while a direct instruction approach is more structured and teacher centered. Accountability in schools is enforced through standardized testing. Poverty plays a role in children's ability to learn, often creating barriers such as parents with low expectations, parents unable to help with homework, or inability to pay for educational materials.
My parents said that by this age I was able to reason logically with them and in my day-to-day life. I remained curious about what things were and how they worked. My mom told me about a test I took for an accelerated learning program (ULE) in my elementary school: I missed one question because I couldn't say what a wheelbarrow was. After that, my mom said, I was interested in learning what wheelbarrows were and what they were used for. The ULE program helped me satisfy my curiosity above and beyond what was taught in school by providing additional learning opportunities.
Adolescence:
Adolescence lasts from about 12 to 18 years of age. The primary physical change during adolescence is the start of puberty, a brain-neuroendocrine process that provides the stimulation for the rapid physical changes that take place. This is when a child takes on adult physical characteristics: voice changes and height and weight growth for males, and breast development and the onset of menstruation for females. Females typically enter puberty two years before males. The process is hormonally driven and includes actions of the hypothalamus and pituitary gland. During this time, adolescents are preoccupied with their body image, as their bodies are rapidly changing. Females are typically more dissatisfied with their bodies than males; however, body image perception becomes more positive for both genders as they near the end of adolescence.
Brain development during this time includes significant structural changes. The corpus callosum thickens, improving the ability to process information. The prefrontal lobes continue to develop, increasing reasoning, decision making, and self-control. The limbic system, specifically the amygdala, is completely developed by this stage.
This stage also marks a time of sexual exploration: forming a sense of sexual identity, managing sexual feelings, and developing intimate relationships. Most adolescents are not emotionally prepared for sexual experiences, which can lead to high-risk sexual behavior. Contraceptive use is not prevalent in this age group, even though it can lessen or eliminate the risk of sexually transmitted diseases and unwanted pregnancy. Teen pregnancy, while reduced from years past, is still too high. Sex education continues to be a topic of debate as to what is most appropriate for schools: abstinence-only education or education that emphasizes contraceptive knowledge.
Health during this stage of development is a concern, as bad health habits learned here can lead to death in early adult life. Obesity due to poor nutrition and lack of exercise remains a consistent theme. Sleep is also important for this age group, as most adolescents report getting less than 8 hours of sleep per night. Substance use is also seen in this age group. Another health concern is eating disorders, including both anorexia and bulimia; these disorders can take over a person's life due to distorted body image.
Piaget’s final stage of cognitive development occurs during this stage – the formal operational stage. Adolescents are not bound by concrete thoughts or experiences during this stage. They can think abstractly, idealistically, and logically.
Executive function is one of the most important cognitive changes that occurs in this stage. It involves an adolescent's ability to engage in goal-directed behavior and to exercise self-control.
The transition from elementary school to junior high school during this stage can be very stressful for adolescents, as it occurs at a time when many other physical changes (puberty) are happening. This can create stress and worry for the child.
Erikson's fifth developmental stage, which corresponds to this period in life, is identity vs. identity confusion. This stage is aided by a psychosocial moratorium, the gap between adolescence and adulthood during which a person is relatively free of responsibility and can work out their true identity. This is the path one takes toward adult maturity. A crisis during this stage is a period in which a person explores alternatives; commitment is a personal investment in an identity. It is believed that while identity is explored during this stage, it is not finalized until early adulthood, with life review.
Parents take on a managerial role during this stage, monitoring the choices that are made regarding friends, activities, and academic efforts. Higher rates of parental monitoring lead to lower rates of alcohol and drug use. The adolescent's need for autonomy can be hard for a parent to accept; the parents feel like the child is "slipping away" from them. There are also gender differences in how much autonomy is granted, with males receiving more autonomy than females. Conflict escalates during early adolescence, but then lessens toward the end of the stage.
Friendships during this stage are often fewer, but more intimate, than in younger years and take on an important role in meeting social needs. Positive friendships are associated with positive outcomes, including lower rates of substance abuse, risky sexual behavior, bullying, and victimization. Peer pressure at this stage of life is high, with adolescents conforming more when they are uncertain about their social identity. Cliques and crowds emerge and play a more important role during this stage of development. Dating and romantic relationships begin to evolve. Juvenile delinquency is a problem that emerges, with illegal behaviors being noted; this can be due to several factors, including lower socioeconomic status, sibling relationships, peer relationships, and parental monitoring. Depression and suicide also increase during this stage of life.
During this stage of my life, I was very goal oriented, more so academically than socially. I chose to take higher level classes that weren’t required and continued to work with a program that allowed me to do projects outside of school. During this time, I began to think about what direction my life would take. I decided that I would attend college to major in pharmacy, a decision that would later be reviewed and changed.
Early adulthood:
Becoming an adult involves a lengthy transition. Early adulthood occurs from 18 to 25 years of age. During this time, individuals are still trying to figure out "who" they are: exploring career paths, determining their identity, and deciding what kind of lifestyle they want to live. Early adulthood is characterized by five key features, as explained by Jeffrey Arnett: identity exploration, instability, self-focus, feeling in-between, and the age of possibilities, in which people can transform their lives. In the US, entry into adulthood is primarily marked by holding a permanent, full-time job; other countries consider marriage the marker of adulthood. Just as going from elementary school to middle school causes stress in adolescents, the transition from high school to college can evoke the same emotions.
Peak physical performance is often reached between the ages of 19 and 26. As physical performance declines toward the end of early adulthood, body fatty tissue increases and hearing begins to decline. Health during early adulthood is subpar: although most people know what is required to be healthy, many fail to apply this information to themselves. The bad habits started during adolescence increase in early adulthood, including inactivity, poor diet, obesity, sleep deprivation, and substance abuse. These lifestyles, along with poor health, also have an impact on life satisfaction. Obesity continues to be a problem in this developmental stage; losing weight is best achieved with a combined diet and exercise program rather than diet alone. Exercise can help prevent diseases such as heart disease and diabetes, and it can also improve mental health and reduce depression. Alcohol use peaks around 21-22 years of age and appears to decline by the time an individual reaches their mid-twenties. Binge drinking and extreme binge drinking are a concern on college campuses and can lead to missed classes, physical injuries, police interactions, and unprotected sex.
Sexual activity increases in emerging adulthood, with most people having experienced sexual intercourse by the time they are 25. Casual sex is common during this development stage involving “hook-ups” or “friends with benefits”.
Piaget's stages of development ended with the formal operational thought discussed in the adolescent stage; he believed that this final stage covers adults as well. Some theorists believe that formal operational thought is not fully achieved until adulthood. An additional stage has been proposed for young adults: post-formal thought, which is reflective, relativistic, contextual, provisional, realistic, and influenced by emotion.
Careers and work are an important theme in early adulthood. During this time an individual works to determine what career to pursue, often by choosing a college major. By the end of this developmental stage, most people have completed their training and are entering the work force to begin their careers. Determining one's purpose can help ensure that the correct field of study and career choice is made. Work defines a person through their financial standing, housing, use of time, friendships, and health. Early jobs are sometimes "survival jobs" held just until the "career job" is obtained.
Erikson's sixth stage of development, which occurs during early adulthood, is intimacy vs. isolation. Intimacy, as described by Erikson, is finding oneself while losing oneself in another person, and it requires a commitment to that person. Balancing intimacy and independence is challenging. Love can take on multiple forms in adulthood. Romantic love, or passionate love, is the type of love seen early in a relationship; sexual desire is its most important ingredient. Affectionate love, or compassionate love, is when someone desires to have the other person near and has a deep, caring affection for them; this is typically a more mature love. Consummate love involves passion, intimacy, and commitment, and is the strongest of all types of love.
Adults' lifestyles today are anything but conventional. Many adults choose to live alone, cohabit, or live with a partner of the same sex, in addition to the conventional married lifestyle. Divorce rates remain high in the US, with most divorces occurring early in the course of a marriage. Divorced adults have higher rates of depression, anxiety, suicide, alcoholism, and mortality. Adults who remarry usually do so within three years of their divorce, with men remarrying sooner than women. Making a marriage work takes a great deal of commitment from both parties. John Gottman identified principles that help make a marriage successful: establishing love maps, nurturing fondness and admiration, turning toward each other instead of away, letting the partner influence you, and creating shared meaning. In addition, a deep friendship, respect for each other, and embracing the commitment that has been made will help a marriage last.
During early adulthood, many become parents for the first time. Sometimes this is well planned; other times it is a complete surprise. Parenting is often a hybrid of techniques their parents used on them and their own interpretation of what is useful. The average age at which individuals have their first child is increasing, and the number of children they choose to have is declining, partly because women want to establish their careers prior to becoming mothers. The result is that parents are often more mature, able to handle situations more appropriately, and may have more income, and fathers are more involved in child rearing; however, children also spend more time in supplemental care than when mothers stayed home to provide child care.
During early adulthood, I went to college, decided that a pharmacy major wasn't for me, and ended up obtaining a degree in microbiology with a minor in chemistry. I met my first husband during college, and we married a couple of months before I graduated. After graduation, we had a child and eventually ended up getting a divorce. I think the stress of going straight from college to marriage to having a family took a toll on us. We were able to maintain civility to co-parent our son even though we were not able to make our marriage work. The first few years after our divorce were very hard: being a single mom, trying to get a career established, and making sure I was providing for our child. Thankfully I had a huge support system in my parents and siblings, who got us through the tough times. About 10 years later, I met my now husband and was able to find again the intimacy that was needed in my life. We both brought children from previous relationships into our marriage and have also had two children together. This has created some conflict of its own, but we work through it all together. I feel that we are much better equipped and more mature as parents of our younger children than we were when our older ones were little.
Human identification using palm print images
CHAPTER ONE
INTRODUCTION
1.1. Background
The term "identification" means the act or process of establishing the identity of, or recognising, a person or thing; the treating of a thing as identical with another; the act or process of recognising or establishing someone as a particular person; and also the act or process of making, representing, regarding, or treating one thing as the same as, or identical to, another.
Automated human identification is one of the most essential and challenging tasks in meeting the growing demand for stringent security. The use of physiological and/or behavioral characteristics of people, i.e., biometrics, has been widely employed in the identification of criminals and has matured into an essential tool for law enforcement departments. Biometrics-based automated human identification is now highly popular in a wide range of civilian applications and has become an effective alternative to traditional (password or token) identification systems. Human hands are easy to present for imaging and can reveal a variety of information; therefore, palmprint research has attracted a great deal of attention for civilian and forensic usage. But, like most popular biometrics (e.g., fingerprint, iris, face), the palmprint biometric is also at risk from sensor-level spoof attacks. Remote imaging with a high-resolution camera can be employed to reveal critical palmprint information for possible spoof attacks and impersonation. Consequently, extrinsic biometric features are expected to be more vulnerable to spoofing with modest effort. In summary, the advantage of easy accessibility of these extrinsic biometric traits also raises some concerns about privacy and security. On the other hand, intrinsic biometric characteristics (e.g., DNA, vessel structures) require much greater effort to acquire without the knowledge of an individual and are, consequently, more difficult to forge. However, in civilian applications it is also crucial for a biometric trait to ensure high collectability while the user interacts with the biometric device. In this context, palm-vein recognition has emerged as a promising option for personal identification, with the benefit of high user acceptability.
Biometrics is authentication using biological data and is a powerful method of authentication. The general purpose of biometry is to distinguish people from each other by using features that cannot be copied or imitated. There is less risk than with other methods because it is not possible for people to change, lose, or forget their physical properties. The use of these features, defined as biometric measurements, in ciphers is based on an international standard established by INCITS (the InterNational Committee for Information Technology Standards).
In recent years, a considerable amount of work has been done on distinguishing people from one another. Some of the patterns studied are characters, symbols, pictures, sound waves, and electrocardiograms. Computerized identification is usually employed where complex calculations would be difficult to interpret or human evaluation would be overloaded. Each known pattern maps to a template, and the set of templates, one for each pattern class, is stored in memory in the form of a database. An unknown pattern is compared against each class template, and classification is based on a previously determined mapping criterion or similarity criterion. Rather than comparing the complete pattern against every template, it is faster, and most of the time more accurate, to compare a set of extracted features. For this reason, the pattern recognition process is examined in two separate phases: feature extraction and classification.
As shown in Picture 1.2, feature extraction makes measurements on the pattern and turns the results into a feature vector. These features may vary considerably depending on the nature of the problem, and the importance and cost of each feature may differ. For this reason, properties should be selected that distinguish the classes from each other while keeping costs low.
Features are different for every pattern recognition problem.
Based on the properties extracted, the classification stage decides to which class the given object belongs. Although feature extraction differs from one pattern recognition problem to another, classifiers can be grouped into specific categories[6].
Template matching is the most common classification method. In this method, each pixel of the image is used as a feature. Classification is performed by comparing the input image with all of the class templates. The comparison produces a similarity measure between the input and each template: pixel-wise agreement between the input image and the template increases the degree of similarity, while mismatches between corresponding pixels reduce it. After all templates have been compared, the class of the template giving the highest similarity grade is selected. Structural classification techniques use structural features and decision rules to classify patterns. For example, line types, holes, and slopes in characters are structural properties, and rule-based classification is performed using these extracted features. Many pattern recognition systems are based on mathematical foundations to reduce misclassification; these systems can use pixel-based or structural features. Examples include Gabor features, contour properties, gradient properties, and histograms. As classifiers, discriminant function classifiers, Bayesian classifiers, and artificial neural networks can be used[1].
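To make the template-matching idea concrete, here is a minimal sketch in Python, assuming grayscale images of identical size stored as NumPy arrays; the function names and the use of normalized correlation as the similarity measure are illustrative choices, not a prescribed implementation.

```python
import numpy as np

def similarity(image: np.ndarray, template: np.ndarray) -> float:
    """Normalized correlation: agreeing pixels raise the score,
    disagreeing pixels lower it."""
    a = image.astype(float).ravel()
    b = template.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)   # zero-mean, unit-variance
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def classify(image: np.ndarray, templates: dict) -> str:
    """Compare the input against every stored class template and
    return the label of the most similar template."""
    return max(templates, key=lambda label: similarity(image, templates[label]))
```

A real system would also align the input to the templates and normalize illumination before the comparison.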
In its simplest terms, image processing requires tools to capture and manipulate images, and two important input-output devices are demanded: image digitizers and imaging (display) devices. Due to the inherent nature of these devices, images do not form a direct source for computer analysis. Since computers work with numeric values rather than with image data, the image is transformed into a numeric format before processing begins. Picture 1.1 shows how an array of numbers can represent a material image. The material image is divided into small regions called "shape elements," or pixels. The rectangular grid, the most common subdivision scheme, is also shown in Picture 1.1. In the digital image, the value stored at each pixel gives the brightness of that spot.
The conversion process is called digitization, and it is illustrated in the diagram in Picture 1.2. The brightness of each pixel is sampled and expressed numerically, indicating how bright or dark the image is at that location. When this process is applied to all pixels, the image is represented as a rectangular array: each pixel has an address (row and column numbers) and an integer value called its gray level. This array of numeric data is then available for processing on a computer. Picture 1.3 shows the numerical state of a continuous view.
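As a small illustration of this digitization, the sketch below (Python with NumPy, an assumed environment) builds a tiny 3 x 3 digital image whose entries are 8-bit gray levels; each pixel is addressed by its row and column numbers exactly as described above. The values are invented for the example.

```python
import numpy as np

# A digitized image is just a rectangular grid of numbers: each entry is
# the gray level (brightness) of one pixel, here quantized to 8 bits (0-255).
image = np.array([
    [ 12,  40, 200],
    [ 35, 180, 255],
    [  0,  90, 160],
], dtype=np.uint8)

rows, cols = image.shape      # every pixel has a row/column address...
gray_level = image[1, 2]      # ...and an integer gray level (here 255)
print(rows, cols, gray_level)
```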
1.2. Human Identity
Human beings cannot live without systems of meaning. Our primary impulse is the impulse to find and create meaning. But just as important, human beings cannot exist without an identity and often human identity is tied closely to the systems of meaning that people create. The systems of meaning are how people express their identity.
There are many elements that shape identity: family, community, ethnicity, nationality, religion, philosophy, science and occupation. For much of history, human identity has been oriented to small bands of extended families with belief systems that validated that lifestyle. With the movement toward domestication and state formation, along with the larger communities of such states, the boundaries of human identity were widened. But the small band mentality has persisted over subsequent millennia and is still evident even within modern states in the form of ethnic divisions, religious differences, occupation, social status, and even in the form of organizational membership[1].
The presence and origin of the small band mentality can be explained in terms of the inherited animal brain and its primitive drives. Animal life from the earliest time developed an existence of small groups of extended family members. This existence was shaped by the base drives to separate from others, to exclude outsiders, and to dominate or destroy them as competitors for resources. This in-group thinking and response was hardwired into the animal brain which has continued to influence the human brain. Unfortunately, small band mentality has long had a powerful influence on the creation of systems of meaning and the creation of human identity. People have long identified themselves in terms of some localized ethnic group, religion or nation in opposition to others who are not members of their group. This has led to the exclusion of outsiders and crusades to dominate or destroy them as enemies.
More recent discoveries about human origins and development confirm the early Greek intuition. We now know that we are all descended from a common hominid ancestor (the East African origin hypothesis). Race is now viewed as a human construct with little if any basis in real biology; so-called racial differences amount to nothing of any real distinction in biology. One scientist has even said that, genetically, racial features are of no more importance than sunburn.
This information points to the fact that we are all descendants of Africans. In the great migrations out of Africa, some early hominids moved to Europe and endured millennia of reduced sunlight, which led to a redistribution of melanin in their skin. They still possessed the same amount of melanin as darker-skinned people, but it was not as visible.
All this is to say that the human race is indeed one family. And modern human identity and meaning must be widened to include this fact. The small band mentality of our past which focuses human identity on some limited subset of the human race has always led to the creation of division, barriers, opposition and conflict between people. It is an animal view of human identity.
But we are no longer animals. We are now human and we need to overcome the animal tendency to separate from others, to exclude them, and to view them as outsiders or enemies to be dominated or destroyed.
It is also useful to note here how tightly many people tie their identity to the system of meaning that they adopt (their belief system or viewpoint). Consequently, any challenge to their system of meaning will produce an aggressive defensive reaction. The system may contain outdated ideas that ought to be challenged and discarded but because it comprises the identity of those who hold it, they will view any challenge as an attack on their very selves and this produces the survival response or reaction. Attacks on the self (self-identity) are viewed as attacks on personal survival and will evoke the aggressive animal defense. In this reaction we see the amygdala overruling the cortex.
This defensive reaction as an attempt to protect the self helps explain in part why people continue to hold on to outdated ideas and systems of belief/meaning. The ideas may not make rational sense to more objective outside viewers but to those who hold them, they make sense in terms of the dominant themes of their overall system.
It is true that we can't live without meaning or identity, and our identity is often defined by our systems of meaning. This tendency to tie our identity too tightly to our systems of meaning calls for a caution: human meaning and identity should not be placed in an object, whether a system of meaning, an ideology, an occupation, a state, a movement, an ethnicity, or some organization. Our identity and our search for meaning should be focused on the process of becoming human. This orients us to ongoing development and advance, and we then remain open to making changes as new information comes along. It's about the human self as a dynamic process, not a rigid and unchanging object.
So from our point of view, identity is used to mean the condition of being a specified person or the condition of being oneself and not another. It clusters with the terms personality and individualism, and less fashionably, “soul”.
Figure 1.1: Human Identity by face
1.3. Palm Print
Palm print recognition resembles fingerprint matching by nature: the two biometric systems are based on personal information represented by the patterns of friction ridge lines. Statistical analyses by FBI officials reflect the fact that palm print identification is a biometric system complementary to the more popular fingerprint recognition systems; the findings of these studies show that 70% of the traces left behind by criminals at crime scenes are from fingerprints and 30% are from palms. Because of limited processing capabilities and the lack of live-scanning technologies, automated palm print recognition algorithms have worked more slowly than fingerprint recognition algorithms. Since 1994, there has been growing interest in systems that use fingerprint and palm print identification together. Palm print identification, like fingerprint identification, is based on the massive amount of information found in the friction ridges. A palm print, or fingerprint, consists of dark lines representing the high, pointed portions of the ridged skin and white lines representing the valleys between these ridges. Palm print recognition technology uses some of these characteristics.
The algorithms used for palm print detection and verification are similar to those used in fingerprint recognition. These algorithms are basically based on correlation, on feature points (minutiae), or on ridges. Correlation-based matching aligns two palm print images to find corresponding lines in the two images; feature-point-based matching determines the location and orientation of specific feature points in the palm image and compares this information. The ridge-based matching technique uses the geometric characteristics of the lines as well as texture analysis, in addition to feature-point analysis, when classifying the palm print.
Correlation-based algorithms work faster than the other techniques, but they have less tolerance for distortions and rotation variances in the image. Algorithms based on feature points require high-quality imagery and do not benefit from the textural or visual qualities of the palm. Finally, ridge-based algorithms require a high-resolution sensor to produce good-quality images, and the distinctive ridge characteristics are significantly fewer than feature points. The positive and negative aspects of these techniques also apply to fingerprinting.
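The rotation sensitivity of correlation-based matching can be demonstrated with a short, hypothetical experiment: correlating a synthetic image with a slightly rotated copy of itself. The random test image and the 5-degree angle are arbitrary stand-ins for real palm print data.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
palm = rng.random((128, 128))        # synthetic stand-in for a palm print image

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# A small in-plane rotation noticeably lowers the correlation score,
# illustrating the low rotation tolerance noted above.
rotated = ndimage.rotate(palm, angle=5.0, reshape=False, mode='nearest')
print(ncc(palm, palm))       # 1.0 for identical images
print(ncc(palm, rotated))    # clearly below 1.0 after a 5-degree rotation
```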
William James Herschel was the son of the astronomer John Herschel. His father asked him to choose a career other than astronomy, so he joined the East India Company, and in 1853 was posted to Bengal. Following the Indian Mutiny of 1858, Herschel became a member of the Indian Civil Service, and was posted to Jungipoor.
In 1858 he made a contract with Mr. Konai, a local man, for the supply of road-building materials. To prevent Konai from repudiating his signature later, Herschel impressed Konai's handprint on the document; Figure 1.2 shows Mr. Konai's palm print. Herschel continued to experiment with hand prints, and soon realized that it was enough to use only the fingers. He collected prints from his friends and family, and the result was that a person's fingerprints did not change over time. The Governor of Bengal suggested that fingerprints be used on legal documents to prevent impersonation and the repudiation of contracts, but this proposal was not taken up. [1]
Today palm prints and fingerprints are used to investigate criminal cases: for example, if palm prints and fingerprints are found on an object at a crime scene, the prints are collected and compared with those of people who have committed crimes before. Palm prints are also used in government documents as a person's signature, and in health applications, so there are many areas in which palm print images are used.
Figure 1.2: Palm Print
1.3.1. Palm Print Features
The palm print has stable and rich line features; three types of line patterns are visible on the palm: principal lines, wrinkles, and ridges. Principal lines are the longest and widest lines on the palm, and they indicate the most distinguishing directional features. Most people have three principal lines, named the heart line, head line, and life line. Wrinkles are thinner, more irregular line patterns; the wrinkles, especially the pronounced wrinkles around the principal lines, also contribute to the discriminability of the palm print. Ridges, on the other hand, are the fine line texture distributed throughout the palmar surface. The ridge feature is less useful for discriminating individuals, as ridges cannot be perceived with a poor imaging source. Figure 4 shows the palm lines.
1.3.2. The importance of palm print identification
Every person's palm print is unique, making palm print identification a highly reliable form of authentication.
The palm print recognition system offers a high level of security because a palm print is very difficult to steal or imitate.
Palm print identification is used in many industries, such as healthcare, aviation, education, construction, and banking, and it is a user-friendly system.
The size of the palm print recognition system is small and portable.
Palm print recognition system is hygienic due to contactless use.
1.4. Biometric Features
Physiological features include DNA, iris, fingerprints, palm prints, and facial features, while behavioral features include mimics, signature, and voice. When measuring physiological or behavioral characteristics, factors such as the age, health, or mental status of the person should be eliminated from the measurement. Existing identification systems are not sufficient: conventional methods based on a personal identification number (PIN) together with a user name, or on plastic cards, are both inconvenient and unsafe. The ideal biometrics-based person recognition system should identify or verify an individual within the database uniquely, accurately, reliably, and efficiently. For this reason, the system should be able to cope with problems such as input degradation, environmental factors, and signal mixtures, and the measured trait should not change over time and should be easy to acquire. The most commonly used biometric feature is the fingerprint, while the most reliable is the iris scan.
In this project, we will work on a palm print recognition system, which uses one of the physiological features. Palm print recognition has advantages over other biometric features: the required images are collected with a low-cost procedure, acquisition does not cause any deterioration of the image, and the False Accept Rate and False Reject Rate take reasonable values. The false acceptance and false rejection rates of a system are the fractions of the total number of identification attempts that result in false acceptances or false rejections, respectively.
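The following sketch shows one way these two rates could be computed from lists of matcher scores, assuming higher scores mean greater similarity; the score values and the threshold are invented for illustration.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """False Accept Rate: fraction of impostor attempts scoring at or above
    the threshold; False Reject Rate: fraction of genuine attempts below it."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr

# Toy example: four genuine and four impostor comparison scores.
far, frr = far_frr([0.91, 0.84, 0.88, 0.79], [0.35, 0.52, 0.61, 0.80], 0.75)
print(f"FAR={far:.2f}, FRR={frr:.2f}")   # FAR=0.25, FRR=0.00
```

Raising the threshold lowers the FAR at the cost of a higher FRR, which is why both rates must be quoted together.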
CHAPTER TWO
LITERATURE REVIEW
2.1.Background
In this chapter, I will discuss the approaches to palm print recognition. Personal authentication using palmprint images has received considerable attention over the last five years, and numerous approaches have been proposed in the literature. The available approaches for palmprint authentication can be divided into three categories, primarily on the basis of the extracted features: (i) texture-based approaches, (ii) line-based approaches, and (iii) appearance-based approaches. A full description of these approaches is beyond the scope of this work, but a summary with typical references can be found in Table 1. Researchers have shown promising results on inked images, images acquired directly from a scanner, and images acquired from a digital camera using a constrained, pegged setup. However, efforts are still required to improve performance on unconstrained images acquired with a peg-free setup. Accordingly, this work uses such images to investigate the performance improvement. The outline of prior work in Table 1 shows that there has not been any attempt to investigate palmprint verification using its multiple representations [2].
Several matching-score-level fusion approaches for combining multiple biometric modalities have been presented in the literature, and it has been shown that different fusion approaches perform differently. However, there has not been any attempt to combine the decisions of multiple score-level fusion methods to achieve a performance improvement. The organization of the rest of this work is as follows: Section 2 describes the block diagram of the proposed system and details the feature extraction methods used in the experiments. Section 3 details the matching criterion and the proposed fusion strategy. Experimental results and their discussion appear in Section 4. Finally, the conclusions of this work are summarized in Section 5.
2.2. Proposed Systems
Unlike previous work, we propose an alternative approach to palmprint authentication: the simultaneous use of different palmprint representations with the best pair of fixed combination rules. The block diagram of the proposed method for palmprint authentication using the combination of multiple features is shown in Fig. 2.1. The hand image of every user is acquired with a digital camera. These images are used to extract the region of interest, i.e. the palmprint, using the method detailed in Ref. [5]. Each of these images is further used to extract texture-, line- and appearance-based features using Gabor filters, line detectors, and principal component analysis (PCA) respectively. These features are matched against their respective template features stored during the training stage. The three matching scores from these three classifiers are combined using a fusion mechanism into a combined matching score, which is used to generate a class label, i.e. genuine or imposter, for each user. Experiments were also performed to investigate the performance of decision-level fusion using the individual decisions of the three classifiers; however, the best experimental results were obtained with the proposed fusion strategy, which is detailed in Section 4.
Figure 2.1: Block diagram for personal authentication using palmprint
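As a reading aid, the stages of Fig. 2.1 can be strung together in a short Python sketch. Every helper called below (align_hand, extract_palm_roi, gabor_block_features, line_block_features, pca_project, fuse_scores) is an illustrative function sketched in the subsections that follow, and the decision threshold thr is an assumed value, not one reported in this thesis.

```python
import numpy as np

def authenticate(hand_img, claimed_id, templates, pca_model, thr=0.5):
    """End-to-end sketch of the pipeline in Fig. 2.1: align the hand,
    crop the palmprint ROI, extract three representations, compute one
    matching distance per representation against the claimed user's
    stored templates, fuse the scores, and threshold the result."""
    binary, gray = align_hand(hand_img)        # Section 2.4.1
    palm = extract_palm_roi(binary, gray)      # Section 2.4.2
    mean, basis = pca_model
    d_gabor = np.linalg.norm(gabor_block_features(palm)
                             - templates[claimed_id]["gabor"])
    d_line = np.linalg.norm(line_block_features(palm)
                            - templates[claimed_id]["line"])
    d_pca = np.linalg.norm(pca_project(palm, mean, basis)
                           - templates[claimed_id]["pca"])
    score = fuse_scores(d_gabor, d_line, d_pca, rule="hybrid")
    # distances: a smaller combined score means a closer match
    return "genuine" if score < thr else "imposter"
```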
2.2.1. Gabor Features
The texture features extracted using Gabor filters have been successfully used in fingerprint classification and handwriting recognition, and more recently for palmprints. In the spatial domain, an even-symmetric Gabor filter is a Gaussian function modulated by an oriented cosine function [3]. The impulse response of an even-symmetric Gabor filter in the 2-D plane has the following general form (reconstructed here in its standard form):

G(x, y; θ, f) = exp( -(x'^2 + y'^2) / (2σ^2) ) · cos(2πf x'),   x' = x cos θ + y sin θ,

where θ is the orientation of the filter, f is the frequency of the cosine carrier, and σ is the standard deviation of the Gaussian envelope.
In this work, the parameters of the Gabor filters were empirically determined for the acquired palmprint images. Filtering the palmprint image I(x, y) with an oriented Gabor filter G_θ gives the filtered image

F_θ(x, y) = I(x, y) ∗ G_θ(x, y),
where ‘∗’ denotes discrete convolution and the Gabor filter mask is of size W × W. Accordingly, every palmprint image is filtered with a bank of six Gabor filters to generate six filtered images. Each filtered image accentuates the distinct palmprint lines and wrinkles in the corresponding direction while reducing background noise and structures in the other directions. The components of palmprint wrinkles and lines in six different directions are thus captured by these filters. Each filtered image is divided into several overlapping blocks of the same size. The feature vector is formed from all six filtered images by computing the standard deviation of gray levels in each of these overlapping blocks. This feature vector is used to uniquely represent the palmprint image and to evaluate the performance [3].
Figure 2.2: Spatial-domain representation of the Gabor filters
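A minimal sketch of this texture pipeline in Python with OpenCV follows. The kernel size, σ, wavelength, block size, and step below are illustrative assumptions; the thesis states only that its filter parameters were determined empirically.

```python
import cv2
import numpy as np

def gabor_block_features(palm, n_orient=6, ksize=17, sigma=4.0,
                         wavelength=8.0, block=24, step=12):
    """Filter the palmprint with a bank of six even-symmetric Gabor
    filters (psi=0 selects the cosine carrier) and build the feature
    vector from the standard deviation of every overlapping block."""
    img = palm.astype(np.float32)
    feats = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient          # six orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                  wavelength, 1.0, psi=0)
        filtered = cv2.filter2D(img, -1, kern)
        # standard deviation of gray levels in overlapping blocks
        for y in range(0, img.shape[0] - block + 1, step):
            for x in range(0, img.shape[1] - block + 1, step):
                feats.append(filtered[y:y + block, x:x + block].std())
    return np.asarray(feats)
```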
2.2.2. Extraction of line features
Palmprint identification using line features has been reported to be effective and to offer high accuracy. The extraction of line features used in our experiments is the same as that detailed in [4]. Four directional line detectors are used to probe the palmprint wrinkles and lines oriented in each of four directions, i.e. 0°, 45°, 90° and 135°. The spatial extent of these masks was empirically fixed at 9 × 9. The resultant four images are combined by voting on the gray-level magnitude at each corresponding pixel position. The combined image represents the directional map of palm lines and wrinkles in the palmprint image. This image is further divided into several overlapping square blocks, and the standard deviation of gray levels in each overlapping block is used to form the feature vector for each palmprint image [2].
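The sketch below illustrates this scheme. The zero-mean horizontal line template and its rotation into the four 9 × 9 directional masks are stand-ins assumed here, since the exact detectors of Ref. [4] are not reproduced in the text.

```python
import numpy as np
from scipy import ndimage

def line_block_features(palm, block=24, step=12):
    """Probe the palm with four directional 9x9 line detectors
    (0, 45, 90, 135 degrees), fuse the responses by pixel-wise
    magnitude voting, then take per-block standard deviations."""
    base = np.zeros((9, 9), np.float32)
    base[4, :] = 1.0                  # horizontal line template
    base -= base.mean()               # zero-mean detector mask
    masks = [ndimage.rotate(base, ang, reshape=False, order=1)
             for ang in (0, 45, 90, 135)]
    img = palm.astype(np.float32)
    responses = [np.abs(ndimage.convolve(img, m)) for m in masks]
    combined = np.max(responses, axis=0)   # gray-level magnitude vote
    return np.asarray(
        [combined[y:y + block, x:x + block].std()
         for y in range(0, img.shape[0] - block + 1, step)
         for x in range(0, img.shape[1] - block + 1, step)])
```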
The proposed palmprint verification method was evaluated on a dataset of 100 users. This dataset consists of 1000 images, 10 images per user, acquired with a digital camera using an unconstrained, peg-free setup in an indoor environment. Fig. 5 shows a typical acquisition of a hand image using the digital camera with live feedback. The hand images were collected over a period of 3 months from users in the age group of 16-50 years. The images were collected in two sessions from volunteers who were not especially cooperative. During image acquisition, the users were only requested to make sure that (i) their fingers do not touch each other and (ii) most of the back of their hand touches the imaging table. The automated segmentation of the region of interest, i.e. the palmprint, was achieved by the method detailed in Ref. [5]. Thus palmprint images of 300 × 300 pixels were obtained and used in our experiments. Each of the acquired images was further histogram equalized.
2.2.3. Extraction of PCA features
The information content of a palmprint image also includes certain local and global features that can be used for identification. This information can be extracted by registering the variations in an ensemble of palmprint images, independently of any judgment about palmprint lines or wrinkles. Each N × N pixel palmprint image is represented by a vector of dimension 1 × N² using row ordering. The available set of K training vectors is subjected to PCA, which generates a set of orthonormal vectors that can optimally represent the information in the training dataset. The covariance matrix of the normalized (mean-subtracted) vectors φ_j can be obtained as follows [2] (the standard construction, reconstructed here):

C = (1/K) Σ_{j=1}^{K} φ_j φ_jᵀ,   φ_j = x_j − μ,

where x_j is the j-th training vector and μ is the mean of the K training vectors.
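In code, the eigen-palm construction can be sketched as follows. Computing the basis through an SVD of the mean-subtracted data (rather than forming the N² × N² covariance explicitly) and the choice of 50 components are conveniences of this sketch, not specifics of the thesis.

```python
import numpy as np

def pca_train(train_imgs, n_components=50):
    """Appearance features via PCA: row-order each training image
    into a vector, subtract the ensemble mean, and take the leading
    right singular vectors as the orthonormal projection basis."""
    X = np.stack([im.ravel().astype(np.float64) for im in train_imgs])
    mean = X.mean(axis=0)
    phi = X - mean                      # normalized (mean-subtracted)
    _, _, Vt = np.linalg.svd(phi, full_matrices=False)
    return mean, Vt[:n_components]      # mean and orthonormal basis

def pca_project(img, mean, basis):
    """Project a palmprint image onto the trained PCA basis."""
    return basis @ (img.ravel().astype(np.float64) - mean)
```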
2.3. Matching criterion
The classification of the feature vectors extracted by each of the three methods is achieved with a nearest neighbor (NN) classifier. The NN classifier is a simple nonparametric classifier that computes the minimum distance between the feature vector of an unknown sample g and that of g_m in the m-th class [5]; with the Euclidean measure, for example,

d(g, g_m) = ( Σ_n ( g(n) − g_m(n) )² )^{1/2},
where g(n) and g_m(n) respectively denote the n-th component of the feature vector of the unknown sample and that of the m-th class. Each of the three feature vectors obtained from the three palmprint representations was evaluated with each of the three distance measures (8)-(10). The distance measure that achieved the best performance was finally selected for the classification of feature vectors from the corresponding palmprint representation.
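A sketch of this matching step, assuming the Euclidean measure and one stored template per enrolled class:

```python
import numpy as np

def nn_match(query, class_templates):
    """Nearest-neighbor matching: return the class label whose stored
    template feature vector lies at minimum distance from the query.
    class_templates maps label -> template vector (one per class)."""
    distances = {label: np.linalg.norm(query - template)
                 for label, template in class_templates.items()}
    best = min(distances, key=distances.get)
    return best, distances[best]
```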
The fusion strategy aims to improve the combined classification performance over that of any single palmprint representation alone. There are three general strategies for combining classifiers: at feature level, at score level, and at decision level. Owing to the large and varying dimensions of the feature vectors, fusion at feature level has not been considered in this work. A survey [20] of approaches used for multimodal fusion suggests that score-level fusion has been the most common approach and has been shown to offer a noteworthy improvement in performance. The goal of evaluating various score-level fusion strategies is to produce the best possible performance in palmprint verification using the given set of images. Let L_Gabor(g, g_m), L_Line(g, g_m) and L_PCA(g, g_m) denote the matching distances produced by the Gabor, line and PCA classifiers respectively. The combined matching score L_C(g, g_m) using the well-known fixed rules can be obtained as

L_C(g, g_m) = I( L_Gabor(g, g_m), L_Line(g, g_m), L_PCA(g, g_m) ),
Figure 2.3: Combination of Gabor, Line, and PCA
where I is the chosen combining rule, i.e. I represents the maximum, sum, product or minimum rule (abbreviated MAX, SUM, PROD and MIN respectively), evaluated in this work. One of the shortcomings of fixed rules is the assumption that the individual classifiers are independent. This assumption may be poor, especially for the Gabor- and line-based features. Thus the SUM rule can be the better option for merging matching scores when combining the Gabor and line features. These merged matching scores can be further combined with the PCA matching scores using the PROD rule (Fig. 2.3), as the PROD rule is expected to perform better under the assumption of independent data representations [17]. The individual decisions from the three palmprint representations were also combined (by majority voting) to examine the performance improvement. The performances of the various score-level fusion strategies differ; therefore the performance of a simple hybrid fusion strategy that combines the decisions of various fixed score-level fusion schemes, as shown in Fig. 2.4, was also examined in this work. Instead of using fixed combination rules, the matching scores from the training set can also be used to train a classifier for two-class, i.e. genuine and impostor, classification. Thus the combined classification of the three matching scores using a feed-forward neural network (FFN) and a support vector machine (SVM) classifier has also been explored [5].
Figure 2.4: Hybrid fusion scheme
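The fixed rules and the hybrid SUM-then-PROD combination described above can be sketched as follows, assuming the three scores have already been normalized to a common range (a step the text implies but does not specify):

```python
import numpy as np

def fuse_scores(s_gabor, s_line, s_pca, rule="hybrid"):
    """Score-level fusion of three normalized matching scores. The
    fixed rules MAX, SUM, PROD, MIN act on the score triple; 'hybrid'
    follows the text: SUM over the correlated Gabor and Line scores,
    then PROD with the more independent PCA score."""
    s = np.array([s_gabor, s_line, s_pca], dtype=float)
    if rule == "hybrid":
        return (s_gabor + s_line) * s_pca    # SUM then PROD
    return {"MAX": s.max(), "SUM": s.sum(),
            "PROD": s.prod(), "MIN": s.min()}[rule]
```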
2.4. Image Acquisition & Alignment
Our image acquisition setup is inherently simple: it uses neither special illumination (as in [3]) nor any pegs that might inconvenience users (as in [20]). An Olympus C-3020 digital camera (1280 × 960 pixels) was used to acquire the hand images, as shown in Figure 2. The users were only requested to make sure that (i) their fingers do not touch each other and (ii) most of the back of their hand touches the imaging table.
2.4.1. Extraction of hand geometry images
Each of the acquired images needs to be aligned in a preferred direction so as to capture the same features for matching. An image thresholding operation is used to obtain a binary hand-shape image. The threshold value is automatically computed using Otsu's method [25]. Since the image background is stable (dark), the threshold value can be computed once and used thereafter for the other images. The binarized shape of the hand can be approximated by an ellipse. The parameters of the best-fitting ellipse, for a given binary hand shape, are computed from the moments [26]. The orientation of the binarized hand image is approximated by the major axis of the ellipse, and the required angle of rotation is the difference between the normal and the orientation of the image [6]. As shown in Figure 3, the binarized image is rotated and used for computing the hand-geometry features. The estimated orientation of the binarized image is also used to rotate the gray-level hand image, from which the palmprint image is extracted as detailed in the following subsection.
Figure 2.5: Extraction of two biometric modalities from the hand image
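A compact OpenCV sketch of this alignment step follows; the moment-based orientation formula is the standard one for the equivalent ellipse, and the input is assumed to be an 8-bit grayscale image on a dark background.

```python
import cv2
import numpy as np

def align_hand(gray):
    """Binarize with Otsu's method, estimate the hand orientation from
    the central moments (major axis of the best-fitting ellipse), and
    rotate both the binary and the gray image into alignment."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(binary, binaryImage=True)
    # orientation of the equivalent ellipse from central moments
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    angle = np.degrees(theta)
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return (cv2.warpAffine(binary, rot, (w, h)),
            cv2.warpAffine(gray, rot, (w, h)))
```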
2.4.2. Extraction of palmprint images
Each binarized hand-shape image is subjected to morphological erosion, with a known binary structuring element, to compute the region of interest, i.e., the palmprint. Let R be the set of non-zero pixels in a given binary image and SE be the set of non-zero pixels of the structuring element. The morphological erosion is defined as

R ⊖ SE = { g : SE_g ⊆ R },

where SE_g denotes the structuring element with its reference point shifted by g pixels. A square structuring element (SE) is used to probe the composite binarized image. The center of the binary hand image after erosion, i.e., the center of the rectangle that can enclose the residue, is determined. These center coordinates are used to extract a square palmprint region of fixed size, as shown in figure 3.
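The erosion-based extraction can be sketched as follows. The structuring-element size is an assumed value; the 300 × 300 crop follows the palmprint size stated earlier.

```python
import cv2
import numpy as np

def extract_palm_roi(binary, gray, se_size=60, roi=300):
    """Erode the binary hand shape with a square structuring element;
    the center of the surviving residue locates the palm, from which
    a fixed-size square palmprint region is cropped out of the
    aligned gray-level image."""
    se = np.ones((se_size, se_size), np.uint8)
    residue = cv2.erode(binary, se)
    ys, xs = np.nonzero(residue)
    cy, cx = int(ys.mean()), int(xs.mean())   # residue center
    half = roi // 2
    return gray[cy - half:cy + half, cx - half:cx + half]
```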
2.4.3. Extraction of hand geometry features
The binary image, as shown in figure 3(c), is used to compute the significant hand-geometry features. A total of 16 hand-geometry features were used (figure 5): 4 finger lengths, 8 finger widths (2 widths per finger), palm width, palm length, hand area, and hand length. Thus the hand geometry of each hand image is described by a feature vector of length 1 × 16. The multiple pieces of evidence can be combined by the various information fusion strategies that have been proposed in the literature. In the context of biometrics, three levels of information fusion schemes have been suggested: (i) fusion at representation level, where the feature vectors of multiple biometrics are concatenated to form a combined feature vector; (ii) fusion at matching-score level, where the matching scores of multiple biometric systems are combined to generate a final score; and (iii) fusion at decision level, where the individual decisions of multiple biometric systems are combined. The first two fusion schemes are more relevant for a bimodal biometric system and were considered in this work.
Figure 2.6: Hand geometry feature extraction
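As an illustration of the first scheme, representation-level fusion reduces to concatenating the normalized palmprint and hand-geometry feature vectors; the min-max normalization used here is an assumption, since the text does not specify one.

```python
import numpy as np

def representation_fusion(palm_feats, geometry_feats):
    """Representation-level fusion: concatenate the palmprint feature
    vector with the 1x16 hand-geometry vector after per-modality
    min-max normalization."""
    def minmax(v):
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (np.ptp(v) + 1e-9)
    return np.concatenate([minmax(palm_feats), minmax(geometry_feats)])
```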
CHAPTER FOUR
IMPLEMENTATION
4.1. Background
In this project …
CHAPTER FIVE
CONCLUSION
5.1. Background
References
[1] A. Kumar, D. C. Wong, H. C. Shen, and A. K. Jain, “Personal verification using palmprint and hand geometry biometric,” in International Conference on Audio- and Video-Based Biometric Person Authentication, 2013, pp. 668-678.
[2] A. Kumar and D. Zhang, “Personal authentication using multiple palmprint representation,” Pattern Recognition, vol. 38, pp. 1695-1704, 2010.
[3] S. Pathania, “Palm Print: A Biometric for Human Identification,” 2016.
[4] J. Kodl and M. Lokay, “Human Identity, Human Identification and Human Security,” in Proceedings of the Conference on Security and Protection of Information, Idet Brno, Czech Republic, 2010, pp. 129-138.
[5] S. Sumathi and R. R. Hemamalini, “Person identification using palm print features with an efficient method of DWT,” in Global Trends in Information Systems and Software Applications, Springer, 2012, pp. 337-346.
[6] H. J. Asghar, J. Pieprzyk, and H. Wang, “A new human identification protocol and Coppersmith’s baby-step giant-step algorithm,” in International Conference on Applied Cryptography and Network Security, 2010, pp. 349-366.
Dental implications of eating disorders
Eating disorders are a type of psychological disorder characterised by abnormal or unhealthy eating habits, usually linked with restrictive food intake. The cause of onset cannot be linked to one reason alone, as it is believed that there are multiple contributing factors, including biological, sociocultural and psychological influences. The sociocultural influences are linked with western beauty ideals that have recently been engraved into modern society, due to the increasing importance of social media and its dictation of the ideal body type. Studies show that eating disorders are significantly less common within cultures that have yet to be exposed to these ideals. The most commonly diagnosed disorders include anorexia nervosa (AN) and bulimia nervosa (BN), affecting more women than men (both AN and BN occur in ratios of 3:1, females to males). Anorexia is a persistent restriction of energy intake and can be linked with obsessive behaviours that stem from severe body dysmorphia and fear of gaining weight. Bulimia is defined by repeated episodes of binge eating followed by measures to prevent weight gain, such as forced regurgitation of stomach contents. The effects on an individual’s mental and physical health are commonly recognised; however, dental implications may often be overlooked. The unknown cause of eating disorders increases the extent of their significance on dental health because, without a definite root cause, it is difficult and sometimes even impossible to ‘cure’ an eating disorder; thus, preventing the dental implications is more difficult. The association between eating disorders and oral health problems was initially reported in the late 1970s, so the established link is relatively recent. Oral complications may be the first and sometimes only clue to an underlying eating disorder. In the US, 28% of all bulimic patients were first diagnosed with bulimia during a dental appointment, which highlights the visibly clear and distinct impact that eating disorders can have on teeth. This report will investigate the main dental implications that may be caused by eating disorders. Their significance will be analysed by looking at what causes the dental problems and how directly these can be linked to eating disorders. The extent of that significance will be analysed by looking at the extent of impact and whether the impacts are permanent or reversible.
Oral manifestations of nutritional deficiencies
Anorexia nervosa is characterised by restriction of food intake and an extreme fear of weight gain, so sufferers are often malnourished and vitamin deficient. Aside from the obvious health risks, these factors also lead to several oral manifestations. However, dietary patterns show great variability and will usually differ from individual to individual; they include calorie restriction, eating healthily but at irregular intervals, binge eating, vomiting and fasting for prolonged periods. There are therefore limitations to the conclusions we can draw as to the significance of the effects on oral health, since there is an inconsistency in the contents and habits of daily food consumption. When calorie restriction is involved, the body will attempt to salvage protein, vitamins and other nutrients to keep major bodily functions running steadily, and consequently oral maintenance will be neglected. Studies show that patients with anorexia had diets containing significantly lower values of all major nutrients compared with controls; specifically, intakes of vitamin A, vitamin C and calcium below RDA (recommended dietary allowance) levels were present in the majority of patients. However, low intakes (below RDA values) of vitamins B1, B2 and B3 were only reported in a few cases. In contrast to these findings, another source states that there is a clearly reduced intake of B vitamins in anorexic and bulimic patients. A possible explanation for these results may be the previously discussed inconsistency in the daily intake of individuals with eating disorders, but overall we can assume that nutrient deficiencies of varying severities are present in the majority of the anorexic population. The common deficiencies, of vitamin D, vitamin C, vitamin B and vitamin A, are associated with certain disturbances in the oral structure because these vitamins are essential for maintaining good oral health. A lack of vitamin A is related to enamel hypoplasia, which consists of horizontal or linear hypoplastic grooves in the enamel. Vitamin B deficiencies cause complications such as a painful, burning sensation of the tongue, aphthous stomatitis (benign mouth ulcers) and atrophic glossitis (a smooth, glossy appearance of the tongue, which is often tender). A lack of vitamin A is also responsible for infections in the oral cavity, as the deficiency can lead to the loss of salivary gland function (salivary gland atrophy), which reduces the defence capacity of the oral cavity as well as inhibiting its ability to buffer plaque acids. Inability to buffer these plaque acids could lead to an increased risk of dental caries. Additionally, vitamin B deficiencies can induce angular cheilitis, a condition that can last from days to years and consists of inflammation focused in the corners of the mouth, causing irritated, red and itchy skin, often accompanied by a painful sensation. There is a consistency in the evaluation of calcium deficiencies among sufferers of eating disorders, and this has a clearly significant impact on oral health. There is an established relationship between calcium intake and periodontal diseases, so having an eating disorder increases a person’s susceptibility. The process of building density in the alveolar bone that surrounds and supports the teeth is primarily reliant on calcium. Alveolar bone cannot grow back, so calcium is needed to stimulate its repair.
This is important because the loss of alveolar bone can expose the sensitive root surfaces of teeth, which can progress to further oral complications. If patients do not absorb enough vitamin C over an extended period, there is a chance that they will develop osteoporosis. Although this is rare, and most common amongst individuals with anorexia, it can lead to serious consequences because, alongside the loss of density in the alveolar bone, it can progress to the loosening and, eventually, the loss of teeth: a permanent defect. In anorexic and bulimic patients there is an increased likelihood of halitosis (bad breath) because, in the absence of necessary vitamins and minerals, the body is unable to maintain the health of the oral cavity. If the vitamin C deficiency that most patients with eating disorders suffer from is prolonged and sufficiently severe, then there is a risk of scurvy developing. In general, therefore, it seems that the nutritional deficiencies caused by anorexia and bulimia significantly affect oral health, in ways ranging from unpleasant breath and physical defects to the permanent loss of oral structures that must be tackled with medical and cosmetic interventions.
Periodontal disease
As explained earlier when discussing calcium deficiency, the risk of periodontal disease may increase if an individual suffers from an eating disorder. General malnourishment is another factor that hastens the onset of periodontal disease, which always begins with gingivitis and only occurs in the presence of dental plaque. As discussed above, the relationship between calcium intake and periodontal disease is potentially controversial, except in rare cases of severe nutritional deficiency states; patients dealing with extreme cases of anorexia nervosa may fall under this category. Due to the intensely psychological nature of this disorder, the extremity of food restriction is likely to progress further as the need to lose weight quickly transforms into an addiction. From studies of nutritionally deficient animals, the conclusions drawn suggest that nutritional factors alone are not capable of initiating periodontal diseases but are able to affect their progression. This would suggest that having an eating disorder does not place an individual at greater risk of initiating periodontal disease compared to an average person, despite their malnourished condition. However, catalysing the progression of gingivitis into periodontal disease does suggest that having an eating disorder places patients at a significantly greater risk, because their untreated gingivitis will evolve into periodontitis at a greater rate. This effect is significant because periodontitis is an irreversible condition that causes permanent damage. The evidence is limited, however, as it is based on animal research and may correspond to humans only to a limited degree.
We turn now to the experimental evidence on the idea that dental plaque is an essential etiological agent in chronic periodontal diseases. Experiments involving the isolation of human plaque and the introduction of the plaque bacteria into the mouths of gnotobiotic animals have shown that a link exists between the bacteria in dental plaque and periodontal disease. Supporting this idea, epidemiological studies have produced evidence of a strong positive correlation between dental plaque and the severity of periodontal disease. Unlike some of the evidence mentioned previously, clinical experiments on both animals and humans report the major finding that the accumulation of dental plaque is a result of withdrawing oral hygiene in initially healthy mouths. There is evidence to suggest that bulimics manifest a significantly higher retention of dental plaque, so this disorder puts patients at greater risk not only of progression into periodontal disease but of more severe periodontal disease. As mentioned earlier, periodontal disease only occurs after the development of gingivitis, which consists of three stages: initial lesion, early lesion and established lesion. When an advanced lesion is present, it corresponds to chronic periodontitis: “a disease characterized by destruction of the connective tissue attachment of the root of the tooth, loss of alveolar bone, and pocket formation”. Given the increased likelihood of dental plaque being present in the mouths of bulimics, the strong association between dental plaque and periodontal disease can be linked directly to demonstrate the significance of bulimia’s effects on oral health. Although the evidence is not as conclusive, anorexic patients are liable to malnourishment, and since nutritional factors aid the development of gingivitis into periodontal disease, there is a significantly increased chance of an anorexic patient’s oral condition transitioning from gingivitis to periodontal disease. This is extremely significant because, unlike gingivitis, the oral damage of periodontal disease is irreversible.
Eating disorders and caries
The increased retention of dental plaque discussed above is also a significant factor contributing to dental caries. Tooth decay (also known as dental caries) is defined as “the demineralisation of the inorganic part of the tooth structure with the dissolution of the organic substance”. It involves the anaerobic respiration of consumed dietary sugars, whereby the organic acids formed in the dental plaque can demineralise the enamel and dentine. A possible contributing factor to dental caries is a common unhealthy habit adopted by people with eating disorders: the consumption of acidic, zero-calorie drinks, such as Coke Zero. According to Professor Colon, certain patients will drink as much as 6 litres a day in an attempt to reduce hunger and assist the process of SIV (self-induced vomiting). During episodes of “binge eating” (more common in bulimia), an individual will consume large amounts of food, usually high in sugar or fat, within a short timeframe, usually with the intention of regurgitating the contents shortly afterwards. Increased amounts of sugary foods are ingested during this period, leading to an increased risk of dental caries. One study shows that prolonged periods of dietary restraint in anorexic patients did not result in changes to the bacteria associated with dental caries, which suggests that malnourishment is not a significant factor in the risk of dental caries. Due to the obsessive personality traits seen in anorexic patients, it is likely that these individuals are more fastidious in their oral hygiene, which diminishes dental caries as a risk compared to other complications such as dental erosion, which is explored later on. Although dental caries does not seem to arise as a direct issue, studies show that patients with anorexia had greater DMFS scores (decayed, missing and filled surfaces) than controls. This is likely a consequence of previously mentioned factors such as the consumption of low-calorie acidic drinks, not of the restricted dietary intake.
Bulimia seems to place individuals at a significantly greater risk of dental caries than anorexia. A study of 33 females showed that bulimics had more intense caries when compared to healthy, age- and sex-matched controls. Another more recently identified habit is CHSP (chewing and spitting), in which an individual can seemingly “enjoy” the taste of certain foods by chewing the food for some time before spitting it out to avoid consuming any calories. One study shows that 34% of hospitalized eating-disorder patients admitted to at least one episode of chewing and spitting in the month prior to admission. This habit can significantly increase dental problems by leading to cavities and tooth decay, presumably due to the high probability of excess residual carbohydrates. This assumption derives from the etiology of dental caries, which involves the action of acids on the enamel surface. When dietary carbohydrates react with bacteria present in the dental plaque, the acid formed initiates the process of decalcifying tooth substance and subsequently causes disintegration of the oral matrix. Abundant extracellular polysaccharides can increase the bulk of plaque inside the mouth, which interferes with the outward diffusion of acids and the inward diffusion of saliva. Since saliva has buffering properties and acts as a defence against caries by maintaining pH, interference with its supply reduces this defence against tooth decay. Dietary sugars diffuse rapidly through plaque and are converted to acids by bacterial metabolism. Acid is generated within the substance of plaque to such an extent that enamel may dissolve, and enamel caries leads to cavity formation. Binge eating or CHSP increases the acidity of plaque, since ten minutes after ingesting sugar the pH of plaque may fall by as much as two units. Supporting this scientific explanation, there is evidence of an association between carbohydrate intake and dental caries: for example, the prevalence of dental caries decreased during WWII due to sucrose shortages and then returned to previous levels during the post-war period, following the renewed availability of sucrose. Hopewood House (a children’s home) excluded sucrose and white bread from the diet: the children had low caries rates, which increased dramatically when they moved out. Alongside this, intrinsic factors such as tooth position, tooth morphology and enamel structure also affect the risk of caries development, and these do not link directly to eating disorders because the variables differ throughout the whole population. However, an extrinsic factor that may reduce the incidence of caries is a greater proportion of fat in the diet, because phosphates can reduce the cariogenic effect of sugar. Since individuals with anorexia generally avoid foods with a high fat content, they are unlikely to ingest the amount of phosphates necessary to reduce their risk of caries. This evidence all points to the significance of eating disorders (specifically bulimia) and the role they play in increasing the likelihood of caries through incidences of binge eating, CHSP, low fat intake and the consumption of acidic drinks high in sugar.
Oral consequences of medication
After discussing dental caries, it is evident that saliva plays an important role in the maintenance of a healthy oral cavity. Twenty women with bulimia and 20 age- and gender-matched controls were studied, and the results showed that the unstimulated whole saliva (UWS) flow rate was reduced in the bulimic group, mainly due to medication. Although the UWS was affected, no major compositional salivary changes were found. This information is contradicted by another study, which found that bulimic patients did not present evidence of lower salivary flow rates but did have more acidic saliva. A further study was consistent with the first and found that stimulated and resting salivary flow was poor among bulimic individuals compared to healthy controls; it also found that the pH of saliva was lower than in the control group but still within the normal range. Due to the range of findings and the limited sample sizes in these studies, the results are inconclusive in places and need to be interpreted with caution. However, it would make sense that habits that accompany eating disorders, such as fasting or vomiting, would potentially cause dehydration and result in a lower UWS.
Although we are unable to determine a strong link between eating disorders and their effect on saliva, there is conclusive evidence of oral reactions to medication. If an eating disorder has been diagnosed, selective serotonin reuptake inhibitors such as fluoxetine (a common antidepressant), anti-psychotics and anti-cholinergic medication may be prescribed. Smith and Burtner (1994) found that 80.5% of the time, xerostomia (dry mouth) was a side effect of medications. Direct oral effects of xerostomia include the diminishment or absence of saliva as well as alterations in saliva composition. These medications also have indirect effects on oral health by causing lethargy, fatigue and a lack of motor control, which can impair an individual’s ability to practise a good oral hygiene technique. The medications have anticholinergic or antimuscarinic effects, which block the actions of the parasympathetic system by inhibiting the effects of its neurotransmitter, acetylcholine, on the salivary gland receptors; since acetylcholine cannot bind to its receptors, the salivary glands cannot secrete saliva. The reason this has such an immense impact on oral health is the importance of the functions of saliva in the mouth, which include protection of the oral mucosa, chemical buffering (as mentioned previously when discussing dental caries), digestion, taste, antimicrobial action and maintenance of tooth integrity. Saliva contains glycoproteins that increase its viscosity and help form a protective barrier against microbial toxins and minor trauma, protecting oral health both chemically and physically. However, a study by Nagler (2004) found that in up to one third of cases xerostomia does not lead to a real reduction in salivary flow rate, so this is a limitation to consider. Patients with xerostomia may experience difficulty chewing, swallowing or speaking, and the salivary glands may swell intermittently or chronically. Physical defects include cracked, peeling lips, a smoothed, reddened tongue and a thinner, reddened oral mucosa (the membrane lining the inside of the mouth). There are links between xerostomia and the previously discussed oral complications, as there was often a marked increase in caries among patients experiencing dry mouth, where tooth decay could be rapid and progressive even in the presence of excellent hygiene. Overall, the extent of the impact caused by eating disorders with respect to xerostomia and a decreased salivary flow rate is fairly minimal, for a few reasons. First, the evidence related to salivary flow rate is inconclusive and there are several contrasting studies, so a confident assumption linking eating disorders to salivary flow rate cannot be made. On the other hand, there is some strong evidence to suggest that xerostomia can be caused by medication, which can then affect the flow of saliva; however, in terms of eating disorders the link is weak and not exclusive, due to the simple fact that medication is taken by a large proportion of the population for conditions ranging from depression to heart disease. Eating disorders are therefore not uniquely responsible for causing xerostomia. As well as this, xerostomia is a secondary effect, because it is the medication that is responsible for the oral complication, not the psychological disorder. This gives reason to infer that eating disorders do not have a highly significant impact on this aspect of oral health.
Self-induced vomiting
The most common symptom associated with bulimia is the binge-purge cycle. This involves an individual consuming large quantities of food in a short time period (binging), followed by an attempt to avoid weight gain by self-induced vomiting or taking laxatives (purging). Linked with the previously discussed issue of xerostomia, since laxatives are medication, frequent use will significantly increase a patient’s likelihood of alterations in saliva content and flow rate, which can lead to more significant dental issues. A case study evaluates a 25-year-old female patient who had suffered from bulimia for five years. It was found that this particular individual vomited 5-7 times per day and suffered from swelling on both sides of her face and mandible (