Enhanced Recovery After Surgery (ERAS)

Executive Summary

Enhanced Recovery After Surgery (ERAS) is a multimodal, interdisciplinary approach designed to change the surgical field by decreasing complications and length of stay in patients undergoing necessary surgical procedures. Studies suggest that implementing these protocols has a profound effect, decreasing complications and length of stay by 10-50%1,2 through the coordinated, evidence-based care of nursing, anesthesiology, and the surgical team. ERAS is supported by years of research by Olle Ljungqvist, the founder of ERAS, focusing on the patient's journey through the hospital. There is still some resistance, as many surgeries are performed with more traditional methods rather than ERAS, although many components of ERAS are already used to provide adequate care in traditional practice. These components span pre-operative, peri-operative, and post-operative care and represent best-care practices for patients. ERAS relies on communication among professionals, with the processes and outcomes of patient care audited and discussed as an integrated team. Dr. Ljungqvist's findings note that patients experience roughly half the complications, a shorter stay, and improved clinical outcomes. These gains stem from research into the metabolic, endocrine, and immunological consequences of surgical intervention.

Research has shown that ERAS can save money, time, lives, and resources by making surgery less invasive and more progressive. Through laparoscopic surgery, early ambulation, and adequate nutrition, a patient's risk of infection, length of stay, and overall mortality decrease under the ERAS protocol.

Background Information

Enhanced Recovery After Surgery (ERAS) is a process utilized by a multi-disciplinary team to minimize the physiologic changes of surgery through continuation of nutrition, decreased opioid use, and avoidance of deconditioning, allowing the surgical area to heal.1 The aim of this approach is to lower the impact of surgical stress on the patient post-operatively, in both their physiological and psychological responses. This is done by organizing the hospital to embrace best practices and developing a team that works and communicates from start to finish, rather than a series of departments the patient flows through (clinic, surgery, anesthesiology, intensive care).2 The team consists of the pre-operative and post-operative staff, nursing, the surgeon and the surgical team, the anesthesiologist, and the dietitian. ERAS was originally developed for colorectal patients; however, it is expanding to various kinds of major surgery as methods are perfected for gastrointestinal procedures, which is why it is such a pertinent topic in the clinical nutrition world.2 Multiple published journals provide evidence that ERAS' multimodal approach has evidence-based benefits: decreased length of stay, decreased analgesic use, reduced cost, and increased patient comfort.2

Specific ERAS components that the ERAS Society has approved across all ERAS protocols include procedures for each stage. The first stage is pre-admission, where the surgeon or dietitian may suggest pre-operative nutritional support for a malnourished patient and complete cessation of smoking and supplement use, with the team providing medical optimization and patient information (Figure 1). Pre-operative preparation includes a bowel cleanser, pre-operative carbohydrates up to two hours before surgery, and antibiotic prophylaxis. Intra-operative measures include minimally invasive techniques with minimal drainage, regional analgesia with minimal opioid use, and balanced control of temperature and fluids. Post-operative measures include removal of drains and tubes, cessation of intravenous fluids, avoidance of nasogastric tubes, minimal opioid-based pain control with reliance on other medications, early mobilization of the patient, early oral intake of fluids and calories to restore gut motility, and patient follow-up practices (Figure 1, Figure 2).

Specific medical nutrition therapy methods begin with pre-counseling, which informs the patient what to expect and notes that the experience may differ from any prior surgeries performed with traditional methods.2 Pre-operative and post-operative nutritional interventions are also used to enhance recovery. Pre-operative nutrition includes an oral carbohydrate-loaded beverage up to two hours before surgery.3 Evidence has shown that a carbohydrate-rich formula given two hours before surgery has a profound effect on decreasing length of stay compared with traditional surgical methods.4 Post-operative nutritional intervention consists of a liberal oral intake to stimulate insulin release.5 Other nutrition-related components include early mobilization of the patient and adequate fluid intake to improve healing outcomes.

ERAS has also saved hospitals money long-term through the ERAS Society protocol measures. A financial analysis using the budget of a quaternary hospital found that the initial implementation cost of ERAS was $552,783. This was offset by the hospital's first-year savings from the ERAS program of $948,500, yielding a net savings of $395,717.4 Long term, ERAS has a positive impact on facility utilization when its methods are followed correctly and well maintained. Facilities that adhere to only a portion of the ERAS protocol may see only marginal changes in patient length of stay and cost.
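The reported budget figures can be verified with simple arithmetic. The sketch below reproduces the net-savings calculation using the figures cited from the quaternary-hospital analysis; the function name is illustrative, not part of any published model:

```python
def net_savings(implementation_cost: int, first_year_savings: int) -> int:
    """Net first-year savings of an ERAS program: savings minus setup cost."""
    return first_year_savings - implementation_cost

# Figures reported for the quaternary academic medical center (reference 4)
print(net_savings(552_783, 948_500))  # 395717
```

Run against the reported numbers, this recovers the $395,717 net savings stated in the analysis.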

Traditional Methods

More commonly, hospitals use standard procedures for all operations. The widely used practice is similar to ERAS, with steps in the pre-operative, intra-operative, and post-operative phases, but it focuses more on active treatment via medication pre- and post-operatively and restricts pre-operative nutritional intake through fasting. Patient admission is preferably three days before surgery, although this is not always possible given available hospital resources.5 The patient is then risk screened and should see a dietitian if they are at nutritional risk, in poor nutritional status, or admitted with a wound.6

Traditional methods for pre-operative nutrition encourage overnight fasting. This model was thought to prevent any risk of aspiration while an endotracheal tube is in the airway during surgery.5 There is also concern that food in the gastrointestinal tract becomes a risk factor if the bowel is perforated during surgery.5 The downsides include discomfort from thirst, hunger, headaches, and anxiety, as the patient is unable to eat for an extended period.5 However, recent studies show that clear fluids taken up until two hours before anesthesia do not increase gastric volume.3 As many surgical procedures shift to minimally invasive and laparoscopic approaches, overall healing time has decreased significantly under both traditional and ERAS protocols.6

Other components of traditional surgery aim to increase urine output; intravenous fluids are therefore administered liberally to achieve an output of fifty milliliters an hour or more.7 Additional output measures include catheters, drainage of the surgical site, and a nasogastric tube to drain bowel contents. The change from traditional to ERAS methods, such as removal of catheters and decreased medication administration, has benefited facilities adopting 80% or more of ERAS practices. However, there is still considerable resistance to changing traditional practice because the effects of ERAS remain relatively unknown in surgical subsets where it has not yet been performed.6

Surgery and Nutritional Status

Risk assessments are used upon admission to evaluate a patient's nutritional status, screening for pre-operative malnutrition risk through two criteria: a BMI under 18.5 kg/m2, meaning the patient is underweight, or weight loss of more than 10% over six months or more than 5% over one month combined with a reduced BMI.8 Assessing malnutrition risk is important because malnutrition can be detrimental to post-surgical outcomes.6 This is especially true for larger surgeries, where surgical stress produces a catabolic and inflammatory state.7 Pre-operative, dietitian-led practices such as nutritional education and counseling for patients undergoing surgery through ERAS may be a suggested next step to establish adequate nourishment before surgery, which offers the best chance of success.10 The largest nutritional concern is that the patient is at risk of developing a nosocomial infection, which increases calorie and protein needs because of the added stress on the body's immune system.9 Many factors determine these needs, such as age, clinical status, and weight; however, using preventive methods to optimize health and decrease nutritional risk is always in the patient's best interest.9 Nutritional interventions such as early oral feeding, increased protein intake, and correction of any deficiencies post-operatively can supply energy during the acute catabolism of surgical stress. Grade A evidence from the ASPEN guidelines suggests that in ERAS patients, oral intake or clear liquids should be initiated within hours after surgery to offset surgical complications such as loss of gut integrity.8
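The screening thresholds above can be expressed as a simple decision rule. The sketch below is an illustration of the criteria described in the text, not a published screening tool; the function name, parameter names, and the "reduced BMI" cutoff of 20 kg/m2 are assumptions made for the example:

```python
def at_malnutrition_risk(bmi: float,
                         pct_loss_6mo: float = 0.0,
                         pct_loss_1mo: float = 0.0) -> bool:
    """Flag pre-operative malnutrition risk using the criteria in the text:
    BMI under 18.5 kg/m2 (underweight), weight loss >10% over six months,
    or weight loss >5% over one month combined with a reduced BMI."""
    underweight = bmi < 18.5
    severe_loss = pct_loss_6mo > 10
    # The exact "reduced BMI" threshold is not stated; 20 kg/m2 is assumed here.
    recent_loss_with_low_bmi = pct_loss_1mo > 5 and bmi < 20
    return underweight or severe_loss or recent_loss_with_low_bmi

print(at_malnutrition_risk(bmi=17.9))                     # True (underweight)
print(at_malnutrition_risk(bmi=24.0, pct_loss_6mo=12.0))  # True (severe loss)
print(at_malnutrition_risk(bmi=23.0, pct_loss_1mo=2.0))   # False
```

A patient flagged by any one criterion would be referred to the dietitian for a full assessment, per the admission workflow described above.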

Metabolism After Surgery

After surgery, the body passes through two phases: an initial decrease and then a subsequent increase in metabolic activity. Initially, the body undergoes an 'ebb' metabolism, also known as shock, slowing down for the first twenty-four to forty-eight hours to assess the damage before entering the flow phase.11 The flow phase is characterized by marked inflammation, hypermetabolism, and insulin resistance lasting anywhere from three to ten days.11 The body eventually moves out of this hypercatabolic state into an anabolic state as it restores energy stores and heals surgical sites.

The 'ebb' metabolism results from the release of shock hormones by the adrenal glands in response to stimulation from the brain. The patient's basal metabolic rate, core temperature, and carbon dioxide levels all decrease in shock as the body responds to the trauma it has undergone.11 The release of catecholamines, cortisol, and aldosterone, as well as hypovolemia from surgery, influence the ebb state.11 Catecholamines such as norepinephrine and epinephrine drive the shock response, with cascading effects of tachycardia and hypertension through sympathetic activity.12 This phase lasts only up to two days, after which the body's immune system engages and shifts the body into a catabolic state.

In the catabolic state, the body enters a negative nitrogen balance due to circulating catecholamines acting on the liver, pancreas, and kidneys.12 The literature states that nutritional interventions must be made early because of this metabolic shift.13 The shift increases energy expenditure and the needs of the organs affected by metabolic, endocrine, and immunological changes, enabling the body to mend itself after surgical trauma.13 Insulin resistance is commonly seen as a marker of metabolic stress, reflecting the pancreatic hormonal response of glucagon release graded to the magnitude of the operation.3 Decreased glucose uptake, alongside the loss of lean body mass in this catabolic state, increases the nutritional needs of the patient.3 Cells fall into two groups: insulin-sensitive cells and non-insulin-dependent cells such as immune cells, endothelial cells, and neural cells.3 The latter take up the excess glucose and undergo glycolysis but have no storage capacity, creating free oxygen radicals that increase inflammation in the body.3 Early nutritional interventions such as early oral intake and immune-enhancing drinks can decrease insulin resistance during the ebb phase.11

Carbohydrate metabolism shifts and blood sugar levels rise due to glycogenolysis and gluconeogenesis driven by elevated circulating cortisol and catecholamines.14 Little insulin is released peri-operatively and post-operatively under surgical stress, so hyperglycemia may persist until the body heals and stress-hormone effects subside.12 Surgery also directly affects protein and fat metabolism. Elevated cortisol increases gluconeogenesis and proteolysis, breaking down skeletal muscle.14 The freed amino acids can be used to form other proteins or converted to glucose for cellular metabolism.14 Cortisol and catecholamines also break down fatty acids through lipolysis and gluconeogenesis to generate energy as glucose and ketone bodies.13 It is important to note that free fatty acids are the primary source of energy after trauma and surgery, especially in liver cells, where triglycerides supply 50-80% of the energy consumed.14 By promoting early oral intake, the ERAS protocol limits how much the body breaks itself down during the catabolic flow state, because there is food in the bowel to utilize instead. This promotes gastric motility, and the body can regulate itself better than in a fasting state.

Endocrine Systems

During surgical stress, the body undergoes endocrine changes to adapt to the trauma inflicted on it. These responses protect and heal the body post-surgery through the release of pituitary hormones and activation of the sympathetic nervous system.11 They are activated after the ebb state, when the body shifts toward catabolism and negative nitrogen balance. One example is the release of adrenocorticotropic hormone (ACTH) from the pituitary gland, which stimulates release of cortisol and aldosterone from the adrenal cortex (Table 1). The pituitary gland also releases growth hormone (GH), which has an anti-insulin effect and a direct role in metabolism.

Another factor is antidiuretic hormone (ADH), also released by the pituitary gland. It directly promotes retention of water and sodium to raise blood pressure and maintain cardiovascular homeostasis.11 Table 2 reflects the hormonal fluctuations of catabolism. Within the ERAS guidelines, placement of an epidural at the thoracic spinal cord (T4-T9) can block the majority of these effects and thereby markedly decrease insulin resistance in recovering patients.3

The sympathetic nervous system is also affected, releasing catecholamines that cause tachycardia, tachypnea, and vasoconstriction. After 24-48 hours the body shifts and begins releasing the hormones noted above (ACTH, GH, ADH). Together these systems produce a catabolic state of shock in which the body attempts to regain control and heal itself.

Immunological Changes

Immunological function also shifts in the post-operative period toward an inflammatory state. Cells affected by the surgical procedure release chemical mediators that recruit macrophages.13 Macrophages release pro-inflammatory cytokines in the affected area, including IL-1β, IL-6, and TNF-α, which as part of the innate immune response directly influence neighboring cells and recruit the adaptive immune system to aid the healing process.13 In the acute setting, these markers promote a healing environment and reduce infection at the operative site.12 Other innate immune responses include raising the body's temperature and vasodilation, which increases blood flow and causes redness and swelling of the immediate area. Monitoring for infection and routine antibiotics are generally in effect after surgery to reduce the risk of post-operative infection.12 Anti-inflammatory nutritional interventions such as HMB and omega-3 fatty acid supplementation shorten the inflammatory response and promote wound healing, as do laparoscopic procedures and antibiotic use.2

Nutritional Therapy and Fluid Interventions

Pre-operative nutrition interventions for ERAS patients entering the hospital take the form of a clear-liquid diet order on the morning of surgery, up until two hours before the operation. Rather than fasting as in traditional methods, the patient may also have a high-carbohydrate beverage before entering the hospital as part of the ERAS protocol.15 The patient will also take a bowel prep to cleanse the bowel and decrease the risk of infection in case of bowel perforation. The bowel prep's purpose is to clean out the gastrointestinal tract, though there is some concern about upsetting gastrointestinal function.15 Nutritional assessments must, of course, be taken into account to determine any necessary interventions for malnutrition. In severely malnourished patients, immune-enriched enteral nutrition for ten to fourteen days before major surgery may improve surgical outcomes over standard enteral nutrition in pancreatic and bowel ERAS patients.16 However, the dietitian should use discretion about what the patient needs based on their assessment.

ERAS recommendations suggest that the patient receive minimal fluid intake, both intravenously and orally.8 This varies by patient to prevent dehydration; the goal is to avoid bowel edema from fluid overload. However, adequate fluid should still be given to correct any intravascular deficit, replace ongoing losses, and meet maintenance needs.15

Post-operative edema is common in patients who struggle to maintain oncotic pressure, which can result from overhydration, blood loss, and low albumin levels. Several ERAS components decrease the risk of post-operative edema and its complications,15 including removing drains and tubes and stopping additional intravenous fluids to prevent overhydration of the patient (Figure 1).

Other post-operative nutritional interventions include supplementation with arginine, glycine, β-hydroxy β-methylbutyrate (HMB), and omega-3 fatty acids to support the immune system, increase protein synthesis and muscle tissue, and decrease overall length of stay.12 Patients commonly develop a post-operative paralytic ileus, seen especially after opioid dosing and excess fluid.8 This can postpone oral intake and ambulation; therefore, if the patient is at risk of malnutrition and further deterioration, other forms of nutrition should be utilized.

Enteral and Parenteral Indications for ERAS Surgery

Enteral nutrition (EN) may be administered to ERAS patients who are unable to feed orally 7-10 days after the operation, or whose oral intake remains inadequate after the procedure. These patients are generally at risk of deterioration and are commonly found in the surgical intensive care unit (SICU) for post-operative care. Infusion of EN offers many of the same benefits as oral intake when initiated within 24-48 hours post-operatively in patients who cannot tolerate oral feeding; however, ASPEN guidelines suggest using EN only when the patient cannot tolerate oral intake.

Benefits include fewer septic and malnutrition complications and a faster recovery.17 This reflects the maintenance of gut barrier function, which plays a direct role in immunity through the tight junctions of the epithelial lining of the bowel.18 Tailoring nutritional needs to the surgical subset and the individual patient is, of course, essential, through a nutritional assessment and review of clinical status. Early feeding is strongly supported by the evidence,10 but attention should still be paid to checking labs and assessing the risk of refeeding syndrome regardless of when EN is initiated. Patients with a short-term need for EN should use oral or nasogastric tubes; if EN is needed for longer than three weeks due to prolonged healing, a gastrostomy tube should be assessed and placed to ensure continued use of the bowel.

According to the American Society for Parenteral and Enteral Nutrition (ASPEN), parenteral nutrition is indicated when the gastrointestinal tract is no longer available for use: ischemia or a chronic paralytic ileus; a mean arterial pressure (MAP) of 50-60 with a firm, distended abdomen; severe short bowel syndrome; mesenteric ischemia; paralysis of the intestinal muscles; small bowel obstruction; or a gastrointestinal fistula.12 In the intensive care unit (ICU), the largest concern is patients at high nutritional risk in whom enteral nutrition (EN) is not feasible for one of the reasons listed. There are, of course, contraindications to these standards, such as a patient expected to meet needs within 14 days, risk exceeding benefit, or a prognosis that does not warrant aggressive nutrition support.10 Parenteral nutrition (PN) is the last resort for nutritional access. Transitioning the patient from parenteral to enteral nutrition whenever possible is preferred to maintain bowel function and prevent atrophy of the mucosal lining. Although ERAS does not directly address these conditions, ASPEN's guidelines aim to prevent deterioration in surgical patients regardless of the technique utilized.
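The route-selection hierarchy described across these sections (oral first, then enteral, with parenteral as a last resort) can be summarized as a decision sketch. This is a simplification for illustration only; the thresholds follow the text, and the function itself is hypothetical, not an ASPEN or ERAS tool:

```python
def feeding_route(oral_tolerated: bool,
                  gut_functional: bool,
                  expected_en_weeks: float) -> str:
    """Pick a nutrition-support route per the hierarchy in the text:
    oral intake first; enteral when oral intake fails but the gut works
    (oral/nasogastric tube short term, gastrostomy beyond three weeks);
    parenteral only when the gut cannot be used (e.g. ischemia, chronic
    paralytic ileus, obstruction)."""
    if oral_tolerated:
        return "oral intake"
    if not gut_functional:
        return "parenteral nutrition"  # last resort
    if expected_en_weeks > 3:
        return "enteral via gastrostomy tube"
    return "enteral via oral/nasogastric tube"

print(feeding_route(False, True, 1))   # enteral via oral/nasogastric tube
print(feeding_route(False, False, 2))  # parenteral nutrition
```

In practice this decision also incorporates labs, refeeding-syndrome risk, and clinical judgment, as the text notes; the sketch only captures the ordering of routes.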

Conclusion

Enhanced Recovery After Surgery has many benefits when implemented well in hospital surgical programs, including but not limited to decreasing length of stay, increasing patient comfort, and decreasing facility costs through strategic methods of patient care.2 This is achieved by integrating medical teams in a multi-disciplinary approach, changing practices, and auditing as needed to identify problems and outcomes.2 Although moving on from traditional methods may be difficult for all staff to adopt, the benefits outweigh the costs of ERAS, and further investigation should be encouraged for surgical patients. There is extensive research on elective and planned surgery; however, no publications have yet addressed ERAS methods for trauma and emergency surgery, as ERAS is still relatively new to the surgical field.

Nutrition should be considered a major component of recovery and of teaching practices, given its ability to prevent post-operative mortality and optimize patient success.7 Nutritional interventions are essential for decreasing the risk of post-operative infection, length of stay, and mortality across patient populations. This is especially necessary for those who arrive at the hospital malnourished, and assessments should be conducted to justify any nutritional support before surgical procedures.17 Key interventions include carbohydrate-dense drinks before surgery and early oral intake post-operatively.15

Analysis of the endocrine, immunological, and metabolic changes of surgical stress identifies key interventions, such as early nutrition, that influence outcomes.19 Post-operative fluid restitution and encouragement of oral intake as soon as possible directly improve patient prognosis and optimize health by maintaining gastrointestinal integrity.14 Patients who undergo surgery while malnourished generally see greater complications and a higher risk of mortality; therefore, assessments by the surgical staff, dietitian, and nursing team are essential to best-care practice. If oral intake is precluded by another complication or cannot meet the patient's needs, nutrition support should be investigated, particularly when the patient is at risk of deterioration. ERAS has a positive impact on surgical methods and will continue to feature in educational models, supported by the ERAS Society and its world congress.20

References

1. Ljungqvist O, Scott M, Fearon KC. Enhanced Recovery After Surgery. JAMA Surg. 2017;152(3):292-298.

2. Ljungqvist O. Improving Surgery By Talking To Each Other. TED Talks. 2018. https://www.youtube.com/watchlist=PLamACy4c25FN7PFuwOAxD9YwE1TF4zZPX&time_continue=98&v=bnzRjO1oP0Y. Accessed March 26, 2018.

3. Ljungqvist O. Jonathan E. Rhoads Lecture 2011. JPEN J Parenter Enteral Nutr. 2012;36(4):389-398.

4. Stone AB, Grant MC, Roda CP, et al. Implementation Costs of an Enhanced Recovery After Surgery Program in the United States: A Financial Model and Sensitivity Analysis Based on Experiences at a Quaternary Academic Medical Center. J Am Coll Surg. 2016;222(3):219-225.

5. Lassen K, Coolsen MM, Slim K, et al. Guidelines for perioperative care for pancreaticoduodenectomy: Enhanced Recovery After Surgery (ERAS®) Society recommendations. Clin Nutr. 2012;31(6):817-830.

6. Nutrition and the surgical patient. Nutrition and the surgical patient – SurgWiki. http://www.surgwiki.com/wiki/Nutrition_and_the_surgical_patient. Published May 1, 2012. Accessed March 27, 2018.

7. Soeters PB. The Enhanced Recovery After Surgery (ERAS) program: benefit and concerns. Am J Clin Nutr. 2017;106(1):10-11.

8. Weimann A, Braga M, Carli F, et al. ESPEN guideline: Clinical nutrition in surgery. Clin Nutr. 2017;36(3):623-650.

9. Gillis C, Nguyen TH, Liberman AS, Carli F. Nutrition Adequacy in Enhanced Recovery After Surgery. Nutr Clin Pract. 2015;30(3):414-419.

10. McClave SA, et al. Guidelines for the Provision and Assessment of Nutrition Support Therapy in the Adult Critically Ill Patient. JPEN J Parenter Enteral Nutr. 2016;40(2):159-211.

11. Marian M. Surgery_trauma.ppt. 2018.

12. Desborough J. The stress response to trauma and surgery. Br JAnaesth. 2000;85(1):109-117. doi:10.1093/bja/85.1.109.

13. Sugisawa N, Tokunaga M, Makuuchi R, et al. A phase II study of an enhanced recovery after surgery protocol in gastric cancer surgery. J Gastric Cancer. 2015;19(3):961-967.

14. Simsek T, Simsek HU, Canturk NZ. Response to trauma and metabolic changes: posttraumatic metabolism. Turk J Surg. 2014;30(3):153-159.

15. Steenhagen E. Enhanced Recovery After Surgery: It's Time to Change Practice! Nutr Clin Pract. 2015;31(1):18-29.

16. Bozzetti F, Mariani L. Perioperative nutritional support of patients undergoing pancreatic surgery in the age of ERAS. J Nutr. 2014;30(11-12):1267-1271.

17. Moore SM, Burlew CC. Nutrition Support in the Open Abdomen. Nutr Clin Pract. 2015;31(1):9-13.

18. Fukushima R, Kaibori M. Enhanced Recovery after Surgery. Singapore: Springer Singapore; 2018.

19. Zhang J-M, An J. Cytokines, Inflammation, and Pain. Int Anesthesiol Clin. 2007;45(2):27-37. doi:10.1097/aia.0b013e318034194e.

20. Previous ERAS Congress. ERAS Society. http://erassociety.org/about/history/eras-world-congress/. Accessed March 27, 2018.

Figure 1

Figure 2


‘Peak Oil’ – what are the solutions?

The ability to harness energy sources and put them towards a productive use has played a crucial role in economic development worldwide. Easily accessible oil helped to fuel continued expansion in the 20th century. Agricultural production was transformed by motorised farm equipment and petroleum-based fertilisers and pesticides. Cars, trucks and airplanes powered by oil products revolutionised the transportation of people and goods. Oil provides fuel for home heating, electricity production, and to power industrial and agricultural equipment. It also provides the source material for the construction of plastics, many fertilisers and pesticides and many industrial chemicals and materials. It is now difficult to find any product that does not require the use of oil at some point in the production process.

Oil has several advantages over other fossil fuels: it is easily transportable and energy-dense, and when refined it is suitable for a wide variety of uses. Considering the important role that oil plays in our economy, if persistent shortages were to emerge, the economic implications could be enormous. However, there is no consensus as to how seriously the threat of oil resource depletion should be taken. Some warn of a colossal societal collapse in the not-too-distant future, while others argue that technological progress will allow us to shift away from oil before resource depletion becomes an issue.

How much of a problem oil depletion poses depends on the amount of oil that remains accessible at reasonable cost, and on how quickly the development of alternatives allows demand for oil to be reduced. This is what the term 'peak oil' refers to: the point at which the demand for oil outstrips availability. Demand and supply each evolve over time following patterns grounded in historical data, while supply is also constrained by resource availability. There is no mechanism for the market on its own to address concerns about climate change. However, if policies are put in place to build the costs of climate change into the price of fossil fuel consumption, the resulting market incentives should lead efficiently to the desired emission reductions.

A while ago the media was filled with stories about peak oil; it even featured in an episode of The Simpsons. Peak oil, in basic terms, is the point at which we have used all the easy-to-extract oil and are left only with hard-to-reach reserves, which are in turn expensive to extract and refine. There is still a huge amount of debate among geologists and petroleum-industry experts about how much oil is left in the ground. However, the idea of a near-term peak in world oil supplies has since been discredited. The term now used is 'peak oil demand': the idea that the proliferation of electric cars and other sources of energy means demand for oil will reach a maximum and start to decline; indeed, consumption levels in some parts of the world have already begun to stagnate.

Another theory holds that with supply beginning to exceed demand, too little investment is going into future oil exploration and development. Without this investment production will decline; yet production is not currently declining because of supply problems, but because we are moving into an age of oil abundance in which other factors drive output down. There has been an explosion of popular literature predicting that oil production will peak soon and that oil shortages will force major lifestyle changes in the near future; a good example is Heinberg (2003). The point at which oil production reaches a peak and begins to decline permanently has been referred to as 'Peak Oil'. Predictions for when this will occur range from 2007 to 2025 (Hirsch 2005).

The Hirsch Report of 2005 concluded that it would take a modern industrial nation such as the UK or the United States at least a full decade to prepare for peak oil. Since 2005 there has been some movement towards solar and wind power together with more electric cars but nothing that deals with the scale of the problem. This has been compounded by Trump coming to power in the United States and deciding to throw the energy transition into reverse, discouraging alternative energy and expanding subsidies for fossil fuels.

What is happening now

Many factors are reported in the news to cause changes in oil prices: supply disruptions from wars and other political factors, from hurricanes or other random events; changes in demand expectations based on economic reports, financial-market events or even weather in areas where heating oil is used; changes in the value of the dollar; reports of inventory levels; and so on. These are all factors that affect the supply of and demand for oil, but they often influence the price of oil before they have any direct impact on the current supply or demand for crude. Last year, the main forces pushing the oil market higher were the agreement by OPEC and its partners to lower production and the growth of global demand. This year, an array of factors is pressuring the oil markets: the US sanctions that threaten to cut Iranian oil exports, and falling production in Venezuela. Moreover, supply disruptions in Libya, the Canadian tar sands, Norway and Nigeria add to the uncertainties, as does erratic policymaking in Washington, complete with threats to sell off part of the US strategic reserve, and a weaker dollar. Goldman Sachs continues to expect that Brent crude prices could retest $80 a barrel this year, but probably only late in 2018: “Production disruptions and large supply shifts driven by US political decisions are the drivers of this new volatility, with demand remaining robust so far.” Brent crude is expected to trade in the $70-$80 a barrel range in the immediate future.

OPEC

Saudi Arabia and Russia had started to raise production even before the 22 June 2018 OPEC meeting that sought to address the shrinking global oil supply and rising prices. OPEC had been over-complying with the cuts agreed at the November 2016 meeting, thanks to additional cuts from Saudi Arabia and Venezuela. The 22 June 2018 meeting decided to increase production to bring output more closely into line with the production-cut agreement. After the meeting, Saudi Arabia pledged a “measurable” supply boost but gave no specific numbers. Tehran’s oil minister warned his Saudi Arabian counterpart that the 22 June revision to the OPEC supply pact does not give member countries the right to raise oil production above their targets. The Saudis, Russia and several of the Gulf Arab states increased production in June but seem reluctant to expand much further. During the summer months, the Saudis always need to burn more crude in their power stations to combat the very high temperatures of their summer.

US Shale oil production

According to the EIA’s latest Drilling Productivity Report, US unconventional oil production is projected to rise by 143,000 b/d in August to 7.470 million b/d. The Permian Basin is seen far outdistancing other shale basins in monthly growth in August, up 73,000 b/d to 3.406 million b/d. However, drilled but uncompleted (DUC) wells in the Permian rose by 164 in June to 3,368, one of the largest builds in recent months. Total US DUCs rose by 193 to 7,943 in June. US energy companies last week cut oil rigs by the most in a week since March, as the rate of growth has slowed over the past month or so amid recent declines in crude prices. Included with otherwise optimistic forecasts for US shale oil was the caveat that the DUC production figures are sketchy, as current information is difficult for the EIA to obtain, with little specific data being provided to Washington by E&Ps or midstream operators. Given all the publicity surrounding constraints on moving oil from the Permian to market, the EIA admits that it “may overestimate production due to constraints.”

The Middle East and North Africa

Iran

Iran’s supreme leader, Ayatollah Ali Khamenei, called on state bodies to support the government of President Hassan Rouhani in fighting US economic sanctions. The likely return of US economic sanctions has triggered a rapid fall in Iran’s currency, protests by bazaar traders usually loyal to the Islamist rulers, and a public outcry over alleged price gouging and profiteering. The speech to members of Rouhani’s cabinet was clearly aimed at conservative elements in the government who have been critical of the president and his policies of cooperation with the West, and was a call for unity at a time that seems likely to be one of great economic hardship. Protests spread to more than 80 Iranian cities and towns. At least 25 people died in the unrest, the most significant expression of public discontent in years, and the protests took on a rare political dimension, with a growing number of people calling on supreme leader Khamenei to step down. Although there is much debate over the effectiveness of the impending US sanctions, some analysts say that Iran’s oil exports could fall by as much as two-thirds by the end of the year, putting oil markets under massive strain amid supply outages elsewhere in the world. Some worst-case scenarios forecast a drop to only 700,000 b/d, with most of Tehran’s exports going to China and smaller shares going to India, Turkey and other buyers with waivers. China, the biggest importer of Iranian oil at 650,000 b/d according to Reuters trade-flow data, is likely to ignore US sanctions.

Iraq

Iraq’s future is again in trouble as protests erupt across the country. The protests began in southern Iraq after the government was accused of doing nothing to alleviate a deepening unemployment crisis, water and electricity shortages, and rampant corruption. The demonstrations spread to major population centers including Najaf and Amarah, and discontent is now stirring in Baghdad. The government has been quick to promise more funding and investment in the development of chronically underdeveloped cities, but this has done little to quell public anger. Iraqis have heard these promises countless times before, and with a water and energy crisis striking in the middle of scorching summer heat, people are less inclined to believe what their government says. The civil unrest has begun to diminish in southern Iraq, leaving the country’s oil sector shaken but secure, though protesters have vowed to return. Operations at several oil fields have been affected as international oil companies and service companies have temporarily withdrawn staff from some areas that saw protests. The government claims that the production and export of oil have remained steady during the protests. With Iran refusing to provide for Iraq’s electricity needs, Baghdad has now also turned to Saudi Arabia to see if its southern Arab neighbor can help alleviate the crises it faces.

Saudi Arabia

The Saudi Aramco IPO has been touted for the past two years as the centerpiece of an ambitious economic reform program driven by Crown Prince Mohammed bin Salman to diversify the Saudi economy beyond oil. Saudi Arabia expects its crude exports to drop by roughly 100,000 b/d in August as the kingdom tries to ensure it does not push oil into the market beyond its customers’ needs.

Libya

Libya reopened its eastern oil ports and started to ramp up production from 650,000 to 700,000 b/d; output is expected to rise further now that shipments have resumed at the eastern ports, which re-opened after a political standoff.

China

China’s economy expanded by 6.7 percent, its slowest pace since 2016. The pace of annual expansion announced is still above the government’s target of “about 6.5 percent” growth for the year, but the slowdown comes as Beijing’s trade war with the US adds to headwinds from slowing domestic demand. Gross domestic product had grown at 6.8 percent in the previous three quarters. Higher oil prices play a role in the slowing of demand, but the main factor is higher taxes on independent Chinese refiners, which are already cutting into the refining margins and profits of the ‘teapots’, who have grown over the past three years to account for around a fifth of China’s total crude imports. Under the stricter tax regulations and reporting mechanisms effective 1 March, however, the teapots can no longer avoid paying a consumption tax on refined oil product sales, as they did in the past three years, and their refining operations are becoming less profitable.

Russia

Russian oil production rose by around 100,000 b/d from May. From 1-15 July, the country’s average oil output was 11.215 million b/d, an increase of 245,000 b/d over May’s production. Amid growing speculation that President Trump will attempt to weaken US sanctions on Russia’s oil sector, US congressional leaders are pushing legislation to strengthen sanctions on Russian export pipelines and joint ventures with Russian oil and natural gas companies. Ukraine and Russia said they would hold further European Union-mediated talks on supplying Europe with Russian gas, a key first step towards renewing Ukraine’s gas transit contract, which expires at the end of next year.

Venezuela

Venezuela’s oil minister Manuel Quevedo has been talking about plans to raise the country’s crude oil production in the second half of the year. However, no one else thinks or claims that Venezuela can soon reverse its steep production decline, which has seen it lose more than 40,000 b/d of oil production every month for several months now. According to OPEC’s secondary sources in the latest Monthly Oil Market Report, Venezuela’s crude oil production dropped by 47,500 b/d in June from May, to average 1.340 million b/d. Amid a collapsing regime, widespread hunger, and medical shortages, President Nicolás Maduro continues to grant generous oil subsidies to Cuba. Venezuela is believed to still supply Cuba with around 55,000 barrels of oil per day, costing the nation around $1.2 billion per year.

Alternatives to Oil

In its search for secure, sustainable and affordable supplies of energy, the world is turning its attention to unconventional energy resources. Shale gas is one of them. It has turned the North American gas markets upside down and is making significant strides in other regions. The emergence of shale gas as a potentially major energy source could have serious strategic implications for geopolitics and the energy industry.

Uranium and Nuclear

The nuclear industry has a relatively short history: the first nuclear reactor went critical in 1942. Uranium is the main source of fuel for nuclear reactors. Worldwide output of uranium has recently been on the rise after a long period of declining production. Identified uranium resources have grown by 12.5% since 2008, and they are sufficient for over 100 years of supply at current requirements.

Total nuclear electricity production has grown during the past two decades, reaching an annual output of about 2,600 TWh by the mid-2000s, although three major nuclear accidents have slowed or even reversed its growth in some countries. The nuclear share of total global electricity production peaked at 17% in the late 1980s, but has since fallen, dropping to 13.5% in 2012. In absolute terms, nuclear output remains broadly at the same level as before, but its relative share of power generation has decreased, mainly due to the Fukushima nuclear accident.

Japan used to be one of the countries with a high share of nuclear (30%) in its electricity mix and high production volumes. Today, Japan has only two of its 54 reactors in operation. The rising costs of nuclear installations and the lengthy approval times required for new construction have had an impact on the nuclear industry. The slowdown has not been global: new countries, primarily among the rapidly developing economies of the Middle East and Asia, are going ahead with their plans to establish a nuclear industry.

Hydro Power

Hydro power provides a significant amount of energy throughout the world and is present in more than 100 countries, contributing approximately 15% of global electricity production. The five largest markets for hydro power in terms of capacity are Brazil, Canada, China, Russia and the United States. China significantly exceeds the others, representing 24% of global installed capacity. In several other countries, hydro power accounts for over 50% of all electricity generation, including Iceland, Nepal and Mozambique. During 2012, an estimated 27-30 GW of new hydro power capacity and 2-3 GW of pumped storage capacity was commissioned.

In many cases, the growth in hydro power was facilitated by lavish renewable-energy support policies and CO2 penalties. Over the past two decades, total global installed hydro power capacity has increased by 55%, while actual generation has increased by only 21%. Since the last survey, global installed hydro power capacity has increased by 8%, but total electricity produced has dropped by 14%, mainly due to water shortages.
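Those two growth figures imply declining utilization: capacity grew much faster than generation. A quick back-of-envelope check, using only the percentages quoted above, of the implied change in the fleet-average capacity factor:

```python
# Implied change in hydro capacity factor over the two decades cited above.
capacity_growth = 0.55    # installed capacity up 55%
generation_growth = 0.21  # actual generation up only 21%

# Capacity factor is generation per unit of installed capacity, so its
# relative change is the ratio of the two growth multipliers.
cf_ratio = (1 + generation_growth) / (1 + capacity_growth)
print(f"Average capacity factor changed by about {100 * (cf_ratio - 1):.0f}%")
```

In other words, the average hydro plant is producing roughly a fifth less electricity per unit of installed capacity than two decades earlier, consistent with the water shortages mentioned above.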

Solar PV

Solar energy is the most abundant energy resource, and it is available for use in both direct (solar radiation) and indirect (wind, biomass, hydro, ocean, etc.) forms. About 60% of the total energy emitted by the sun reaches the Earth’s surface. Even if only 0.1% of this energy could be converted at an efficiency of 10%, it would be several times larger than the world’s total electricity generating capacity of about 5,000 GW. The statistics on solar PV installations are patchy and inconsistent. The table below presents the values for 2011, but comparable values for 1993 are not available.
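The claim above can be sanity-checked with round numbers. The sketch below assumes roughly 1.0e17 W of solar power reaching the Earth's surface (an assumption; the exact multiple of world capacity depends on the inputs chosen, so the point is the order of magnitude, not the precise factor):

```python
# Back-of-envelope check of the solar potential claim. All inputs are
# round-number assumptions for illustration.
SOLAR_AT_SURFACE_W = 1.0e17   # ~100 PW reaching Earth's surface (assumed)
CAPTURE_FRACTION = 0.001      # only 0.1% of that energy captured
CONVERSION_EFFICIENCY = 0.10  # converted at 10% efficiency
WORLD_CAPACITY_W = 5_000e9    # ~5,000 GW of global generating capacity

usable_w = SOLAR_AT_SURFACE_W * CAPTURE_FRACTION * CONVERSION_EFFICIENCY
ratio = usable_w / WORLD_CAPACITY_W
print(f"Usable solar power: {usable_w / 1e9:.0f} GW "
      f"(about {ratio:.1f}x world generating capacity)")
```

With these particular inputs the multiple comes out around 2x; more generous assumptions about surface insolation or efficiency push it higher, but in every case the result is terawatts, i.e. comparable to or exceeding all installed generating capacity.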

The use of solar energy is growing strongly around the world, in part due to rapidly declining solar panel manufacturing costs. For instance, between 2008 and 2011, PV capacity increased in the USA from 1,168 MW to 5,171 MW, and in Germany from 5,877 MW to 25,039 MW. Anticipated changes in national and regional legislation regarding support for renewables are likely to moderate this growth.
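The quoted installation figures imply strikingly fast compound growth. A small calculation of the implied annual growth rates over the three years from 2008 to 2011:

```python
# Implied compound annual growth rate (CAGR) of PV capacity from the
# figures quoted above (2008 -> 2011 is three years of growth).
usa = (1_168, 5_171)       # MW installed, 2008 and 2011
germany = (5_877, 25_039)  # MW installed, 2008 and 2011

def cagr(start, end, years=3):
    """Compound annual growth rate between two capacity figures."""
    return (end / start) ** (1 / years) - 1

print(f"USA: {cagr(*usa):.0%}/yr, Germany: {cagr(*germany):.0%}/yr")
```

Both markets were growing at over 60% per year, which is why even a moderation of that pace, as the paragraph anticipates, would still leave very rapid expansion.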

Conclusion

The rapid consumption of fossil fuels has contributed to environmental damage. The use of these fuels, including oil, releases chemicals that contribute to smog, acid rain and mercury contamination, and carbon dioxide emissions from fossil fuel consumption are the main driver of climate change, the effects of which are likely to become more and more severe as temperatures rise. The depletion of oil and other fossil resources leaves less available to future generations and increases the likelihood of price spikes if demand outpaces supply.

One of the most intriguing conclusions from this idea is that a new “age of abundance” could alter the behavior of oil producers. In the past, some countries (notably OPEC members) restrained output, husbanding resources for the future, betting that scarcity would increase the value of their holdings over time. However, if a peak in demand looms just over the horizon, oil producers could rush to maximize their production in order to extract as much value from their reserves while they can. Saudi oil minister Sheikh Ahmed Zaki Yamani was famously quoted as saying, “The Stone Age didn’t end for lack of stone, and the oil age will end long before the world runs out of oil.” This quote reflects the view that the development of new technologies will lead to a shift away from oil consumption before oil resources are fully depleted. Nine of the ten recessions between 1946 and 2005 were preceded by spikes in oil prices, and the latest recession followed the same pattern.

Extending the life of oil fields, let alone investing in new ones, will require large volumes of capital, but that might be met with skepticism from wary investors as demand begins to peak. It will be difficult to attract investment to a shrinking industry, particularly if margins continue to be squeezed. Peak demand should be an alarming prospect for OPEC, Russia and the other major oil-producing countries: any and all oil producers will find themselves fighting more aggressively for a shrinking market.

The precise date at which oil demand hits a high point and then enters decline has been the subject of much debate, and a topic that has attracted a lot of interest just in the last few years. Consumption levels in some parts of the world have already begun to stagnate, and more and more automakers have begun to ratchet up their plans for electric vehicles. But fixating on the exact date the world will hit peak demand misses the whole point. The focus shouldn’t be on the date at which oil demand peaks, but on the fact that the peak is coming. In other words, oil will become less important in fueling the global transportation system, which will have far-reaching consequences for oil producers and consumers alike. The implications of a looming peak in oil consumption are massive. Without an economic transformation, or at least serious diversification, oil-producing nations that depend on oil revenues for both economic growth and public spending face an uncertain future.


Water purification and addition of nutrients as disaster relief

1. Introduction

1.1 Natural Disasters

Natural disasters are naturally occurring events that threaten human lives and cause damage to property. Examples of natural disasters include hurricanes, tsunamis, earthquakes, volcanic eruptions, typhoons, droughts, tropical cyclones and floods (Pask, R., et al., 2013). They are inevitable and can often cause calamitous implications such as water contamination and malnutrition, especially in developing countries like the Philippines, which is particularly prone to typhoons and earthquakes. (Figure 1)

Figure 1 The global distribution of natural disaster risk (The United Nations University World Risk Index 2014)

1.1.1 Impacts of Natural Disaster

The globe faces impacts of natural disasters on human lives and the economy on an astronomical scale. According to a 2014 report by the United Nations, since 1994, 4.4 billion people have been affected by disasters, which claimed 1.3 million lives and cost US$2 trillion in economic losses. Developing countries are more likely to suffer a greater impact from natural disasters than developed countries, as natural disasters push more people below the poverty line, increasing their numbers by more than 50 percent in some cases. Moreover, it is expected that by 2030, up to 325 million extremely poor people will live in the 49 most hazard-prone countries (Child Fund International, 2013, June 2). Hence there is a need for disaster relief to save the lives of those affected, especially in developing countries such as the Philippines.

1.1.2 Lack of access to clean water

After a natural disaster strikes, severe implications such as water contamination occur.

Besides, natural disasters know no national borders or socioeconomic status (Malam, 2012). For example, Hurricane Katrina, which struck New Orleans, a developed city, destroyed 1,200 water systems, and 50% of existing treatment plants needed rebuilding afterwards (Copeland, 2005). This left the citizens of New Orleans with a shortage of drinking water. Furthermore, after the 7.0-magnitude earthquake that struck Haiti, a developing country, in 2010, there was no plumbing left underneath Port-au-Prince, and many of the water tanks and toilets were destroyed (Valcárcel, 2010). These are just some of the many scenarios that can bring about water scarcity.

The lack of preparedness to prevent the destruction caused by natural disasters and the lack of readiness to respond are claimed to be the two major reasons for their catastrophic results (Malam, 2012). Hence, the aftermath of destroyed water systems and a lack of water affects all geographical locations regardless of socioeconomic status.

1.2 Disaster relief

Disaster relief organisations such as The American Red Cross help countries that are recovering from natural disasters by providing these countries with the basic necessities.

After a disaster, the Red Cross works with community partners to provide hot meals, snacks and water to shelters or from Red Cross emergency response vehicles in affected neighborhoods. (Disaster Relief Services | Disaster Assistance | Red Cross.)

The International Committee of the Red Cross/Red Crescent (ICRC) reported that its staff had set up mobile water treatment units. These were used to distribute water to around 28,000 people in towns along the southern and eastern coasts of the island of Samar, and to other badly-hit areas including Basey, Marabut and Guiuan. (Pardon Our Interruption. (n.d.))

Figure 2: Children seeking help after a disaster(Pardon Our Interruption. (n.d.))

Figure 3: Massive Coastal Destruction from Typhoon Haiyan (Pardon Our Interruption. (n.d.))

1.3 Target audience: Tacloban, Leyte, The Philippines

As seen in Figures 4 and 5, Tacloban is the provincial capital of Leyte, a province in the Visayas region of the Philippines. It is the most populous city in the Eastern Visayas region, with a total population of 242,089 as of August 2015 (Census of Population, 2015).

Figure 4: Location of Tacloban in the Philippines (Google Maps)

Figure 5: Location of Tacloban in the Eastern Visayas region (Google Maps)

Due to the country’s location on the Pacific Ring of Fire (Figure 6), more than 20 typhoons occur in the Philippines each year (Lowe, 2016).

Figure 6: The Philippines’ position on the Pacific Ring of Fire (Mindoro Resources Ltd., 2004)

In 2013, Tacloban was struck by Super Typhoon Haiyan, known locally as ‘Yolanda’. The Philippine Star, a local digital news organisation, reported more than 30,000 deaths from that disaster alone (Avila, 2014). Tacloban was left in shambles after Typhoon Haiyan and requires much aid to restore the affected area, especially with a death toll running to five figures.

1.4 Existing measures and their gaps

Initially, the government’s response to the disaster was slow. For the first three days after the typhoon hit, there was no running water, and dead bodies were found in wells. In desperation for water to drink, some even smashed the pipes of the Leyte Metropolitan Water District. However, even when drinking water was restored, it was contaminated with coliform bacteria. Many people thus became ill, and one baby died of diarrhoea (Dizon, 2014).

The gaps were thus the government’s long response time (Gap 1) and the contamination that came with the restored water supply (Gap 2). The productivity of the people was affected, and hence there is an urgent need for a better solution to the problem of late restoration of clean water.

1.5 Reasons for Choice of Topic

There is high severity, since ingestion of contaminated water is the leading cause of infant mortality and illness in children (International Action, n.d.), and more than 50% of the population is undernourished (World Food Programme, 2016). Much support and humanitarian aid has been given by organisations such as the World Food Programme and The Water Project, yet more efforts are needed to lower death rates, which shows the problem’s persistence. It is also an urgent issue, as malnourishment often leads to death and children’s lives are threatened.

Furthermore, 8 of the 10 world cities most at risk of natural disasters are in the Philippines (see Figure _). Thus the magnitude is huge, as natural disasters are highly frequent: while people are still recovering from one disaster, another hits them, worsening an already severe situation.

Figure _: Top 5 Countries of the World Risk Index of Natural Disasters 2016 (Source: UN)

WWF CEO Jose Maria Lorenzo Tan said that “on-site desalination or purification” would be a cheaper and better solution to the lack of water than shipping in bottled water for a long period of time (Dizon, 2014). Relying on itself for water, rather than on external humanitarian aid that might incur greater debt, can cushion the high expense of rebuilding the country. Hence, there is a need for a water purification plant that provides potable water immediately when a natural disaster strikes. The plant will also have to provide cheap and affordable water until water systems are restored to normal.

Living and growing up in Singapore, we have never experienced natural disasters first hand. We can only imagine the catastrophic destruction and suffering that accompany them. With “Epione Solar Still” (named after the Greek goddess of the soothing of pain), we hope to help many Filipinos access clean and drinkable water, especially children, who clearly do not deserve to experience such tragedy and suffering.

1.6 Case study: Disaster relief in Japan

Located on the Pacific Ring of Fire, Japan is vulnerable to natural disasters such as earthquakes, tsunamis, volcanic eruptions, typhoons, floods and mudslides due to its geographical location and natural conditions (Japan Times, 2016).

In 2011, an extremely powerful 9.0-magnitude earthquake hit off the coast of Fukushima, causing a tsunami that devastated the northeast coast and killed 19,000 people. It was the worst earthquake to hit Japan in recorded history, and it damaged the Fukushima plant, causing nuclear leakage and leaving contaminated water that currently exceeds 760,000 tonnes (The Telegraph, 2016). The earthquake and tsunami caused the nuclear power plant to fail, and radiation leaked into the ocean and escaped into the atmosphere. Many evacuees have still not returned to their homes and, as of January 2014, the Fukushima nuclear plant still poses a threat, according to status reports by the International Atomic Energy Agency (Natural Disasters & Pollution | Education – Seattle PI, n.d.).

Disaster Relief

In the case of major disasters, the Japan International Cooperation Agency (JICA) deploys Japan Disaster Relief (JDR) teams, consisting of rescue, medical, expert and infectious-disease response teams, as well as the Self-Defence Force (SDF), to provide relief aid to affected countries. It provides emergency relief supplies such as blankets, tents and water purifiers; some are also stockpiled as reserve supplies in places closer to disaster-prone areas in case disasters strike there and emergency relief is needed (JICA).

For example, during the Kumamoto earthquake in 2016, 1,600 soldiers joined the relief and rescue efforts. Troops delivered blankets and adult diapers to those in shelters. With water service cut off in some areas, residents hauled water from local offices to their homes to flush toilets (Japan hit by 7.3-magnitude earthquake | World news | The Guardian, 2016, April 16).

Solution to Fukushima water contamination

Facilities are used to treat the contaminated water. The main one is the Multi-nuclide Removal Facility (ALPS) (Figure _), which can remove most radioactive materials except tritium (TEPCO, n.d.).

Figure _: Structure of Multi-nuclide Removal Facility (ALPS) (TEPCO, n.d)

1.7 Impacts of Case Study

The treatment of contaminated water has been very effective: by April 2015, more than 80% of the contaminated water stored in tanks had been decontaminated, and more than 90% of radioactive materials had been removed in the process (METI, 2014).

1.8 Lessons Learnt

Destruction caused by natural disasters results in a lack of access to clean and drinkable water (L1)

Advancements in water purification technology can help provide potable water for the masses. (L2)

Natural disasters weaken immune systems, making people more vulnerable to diseases (L3)

1.9 Source of inspiration

Suny Clean Water’s solar still is made with cheap material alternatives, which helps provide more affordable water for underprivileged countries.

A fibre-rich paper is coated with carbon black (a cheap powder left over after the incomplete combustion of oil or tar) and layered over each section of a block of polystyrene foam that is cut into 25 equal sections. The foam floats on the untreated water, acting as an insulating barrier to prevent sunlight from heating up too much of the water below. The paper then wicks water upward, wetting the entire top surface of each section. A clear acrylic housing sits atop the styrofoam. (Figure _)

Figure _: How fibre-rich paper coated with carbon black is adapted into the solar still (Sunlight-powered purifier could clean water for the impoverished | Science | AAAS, 2017, February 2)

It is estimated that the materials needed to build it cost roughly $1.60 per square meter, compared with $200 per square meter for commercially available systems that rely on expensive lenses to concentrate the sun’s rays to expedite evaporation.
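For concreteness, the per-square-meter costs quoted above imply a price gap of roughly two orders of magnitude:

```python
# Cost comparison from the figures quoted above.
diy_cost = 1.60         # $/m^2, carbon-black paper still materials
commercial_cost = 200.0  # $/m^2, lens-based commercial systems

ratio = commercial_cost / diy_cost
print(f"The low-cost design is about {ratio:.0f}x cheaper per square meter")
```

That 125-fold difference is what makes the design plausible for disaster relief in low-income settings, where commercial systems are out of reach.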

1.10 Application of Lessons Learnt

| Gaps in current measures | Learning points | Applications to project | Key features in proposal |
| Developing countries lack the technology/resources to treat their water and provide basic necessities to their people. | Advanced technology can provide potable water readily. (L2) | Need for technology to purify contaminated water. | Solar distillation plant |
| Even with purification of water, the problem of malnutrition, which is worsened by natural disasters, is still unsolved. | Solution to provide vitamins to young children to boost immunity and lower vulnerability to diseases and illnesses. (L3) | Need for nutrient-rich water. | Nutrients infused into water using the concept of osmosis. |
| Even with the help of external organisations, less than 50% of households have access to safe water. | Clean water is still inaccessible to some people. (L1) | Increase accessibility to water. | Evaporate seawater (abundant around the Philippines) in solar still. (short-term solution) |

Figure _: Table of application of lessons learnt

2. Project Aim and Objectives

2.1 Aim

Taking into account the loopholes in current measures adopted to improve water purification and to reduce water contamination and malnutrition, our project proposes a solution to provide Filipinos with clean water through an ingenious product, the Epione Solar Still. The product makes use of natural processes (the evaporation of water) and adapts and incorporates the technology and mechanism behind the kidney dialysis machine to provide Filipinos with nutrient-enriched water without polluting their environment. The product will be located near water bodies where seawater is abundant, to act as a source of clean water for the Filipinos.

2.2 Objectives of Project

To operationalise our aim, our objectives are to:

Design “Epione Solar Still”

Conduct interviews with:

Masoud Arfand, from the Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, to determine the projected percentage of water that the Epione Solar Still can produce and the number of people it can provide for.

Qiaoqiang Gan, electrical engineer from Sunny Clean Water (his team innovated the technique of coating fibre-rich paper with carbon black to make water purification using the solar still faster and more cost-friendly), to determine the amount of time the Epione Solar Still needs to produce sufficient water to support Filipinos in Tacloban, Leyte, given that the Epione Solar Still is a short-term disaster-relief solution.

Dr Nathan Feldman, Co-Founder of HopeGel, EB Performance, LLC, to determine the significant impact of nutrient-infused water in boosting the immunity of victims of natural disaster (Project Medishare, n.d.).

Review the mechanism and efficiency of using a solar still to source clean and nutrient-rich water for Filipinos.

3. Project Proposal

Investment in the purification of contaminated water as a form of disaster relief can provide Filipinos with nutrients to boost their immunity in times of disaster and limit the number of deaths that occur due to the consumption of contaminated water during a crisis.

3.1 Overview of Project

Our group proposes to build a solar distillation plant (Figure _) within a safe semi-underground bunker. The bunker will contain a generator to power certain parts of the plant. Seawater will be fed into the still via underground pipes from the sea surrounding the southern part of Tacloban. The purified water produced by the distillation process will be infused with nutrients to boost the immunity of disaster victims once consumed. Hence, our distillation plant will produce water that is not only potable but also nutritious, boosting victims' immunity in times of natural calamity. The potable water will then be distributed in drums and shared among Filipinos.

Figure _: Mechanism of our solar distillation plant, Epione Solar Still

3.2 Phase 1: Water Purification System

3.2.1 Water extraction from the sea

The still is located near the sea, where seawater is abundant. Seawater is extracted from the low-flow open sea (Figure _) and then pumped into our solar still.

Figure _: Intake structure of seawater (Seven Seas Water Corporation, n.d.)

3.2.2 Purification of Seawater

Solar energy heats up the water in the solar still. The water evaporates, and condenses on the cooler glass surface of the ceiling of the still. Pure droplets of water slide down the glass and into the collecting basin, where nutrients will diffuse into the water.

Figure 6: Mechanism of Epione Solar Still
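To give a feel for how much water a still of this kind might deliver, the sketch below estimates daily distillate output from insolation and an assumed still efficiency. All numbers (tropical insolation of about 5.5 kWh/m² per day, 35% efficiency) are illustrative assumptions, not figures from the proposal.

```python
# Rough sizing sketch for a single-basin solar still. Insolation and
# efficiency values below are assumptions for illustration only.

LATENT_HEAT_VAPORISATION = 2.26e6   # J/kg, approx. latent heat of water
J_PER_KWH = 3.6e6                   # joules in one kilowatt-hour

def daily_yield_l_per_m2(insolation_kwh_m2_day, still_efficiency):
    """Estimate litres of distillate per m^2 of basin per day.

    The energy that actually drives evaporation is insolation times
    efficiency; dividing by the latent heat of vaporisation gives the
    mass of water evaporated (1 kg of water is roughly 1 litre).
    """
    energy_j = insolation_kwh_m2_day * J_PER_KWH * still_efficiency
    return energy_j / LATENT_HEAT_VAPORISATION

# Assumed tropical insolation ~5.5 kWh/m^2/day; basic stills ~30-40% efficient.
yield_per_m2 = daily_yield_l_per_m2(5.5, 0.35)
print(f"{yield_per_m2:.1f} L per m^2 per day")  # roughly 3 L/m^2/day
```

A figure of a few litres per square metre per day is consistent with what simple passive stills typically achieve, which is why the basin area (or a performance-boosting coating like the carbon-black paper mentioned above) matters so much for disaster-relief volumes.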

3.3 Phase 2: Nutrient Infuser

Using the concept of reverse osmosis (Figure _), a semi-permeable membrane separates the nutrients from the newly purified water, allowing the vitamins and minerals to diffuse into the condensed water. The nutrient-infused water will provide nourishment, making victims of natural disasters less vulnerable and less susceptible to illness and disease thanks to a stronger immune system. This will help the Filipinos in Tacloban, Leyte get back on their feet quickly after a natural disaster and minimise the death toll as much as possible.

Figure _: How does reverse osmosis work (Water Filter System Guide, n.d.)

Nutrient / Mineral | Function | Upper Tolerable Limit (the highest amount that can be consumed without health risks)
Vitamin A | Helps to form and maintain healthy teeth, bones, soft tissue, mucous membranes and skin. | 10,000 IU/day
Vitamin B3 (Niacin) | Helps maintain healthy skin and nerves; has cholesterol-lowering effects. | 35 mg/day
Vitamin C (ascorbic acid, an antioxidant) | Promotes healthy teeth and gums; helps the body absorb iron and maintain healthy tissue; promotes wound healing. | 2,000 mg/day
Vitamin D (the "sunshine vitamin", made by the body after exposure to the sun) | Helps the body absorb calcium; helps maintain proper blood levels of calcium and phosphorus. | 1,000 micrograms/day (4,000 IU)
Vitamin E (tocopherol, an antioxidant) | Plays a role in the formation of red blood cells. | 1,500 IU/day

Figure _: Table of functions and amount of nutrients that will be diffused into our Epione water. (WebMD, LLC, 2016)

3.4 Phase 3: Distribution of water to households in Tacloban, Leyte

Potable water will be collected into drums (Figure _) of 100 litres in capacity each; each drum can supply 50 people for a day, since the average intake of water is 2 litres per person per day. These drums will then be distributed to the tent cities in Tacloban, Leyte, our targeted area, should a natural disaster befall it. Locals will thus have potable water within reach, which is crucial for their survival in times of natural calamity.

Figure _: Rain barrels will be used to store the purified and nutrient-infused water (Your Easy Garden, n.d.)
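The drum-sizing arithmetic above can be sketched as a small calculation. The drum capacity and per-person intake come from the text; the tent-city population used in the usage example is a placeholder assumption.

```python
# Phase 3 drum-sizing arithmetic. Capacity and intake figures are from the
# proposal text; the population in the example below is an assumption.

DRUM_CAPACITY_L = 100
INTAKE_L_PER_PERSON_PER_DAY = 2

def people_served_per_drum_per_day():
    # 100 L / 2 L per person per day = 50 person-days per drum.
    return DRUM_CAPACITY_L // INTAKE_L_PER_PERSON_PER_DAY

def drums_needed(population, days):
    total_l = population * INTAKE_L_PER_PERSON_PER_DAY * days
    # Round up: a partially filled drum still has to be shipped.
    return -(-total_l // DRUM_CAPACITY_L)

print(people_served_per_drum_per_day())  # 50
print(drums_needed(10_000, 7))           # drums for a week, 10,000 people
```

The point of the round-up in `drums_needed` is logistical: relief planning has to count whole drums, not litres.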

3.5 Stakeholders

3.5.1 The HopeGel Project

HopeGel is a nutrient- and calorie-dense protein gel designed to aid children suffering from malnutrition caused by the severe food insecurity brought on by droughts (Glenroy Inc., 2014). HopeGel has been distributed in Haiti, where malnutrition is the number one cause of death among children under five, mainly due to the high frequency of natural disasters that have devastated the now impoverished state of Haiti (Figure _). Implementing the Epione Solar Still helps this company achieve its objective of addressing the global issue of severe acute malnutrition in children, as most victims of natural disasters lack the nourishment they need (HopeGel, n.d.).

Figure _: HopeGel, a packaged nutrient and calorie-dense protein gel (Butschli, HopeGel, n.d.)

3.5.2 Action Against Hunger (AAH)

Action Against Hunger is a relief organisation that develops and carries out programmes for countries in need regarding nutrition, health, water and food security (Action Against Hunger, n.d.) (Figure _). AAH also runs disaster-preparedness programmes, which aim to anticipate and prevent humanitarian crises (GlobalCorps, n.d.). With 40 years of expertise and 14.9 million people helped across more than 45 countries, AAH is no stranger to humanitarian crises. Implementing the Epione Solar Still helps this organisation achieve its aim of saving lives by extending help to Filipinos in Tacloban, Leyte who are deprived of a basic need due to water contamination caused by disasters, through purifying and infusing nutrients into seawater.

Figure _: Aims and Missions of Action Against Hunger (AAH, n.d.)


Analyse the use of ICTs in a humanitarian emergency

INTRODUCTION

The intention of this essay is to analyse the use of ICTs in a humanitarian emergency. The specific case study we discuss is "Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake" by Jung, J., and Moro, M. (2014). The report emphasises that social media networks like Twitter and Facebook can be used to spread and gather important information in emergency situations rather than solely serving as social networking platforms. ICTs have changed the way humans gather information during disasters, and social media, especially Twitter, became an important source of information in these disasters.

Literature Review

Case studies of ICTs in humanitarian emergencies can take either a technically rational perspective or a socially embedded perspective. A technically rational perspective asks what to do and how to achieve a given purpose: it is a prescription for design and action. A socially embedded perspective focuses on the particular case, where the process of work is affected by culture, place and human nature. In this article, we examine several humanitarian disaster cases in which ICTs played a vital role, to see whether the authors adopt a technically rational or a socially embedded perspective.

In the article "Learning from crisis: Lessons in human and information infrastructure from the World Trade Centre response" (Dawes, Cresswell et al. 2004), the authors adopt a technically rational perspective. 9/11 was a very large incident and no one was ready to deal with an attack of that size, but as soon as it happened procedures started changing rapidly. Government, NGOs and disaster response units started learning and wrote new prescriptions that could be used universally and in a disaster of any size. For example, the main communication infrastructure, supplied by Verizon, was damaged; different communication suppliers offered their services, but they all relied on the physical infrastructure supplied by Verizon, so VOIP was used for communication between government officials and in the EOC building. Problems were found, and new procedures adopted, in three main areas: technology, information, and the inter-layered relationships between NGOs, government and the private sector (Dawes, Cresswell et al. 2004).

In the article "Challenges in humanitarian information management and exchange: Evidence from Haiti" (Altay, Labonte 2014), the authors adopt a socially embedded perspective. The Haiti earthquake was one of the biggest disasters, killing 500,000 people and displacing at least 2 million. Around 2,000 organisations went in to help, but there was no coordination between NGOs and the government in the humanitarian response. Organisations did not consider local knowledge; they assumed no data was available. All the organisations had different standards and ways of working, so no one followed any common prescription. The technical side of HIME (humanitarian information management and exchange) was not working, because the members of the humanitarian relief effort were not sharing humanitarian information (Altay, Labonte 2014).

In the article "Information systems innovation in the humanitarian sector", Information Technologies and International Development (Tusiime, Byrne 2011), the authors adopt a socially embedded perspective. Local staff were hired who had no prior experience or knowledge of working with such technology, which slowed down the implementation of the new system. Staff wanted to learn and use the new system, but the changes came at such a high pace that they became overworked and stressed, and lost interest in the innovation. Management decided to adopt COMPAS as the new system without realising that it was not completely functional and still had many issues, but went ahead with it anyway. When staff started using it, found the problems, and received insufficient technical support, they had no choice but to go back to the old way of doing things (Tusiime, Byrne 2011). The whole process was affected by how work is done in that specific place and by people's behaviour.

In the article "Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake" (Jung, Moro 2014), the authors adopt a technically rational perspective. In any future humanitarian disaster, social media can be used as an effective communication method in conjunction with mass media. After the disaster, Twitter was used more as a means of spreading and gathering information than as a social networking platform.

In the article "Information flow impediments in disaster relief supply chains", Journal of the Association for Information Systems, 10(8), pp. 637-660 (Day, Junglas et al. 2009), the authors propose the development of IS for information sharing, based on Hurricane Katrina. They adopt a technically rational perspective because the development of IS for information flow within and outside the organisation is seen as essential. Such an IS would help manage a complex supply chain, since supply chain management in a disaster situation is challenging compared with traditional supply chain management. A supply chain management IS should be able to cater for all types of dynamic information, suggest Day, Junglas and Silva (2009).

Case Study Description:

On 11 March 2011, an earthquake of magnitude 9.0 hit the north-eastern part of Japan, followed by a tsunami. Thousands of people lost their lives and the infrastructure in the area was completely destroyed (Jung, Moro 2014). The tsunami wiped two towns off the map, and the coastal maps had to be redrawn (Acar, Muraki 2011). On the same day, the cooling system in nuclear reactor no. 1 in Fukushima failed, and because of the resulting nuclear accident the Japanese government declared a nuclear emergency. On the evening of the earthquake the government issued an evacuation order for a 3 km area around the reactor (Jung, Moro 2014). On 12 March a hydrogen explosion occurred in the reactor because of the failed cooling system, followed by another explosion two days later on 14 March. The evacuation zone was initially 3 km but was increased to 20 km to avoid nuclear radiation exposure. This was one of the biggest nuclear disasters in the country's history, so it was hard for the government to assess its scale. Government officials had never faced this kind of situation before and could not estimate the damage caused by the incident; their unreliable information added to the public's confusion. They initially declared the accident level 5 on the international nuclear scale, but later changed it to 7, the highest level. Media reporting also confused the public, and the combination of contradictory information from government and media increased the confusion further.

In a disaster, the mass media is normally the main source of information: broadcasters suspend normal programming and devote most of their airtime to the disaster to keep people updated, and they usually provide very reliable information. In the Japanese disaster, however, media outlets contradicted one another: international media contradicted the news from local media and local government, so people started losing faith in the mass media and relying on other sources of information. A second reason was that mass media was the traditional way of gathering information, and with changes in technology people had started using mobile phones and the internet. Third, the broadcasting infrastructure was damaged and many people could not access television services, so they turned to video streaming sites such as Ustream and YouTube. People began using Twitter on a large scale to spread and gather news: the number of Twitter users increased by 30 percent within the first week of the disaster, and 60 percent of Twitter users found it useful for gathering or spreading information.

Case Study Analysis:

Twitter is a social media platform and micro-blogging website on which each tweet can contain up to 140 characters. It differs from other social media platforms in that anyone can follow you without your authorisation. Only registered members can tweet, but registration is not required to read messages. The authors of "Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake" (Jung, Moro 2014) discuss five functionalities of Twitter with the help of a conceptual model of multi-level social media. The following figure describes the five-function model clearly.

Fig No 1 Source: (Jung, Moro 2014)

The five functionalities were derived from a survey and a review of selected Twitter timelines.

The first function is tweeting between individuals, also known as interpersonal communication. This is the micro level of the conceptual model: people inside and outside the country connected with people in the affected area. Most of these tweets were for checking on people's safety after the disaster, whether to tell loved ones that you were in the affected area and needed help or to let people know you were safe. In the first three days a high percentage of tweets came through this micro-level communication channel.

The second function is a communication channel for local organisations, local government and local media, the meso level of the conceptual model. In this channel, local governments opened new accounts and reactivated dormant ones to keep their residents informed, and the number of followers of these accounts grew very quickly. People understood the importance and benefits of social media after the disaster: even when the infrastructure was damaged and there were electricity cuts, they could still get information about the disaster and tsunami warnings. Local government and local media used Twitter accounts to issue alerts and news; for example, tsunami alerts were issued on Twitter, and after the tsunami, damage reports were released there. Local media opened new Twitter channels and kept people informed about the situation. Other organisations, such as the embassies of different countries, used Twitter to keep their nationals informed about the disaster; this was the best channel between embassies and their nationals, who could even tell their embassy that they were stuck in the affected area and needed help, being especially vulnerable away from their own country.

The third function is mass media communication, known as the macro level. The mass media used social platforms to broadcast their news because the infrastructure was damaged and people in the affected area could not access their broadcasts. People outside the country could not access local television news either, so they watched the news on video streaming websites; as demand increased, most mass media outlets opened social media accounts and began broadcasting their news on video streaming websites like YouTube and Ustream. The mass media also posted news updates several times a day on Twitter, and many readers retweeted them, so information spread at very high speed.

The fourth function is information sharing and gathering, a cross-level function. Individuals used social media to get information about the earthquake, tsunami and nuclear accident. Someone searching for information would come across tweets from the micro, meso and macro levels. This level is of great use when you are looking for help and want to know what other people would do in your situation. Research on Twitter timelines shows that on the day of the earthquake people were tweeting about available shelters and transport information (Jung, Moro 2014).

The fifth function is direct channels between individuals and the mass media, government and the public, also considered cross-level. Through this level individuals could inform the government and mass media about the situation in affected areas that neither could reach because of the disaster. The mayor of Minami-soma, a city 25 miles from Fukushima, used YouTube to tell the government about the radiation threat to his city; the video went viral and the Japanese government came under international pressure to evacuate the city (Jung, Moro 2014).

Reflection:

There was a gradual change in the use of social media, from a social networking platform to a communication tool in the event of a disaster. Its multi-level functionality is an important characteristic that connects it well with existing media. This is a complete prescription that can be used during and after any kind of disaster: social media can be combined with other media as an effective communication method to prepare for any future disaster situation.

Twitter played a big role in communication during the disaster in Japan. It was used to spread and gather information about the earthquake, tsunami and nuclear reactor accident, to request help, to issue warnings, and to offer condolences. Twitter has many benefits, but it also has drawbacks that must be rectified. The biggest issue with tweets is unreliability: anyone can tweet any information, there are no checks and balances, and only the person tweeting is responsible for its authenticity. There is no control over false information, and it spreads so fast that contradictory information can create anxiety. For example, if false information about the range of the radiation had been released by one individual and retweeted by others with no knowledge of the effects of radiation and nuclear accidents, it could have caused public panic. In a disaster it is very important that reliable and correct information is released.

Information systems can play a vital role in humanitarian disasters in all aspects. They can be used for better communication and to improve the efficiency and accountability of an organisation. Data becomes widely available within the organisation, enabling monitoring of finances, and IS helps coordinate operations such as transport, supply chain management, logistics, finance and monitoring.

Social media has played a significant role in communicating, disseminating and storing data related to disasters. There is a need to control the information spread over social media, since not all of it is authentic or verified.

IS-based tools need to be developed for disaster management in order to get the best results from the wide range of data extracted from social media and to take the necessary action for the wellbeing of people in a disaster area.

The outcome of using purpose-built IS will support decisions on a strategy for dealing with the situation, and disaster management teams will be able to analyse the data in order to train for disaster situations.


Renewable energy in the UK

The 2014 IPCC report stated that anthropogenic emissions of greenhouse gases have led to unprecedented levels of carbon dioxide, methane and nitrous oxide in the environment. The report also stated that the effect of greenhouse gases is extremely likely to have caused the global warming we have witnessed since the 20th century.

The 2018 IPCC report set new targets, aiming to limit climate change to a maximum of 1.5°C. To reach this, we will need zero CO₂ emissions by the year 2050. Previous IPCC targets of 2°C change allowed us until roughly 2070 to reach zero emissions. This means government policies will have to be reassessed and current progress reviewed in order to confirm whether or not the UK is capable of reaching zero emissions by 2050 on our current plan.

Electricity Generation

Fossil fuels are natural fuels formed from the remains of prehistoric plant and animal life. Fossil fuels (coal, oil and gas) are crucial in any look at climate change as when burned they release both carbon dioxide (a greenhouse gas) and energy. Hence, in order to reach the IPCC targets the UK needs to drastically reduce its usage of fossil fuels, either through improving efficiency or by using other methods of energy generation.

Whilst coal is a cheap energy source used to generate approximately 40% of the world's electricity, it is arguably the most damaging to the environment, as coal releases more CO₂ into the atmosphere per unit of energy produced than any other fuel source. Coal power stations generate electricity by burning coal in a combustion chamber and using the heat energy to turn water into steam, which turns the propeller-like blades of a turbine. A generator (consisting of tightly wound metal coils) is mounted at one end of the turbine; rotated at high velocity through a magnetic field, it generates electricity. However, the UK has pledged to fully eradicate the use of coal in electricity generation by 2025, and this pledge is well substantiated by the UK's rapid decline in coal use: in 2015 coal accounted for 22% of electricity generated in the UK, this was down to only 2% by the second quarter of 2017, and in April 2018 the UK even managed to go 72 hours without coal power.

Natural gas became a staple of British electrical generation in the 1990s, when the Conservative Party got into power and privatised the electrical supply industry. The “Dash for gas” was triggered by legal changes within the UK and EU allowing for greater freedom to use gas in electricity generation.

Whilst natural gas emits less CO₂ than coal, it emits far more methane. Methane doesn’t remain in the atmosphere as long but it traps heat to a far greater extent. According to the World Energy Council methane emissions trap 25 times more heat than CO₂ over a 100 year timeframe.

Natural gas produces electrical energy in a gas turbine. Natural gas is mixed with hot air and burned in a combustor. The hot gas then pushes the turbine blades and, as in a coal plant, the turbine is attached to a generator, creating electricity. Gas turbines are hugely popular as they are a cheap source of energy generation and can quickly be powered up to respond to surges in electrical demand.

Combined Cycle Gas Turbines (CCGT) are an even better source of electrical generation. Whilst traditional gas turbines are cheap and fast-reacting, they only have an efficiency of approximately 30%. Combined cycle turbines, however, are gas turbines used in combination with steam turbines giving an efficiency of between 50 and 60%. The hot exhaust from the gas turbine is used to create steam which rotates turbine blades and a generator in a steam turbine. This allows for greater thermal efficiency.
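The efficiency gain described above follows from a standard relation: if the steam cycle runs on the gas turbine's exhaust heat, the combined efficiency is the gas turbine's efficiency plus the steam cycle's recovery of what remains. The sketch below illustrates this; the 35% component efficiencies are illustrative assumptions, not figures from the text.

```python
# Combined-cycle efficiency sketch:
#   eta_cc = eta_gt + (1 - eta_gt) * eta_st
# A fraction (1 - eta_gt) of the fuel energy leaves the gas turbine as
# exhaust heat, of which the steam cycle recovers a fraction eta_st.
# The 0.35 values below are assumptions for illustration.

def combined_cycle_efficiency(eta_gas_turbine, eta_steam_turbine):
    return eta_gas_turbine + (1 - eta_gas_turbine) * eta_steam_turbine

eta = combined_cycle_efficiency(0.35, 0.35)
print(round(eta, 4))  # ~0.58, inside the 50-60% range quoted above
```

This also explains why a CCGT beats either cycle alone: the steam turbine's fuel is heat that a simple gas turbine would throw away.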

Nuclear energy is a potential way forward, as no CO₂ is emitted by nuclear power plants. Nuclear plants capture the energy released by atoms undergoing nuclear fission. In nuclear fission, a nucleus absorbs a neutron on collision, making it unstable; the unstable nucleus then splits into fission products of smaller mass and emits two or three high-speed neutrons, which can collide with more nuclei and make them unstable, creating a chain reaction. The heat energy produced by splitting the atom is used to produce steam, which drives a turbine generator to produce electricity.

Currently, 21% of electricity generated in the UK comes from nuclear energy. In the 1990s, 25% of electricity came from nuclear energy, but old plants have gradually been retired, and by 2025 UK nuclear capacity could halve. This is due to a multitude of reasons. Firstly, nuclear fuel is expensive in comparison to gas and coal. Secondly, nuclear waste is extremely radioactive and must be dealt with properly. Also, in light of tragedies such as Chernobyl and Fukushima, much of the British public has expressed concerns about nuclear energy, with the Scottish government refusing to open more plants.

In order to lower our CO₂ emissions it is crucial we also utilise renewable energy. The UK currently gets very little of its energy from renewable sources but almost all future plans place a huge emphasis on renewables.

The UK has great wind energy potential as the nation is the windiest country in the EU with 40% of the total wind that blows across the EU.

Wind turbines are straightforward machines: the wind turns the turbine blades around a rotor connected to the main shaft, which spins a generator, creating electricity. In 2017, onshore wind generated enough energy to power 7.25 million homes a year and produced 9% of the UK's electricity. However, despite the clear benefits of clean, renewable energy, wind energy is not without its problems. Firstly, it is an intermittent supply: a turbine generates no energy when there is no wind. It has also been opposed by members of the public for its effect on the look of the countryside and for bird fatalities. These problems are magnified by the current Conservative government's stance on wind energy, as it wishes to limit onshore wind farm development despite public opposition to this "ban".
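The mechanism described above can be quantified with the standard wind power equation, P = ½ρAv³Cp, which also makes the intermittency problem concrete: output scales with the cube of wind speed, so it collapses quickly in calm conditions. The rotor size, wind speed and power coefficient below are illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope wind turbine output: P = 0.5 * rho * A * v^3 * Cp.
# All numbers below (rotor diameter, wind speed, Cp) are assumptions.
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def turbine_power_w(rotor_diameter_m, wind_speed_ms, power_coefficient):
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2  # rotor disc area
    return 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3 * power_coefficient

# A 100 m rotor in a 9 m/s wind with Cp = 0.45 (the Betz limit is ~0.593):
p = turbine_power_w(100, 9, 0.45)
print(f"{p / 1e6:.1f} MW")  # about 1.6 MW
```

Halving the wind speed in this example cuts output by a factor of eight, which is why wind needs backup or storage.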

Heating and Transport

Currently it is estimated that a third of carbon dioxide (CO2) emissions in the UK come from the heating sector. 50% of all heat emissions in the UK are for domestic use, making it the main source of CO2 emissions in the heating sector, and around 98% of domestic heating is used for space and water heating. The government has sought to reduce the emissions from domestic heating by issuing a series of regulations on new boilers: as of 1 April 2005, all new installations and replacements of boilers are required to be condensing boilers, whose CO2 emissions are much lower and which are around 15-30% more efficient than older gas boilers. Reducing heat demand has also been an approach taken to reduce emissions. For instance, building standards in the UK have set higher levels of required thermal insulation for both domestic and non-domestic buildings in refurbishments and new projects. These policies are key to ensuring that homes and industrial buildings are as efficient as possible at conserving heat.

Although progress is being made in improving current CO2-reducing systems, the potential for significant CO2 reductions relies upon low-carbon technologies. Highly efficient technologies such as residential heat pumps and biomass boilers have the potential to be carbon-neutral sources of heat and could thereby massively reduce CO2 emissions for domestic use. However, finding the best route to a decarbonised heating industry relies upon more than just which technology has the lowest carbon footprint. For instance, intermittent technologies such as solar thermal collectors cannot provide a sufficient level of heat in the winter and require a back-up heat source, making them less desirable. Cost is also a major factor in consumer preference: for most consumers a boiler is the cheapest heating option, which is a problem for low-carbon technologies, which tend to have significantly higher upfront costs. In response, the government has introduced policies such as the 'Renewable Heat Incentive', which aims to alleviate the expense by paying consumers for each unit of heat produced by low-carbon technologies.

Around 30% of the heating sector is accounted for by industry, making it the second largest cause of CO2 in this sector. Currently, combined heat and power (CHP) is the main process used to make industrial heat use more efficient and has shown CO2 reductions of up to 30%. Although this is a substantial reduction, alternative technology has the potential to deliver even higher reductions: carbon capture and storage (CCS), for example, could reduce CO2 emissions by up to 90%. However, CCS is a complex procedure that would require substantial funding and as a result is not currently implemented for industrial use in the UK.

Although heating is a significant contributor to CO2 emissions in the UK, much progress is also needed elsewhere. In 2017 it was estimated that 34% of all carbon dioxide (CO2) emissions in the UK were caused by transport, widely thought to be the sector making the least progress, with only a 2% reduction in CO2 emissions since 1990. Road transport contributes the highest proportion of emissions, more specifically petrol and diesel cars. Despite the average CO2 emissions of new vehicles declining, the carbon footprint of the transport industry continues to increase due to the larger number of vehicles in the UK.

In terms of progress, the CO2 emissions of new cars in 2017 were estimated to be 33.1% lower than in the early 2000s. Although efficiencies are improving, more must be done if we are to conform to the targets set by the Climate Change Act 2008. A combination of decarbonising transport and implementing government legislation is vital to meeting these demands. New technology such as battery electric vehicles (BEVs) has the potential to create significant reductions in the transport industry; a report from the Committee on Climate Change suggests that 60% of all sales of new cars and vans should be ultra-low emission by 2030. However, the likelihood of achieving this is hindered by the constraints of new technologies: low-emission vehicles tend to have significantly higher costs and suffer from low consumer awareness. This reinforces the need for government support in promoting new technologies and cleaner fuels. To support the development and uptake of low-carbon vehicles, the government committed £32 million to funding BEV charging infrastructure from 2015-2020, and a further £140 million has been allocated to the 'low carbon vehicle innovation platform', which strives to advance the development and research of low-emission vehicles. Progress has also been made in making these vehicles more cost-competitive through exemption from taxes such as Vehicle Excise Duty and incentives such as plug-in grants of up to £3,500. Aside from passenger cars, improvements are also being made to the emissions of public transport: the average low-emission bus in London could reduce its CO2 emissions by up to 26 tonnes per year, earning the government's support in England through the 'Green Bus Fund'.

Conclusion

In 2017, renewables accounted for a record 29.3% of the UK's electricity generation. This is a vast improvement on previous years and suggests the UK is on track to meet the new IPCC targets, although a lot of work still needs to be done. Government policies do, however, need to be reassessed in light of the new targets. Scotland should reassess its nuclear policy, as nuclear power may be a necessary stepping stone towards reduced emissions until renewables are able to fully power the nation, and the UK government needs to reassess its allocation of funding, as investment in clean energy is currently on a downward trajectory.

Although progress has been made in reducing CO2 emissions in the heat and transport sectors, emissions throughout the UK remain much higher than desired. The Committee on Climate Change's report to Parliament (2015) calls for the widespread electrification of heating and transport by 2030 to help prevent a 1.5 degree rise in global temperature. This is likely to pose a major challenge and will require a significant increase in electricity generation capacity in conjunction with greater policy intervention to encourage the uptake of low carbon technologies. Although the likelihood of all consumers switching to alternative technologies is slim, if the government continues to tighten regulations on fossil-fuelled technologies while the heat and transport industries continue to make old and new systems more efficient, significant CO2 reductions should follow in the future.


Is Nuclear Power a viable source of energy?

A 6th Form Economics project

Nuclear power, the energy of the future of the 1950s, is now starting to feel like the past. Around 450 nuclear reactors worldwide currently generate 11% of the world's electricity, or approximately 2500 TWh per year, just under the total nuclear power generated globally in 2001 and only 500 TWh more than in 1991. The number of operating reactors worldwide has seen the same stagnation, with an increase of only 31 since 1989, or annual growth of only 0.23% compared to 12.9% from 1959 to 1989. Most reactors, especially in Europe and North America, were built before the 1990s, and the average age of reactors worldwide is just over 28 years. Large scale nuclear accidents such as Chernobyl in 1986 or, much more recently, Fukushima in 2011 have damaged public support for nuclear power and helped cause this decline, but the weight of evidence increasingly suggests that nuclear is safer than most other energy sources and has an extremely low carbon footprint, causing the argument against nuclear to shift from concerns about safety and the environment to questions about economic viability. The crucial question that remains is therefore how well nuclear power can compete against renewables to produce the low carbon energy required to tackle global warming.

The costs of most renewable energy sources have been falling rapidly and are increasingly able to outcompete nuclear power as a low carbon option, and even fossil fuels in some places; photovoltaic panels, for example, halved in price from 2008 to 2014. Worse still for nuclear power, while the costs of renewable energy have been falling, plans for new nuclear plants have been plagued with delays and cost overruns: in the UK, Hinkley Point C power station is set to cost £20.3bn, making it the world's most expensive power station, and significant issues in the design have raised questions as to whether the plant will be completed by 2025, its current target. In France, the Flamanville 3 reactor is now predicted to cost three times its original budget, and several delays have pushed the start-up date, originally set for 2012, to 2020. The story is the same in the US, where delays and extra costs have plagued the construction of the Vogtle 3 and 4 reactors, now due to be completed by 2020-21, four years past their original target. Nuclear power seemingly cannot deliver the cheap, carbon free energy it promised and is being outperformed by renewable energy sources such as solar and wind.

The crucial and recurring issue with nuclear power is that it requires huge upfront costs, especially when plants are built individually, and can only provide revenue years after the start of construction. This makes investment in nuclear risky, long term and hard to do well on a small scale, though new technologies such as SMRs (Small Modular Reactors) may change this in the coming decades. Improvements in other technologies over the time it takes to build a nuclear plant mean that it is often better for private firms, which are less likely to be able to afford the large scale programmes that enable significant cost reductions or a lower debt-to-equity ratio in their capital structure, to invest in more easily scalable and shorter term energy sources, especially with subsidies favouring renewables in many developed countries. All of this points to the fundamental flaw of nuclear: it requires going all the way. Small scale nuclear programmes that are funded mostly with debt, that face high discount rates and that run at low capacity factors because the plants are switched off frequently will invariably have a very high Levelised Cost of Energy (LCOE), because nuclear is so capital intensive.

That said, the reverse is true as well. Nuclear plants have very low operating costs, almost no external costs, and the cost of decommissioning a plant is only a small portion of the initial capital cost, even at a low discount rate such as 3%, due to the long lifespan of a nuclear plant and the fact that many can be extended. Operating costs include fuel costs, which are extremely low for nuclear at only 0.0049 USD per kWh, and non-fuel operation and maintenance costs, which are barely higher at 0.0137 USD per kWh. This includes waste disposal, a frequently cited political issue that has not been a serious technical problem for decades, as waste can be reused relatively well and stored on site safely at very low cost, simply because the quantity of fuel used, and therefore of waste produced, is so small. The fuel, uranium, is abundant, and technology enabling uranium to be extracted from seawater would give access to a 60,000 year supply at present rates of consumption, so costs from 'resource depletion' are also small. Finally, external costs represent a very small proportion of running costs: the highest estimates for health costs and potential accidents are 5€/MWh and 4€/MWh respectively, though some estimates fall to only 0.3€/MWh for potential accidents when past records are adjusted to factor in improvements in safety standards; these estimates vary significantly because the total number of reactors is very small.

Nuclear power therefore remains one of the cheapest ways to produce electricity in the right circumstances, and many LCOE (Levelised Cost of Energy) estimates, which are designed to factor in all costs over the lifetime of a unit to give a more accurate representation of the costs of different types of energy (though they usually omit system costs), point to nuclear as a cheaper energy source than almost all renewables and most fossil fuels at low discount rates.
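The discount-rate sensitivity described above can be illustrated with a simple levelised-cost calculation. The sketch below (Python, using hypothetical round-number inputs rather than figures from any study cited here, apart from a running cost of roughly $0.0186/kWh built from the fuel and O&M figures quoted earlier) discounts lifetime costs and lifetime generation back to the start of construction and takes their ratio:

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate,
         build_years=7):
    """Levelised cost of energy ($/MWh): discounted lifetime costs
    divided by discounted lifetime generation."""
    # Spread the capital cost evenly over the construction period,
    # during which the plant produces nothing.
    costs = [capex / build_years] * build_years
    energy = [0.0] * build_years
    # Operating years: constant running costs and constant output.
    costs += [annual_opex] * lifetime_years
    energy += [annual_mwh] * lifetime_years
    disc_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energy))
    return disc_costs / disc_energy

# Illustrative 1 GW plant at an 85% capacity factor (hypothetical figures).
mwh = 1_000 * 8760 * 0.85               # ~7.45 million MWh per year
plant = dict(capex=6e9,                  # assumed $6bn overnight cost
             annual_opex=mwh * 18.6,     # ~$0.0186/kWh fuel + O&M
             annual_mwh=mwh, lifetime_years=60)
for r in (0.03, 0.07, 0.10):
    print(f"discount rate {r:.0%}: LCOE ≈ ${lcoe(discount_rate=r, **plant):.0f}/MWh")
```

Because almost all of the cost is paid before any electricity is sold, raising the discount rate from 3% to 10% roughly doubles the computed LCOE even though no physical input has changed, which is the capital-intensity problem described above in miniature.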

LCOE figures taken from 'Projected Costs of Generating Electricity 2015 Edition' and system costs taken from 'Nuclear Energy and Renewables (NEA, 2012)' have been combined by the World Nuclear Association to give LCOE for four countries, allowing the costs of nuclear to be compared with other energy sources. A discount rate of 7% is used; the study applies a $30/t CO2 price on fossil fuel use and uses 2013 US$ values and exchange rates. It is important to bear in mind that LCOE estimates vary widely, as many assume different circumstances and they are very difficult to calculate, but it is clear from the graph that nuclear power is more than viable: it is the cheapest source in three of the four countries and the third cheapest in the fourth, behind onshore wind and gas.


Decision making during the Fukushima disaster

Introduction

On March 11, 2011 a tsunami struck the east coast of Japan, resulting in a disaster at the Fukushima Daiichi nuclear power plant. In the days following the natural disaster, many decisions were made with regard to managing the crisis. This paper will examine those decisions. The Governmental Politics Model, designed by Allison and Zelikow (1999), will be adopted to analyse the events. The research question of this paper is therefore: to what extent does the Governmental Politics Model explain the decisions made during the Fukushima disaster?

First, this paper will lay the theoretical basis for the analysis. The Governmental Politics Model and all crucial concepts within it are discussed. A description of the Fukushima case will follow; since the reader is expected to already have general knowledge of the Fukushima nuclear disaster, the case description will be brief. The theoretical framework and case study together lay the basis for the analysis, which will examine the decisions government and Tokyo Electric Power Company (TEPCO) officials made during the crisis.

Theory

Allison and Zelikow designed three theories to explain the outcomes of bureaucracies and decision making in the aftermath of the Cuban Missile Crisis of 1962. The first theory to be designed was the Rational Actor Model. This model focuses on the 'logic of consequences' and rests on the basic assumption of rational action by a unitary actor. The second theory designed by Allison and Zelikow is the Organizational Behaviour Model. This model focuses on the 'logic of appropriateness' and rests on the main assumption of loosely connected allied organizations (Broekema, 2019).

The third model developed by Allison and Zelikow is the Governmental Politics Model (GPM). This model stresses the importance of power in decision making. According to the GPM, decision making has little to do with rational, unitary actors or organizational output and everything to do with a bargaining game. Governments thus make decisions in other ways; according to the GPM there are four aspects to this: the choices of one, the results of minor games, the results of central games, and foul-ups (Allison & Zelikow, 1999).

The following concepts are essential to the GPM. First, it is important to note that power in government is shared: different institutions have independent bases, and power is therefore distributed among them. Second, persuasion is an important factor in the GPM; the power to persuade differentiates power from authority. Third, bargaining according to the process is identified, meaning there is a structure to the bargaining processes. Fourth, 'power equals impact on outcome' is mentioned in Essence of Decision: there is a difference between what can be done and what is actually done, and what is actually done depends on the power involved in the process. Lastly, intranational and international relations are of great importance to the GPM. These relations are intertwined and involve a vast set of international and domestic actors (Allison & Zelikow, 1999).

The five previous concepts are not the only ones relevant to the GPM. The GPM is inherently based on group decisions, and in this type of decision making Allison and Zelikow identify seven factors. The first factor is a positive one: group decisions, when certain requirements are met, produce better decisions. Second, the agency problem is identified; this problem includes information asymmetry and the fact that actors compete over different goals. Third, it is important to identify the actors in the 'game', that is, to find out who participates in the bargaining process. Fourth, problems with different types of decisions are outlined. Fifth, framing issues and agenda setting are important factors in the GPM. Sixth, group decisions are not necessarily positive: they can easily lead to groupthink, a negative consequence in which no other opinions are considered. Last, the difficulties of collective action are outlined by Allison and Zelikow; these stem from the fact that the GPM considers not unitary actors but different organizations (Allison & Zelikow, 1999).

Besides the concepts mentioned above, the GPM comes with a concise paradigm, which is essential for the analysis of the Fukushima case. The paradigm consists of six main points. The first is that decisions are the result of politics; this is the core of the GPM and once again stresses that decisions are the result of bargaining. Second, as said before, it is important to identify the players of the political 'game', their preferences and goals, and the kind of impact they can have on the final decision. Once this is analysed, one has to look at the actual game that is played: the action channels and the rules of the game can then be determined. Third, the 'dominant inference pattern' goes back to the fact that decisions are the result of bargaining, but this point makes clear that differences and misunderstandings have to be taken into account. Fourth, Allison and Zelikow identify 'general propositions', a term that covers all the concepts examined in the second paragraph of this theory section. Fifth, 'specific propositions' apply to decisions on the use of force and military action. Last is the importance of evidence: when examining crisis decision making, documented timelines, minutes and other accounts are of great importance (Allison & Zelikow, 1999).

Case

In the definition of Prins and Van den Berg (2018), the Fukushima Daiichi disaster can be regarded as a safety case, because it was an unintentional event that caused harm to humans.

The crisis was initiated by an earthquake of magnitude 9.0 on the Richter scale, followed by a tsunami whose waves reached a height of 10 meters. Due to the earthquake, all external power lines, which are needed for cooling the fuel rods, were disconnected. Countermeasures for this situation were in place; however, the seawalls were unable to protect the nuclear plant from flooding, which rendered the backup diesel generators inadequate (Kushida, 2016).

Due to the lack of electricity, the nuclear fuel rods were no longer being cooled, and a 'race for electricity' started. Moreover, the situation inside the reactors was unknown: meltdowns had already occurred in reactors 1 and 2. Because of the risk of explosions, the decision was made to vent the reactors. Nevertheless, hydrogen explosions materialized in reactors 1, 2 and 4, which in turn led to the release of radiation into the environment. To counter the spread of radiation, the essential decision was made to inject seawater into the reactors (Kushida, 2016).

Analysis

This analysis will examine the decision, or decisions, to inject seawater into the damaged reactors. First, a timeline of the decisions will be outlined to build on the case study above. Then the events and decisions will be set against the GPM paradigm and its six main points as described in the theory section.

The need to inject seawater arose after the first stages described in the case study had passed. According to Kushida, government officials and political leaders began voicing the necessity of injecting seawater at 6:00 p.m. on March 12, the day after the earthquake. According to these officials it would have one very positive outcome, namely the cooling of the reactors and the fuel pool. However, the use of seawater might have negative consequences too: it would ruin the reactors because of the salt, and it would produce vast amounts of contaminated water that would be hard to contain (Kushida, 2016). TEPCO experienced many difficulties with cooling the reactors, as described in the case study, because of the lack of electricity. Nevertheless, the company was averse to injecting seawater into the reactors since this would ruin them. Still, after the first hydrogen explosion occurred in reactor 1, TEPCO plant workers started the injection of seawater into this specific reactor (Holt et al., 2012). A day later, on March 13, seawater injection started in reactor 3, and on March 14 in reactor 2 (Holt et al., 2012).

When looking at the decisions made by the government and TEPCO plant workers, it is crucial to consider the chain of decision making by TEPCO leadership too. TEPCO leadership was initially reluctant to inject seawater because of the disadvantages mentioned earlier: the plant would become unusable in the future and vast amounts of contaminated water would be created. The government therefore had to order TEPCO to start injecting seawater, which it did at 8:00 p.m. on March 12. However, Yoshida, the Fukushima Daiichi plant manager, had already started injecting seawater at 7:00 p.m. (Kushida, 2016).

As one can already see, different interests were at play, and the outcome of the eventual decision can well be regarded as a political resultant. It is therefore crucial to examine the chain of decisions through the GPM paradigm. The first point of the paradigm concerns decisions as the result of bargaining, which can clearly be seen in the decision to inject seawater: TEPCO leadership was initially not a proponent of this method, but after government officials ordered it to execute the injection it had no choice. Second, according to the theory, it is important to identify the players of the 'game' and their goals. In this instance three players can easily be pointed out: the government, TEPCO leadership and Yoshida, the plant manager. The government's goal was to keep its citizens safe during the crisis, TEPCO wanted to preserve the reactors as long as possible, and Yoshida wanted to contain the crisis. In that sense there were clearly conflicting goals.

To apply the GPM further to the decision to inject seawater, one can review the comprehensive 'general propositions'. Here miscommunication is a very relevant factor, and it was certainly a big issue in the decision to inject seawater. As said before, Yoshida started injecting seawater before he received approval from his superiors. One might even wonder whether TEPCO leadership misunderstood the crisis, given that it hesitated to inject the seawater necessary to cool the reactors. It can be argued that this hesitation reflects a great deal of misunderstanding, since there was no plant left to save by the time the decision was made.

The fifth and sixth points of the GPM paradigm are less relevant to the decisions made. 'Specific propositions' refer to the use of force, which was not an option in dealing with the Fukushima crisis; the Japanese Self-Defense Forces were dispatched to the plant, but only to provide electricity (Kushida, 2016). The sixth point, evidence, is less of a concern in this case, since scholars, researchers and investigators have written at great length about what happened during the Fukushima crisis, and more than sufficient information is available.

The political bargaining game in the decision to inject seawater into the reactors is clearly visible. The different actors in the game had different goals; eventually the government won the game and the decision to inject seawater was made. Even before that, the plant manager had already begun injecting seawater because the situation was too dire.

Conclusion

This essay reviewed decision making during the Fukushima Daiichi nuclear power plant disaster of March 11, 2011. More specifically, the decision to inject seawater into the reactors to cool them was scrutinized using the Governmental Politics Model. The decision to inject seawater into the reactors was the result of a bargaining game in which different actors with different objectives played the decision-making 'game'.


Tackling misinformation on social media

As the world of social media expands, the rate of misinformation rises as more organisations hop on the bandwagon of utilising the digital realm to their advantage. Twitter, Facebook, Instagram, online forums and other websites have become the pinnacle of news gathering for many individuals. Information is easily accessible to people from all walks of life, meaning that people are becoming better informed about real life issues. Consumers absorb and take in information more easily than ever before, which proves to be equally advantageous and disadvantageous. But there is an evident boundary between misleading and truthful information that is hard to discern without research on the topic. The accuracy of public information is highly questionable, which can easily lead to problems. Despite the ongoing debate about source credibility on any platform, there are ways to tackle the issue through "expertise/competence (i.e., the degree to which a perceiver believes a sender to know the truth), trustworthiness (i.e., the degree to which a perceiver believes a sender will tell the truth as he or she knows it), and goodwill" (Cronkhite & Liska, 1976). This is why it has become critical for information to be accurate, ethical and reliable for consumers. Verifying information is important regardless of the type of social media outlet. This essay will highlight why information needs to fit these criteria.

By putting out credible information, producers prevent and reduce misconceptions, convoluted meanings and inconsistent facts, which reduces the likelihood of issues surfacing and in turn saves time for both the consumer and the producer. The presence of risk raises the issue of how much of this information should be consumed by the public. The perception of source credibility becomes an important concept to analyse within social media, especially in times of crisis, when rationality is reduced and people often take at face value the first thing they see. With the increasing amount of information available through newer channels, the release of information devolves away from professionals in the topic and onto consumers (Haas & Wearden, 2003). Much of the public is unaware that this information is prone to bias and selective sharing, which can communicate the actual facts very differently. One such example is the incident at Tokyo Electric Power Co.'s Fukushima No. 1 nuclear power plant in 2011, where the plant experienced triple meltdowns. A misconception still circulates that food exported from Fukushima is too contaminated with radioactive substances to be healthy or fit to eat. In truth, strict screening shows that the contamination is below the government standard and poses no threat. (arkansa.gov.au) Since the disaster, products shipped from Fukushima have dropped considerably in price and have not recovered, forcing retailers into bankruptcy. (japantimes.co.jp) But thanks to the use of social media and organisations releasing information to the public, Fukushima was able to raise funds and receive help from other countries, for example the U.S. sending $100,000 and China sending emergency supplies as assistance. (theguardian.com) This would have been impossible without the sharing of credible, reliable and ethical information about the country and the social media support spotlighting the incident.

Accurate, ethical and reliable information opens a pathway for producers to secure a relationship with consumers, which can be used to strengthen their businesses and expand their industries further while gaining support from the public. The idea is to have a healthy relationship, free of uneasiness, in which monetary gains and social earnings increase, with social media playing a pivotal role in deciding the route the relationship takes. When this is done incorrectly, organisations can fail because they know little to nothing about the changed dynamics of consumer behaviour in the digital landscape. Consumer informedness means that consumers are well informed about the products or services available, which influences their willingness to make decisions; this increase in consumer informedness can instigate changes in consumer behaviour. (uni-osnabrueck.de) In the absence of accurate, ethical and reliable information, people and organisations will make terrible decisions without hesitation, which leads to losses and steps backwards. As Saul Eslake (Saul-Eslake.com) says, "they will be unable to help or persuade others to make better decisions; and no-one will be able to ascertain whether the decisions made by particular individuals or organisations were the best ones that could have been made at the time". Recently, the YouTuber Shane Dawson made a video that sparked controversy for the company Chuck E. Cheese over pizza slices that do not look like they belong to the same pizza, theorising that parts of the pizzas may have been reheated or recycled from other tables. In response, Chuck E. Cheese spoke to multiple media outlets to debunk the theory: "These claims are unequivocally false. We prep the dough daily for our made to order pizzas, which means they're not always perfectly round, but they are still great tasting." (https://twitter.com/chuckecheeses) It is worth noting that nothing other than pictures backed up the claim that the pizza was reused. The company also went as far as creating a video showing its pizza preparation, and ex-employees spoke up to debunk the theory further. These quick responses averted what could have been a small downturn in sales for the Chuck E. Cheese company. (washingtonpost.com) This event highlights how the release of information can fall in favour of whoever utilises it correctly, and how effective credible information can be, especially when it has the support of others, whether online or in real life. An assumption or guess made when no information is available to draw on is called a 'heuristic value' and is associated with information that has no credibility.

Mass media have been a dominant source of information (Murch, 1971). They are generally assumed to provide credible, valuable and ethical information to the public (Heath, Liao, & Douglas, 1995). However, alongside traditional forms of media, newer media are increasingly used for information seeking and reporting. According to PNAS (www.pnas.org), "The emergence of social media as a key source of news content has created a new ecosystem for the spreading of misinformation. This is illustrated by the recent rise of an old form of misinformation: blatantly false news stories that are presented as if they are legitimate. So-called "fake news" rose to prominence as a major issue during the 2016 US presidential election and continues to draw significant attention." This affects how we as social beings perceive and analyse information we see online compared to real life. Beyond reducing an intervention's effectiveness, failing to distinguish real stories from false ones increases belief in false content, leading to biased and misleading material that fools the audience. One such incident is Michael Jackson's death in June 2009, when he died from acute propofol and benzodiazepine intoxication administered by his doctor, Dr. Murray. (nytimes.com) Much of the public concluded that Michael Jackson had been murdered deliberately, but the court convicted Dr. Murray of involuntary manslaughter, as the doctor maintained that Jackson had begged him for more, a fact overlooked by the general public due to bias. This underlines how information is selectively picked up by the public and how withholding some of it can sway an audience. A study conducted online by Jason and his team (JCMC [CQU]) revealed that Facebook users tended to believe their friends almost instantly, even without a link or proper citation to a website to back up the claim: "Using a person who has frequent social media interactions with the participant was intended to increase the external validity of the manipulation." Whether online information is taken as truth is thus left to the perception of the viewer, which supports the idea that such information is not fully credible unless it comes straight from the source, and underlines the importance of releasing credible information in the first place.

Information has the power to inform, explain and expand on topics and concepts. But it also has the power to create inaccuracies and confusion, which hurts the public and damages the reputations of companies. The goal is to move forward, not backwards. Many companies have gotten themselves into disputes because of incorrect information, disputes that could easily have been avoided by releasing accurate, ethical and reliable information from the beginning. False information can start disputes, and true information can provide resolution. The public has become less attentive to mainstream news altogether, which raises the problem of what can be trusted. Companies and organisations need their information to be as accurate and reliable as possible to counter and reduce this issue. Increased negativity and incivility exacerbate the media's credibility problem: "People of all political persuasions are growing more dissatisfied with the news, as levels of media trust decline." (JCMC [CQU]) In 2010, Dannon released an online statement and false advertisement claiming that its Activia yogurt had "special bacterial ingredients." A consumer named Trish Wiener lodged a complaint against Dannon. The yogurts were being marketed as "clinically" and "scientifically" proven to boost the immune system and help regulate digestion, but the judge saw these statements, which appeared on many other products in the line, as unproven. "This landed the company a $45 million class action settlement." (businessinsider.com) It did not help that Dannon's prices were inflated compared to other yogurts on the market: "The lawsuit claims Dannon has spent 'far more than $100 million' to convey deceptive messages to U.S. consumers while charging 30 percent more than other yogurt products." (reuters.com) This highlights how inaccurate information can cost millions of dollars to settle and resolve. However, it also showed how readily the public can hold irresponsible producers accountable for their actions and give way to justice.


Socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon

Over the last decade, Turkey's cultural sphere has witnessed a wave of Ottomania, a term describing the recent cultural fervor for everything Ottoman. Although this neo-Ottoman cultural phenomenon is not entirely new, having had a previous cycle in the 1980s and 1990s during the heyday of Turkey's political Islam, it now has a rather novel character and a distinct pattern of operation. This revived Ottoman craze is discernible in what I call the neo-Ottoman cultural ensemble: a growing array of Ottoman-themed cultural productions and sites that evoke Turkey's Ottoman-Islamic cultural heritage. For example, the celebration of the 1453 conquest of Istanbul no longer takes place merely as an annual public commemoration by the Islamists,[1] but has been widely promulgated, reproduced, and consumed in various forms of popular culture, such as the Panorama 1453 History Museum; a fun ride called the Conqueror's Dream (Fatih'in Rüyası) at the Vialand theme park; the highly publicized, top-grossing blockbuster The Conquest 1453 (Fetih 1453); and the primetime television costume drama The Conqueror (Fatih). It is the "banal," or "mundane," everyday practice of society itself, rather than the government or state institutions, that distinguishes this emergent form of neo-Ottomanism from its earlier phases.[2]

This is the context in which the concept of neo-Ottomanism has acquired its cultural dimension and analytical currency for comprehending the proliferating neo-Ottoman cultural phenomenon. However, when the concept is employed in contemporary cultural debates, it generally follows two trajectories that are common in the literature of Turkish domestic and foreign politics. These trajectories conceptualize neo-Ottomanism as an Islamist political ideology and/or a doctrine of Turkey’s foreign policy in the post-Cold War era. This essay argues that these two conventional conceptions tend to overlook the complexity and hybridity of Turkey’s latest phase of neo-Ottomanism. As a result, they tend to understand the emergent neo-Ottoman cultural ensemble as merely a representational apparatus of the neoconservative Justice and Development Party’s (AKP; Adalet ve Kalkınma Partisi) ideology and diplomatic strategy.

This essay hence aims to reassess the analytical concept of neo-Ottomanism and the emergent neo-Ottoman cultural ensemble by undertaking three tasks. First, through a brief critique of the concept of neo-Ottomanism, I will discuss its common trajectories and limitations for comprehending the latest phase of the neo-Ottoman cultural phenomenon. My second task is to propose a conceptual move from neo-Ottomanism to Ottomentality by incorporating the Foucauldian perspective of governmentality. Ottomentality is an alternative concept that I deploy here to underscore the overlapping relationship between neoliberal and neo-Ottoman rationalities in the AKP’s government of culture and diversity. I contend that neoliberalism and neo-Ottomanism are inseparable governing rationalities of the AKP and that their convergence has engendered new modes of governing the cultural field as well as regulating inter-ethnic and inter-religious relations in Turkey. And finally, I will reassess the neo-Ottoman cultural ensemble through the analytical lens of Ottomentality. I contend that the convergence of neoliberal and neo-Ottoman rationalities has significantly transformed the relationships of state, culture, and the social. As the cases of the television historical drama Magnificent Century (Muhteşem Yüzyıl) and the film The Conquest 1453 (Fetih 1453) shall illustrate, the neo-Ottoman cultural ensemble plays a significant role as a governing technique that constitutes a new regime of truth based on market mentality and religious truth. It also produces a new subject of citizenry, who is responsible for enacting its right to freedom through participation in the culture market, complying with religious norms and traditional values, and maintaining a difference-blind and discriminatory model of multiculturalism.

A critique of neo-Ottomanism as an analytical concept

Although the concept of neo-Ottomanism has been commonly used in Turkish Studies, it has become a loose term referring to anything associated with the Islamist political ideology, nostalgia for the Ottoman past, and imperialist ambition of reasserting Turkey’s economic and political influence within the region and beyond. Some scholars have recently indicated that the concept of neo-Ottomanism is running out of steam as it lacks meaningful definition and explanatory power in studies of Turkish politics and foreign policy.[3] The concept’s ambiguity and weak analytical and explanatory value are mainly due to divergent, competing interpretations and a lack of critical evaluation within the literature.[4] Nonetheless, despite the concept being equivocally defined, it is most commonly understood in two identifiable trajectories. First, it is conceptualized as an Islamist ideology, responding to the secularist notions of modernity and nationhood and aiming to reconstruct Turkish identity by evoking Ottoman-Islamic heritage as an essential component of Turkish culture. Although neo-Ottomanism was initially formulated by a collaborative group of secular, liberal, and conservative intellectuals and political actors in the 1980s, it is closely linked to the consolidated socio-economic and political power of the conservative middle class. This trajectory considers neo-Ottomanism primarily a form of identity politics and a result of political struggle in opposition to the republic’s founding ideology of Kemalism. Second, it is understood as an established foreign policy framework reflecting the AKP government’s renewed diplomatic strategy in the Balkans, Central Asia, and the Middle East, wherein Turkey plays an active role.
This trajectory regards neo-Ottomanism as a political doctrine (often referring to Ahmet Davutoglu’s Strategic Depth serving as the guidebook for Turkey’s diplomatic strategy in the 21st century), which sees Turkey as a “legitimate heir of the Ottoman Empire”[5] and seeks to reaffirm Turkey’s position in the changing world order in the post-Cold War era.[6]

As a result of a lack of critical evaluation of the conventional conceptions of neo-Ottomanism, contemporary cultural analyses have largely followed the “ideology” and “foreign policy” trajectories as explanatory guidance when assessing the emergent neo-Ottoman cultural phenomenon. I contend that the neo-Ottoman cultural phenomenon is more complex than what these two trajectories offer to explain. Analyses that adopt these two approaches tend to run a few risks. First, they tend to perceive neo-Ottomanism as a monolithic imposition upon society. They presume that this ideology, when inscribed onto domestic and foreign policies, somehow has a direct impact on how society renews its national interest and identity.[7] And they tend to understand the neo-Ottoman cultural ensemble as merely a representational device of the neo-Ottomanist ideology. For instance, Şeyda Barlas Bozkuş, in her analyses of the Miniatürk theme park and the 1453 Panorama History Museum, argues that these two sites represent the AKP’s “ideological emphasis on neo-Ottomanism” and “[create] a new class of citizens with a new relationship to Turkish-Ottoman national identity.”[8] Second, contemporary cultural debates tend to overlook the complex and hybrid nature of the latest phase of neo-Ottomanism, which rarely operates on its own, but more often relies on and converges with other political rationalities, projects, and programs. As this essay shall illustrate, when closely examined, the current configuration of neo-Ottomanism is more likely to reveal internal inconsistencies as well as a combination of multiple and intersecting political forces.

Moreover, as a consequence of the two risks mentioned above, contemporary cultural debates may have overlooked some of the symptomatic clues and, hence, underestimated the socio-political significance of the latest phase of neo-Ottomanism. A major symptomatic clue that is often missed in cultural debates on the subject is culture itself. Insufficient attention has been paid to the AKP’s rationale of reconceptualizing culture as an administrative matter—a matter that concerns how culture is to be perceived and managed, by what culture the social should be governed, and how individuals might govern themselves with culture. At the core of the AKP government’s politics of culture and neoliberal reform of the cultural field is the question of the social.[9] Its reform policies, projects, and programs are a means of constituting a social reality and directing social actions. When culture is aligned with neoliberal governing rationality, it redefines a new administrative culture and new rules and responsibilities of citizens in cultural practices. Culture has become not only a means to advance Turkey in global competition,[10] but also a technology for managing the diversifying culture resulting from the process of globalization. As Brian Silverstein notes, “[culture] is among other things and increasingly to be seen as a major target of administration and government in a liberalizing polity, and less a phenomenon in its own right.”[11] While many studies acknowledge the AKP government’s neoliberal reform of the cultural field, they tend to regard neo-Ottomanism as primarily an Islamist political agenda operating outside of the neoliberal reform. It is my conviction that neoliberalism and neo-Ottomanism are inseparable political processes and rationalities, which have merged and engendered new modalities of governing every aspect of cultural life in society, including minority cultural rights, freedom of expression, individuals’ lifestyle, and so on.
Hence, by overlooking the “centrality of culture”[12] in relation to the question of the social, contemporary cultural debates tend to oversimplify the emergent neo-Ottoman cultural ensemble as nothing more than an ideological machinery of the neoconservative elite.

From neo-Ottomanism to Ottomentality

In order to more adequately assess the socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon, I propose a conceptual shift from neo-Ottomanism to Ottomentality. This shift involves not only rethinking neo-Ottomanism as a form of governmentality, but also thinking neoliberal and neo-Ottoman rationalities in collaborative terms. Neo-Ottomanism is understood here as Turkey’s current form of neoconservatism, a prevalent political rationality whose governmental practices are not solely based on Islamic values, but also draw from and produce a new political culture that considers Ottoman-Islamic toleration and pluralism the foundation of modern liberal multiculturalism in Turkey. Neoliberalism, in the same vein, far from being a totalizing concept describing an established set of political ideology or economic policy, is conceived here as a historically and locally specific form of governmentality that must be analyzed by taking into account the multiple political forces that gave it its unique shape in Turkey.[13] My claim is that when these two rationalities merge in the cultural domain, they engender a new art of government, which I call the government of culture and diversity.

This approach is therefore less concerned with a particular political ideology or the question of “how to govern,” and more with the “different styles of thought, their conditions of formation, the principles and knowledges that they borrow from and generate, the practices they consist of, how they are carried out, their contestations and alliances with other arts of governing.”[14] In light of this view, and for a practical purpose, Ottomentality is an alternative concept that I attempt to develop here to avoid the ambiguous meanings and analytical limitations of neo-Ottomanism. The concept underscores the convergence of neoliberal and neo-Ottoman rationalities as well as the interrelated discourses, projects, policies, and strategies that are developed around them for regulating cultural activities and directing inter-ethnic and inter-religious relations in Turkey. It pays attention to the techniques and practices that have significant effects on the relationships of state, culture, and the social. It is concerned with the production of knowledge, or truth, on the basis of which a new social reality of ‘freedom,’ ‘tolerance,’ and ‘multiculturalism’ in Turkey is constituted. Furthermore, it helps to identify the type of political subject whose demand for cultural rights and participatory democracy is reduced to market terms and a narrow understanding of multiculturalism, and whose criticism of this new social reality is increasingly subjected to judicial exclusion and discipline.

I shall note that Ottomentality is an authoritarian type of governmentality—a specific type of illiberal rule operated within the structure of modern liberal democracy. As Mitchell Dean notes, although the literature on governmentality has focused mainly on liberal democratic rules that are practiced through the individual subjects’ active role (as citizens) and exercise of freedom, there are also “non-liberal and explicitly authoritarian types of rule that seek to operate through obedient rather than free subjects, or, at a minimum, endeavor to neutralize any opposition to authority.”[15] He suggests that a useful way to approach this type of governmentality would be to identify the practices and rationalities which “divide” or “exclude” those who are subjected to be governed.[16] According to Foucault’s notion of “dividing practices,” “[t]he subject is either divided inside himself or divided from others. This process objectivizes him. Examples are the mad and the sane, the sick and the healthy, the criminals and the ‘good boys’.”[17] Turkey’s growing neo-Ottoman cultural ensemble can be considered as such exclusionary practices, which seek to regulate the diversifying culture by dividing the subjects into categorical, if not polarized, segments based on their cultural differences. For instance, mundane practices such as going to the museums and watching television shows may produce subject positions which divide subjects into such categories as the pious and the secular, the moral and the degenerate, and the Sunni-Muslim-Turk and the ethno-religious minorities.

Reassessing the neo-Ottoman cultural ensemble through the lens of Ottomentality

In this final section, I propose a reassessment of the emergent neo-Ottoman cultural ensemble by looking beyond the conventional conceptions of neo-Ottomanism as “ideology” and “foreign policy.” Using the analytical concept of Ottomentality, I aim to examine the state’s changing role and governing rationality in culture, the discursive processes of knowledge production for rationalizing certain practices of government, and the techniques of constituting a particular type of citizenry who acts upon themselves in accordance with the established knowledge/truth. Nonetheless, before proceeding to an analysis of the government of culture and diversity, a brief overview of the larger context in which the AKP’s Ottomentality took shape would be helpful.

Context

Since the establishment of the Turkish republic, the state has played a major role in maintaining a homogeneous national identity by suppressing public claims of ethnic and religious differences through militaristic intervention. The state’s strict control of cultural life in society, in particular its assertive secularist approach to religion and ethnic conception of Turkish citizenship, resulted in unsettling tensions between ethno-religious groups in the 1980s and 1990s, e.g. the Kurdish question and the 1997 “soft coup.” These social tensions indicated the limits of state-led modernization and secularization projects in accommodating ethnic and pious segments of society.[18] This was also a time when Turkey began to witness the declining authority of the founding ideology of Kemalism as an effect of economic and political liberalization. When the AKP came to power in 2002, one of the most urgent political questions was thus “the limits of what the state can—or ought for its own good—reasonably demand of citizens […] to continue to make everyone internalize an ethnic conception of Turkishness.”[19] At this political juncture, it was clear that a more inclusive socio-political framework was necessary in order to mitigate the growing tension resulting from identity claims.

Apart from domestic affairs, a few vital transnational initiatives also played a part in the AKP’s formulation of neoliberal and neo-Ottoman rationalities. First, in the aftermath of the attacks in New York on September 11 (9/11) in 2001, the Middle East and Muslim communities around the world became the target of intensified political debates. In the midst of anti-Muslim and anti-terror propaganda, Turkey felt a need to rebuild its image by aligning with the United Nations’ (UN) resolution of “The Alliance of Civilizations,” which called for cross-cultural dialogue between countries through cultural exchange programs and transnational business partnership.[20] Turkey took on the leading role in this resolution and launched extensive developmental plans designated to rebuild Turkey’s image as a civilization of tolerance and peaceful co-existence.[21] The Ottoman-Islamic civilization, known for its legacy of cosmopolitanism and ethno-religious toleration, hence became an ideal trademark of Turkey for the project of “alliance of civilizations.”[22]

Second, Turkey’s accelerated EU negotiation between the late 1990s and mid 2000s provided a timely opportunity for the newly elected AKP government to launch “liberal-democratic reform,”[23] which would significantly transform the way culture was to be administered. Culture, among the prioritized areas of administrative reform, was now reorganized to comply with the EU integration plan. By incorporating the EU’s aspect of culture as a way of enhancing “freedom, democracy, solidarity and respect for diversity,”[24] the AKP-led national cultural policy would shift away from the state-centered, protectionist model of the Kemalist establishment towards one that highlights “principles of mutual tolerance, cultural variety, equality and opposition to discrimination.”[25]

Finally, the selection of Istanbul as the 2010 European Capital of Culture (ECoC) is particularly worth noting, as this event enabled local authorities to put into practice the neoliberal and neo-Ottoman governing rationalities through extensive urban projects and branding techniques. By sponsoring and showcasing different European cities each year, the ECoC program aims at promoting a multicultural European identity beyond national borders.[26] The 2010 Istanbul ECoC was an important opportunity for Turkey not only to promote its EU candidacy, but also for the local governments to pursue urban developmental projects.[27] Some of the newly formed Ottoman-themed cultural sites and productions were part of the ECoC projects for branding Istanbul as a cultural hub where East and West meet. It is in this context that the interplay between the neoliberal and neo-Ottoman rationalities can be vividly observed in the form of the neo-Ottoman cultural ensemble.

Strong state, culture, and the social

Given the contextual background mentioned above, one could argue that the AKP’s neoliberal and neo-Ottoman rationalities arose as critiques of the republican state’s excessive intervention in society’s cultural life. The transnational initiatives that required Turkey to adopt a liberal democratic paradigm have therefore given way to the formulation and convergence of these two forms of governmentality, which would significantly challenge the state-centered approach to culture as a means of governing the social. However, it would be inaccurate to claim that the AKP’s prioritization of private initiatives in cultural governance has effectively decentralized or democratized the cultural domain away from the state’s authoritarian intervention and narrow definition of Turkish culture. Deregulation of culture entails sophisticated legislation concerning the roles of the state and civil society in cultural governance. Hence, for instance, the law of promotion of culture, the law of media censorship, and the new national cultural policy prepared by the Ministry of Culture and Tourism explicitly indicate not only a new vision of national culture, but also the roles of the state and civil society in promoting and preserving national culture. It shall be noted that culture as a governing technology is not an invention of the AKP government. Culture has always been a major area of administrative concern throughout the history of the Turkish republic. As Murat Katoğlu illustrates, during the early republic, culture was conceptualized as part of the state-led “public service” aimed to inform and educate the citizens.[28] Arts and culture were essential means for modernizing the nation; for instance, the state-run cultural institutions, i.e.
state ballet, theater, museum, radio and television, “[indicate] the type of modern life style that the government was trying to advocate.”[29] Nonetheless, the role of the state, the status of culture, and the techniques of managing it have been transformed as Turkey undergoes neoliberal reform. In addition, Aksoy suggests that what distinguishes the AKP’s neoliberal mode of cultural governance from that of the early republic’s modernization project is that market mentality has become the administrative norm.[30] Culture is now reconceptualized as an asset for advancing Turkey in global competition and a site for exercising individual freedom, rather than a mechanism of social engineering. And Turkey’s heritage of Ottoman-Islamic civilization in particular is utilized as a nation-branding technique to enhance Turkey’s economy, rather than treated as a corrupt past to be forgotten. To achieve the aim of efficient, hence good, governance, the AKP’s cultural governance has relied heavily on privatization as a means to limit state intervention. Thus, privatization has not only transformed culture into an integral part of the free market, but also redefined the state’s role as a facilitator of the culture market, rather than the main provider of cultural services to the public.

The state’s withdrawal from cultural service and prioritization of civil society in taking on the initiatives of preserving and promoting Turkish “cultural values and traditional arts”[31] led to an immediate effect: the declining authority of the Kemalist cultural establishment. Since many of the previously state-run cultural institutions are now managed with a corporate mentality, they begin to lose their status as state-centered institutions and the significance they once had in defining and maintaining a homogeneous Turkish culture. Instead, these institutions, together with other newly formed cultural sites and productions by private initiatives, are converted into a marketplace or cultural commodities in competition with each other. Hence, privatization of culture leads to the following consequences: First, it weakens and hollows out the 20th-century notion of the modern secular nation state, which sets a clear boundary confining religion within the private sphere. Second, it gives way to the neoconservative force, which “models state authority on [religious] authority, a pastoral relation of the state to its flock, and a concern with unified rather than balanced or checked state power.”[32] Finally, it converts social issues that result from political actions into market terms and a mere matter of culture, which is now left to personal choice.[33] As a result, far from producing a declining state, Ottomentality has constituted a strong state. In particular, neoliberal governance of the cultural field has enabled the ruling neoconservative government to mobilize a new set of political truths and norms for directing inter-ethnic and inter-religious relations in society.

New regime of truth

Central to Foucault’s notion of governmentality is “truth games”[34]—referring to the activities of knowledge production through which particular thoughts are rendered truthful and practices of government are made reasonable.[35] What Foucault calls the “regime of truth” is not concerned with facticity, but with a coherent set of practices that connect different discourses and make sense of the political rationalities marking the “division between true and false.”[36] The neo-Ottoman cultural ensemble is a compelling case through which the AKP’s investment of thought, knowledge production, and truth telling can be observed. Two cases are particularly worth mentioning here as I work through the politics of truth in the AKP’s neoliberal governance of culture and neo-Ottoman management of diversity.

Between 2011 and 2014, the Turkish television historical drama Magnificent Century (Muhteşem Yüzyıl, Muhteşem hereafter), featuring the life of the Ottoman Sultan Süleyman, known for his legislative establishment in the 16th-century Ottoman Empire, attracted wide viewership in Turkey and abroad, especially in the Balkans and the Middle East. Although the show played a significant role in generating international interest in Turkey’s tourism, culinary culture, Ottoman-Islamic arts and history, etc. (which are the fundamental aims of the AKP-led national cultural policy to promote Turkey through arts and culture, including media export),[37] it received harsh criticism from some Ottoman(ist) historians and a warning from the RTUK (Radio and Television Supreme Council, a key institution of media censorship and regulation in Turkey). The criticism included the show’s misrepresentation of the Sultan as a hedonist and its harm to the moral and traditional values of society. Oktay Saral, an AKP deputy of Istanbul at the time, petitioned the parliament for a law to ban the show. He said, “[The] law would […] show filmmakers [media practitioners] how to conduct their work in compliance with Turkish family structure and moral values without humiliating Turkish youth and children.”[38] Recep Tayyip Erdoğan (Prime Minister then) also stated, “[those] who toy with these [traditional] values would be taught a lesson within the premises of law.”[39] After his statement, the show was removed from the in-flight channels of national flag carrier Turkish Airlines.

Another popular media production, the 2012 blockbuster The Conquest 1453 (Fetih 1453, Fetih hereafter), which was acclaimed for its success at the domestic and international box office, also generated a mixed reception among Turkish and foreign audiences. Some critics in Turkey and European Christians criticized the film for its selective interpretation of the Ottoman conquest of Constantinople and offensive portrayal of the (Byzantine) Christians. The Greek weekly To Proto Thema denounced the film, claiming it served as “conquest propaganda by the Turks” and “[failed] to show the mass killings of Greeks and the plunder of the land by the Turks.”[40] A Turkish critic also commented that the film portrays the “extreme patriotism” in Turkey “without any hint of […] tolerance sprinkled throughout [the film].”[41] Furthermore, a German Christian association campaigned to boycott the film. The AKP officials, on the contrary, praised the film for its genuine representation of the conquest. As Bülent Arınç (Deputy Prime Minister then) stated, “This is truly the best film ever made in the past years.”[42] He also responded to questions regarding the film’s historical accuracy: “This is a film, not a documentary. The film in general fairly represents all the events that occurred during the conquest as the way we know it.”[43]

When Muhteşem and Fetih are examined within the larger context in which the neo-Ottoman cultural ensemble is formed, the connections between particular types of knowledge and governmental practice become apparent. First, the cases of Muhteşem and Fetih reveal the saturation of market rationality as the basis for a new model of cultural governance. When culture is administered in market terms, it becomes a commodity for sale and promotion as well as a set of indicators for measuring the performance of cultural governance. When Turkey’s culture, in particular its Ottoman-Islamic cultural heritage, is converted into an asset and national brand to advance the country in global competition, the reputation and capital it generates become indicators of Turkey’s economic development and progress. The overt emphasis on economic growth, according to Irving Kristol, is one of the distinctive features that differentiate the neoconservatives from their conservative predecessors. He suggests that, for the neoconservatives, economic growth is what gives “modern democracies their legitimacy and durability.”[44] In the Turkish context, the rising neoconservative power, which consisted of a group of Islamists and secular, liberal intellectuals and entrepreneurs (at least in the early years of the AKP’s rule), had consistently focused on boosting Turkey’s economy. For them, economic development seems to have become the appropriate way of making “conservative politics suitable to governing a modern democracy.”[45] Henceforth, such high-profile cultural productions as Muhteşem and Fetih are valuable assets that serve the primary aim of the AKP-led cultural policy, because they contribute to growth in the related areas of tourism and the culture industry by promoting Turkey at the international level. Based on market rationality, as long as culture generates productivity and profit, the government is deemed to be governing well.
In other words, when neoliberal and neoconservative forces converge at the cultural domain, both culture and good governance are reduced to and measured by economic growth, which has become a synonym for democracy “equated with the existence of formal rights, especially private property rights; with the market; and with voting,” rather than political autonomy.[46]

Second, the AKP officials’ applause for Fetih on the one hand and criticism of Muhteşem on the other demonstrates their assertion of the moral-religious authority of the state. As the notion of nation state sovereignty has become weakened by the processes of economic liberalization and globalization, the boundary that separates religion and state has become blurred. As a result, religion becomes “de-privatized” and surges back into the public sphere.[47] This blurred boundary between religion and state has enabled the neoconservative AKP to establish links between religious authority and state authority as well as between religious truth and political truth.[48] These links are evident in the AKP officials’ various public statements declaring the government’s moral mission of sanitizing Turkish culture in accordance with Islamic and traditional values. For instance, as Erdoğan once reacted to his secular opponent’s comment about his interference in politics with religious views, “we [AKP] will raise a generation that is conservative and democratic and embraces the values and historical principles of its nation.”[49] On this view, despite Muhteşem’s contribution to growth in the culture and tourism industries, it became subject to censorship and legal action because its content did not comply with the governing authority’s moral mission. The controversy of Muhteşem illustrates the rise of a religion-based political truth in Turkey, which sees Islam as the main reference for directing society’s moral conduct and individual lifestyle. Henceforth, by rewarding desirable actions (i.e. with sponsorship law and tax incentives)[50] and punishing undesirable ones (i.e. through censorship, media bans, and jail terms for media practitioners’ misconduct), the AKP-led reform of the cultural field constitutes a new type of political culture and truth—one that is based on moral-religious views rather than rational reasoning.

Moreover, the AKP officials’ support for Fetih reveals their investment in a neo-Ottomanist knowledge, which regards the 1453 Ottoman conquest of Constantinople as the foundation of modern liberal multiculturalism in Turkey. This knowledge perceives Islam as the centripetal force for enhancing social cohesion by transcending differences among faith and ethnic groups. It rejects candid and critical interpretations of history and insists on a singular view of Ottoman-Islamic pluralism and a pragmatic understanding of the relationship between religion and state.[51] It does not require historical accuracy, since religious truth is cast as historical and political truth. For instance, a consistent, singular narrative of the conquest can be observed in such productions and sites as the Panorama 1453 History Museum, the television series Fatih, and the TRT children’s program Çınar. This narrative begins with Prophet Muhammad’s prophecy, which he received from the almighty Allah, that Constantinople would be conquered by a great Ottoman soldier. When history is narrated from a religious point of view, it becomes indisputable, as challenging it would imply a challenge to religious truth, hence Allah’s will. Nonetheless, the neo-Ottomanist knowledge conceives of the conquest not only as an Ottoman victory in the past, but as an incontestable living truth in Turkey’s present. As Nevzat Bayhan, former general manager of Culture Inc. in association with the Istanbul Metropolitan Municipality (İBB Kültür A.Ş.), stated at the opening ceremony of Istanbul’s Panorama 1453 History Museum,

The conquest [of Istanbul] is not about taking over the city… but to make the city livable… and its populace happy. Today, Istanbul continues to present to the world as a place where Armenians, Syriacs, Kurds… Muslims, Jews, and Christians peacefully live together.[52]

Bayhan’s statement illustrates the significance of the 1453 conquest in the neo-Ottomanist knowledge because it marks the foundation of a culture of tolerance, diversity, and peaceful coexistence in Turkey. While the neo-Ottomanist knowledge may conveniently serve the branding purpose in the post-9/11 and ECoC contexts, I maintain that it more significantly rationalizes the governmental practices in reshaping the cultural conduct and multicultural relations in Turkey. The knowledge also produces a political norm of indifference—one that is reluctant to recognize ethno-religious differences among the populace, uncritical of the limits of Islam-based toleration and multiculturalism, and, more seriously, indifferent to state-sanctioned discrimination and violence against the ethno-religious minorities.

Ottomentality and its subject

The AKP’s practices of the government of culture and diversity constitute what Foucault calls the “technologies of the self—ways in which human beings come to understand and act upon themselves within certain regimes of authority and knowledge, and by means of certain techniques directed to self-improvement.”[53] The AKP’s neoliberal and neo-Ottoman rationalities share a similar aim: both seek to produce a new ethical code of social conduct and transform Turkish society into one that is economically liberal and culturally conservative. They deploy different means to direct the governed in certain ways so as to achieve the desired outcome. According to Foucault, the neoliberal style of government is based on the premise that “individuals should conduct their lives as an enterprise [and] should become entrepreneurs of themselves.”[54] Central to this style of government is the production of freedom—referring to the practices that are employed to produce the necessary condition for individuals to be free and take on the responsibility of caring for themselves. For instance, Nikolas Rose suggests that consumption, a form of governing technology, is often deployed to provide individuals with a variety of choices for exercising freedom and self-improvement. As such, the subject citizens are now “active,” or “consumer,” citizens, who understand their relationship with the others and conduct their life based on market mentality.[55] Unlike the republican citizens, whose rights, duties, and obligations are primarily bound to the state, citizens as consumers “[are] to enact [their] democratic obligations as a form of consumption”[56] in the private sphere of the market.

The AKP’s neoliberal governance of culture has hence invested in liberalizing the cultural field by transforming it into a marketplace, creating a condition wherein citizens can enact their right to freedom and act upon themselves as a form of investment. The proliferation of the neo-Ottoman cultural ensemble in this regard can be understood as a new technology of the self, as it creates a whole new field for the consumer citizens to exercise their freedom of choice (of identity, taste, and lifestyle) by providing them with a variety of trendy Ottoman-themed cultural products, ranging from fashion to entertainment. This ensemble also constitutes a whole new imagery of the Ottoman legacy with which the consumer citizens may identify. Therefore, through participation in the cultural field, as artists, media practitioners, intellectuals, sponsors, or consumers, citizens are encouraged to think of themselves as free agents, and their actions become a means for acquiring the necessary cultural capital to become cultivated and competent actors in the competitive market. This new technology of the self has also transformed the republican notion of Turkish citizenship into one that is activated upon individuals’ freedom of choice through cultural consumption at the marketplace.

Furthermore, as market mechanisms enhance the promulgation of moral-religious values, the consumer citizens are also offered a choice of identity as virtuous citizens, who should conduct their life and their relationship with the others based on Islamic traditions and values. Again, the public debate over the portrayal of the revered Sultan Süleyman as a hedonist in Muhteşem and the legal actions against the television producer are exemplary of the disciplinary techniques for shaping individuals’ behaviors in line with conservative values. While consumer citizens exercise their freedom through cultural consumption, they are also reminded of their responsibility to preserve traditional moral values, family structure, and gender relations. Those who deviate from the norm are subjected to public condemnation and punishment.

Finally, as the neo-Ottomanist cultural ensemble reproduces and mediates a neo-Ottomanist knowledge in such commodities as the film Fetih and the Panorama 1453 History Museum, consumer citizens are exposed to a new set of symbolic meanings of Ottoman-Islamic toleration, pluralism, and peaceful coexistence, albeit through a view of the Ottoman past fixated on its magnificence rather than its monstrosity.[57] This knowledge sets the ethical code for private citizens to think of themselves in relation to the other ethno-religious groups based on a hierarchical social order, which subordinates minorities to the rule of Sunni Islamic government. When this imagery of magnificence serves as the central component in nation branding, such as to align Turkey with the civilization of peace and co-existence in the post-9/11 and ECoC contexts, it encourages citizens to take pride in and identify with their Ottoman-Islamic heritage. As such, Turkey’s nation branding perhaps also can be considered a novel technology of the self, as it requires citizens, be they business sectors, historians, or filmmakers, to take on an active role in building an image of tolerant and multicultural Turkey through arts and culture. It is in this regard that I consider the neo-Ottoman rationality a form of “indirect rule of diversity”[58] as it produces a citizenry that actively participates in the reproduction of neo-Ottomanist historiography and remains uncritical of the “dark legacy of the Ottoman past.”[59] Consequently, Ottomentality has produced a type of subject that is constantly subjected to dividing techniques “that will divide populations and exclude certain categories from the status of the autonomous and rational person.”[60]

