Tuesday, November 26, 2019

positive effects of gene altering essays

The Positive Effects of Gene Altering
Since the beginning of the human race, we have been looking. We have been looking for ways to make our lives healthier, more comfortable, and happier. In the beginning it was simple rocks, plants, and fires. As our technology advanced, so did the comfort of our lives. The wheel, the cure for the plague, and, who can forget, the remote control were all tools that made it possible to improve the quality of life. What tool lies ahead in the future to promote our well-being and happiness? Genetic engineering is that tool. Every living thing is made up of genes, and with the capability of altering these genes, the possibilities are endless. Everything from better-quality produce to the prevention of cancer is a possibility with genetic engineering, and scientists are just now beginning to understand the complex gene patterns. Imagine a world free of diabetes, or of male pattern baldness; genetics has a major role in both. Genetic engineers might someday have the capability to remove these genes or even clone wanted genes, in the end allowing us to live the healthy, comfortable, happier lives we seek. The number of positive outcomes from genetic engineering is almost inconceivable. Genetic engineering will lead to healthier, more comfortable, and better lives. Genetic engineering will improve everyday produce and goods. For producers who work with living organisms as their products, genes play a major role in the quality of those products and the amount of profit. If a farmer's cows are not as lean, or their corn is diseased, then the demand for their product is going to be less than the competition's. That is where genetics comes in. It is possible, by altering certain genes, to create a leaner cow or a disease-resistant stalk of corn, and it is this fact that makes genetic engineering invaluable to the everyday farmer. If their cattle are leaner, or their chickens are engineered to...

Saturday, November 23, 2019

Identify the Ash

Identify the Ash
An ash tree commonly refers to trees of the genus Fraxinus (from the Latin for "ash tree") in the olive family Oleaceae. The ashes are usually medium to large trees, mostly deciduous, though a few subtropical species are evergreen. Identification of ash during the spring/early summer growing season is straightforward. Their leaves are opposite (rarely in whorls of three) and mostly pinnately compound, but can be simple in a few species. The seeds, popularly known as keys or helicopter seeds, are a type of fruit known as a samara. The genus Fraxinus contains 45-65 species worldwide.

The Common North American Ash Species
Green and white ash trees are the two most common ash species, and their range covers most of the Eastern United States and Canada. Other ash species covering significant ranges are black ash, Carolina ash, and blue ash. Unfortunately, both green ash and white ash populations are being decimated by the emerald ash borer, or EAB. Discovered in 2002 near Detroit, Michigan, the boring beetle has spread through much of the northern ash range and threatens billions of ash trees.

Dormant Identification
Ash has shield-shaped leaf scars (at the point where the leaf breaks away from the twig). The tree has tall, pointed buds above the leaf scars. There are no stipules on ash trees, so no stipule scars. The tree in winter has pitchfork-like limb tips, and there may be long, narrow, clustered winged seeds or samaras. Ash has continuous bundle scars inside the leaf scar, which looks like a smiley face. Important: the leaf scar is the major botanical feature when keying a green or white ash. The white ash will have a U-shaped leaf scar with the bud inside the dip; the green ash will have a D-shaped leaf scar with the bud sitting atop the scar. Leaves: opposite, pinnately compound, without teeth. Bark: gray and furrowed. Fruit: a single winged key hanging in clusters.

The Most Common North American Hardwood List
ash - Genus Fraxinus; beech - Genus Fagus; basswood - Genus Tilia; birch - Genus Betula; black cherry - Genus Prunus; black walnut/butternut - Genus Juglans; cottonwood - Genus Populus; elm - Genus Ulmus; hackberry - Genus Celtis; hickory - Genus Carya; holly - Genus Ilex; locust - Genus Robinia and Gleditsia; magnolia - Genus Magnolia; maple - Genus Acer; oak - Genus Quercus; poplar - Genus Populus; red alder - Genus Alnus; royal paulownia - Genus Paulownia; sassafras - Genus Sassafras; sweetgum - Genus Liquidambar; sycamore - Genus Platanus; tupelo - Genus Nyssa; willow - Genus Salix; yellow-poplar - Genus Liriodendron
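As a playful restatement of the green/white ash keying rule given under Dormant Identification, the hedged Python sketch below turns the leaf-scar clues into a tiny decision function. The function, its inputs, and the idea of keying a twig in code are invented purely for illustration and are no substitute for a field guide; the Latin names are the standard ones for white and green ash.

```python
# Toy dormant-season key for the two most common ashes, based only on the
# leaf-scar rule stated above: U-shaped scar with the bud inside the dip = white ash;
# D-shaped scar with the bud sitting atop the scar = green ash.
def key_common_ash(leaf_scar_shape: str, bud_position: str) -> str:
    """Return a best guess for a dormant ash twig, or a fallback if the clues conflict."""
    shape = leaf_scar_shape.strip().upper()
    bud = bud_position.strip().lower()
    if shape == "U" and bud == "inside the dip":
        return "white ash (Fraxinus americana)"
    if shape == "D" and bud == "atop the scar":
        return "green ash (Fraxinus pennsylvanica)"
    return "unknown ash - check other features (opposite branching, samaras, furrowed gray bark)"

if __name__ == "__main__":
    print(key_common_ash("U", "inside the dip"))   # -> white ash
    print(key_common_ash("D", "atop the scar"))    # -> green ash
```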

Thursday, November 21, 2019

The Rise and Fall of the Berlin Wall Essay Example | Topics and Well Written Essays - 2750 words

The Rise and Fall of the Berlin Wall - Essay Example
Other allies entered the war, and it grew into a world war. Germany wanted Britain, but Britain could not allow Hitler to take it. The war ended with the entrance of America and the subsequent defeat of Japan through the bombing of Hiroshima and Nagasaki. The war had grown so great in magnitude that not even the deaths of Hitler and his ally Benito Mussolini of Italy would alone have ended it, and Germany was left unstable without the firm rule of the Nazi regime that had held the country together under authoritarian rule. After the end of the war, the territory that could still be termed Nazi Germany was divided into four occupation zones according to the Potsdam agreement. Each of the regions was occupied by one of the allied powers: the Americans, British, French and Soviets. The former capital of Germany, Berlin, was occupied by the allied powers and acted as their centre of control for the whole region. It was also subdivided into four zones, although the city itself lay in the zone occupied by the Soviets. The intention of the agreement that had led to the division of Germany among the powers was to govern the country together as one. But immediately after the war there was growing tension between the Soviet Union, which was working to fill the superpower vacuum in the world, and the allied forces led by America. The era of the cold war had just set in. The advent of the cold war saw increasing tension between the forces that had divided Germany among themselves. The French, British and American zones were brought together to form the Federal Republic of Germany and West Berlin. On the other hand, the region which was under Soviet rule formed the German Democratic Republic, which included East Berlin. Therefore the main force behind the division of Germany was the growing differences emerging between the allied forces and the Soviet Union, based mainly on ideological differences between the two sides (Maddrell, 2006).

Growing differences between West and East
The cold war was purely based on the ideological differences between the USA and the Soviet Union, which was then led by Russia. These were the main differences that had led to the eruption of the cold war.

Tuesday, November 19, 2019

Female genital mutilation and the practice of midwifery Dissertation

Female genital mutilation and the practice of midwifery - Dissertation Example
The practices of FGM seem barbaric and cruel to Western society, yet in the societies that maintain them, such practices are carried out in the belief that there is a benefit in taking the sexual arousal mechanisms from women in order to make them less carnal and more proper. The difficulty comes in trying to honour cultures for their beliefs while motivating them to change those beliefs because of the false and dangerous consequences where female sex organs are concerned. Midwifery requires the acceptance of beliefs in concert with the application of good medical and traditional knowledge where childbirth is concerned. Consulting and caring for women who have had FGM requires sensitivity to the cultural beliefs together with a firm understanding of how such procedures affect women in reference to their procreative lives. Psychological and medical knowledge is necessary to treat women with both respect and dignity despite any opposing beliefs on the subject. While ideally it would be beneficial to abolish the act of FGM, at this point in time it is still a potential problem that might arise when caring for patients from certain cultures or who come from a history of traumatic circumstances that ended in FGM. The following research proposal will explore the potential for a project in which the subject of FGM is examined through victims of the procedures, through the medical consequences that midwives face when dealing with patients who are victims of FGM, and through examining the balance between the victimisation of women and the cultural belief systems that must be honoured and respected while finding ways to deal with the consequences and offer reparative solutions where possible.

1.2 Background
Female genital mutilation, also known as female genital cutting and female circumcision, has been defined by the World Health Organisation as "all procedures that involve the partial or total removal of female genitalia, or other injury to the female genital organs for non-medical reasons". Unlike male circumcision, there are no health benefits to female circumcision, and it often contributes to urination difficulties or to difficulty in childbirth later in life. The procedure most often occurs between birth and the age of 15 and is considered a violation against women by world organisations across agencies. An estimated 100 to 140 million women live with the consequences of the procedure, 92 million of them on the African continent. There are four major types of FGM. These types are as follows:
• Clitoridectomy: partial or total removal of the clitoris (a small, sensitive and erectile part of the female genitals) and, in very rare cases, only the prepuce (the fold of skin surrounding the clitoris).
• Excision: partial or total removal of the clitoris and the labia minora, with or without excision of the labia majora (the labia are "the lips" that surround the vagina).
• Infibulation: narrowing of the vaginal opening through the creation of a covering seal. The seal is formed by cutting and repositioning the inner, or outer, labia, with or without removal of the clitoris.
• Other: all other harmful procedures to the female genitalia for non-medical purposes, e.g. pricking, piercing, incising, scraping and cauterizing the genital area.

Sunday, November 17, 2019

Battle of Trafalgar Essay Example for Free

Battle of Trafalgar Essay
The Battle of Trafalgar was the most significant battle won by the British against the combined forces of the French and Spanish fleets during the Napoleonic Wars. This battle also had a significant impact on the concept of navigation within the naval doctrine of war. It proved that tactical unorthodoxy could win battles; even though you might be outmanned and outgunned by your opponent, you can still win by deviating from the old naval doctrine. This battle was part of a much larger campaign, the Trafalgar campaign, which included several different battles that led up to the final battle at Trafalgar. The campaign was a long and complicated series of fleet maneuvers carried out by the combined French and Spanish fleets and the opposing moves of the British Royal Navy during much of 1805. These were the culmination of French plans to force a passage through the English Channel and so achieve a successful invasion of the United Kingdom. The plans were extremely complicated and proved to be impractical. Much of the detail was due to the personal intervention of Napoleon, who was a soldier rather than a sailor, and this was largely because Napoleon failed to consider the effects of weather, difficulties in communication, and the intervention of the Royal Navy. Despite limited successes in achieving some elements of the plan, the French commanders were unable to follow the main objective through to execution. The campaign, which took place over thousands of miles of ocean, was marked by several naval engagements, most significantly the Battle of Trafalgar on 21 October 1805. The naval doctrine at the time dictated that both sides should line up parallel to each other in a straight line so that they could engage in battle and bring all their guns to bear against the enemy. One of the reasons for the development of the line of battle was to help the admiral control the fleet: if all the ships were in line, signaling in battle became possible. The line also had defensive properties, allowing either side to disengage by breaking away in formation; if the attacker chose to continue combat, their line would be broken as well. This type of warfare allowed each side to fight a battle and then disengage at any time to minimize the losses to their fleet. However, with England under threat of invasion by Napoleon and his grand army, British Admiral Lord Horatio Nelson needed to ensure that the British were in control of the seas. In order to do this, Nelson needed to fight and win a decisive battle that would clearly establish Britain's naval supremacy, and he would have to make sure that the combined French and Spanish fleets actually remained in the battle long enough for a decisive victory to be won. What Nelson planned to do, instead of lining up parallel to the opposing fleet, was to take his navy, charge at the enemy, and deliberately cut their battle line in two. This type of deviation from normal naval warfare in terms of navigation was unheard of at the time. Despite the risk to the British fleet, Nelson believed that this was the best way to engage the enemy fleet in the upcoming battle because it had numerous advantages. The primary advantage was that it would allow the British to cut half of the enemy fleet off, surround it, and force a fight to the end.
This is unlike normal engagements, where the battle was often inconclusive because both fleets would withdraw before a clear winner could be seen. The plan had three principal advantages. First, it would allow the British fleet to close with the Franco-Spanish fleet as quickly as possible, reducing the chance that it would be able to escape without fighting. Second, it would quickly bring on close-quarters battle by breaking the Franco-Spanish line and inducing a series of individual ship-to-ship fights, in which the British were likely to prevail. Nelson knew that the better seamanship, faster gunnery, and higher morale of his crews were great advantages. Third, it would bring a decisive concentration on the rear of the Franco-Spanish fleet. The ships in the front of the enemy fleet would have to turn back to support the rear, and this would take a long time. Additionally, once the Franco-Spanish line had been broken, their ships would be relatively defenseless against powerful broadsides from the British fleet and would take a long time to reposition and return fire. The main drawback of this strategy was that, while sailing into the combined French and Spanish battle line, the British ships would be fully exposed to the enemy broadsides without the ability to return fire. In order to lessen the time the fleet was exposed to this danger, Nelson would have to drive the fleet straight into the enemy battle line as fast as he could. This was yet another departure from the navigation rules of naval warfare. Nelson was also well aware that French and Spanish gunners were ill-trained and would probably be supplemented with soldiers. These untrained men would have difficulty firing accurately from a moving gun platform. This was in stark contrast to the British gunners, who were well drilled, and the Royal Marines, who were expert marksmen. Another advantage for the British fleet was that the enemy was sailing across a heavy swell, causing their ships to roll heavily and exacerbating these problems. Nelson's plan was indeed a gamble, but a carefully calculated one. The battle itself started exactly as Nelson wanted it to. The British fleet was able to successfully cut the French and Spanish battle line in half, thus forcing a close-quarters battle. Despite the huge risk that Nelson was taking, his plan ended up working. Nelson scored a huge victory against the combined French and Spanish fleet. He managed to capture over twenty of the enemy ships and inflicted heavy casualties while suffering comparatively few himself. Unfortunately, during the battle Nelson was pierced by a musket ball and died from his wounds before he could see the outcome of the victory. Some argue that his loss outweighed any gains made by the British Navy. Following the battle, the Royal Navy was never again seriously challenged by the French fleet in a large-scale engagement. Napoleon had already abandoned his plans of invasion before the battle and they were never revived. This battle firmly established Britain's naval supremacy over France. In terms of navigation, this battle was very significant. Most importantly, it proved that following standard navigational techniques during an engagement won't always win a battle. The best tactic is to be unpredictable, so that the enemy has to adapt to what you are doing, thus giving you the tactical advantage. This is exactly what Nelson did in the Battle of Trafalgar, and it paid off.
He proved that sometimes in battle deviating from the norms of battle navigation is the best thing to do, and ever since, navies around the world have looked to the strategies employed by Nelson. What is being done today is that naval commanders are educated about naval history so that they can learn and even employ these types of strategies if they need to in battle. In conclusion, the Battle of Trafalgar was a turning point in how ships would fight naval battles in terms of navigation, thanks to the tactical unorthodoxy employed by Nelson. This battle has had long-term effects, and even today commanders look back and employ some of the same strategies used. The importance of this battle cannot be overstated, because not only was it the turning point in the Napoleonic Wars for the British in terms of establishing naval supremacy at the time, it was a turning point in naval warfare. Navigation would never be the same, thanks to one man and one decisive battle.

Thursday, November 14, 2019

Salmon Farming Essay -- essays research papers

Salmon Farming
If you recently ordered salmon off the menu of your favorite restaurant, or purchased it from your local grocery store, chances are it was farmed. According to "Salmon of the Americas," an organization of salmon-producing companies in Canada, Chile and the United States, 70 percent of the salmon produced in British Columbia and Washington comes from salmon farms. If it weren't for these farms, we would not have the luxury and abundance of this delicious and healthy food available to us year round. Salmon farming represents one very important way to feed the world, and people want to eat more salmon and seafood - more than can be caught. Salmon farming began over 30 years ago and has become a huge industry. Experts say it's the fastest growing segment of agriculture. Salmon farming plays an important role in the economies of many areas as well. Jobs and other economic benefits contribute to the value of salmon as much as its role in good nutrition. Salmon is an oily fish rich in omega-3 fatty acids, a substance that almost certainly helps protect against heart disease and may also reduce the risk of cancer and Alzheimer's. There is one species of Atlantic salmon and five species of Pacific. Atlantic salmon account for almost 95 percent of the farmed salmon produced, and most of them are farm-raised on the Pacific coast. Pacific species account for all of the wild salmon caught in the Americas, and some of them are also farm-raised. No wild Atlantic salmon are fished commercially in North America, as they are an endangered species. Atlantic salmon have become the species of choice to raise on farms because they are more adaptable to farming techniques and make better use of feed, so they produce more salmon with less feed. Not everybody agrees, however, that farmed salmon raised in net pens are healthy for the environment or for you to eat. Over the years, there have been numerous stories in the media that have pointed out the negatives of farm-raised salmon. These arguments have ranged from wastes from salmon farms, the spreading of disease from farmed to wild fish, and the negative impacts of farm-raised fish escaping and interacting with native fish, to, more recently, the effects of farmed salmon consumption on human health. The latest issue that the media got their hands on, and consequently got the public concerned about, was a report that polychlorinated... ...sk for cancer. There is no need to be alarmed about high levels of contaminants when it comes to consuming any kind of salmon. What we do need to be alarmed about is the media reporting and their level of contaminants!
Ronald A. Hites, Jeffery A. Foran, David O. Carpenter, M. Coreen Hamilton, Barbara A. Knuth, Steven J. Schwager (2004) study: Global assessment of organic contaminants in farmed salmon, Science 303:226-229.
Centers for Disease Control and Prevention, National Center for Environmental Health, Health Studies Branch.
Kevin Amos, National Aquatic Animal Health Coordinator, NOAA Fisheries.
Salmon of the Americas: SOTA is an organization of salmon-producing companies in Canada, Chile and the United States whose mission is to improve health, awareness and dining enjoyment of consumers in North America by providing timely, complete, accurate and insightful information about salmon on behalf of the member companies.
Ashley Dean, Shwartz, Mark. 2003. Salmon farms pose significant threat to salmon fisheries in the Pacific Northwest, researchers find. Stanford University.
American Journal of Clinical Nutrition, April 2002, 76:608-613.
Pediatric Research, 1998, 44(2):201-209.

Tuesday, November 12, 2019

Is Prejudice and Discrimination a Myth or a Real Life Situation Essay

Prejudice is a cultural attitude that rests on negative stereotypes about individuals or groups because of their cultural, religious, racial, or ethnic background. Discrimination is the active denial of desired goals to a category of persons. A category can be based on sex, ethnicity, nationality, religion, language, or class. More recently, disadvantaged groups have also come to include those based on gender, age, and physical disabilities. Prejudice and discrimination are deeply embedded at both the individual and societal levels. Attempts to eradicate prejudice and discrimination must thus deal with prevailing beliefs or ideologies, and with social structure. Although there is no wide agreement as to the "cause" of prejudice and discrimination, there is a consensus that they constitute learned behaviour. The internalization of prejudice starts with parents and, later, teachers - the primary groups in the formation of attitudes within children. The media and social institutions solidify prejudicial attitudes, giving them social legitimacy. In a sense, it is incorrect to speak of "eradicating" prejudice, since prejudice is learned. At best, one can reduce prejudice and discrimination. Society looks most often to education and legislation to alleviate prejudice and discrimination; for reasons still not clearly known, inter-group contact alone is not enough to reduce prejudice. On one hand, multicultural education, whether direct or indirect, constitutes the mainstay of educational efforts to eliminate prejudice. On the other hand, the emphasis on civil rights, enlightened immigration policies, and mandates for quota hiring are the cornerstones of legal approaches to alleviating the effects of prejudice and discrimination. The most overlooked area in resolving the problems of prejudice and discrimination lies in the web of close relationships where genuine feelings of love can be fostered and strengthened. The private sphere may indeed be the last frontier where a solution to the problems of prejudice may have to be found.

Saturday, November 9, 2019

Is There Such a Phenomenon as 'Pilot Error' in Aviation Accidents

The term 'pilot error' has been attributed to 78%[1] of Army aviation accidents. Despite the technological advances in Rotary Wing (RW) aircraft, i.e. helicopters, accidents attributed to technology failure are decreasing, whilst pilot error is increasing. Currently, RW accidents are investigated and recorded using a taxonomy shown to suffer difficulties when coding human error and quantifying the sequence of events prior to an air accident. As Human Factors (HF) attributed accidents are increasing, lessons are not being identified, nor is the root cause known. Therefore, I propose to introduce the Human Factors Analysis and Classification System (HFACS), a taxonomy untried in the UK military, developed as an analytical framework to investigate the role of HF in United States of America (USA) aviation accidents. HFACS supports organizational structure, precursors of psychological error and actual error, but little research exists to explain the intra-relations between the levels and components, or the application in the military RW domain. Therefore, I intend to conduct post-hoc analysis, using HFACS, of 30+ air accidents from 1993 to the present. The implications of this research are to develop a greater understanding of how Occupational Psychology (OP) can help pilots understand HF, raise flight awareness and reduce HF-attributed fatalities.

Introduction
"On 2 June 1994 an RAF Chinook Mk2 helicopter, ZD 576, crashed on the Mull of Kintyre on a flight from RAF Aldergrove to Fort George, near Inverness. All on board were killed: the two pilots, the two crewmembers and the 25 passengers. This was to have been a routine, non-operational flight, to take senior personnel of the security services to a conference. The sortie was planned in advance; it was entirely appropriate for these pilots, Flt Lts Jonathan Tapper and Richard Cook, and for the aircraft, ZD576, to have been assigned this mission. An RAF Board of Inquiry (BOI) was convened following the accident and carried out a detailed investigation. BOIs are established to investigate the cause of serious accidents, primarily to make safety recommendations but, at the time of this crash, also to determine if human failings were involved. Their conclusion, after an exhaustive investigation, was that there was not one single piece of known fact that does not fit the conclusion that this tragic accident was a controlled flight into terrain."
The BOI found no evidence of mechanical failure, and multiple witnesses stated that the aircraft appeared to be flying at 100 ft at 150 knots; there was no engine note change, the aircraft did not appear to be in distress, and at the crash scene the throttle controls were still in the cruise position (not at emergency power, as they would have been if collision with the ground was imminent).[2] So the causation moved to Human Factors (HF). But some questions remain unanswered: on that fateful day, why did these seasoned and experienced pilots fly their aircraft and passengers into a hillside at 150 knots? If this accident is attributed to HF, it now appears to some that the aircrew themselves are more deadly than the aircraft they fly (Mason, 1993: cited in Murray, 1997). The crucial issue therefore is to understand why pilots Flt Lts Jonathan Tapper and Richard Cook's actions made sense to them at the time the fatal accident happened.

Relevance of Research
So why is this topic relevant to OP research?
The British Army's branch of aviation is an organization called the Army Air Corps (AAC), and in keeping with the trends of the other two services, the Fleet Air Arm of the Royal Navy and the Royal Air Force, it has seen a steep decline in accidents in recent years. However, accidents attributed to Human Factors (HF) have steadily risen and are responsible for 90% of all aviation accidents.[3] This research will depart from the traditional perspective of the label "pilot error" as the underlying causation of aviation accidents, whereby current theory and research purport a 'systemic' approach to the human factors investigation of aviation accidents. This approach is derived from Reason's Model of Accident Causation, which examines the causal factors of organizational accidents across a spectrum of sectors, from the nuclear power industry (e.g. Chernobyl) and off-shore oil and gas production (e.g. Piper Alpha) to transportation (e.g. Charing Cross) (Reason 1990). This approach recognizes that humans, as components of socio-technical systems, are involved in designing, manufacturing, maintaining, managing and operating aviation systems, including the methods of selecting and assessing potential employees to the aviation industry, from pilots and cabin crew to engineers and baggage handlers. Therefore, our ability to identify, understand and manage these potential issues enables us to develop systems that are more error-tolerant, thus reducing risk and the potential for accidents. I intend to provide a more consistent, reliable and detailed analysis of the HF causal factors that contribute to aviation accidents within the AAC. On average, the AAC experiences around 6 major accidents per year, although a record year was recorded with only two accidents in 1993. However, in 1992 aviation accidents cost over £10M[4] in taxpayers' money. Usually the causation of accidents is classified (human error, technical failure or operational hazard). Whilst there was a reduced figure of £1M for 1993, the satisfaction of this financial success was marred by the fact that one of the two accidents resulted in a fatality. However, it is the concept of human error or pilot error that dominates the outcome of most BOIs, particularly when there are fatalities. Current taxonomies used to classify accident causal groups do not extend beyond this distinction, although more recently organizational factors have been included to reflect a more systemic view of accident causation. However, the HF domain is extensive, and the current taxonomies employed by the AAC do not encapsulate this. By using HFACS (currently adopted by the US Navy, Army, Air Force, and Coast Guard), a human-error-orientated accident investigation and analysis process, I will conduct post-hoc analysis of 30+ category four and five accidents from 1993 to the present day.

Literature Review
Before we start to look at any reduction in air accidents we need to grasp an understanding of the category of accident. How many times do we hear about air accidents that "it was pilot error"? Merely noting that HF was responsible doesn't prevent repetition nor identify any critical lessons, and the description is far too generic. The term pilot error doesn't assist us in understanding the processes underlying what leads to a crash, nor does it give us a means to apply remediation or even identify lessons to prevent re-occurrence. The other issue is that it is very seldom that one single factor caused the helicopter to crash.
Professor RG Green (1996) uses a categorization method: Modes of Failure, Aircrew Factors and System Failures. Within each of these exist sub-categories. For example, the Modes of Failure category lists a number of common errors made by the individual or individuals, from selective attention, automatic behaviour and forming inappropriate mental models, to the effects of fatigue and perceptual challenges leading to spatial disorientation, which is particularly common in RW flight. Aircrew Factors refers to background factors relevant to individuals: decision-making, personality, problem solving, crew composition, Cockpit Authority Gradient (CAG) and life stress. Finally, the System Factors are those applicable to the organization we serve under, termed enabling conditions, such as ergonomics, job pressures and organizational culture.

Bodies of Research
Now, human error doesn't just happen; usually a sequence of events will unfold prior to the accident. Human errors are often a product of deeper problems; they are systematically connected to features of the individual's tools, tasks and the surrounding media (Dekker, 2001). Therefore, in order to provide remediation through the development of strategies, it is vital that we understand the various perspectives experienced through flight and how these could affect a pilot; these range from the cognitive, ergonomic, behavioural, psychosocial and aeromedical to the organizational perspectives (Wiegmann and Shappell, 2003). Within the environment of human performance, error is a unique state of a pilot's operational environment that could be affected by any one of, or all of, these perspectives. Rasmussen (1982) utilized a cognitive methodology for understanding aircraft accidents. O'Hare et al. (1994) described the system as consisting of six stages: 'detection of stimulus; diagnosis of the system; setting the goal; selection of strategy; adoption of procedure; and the action stage'. The model was found to be helpful in identifying the human errors involved in aviation accidents and incidents (O'Hare et al. 1994). One drawback is that these models using cognition are operator-centric and do not consider other factors such as the working environment, task properties, or the supervisory and work organization (Wiegmann and Shappell, 2001c). Edwards (1972) developed the 'HELS system' model, which was subsequently called the 'SHEL' model, citing that humans do not perform tasks on their own but within the context of a system; initially SHEL focused on ergonomics and considered the man-machine interface. It is a tool that can be applied to investigate air accidents through the evaluation of human-machine systems failure. The 'SHEL' model categorizes failure into software, hardware, liveware and environment conditions. However, the SHEL model fails to address the functions of management and the cultural aspects of society.

Empirical Findings
Bird's Domino Theory (1974) views accidents as a linear sequence of related factors, a series of events that lead to an actual mishap. The theory covers a five-step sequence. The first domain is safety/loss of control; the second domain, basic causes, identifies the origin of causes, such as human, environment or task related. The immediate causes include substandard practices and circumstances. The fourth domain involves contact with hazards. The last domain relates to personal injury and damage to assets (Bird, 1974; Heinrich et al., 1980). It is much like falling dominoes: each step causes the next to occur.
Removing the factors from any of the first three dominoes could prevent an accident. This view has been expanded upon by Reason (1990). Reason's 'Swiss cheese' model (fig 1) includes four levels of human failure: organizational factors, unsafe supervision, preconditions for unsafe acts and unsafe acts. The HFACS was developed from this model in order to address some of its limitations. The starting point for the chain of events is the organization: 'fallible decisions' take place at higher levels, resulting in latent defects waiting for enabling factors (Reason, 1990). Management and safe supervision underpin any air operation through flight operations, planning, maintenance and training. However, it is the corporate executives, the decision makers, who make available the resources and finances and set budgets. These are then cascaded down through the tiers of management to the operator. Now, this sounds like an efficient and effective organization, but according to Reason failures in the organization come about through the breakdown of interactions, and holes begin to form in the cheese. Within an organization, unsafe acts may be manifested by a lack of supervision attributed to organizational cultures operating within a high-pressure environment, insufficient training or poor communication. The latent conditions at the unsafe supervision level promote hazard formation and increase the operational risks. Working towards the accident, the third level of the model is preconditions for unsafe acts. Performance of the aircrew can be affected by fatigue, complacency, inadequate design and their psychological and physical state (USNSC, 2001; Shappell and Wiegmann, 2001a; Wiegmann and Shappell, 2003). Finally, the unsafe acts of the operator are the direct causal factor of the accident. These actions committed by the aircrew could be either intentional or unintentional (Reason, 1990). The 'Swiss cheese' model sees the aviation environment as a multifaceted system that does not work well when an incorrect decision has been taken at higher levels (Wiegmann and Shappell, 2003). The model depicts thin veneers of cheese, each veneer symbolizing a defence against aviation accidents, and the dotted holes portraying a latent condition or active failure. A chain of events usually leads to an accident: as errors are made, holes begin to appear in the cheese; a datum line penetrates the cheese, and if all the holes pass through the line, a catastrophic failure occurs and a crash ensues. Causal attributions of poor management and supervision (organizational perspective) may only be unearthed if equipment is found in poor maintenance (ergonomic). If the organizational culture is one of a pressured environment, this could place unnecessary demands on the aircrew, producing fatigue (aeromedical). Or management could ignore pilots' concerns if the CAG was at imbalance (psychosocial perspective). All of these factors could hinder and prevent aircrew from processing and performing efficiently in the cockpit, which could result in pilot error followed later by an air accident. However, Reason's model doesn't identify what the holes in the cheese depict. For any intervention strategy to function and prevent reoccurrence, the organization must be able to identify the causal factors involved. The important issue in a HF investigation is to understand why the pilots' actions made sense to them at the time the accident happened (Dekker, 2002).
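To make the four levels just described more concrete, the short Python sketch below models them as a simple lookup and tags a coded causal factor with its level. The level names come from the text above; the dictionary structure, the example factor strings and the function name are invented here purely for illustration and are not part of HFACS, Reason's model or any AAC tooling.

```python
# Toy illustration of the four levels of human failure listed above.
# The example causal-factor strings and this mapping are hypothetical.
FAILURE_LEVELS = {
    "organizational influences": ["resource management", "organizational climate"],
    "unsafe supervision": ["inadequate supervision", "planned inappropriate operations"],
    "preconditions for unsafe acts": ["fatigue", "complacency", "inadequate design"],
    "unsafe acts": ["intentional violation", "unintentional error"],
}

def level_of(causal_factor: str) -> str:
    """Return the failure level a coded causal factor belongs to, or 'unclassified'."""
    for level, examples in FAILURE_LEVELS.items():
        if causal_factor.lower() in examples:
            return level
    return "unclassified"

if __name__ == "__main__":
    # A hypothetical accident coded with three causal factors.
    for factor in ["fatigue", "inadequate supervision", "unintentional error"]:
        print(f"{factor!r} -> {level_of(factor)}")
```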
HFACS was specifically developed to define the latent and active failures implicated in Reason's Swiss cheese model so that it could be used as an accident investigation and analysis tool (Shappell and Wiegmann, 1997; 1998; 1999; 2000; 2001). The framework was developed and refined by analyzing hundreds of accident reports containing thousands of human causal factors. Although designed originally for use within the context of military aviation, HFACS has been shown to be effective within the civil aviation arena as well (Wiegmann and Shappell, 2001b). Specifically, HFACS describes four levels of failure, each corresponding to one of the cheese slices of Reason's model. These are a) unsafe acts, b) preconditions for unsafe acts, c) unsafe supervision and d) organizational influences (Wiegmann and Shappell, 2001c).

Methodology
A combination of qualitative (i.e. the process of recoding causal factors based on individual and group discussions) and quantitative (causal factor analysis of recoded narratives against the HFACS taxonomy) research methodologies will be used to identify further causal groups for classifying accidents and to assess the validity of the HFACS framework as a tool to classify and analyze accidents. Data to be used in this study will be derived from the narrative findings of AAC BOIs conducted between 1990 and 2006[5]. This should equate to approximately 30-35 narratives to be used in the analysis. Authority to access the Board of Inquiry library has been granted by the Army's Flight Safety and Standards Inspectorate, which is the AAC organization responsible for conducting aviation accident investigations and analysis. Only data comprising category 4 accidents (single fatalities and severe damage to aircraft) and category 5 accidents (multiple fatalities and loss of aircraft) will be used. In addition to the narrative description in the report, the following information will also be collected: the type of mission in which the accident happened (e.g. low-level flying, exercise, HELEARM[6]); the flight phase (e.g. take-off, in the hover, flight in the operational area, approach, and landing); the rank of the pilot(s) involved (to measure CAG and see if this is a contributory factor); and the type and category of aircraft. This study will concentrate on all Army helicopters, including all variants of the Lynx, Gazelle and Squirrel trainer. Coding frames will be developed and tested for use in the final recoding exercise. An Occupational Psychologist from the Human Factors department of the MOD will supervise the training, and the coders will be a number of RW pilots with a minimum of 1,000 hours flying time at the time of the research. Each pilot will be provided with a workshop in the use of the HFACS framework. This is to ensure parity and that all coders understand the HFACS categories. After the period of training, the raters will be randomly assigned air accidents so that two independent raters can code each accident independently. It is intended to code the inter-rater reliability on a category-by-category basis. The degree of agreement (the inter-rater reliability), initially between the two coders, will be assessed using Cohen's Kappa (Cohen, 1960; Landis and Koch, 1977). SPSS v15.0 will be used to quantify the frequency of causal factors in the 30+ narratives. It is also hoped to compare the inter-rater reliability between all the coders using Fleiss' Kappa. Fleiss' Kappa is used to measure the agreement of observers and treats them symmetrically (Fleiss, 1981).
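As a rough, non-SPSS illustration of the agreement statistics named above, the hedged Python sketch below computes Cohen's Kappa for two raters with scikit-learn and Fleiss' Kappa for several raters with statsmodels. The category labels, the ratings themselves and the variable names are invented for the example and do not come from any BOI data.

```python
# Hedged sketch: inter-rater agreement on hypothetical HFACS-level codings.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Two raters each assign one of the four HFACS levels to ten accident narratives.
levels = ["unsafe act", "precondition", "unsafe supervision", "organizational"]
rater_a = ["unsafe act", "precondition", "unsafe act", "organizational", "precondition",
           "unsafe supervision", "unsafe act", "precondition", "organizational", "unsafe act"]
rater_b = ["unsafe act", "precondition", "precondition", "organizational", "precondition",
           "unsafe supervision", "unsafe act", "unsafe act", "organizational", "unsafe act"]

# Cohen's Kappa: chance-corrected agreement between exactly two raters.
print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b, labels=levels))

# Fleiss' Kappa: agreement across more than two raters, treated symmetrically.
# Rows are narratives, columns are raters; values are category indices (0-3).
ratings = np.array([
    [0, 0, 1], [1, 1, 1], [0, 1, 0], [3, 3, 3], [1, 1, 2],
    [2, 2, 2], [0, 0, 0], [1, 0, 1], [3, 3, 2], [0, 0, 0],
])
table, _ = aggregate_raters(ratings)  # counts per category for each narrative
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```

In the proposal itself these values would of course be produced within SPSS; the sketch only shows what the two statistics take as input.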
The level of agreement between the raters is measured statistically against what could be achieved through chance; the Kappa range would be classed as achieving moderate inter-rater reliability if it were between 0.41 and 0.60, since Cohen's Kappa measures the level of agreement between raters in excess of chance (Landis and Koch, 1977).

Discussion
The research intends to apply an untried methodology, not as yet sanctioned by the UK's Ministry of Defence, in order to analyze a number of air accidents within the AAC between 1993 and the present day. Thirty-plus serious category 4 and 5 accidents will be re-classified using the HFACS taxonomy. Where pilot error was the cause, it is intended to identify the HF associated with, and attributed to, each accident. It is also hoped that the HFACS taxonomy can accommodate the HF identified during re-coding and therefore provide tangible evidence that HFACS could be used by the AAC as a reliable tool. It is hoped that a number of comparative analyses can be carried out: for example, are accidents more prevalent when flying in visual meteorological conditions (VMC) or in poor-visibility instrument meteorological conditions (IMC)? Two sets of visual conditions will therefore be compared: VMC and daylight, versus impoverished visual conditions, IMC or twilight/night-time (Wiegmann and Shappell, 2003). It would also be interesting to compare the causation and aircrew behaviours of fatal and non-fatal accidents, and whether these are more prevalent on operations or during training. The author was in Afghanistan in 2006, and over a 6-month period there wasn't a single crash, let alone a fatality; yet the AAC records around 6 crashes a year, so again this is worthy of investigation. The ranks of the pilots are also of interest with regard to achieving a good CAG: there may be causal evidence to indicate that an imbalance between ranks could have led to an air crash. The organizational hierarchy will also be researched: is it one specific organization that keeps having crashes, and is there an issue with the pressures placed on the pilots by the organization? The inter-rater reliability will also be calculated using Fleiss' Kappa, which works for more than two raters; it is intended that an acceptable level of inter-rater reliability will be recorded. In addition, the intra-rater reliability, as a holistic measurement, is hoped to be high in order to support the credibility of the results. An organization could benefit from gaining a standardized, consistent coding methodology, and the resulting data can be used for identifying trends so that intervention strategies can then target those trends in accident causation. It is hoped that granularity can be achieved beyond the label "pilot error" and that the underlying causation of each accident can be identified. If successful, and if HFACS is adopted UK military wide, perhaps the real cause of why ZD576 flew into the Mull of Kintyre could be unearthed. If other military organizations can reap success, then HFACS could be a reliable tool to identify causation and could be used in accident investigation.

Ethics
I will comply fully with the BPS[7] ethical principles when conducting research with human participants. All identifiable information relating to individuals discussed in the narrative findings will be removed, in accordance with the Data Protection Act, for the purposes of analysis and reporting.
All participants will be fully apprised of my research, will be recognized as volunteers, and will give informed consent before the research, understanding how the information will be used. The coders will be reviewing material depicting instances of fatalities; therefore it is important that the coders do not come to any psychological harm, over and above the risk of harm in ordinary life (participants will be invited to contact me if participation causes concern at any time, or to ask questions). Maintaining a good rapport, particularly with the coders, is also desirable. Being an Aeronautical Engineer should also help bridge any cultural gaps and maintain a good working relationship.

Thursday, November 7, 2019

Erwin Schrödinger and the Schrödinger's Cat Experiment

Erwin Schrödinger and the Schrödinger's Cat Experiment
Erwin Rudolf Josef Alexander Schrödinger (born on August 12, 1887 in Vienna, Austria) was a physicist who conducted groundbreaking work in quantum mechanics, a field which studies how energy and matter behave at very small length scales. In 1926, Schrödinger developed an equation that predicted where an electron would be located in an atom. In 1933, he received a Nobel Prize for this work, along with physicist Paul Dirac.

Fast Facts: Erwin Schrödinger
Full Name: Erwin Rudolf Josef Alexander Schrödinger
Known For: Physicist who developed the Schrödinger equation, which signified a great stride for quantum mechanics. Also developed the thought experiment known as "Schrödinger's Cat."
Born: August 12, 1887 in Vienna, Austria
Died: January 4, 1961 in Vienna, Austria
Parents: Rudolf and Georgine Schrödinger
Spouse: Annemarie Bertel
Child: Ruth Georgie Erica (b. 1934)
Education: University of Vienna
Awards: 1933 Nobel Prize in Physics, shared with quantum theorist Paul A.M. Dirac
Publications: What Is Life? (1944), Nature and the Greeks (1954), and My View of the World (1961)

Schrödinger may be more popularly known for "Schrödinger's Cat," a thought experiment he devised in 1935 to illustrate problems with a common interpretation of quantum mechanics.

Early Years and Education
Schrödinger was the only child of Rudolf Schrödinger, a linoleum and oilcloth factory owner who had inherited the business from his father, and Georgine, the daughter of Rudolf's chemistry professor. Schrödinger's upbringing emphasized cultural appreciation and advancement in both science and art. Schrödinger was educated by a tutor and by his father at home. At the age of 11, he entered the Akademische Gymnasium in Vienna, a school focused on classical education and training in physics and mathematics. There, he enjoyed learning classical languages, foreign poetry, physics, and mathematics, but hated memorizing what he termed "incidental" dates and facts. Schrödinger continued his studies at the University of Vienna, which he entered in 1906. He earned his PhD in physics in 1910 under the guidance of Friedrich Hasenöhrl, whom Schrödinger considered to be one of his greatest intellectual influences. Hasenöhrl was a student of physicist Ludwig Boltzmann, a renowned scientist known for his work in statistical mechanics. After Schrödinger received his PhD, he worked as an assistant to Franz Exner, another student of Boltzmann's, until being drafted at the beginning of World War I.

Career Beginnings
In 1920, Schrödinger married Annemarie Bertel and moved with her to Jena, Germany to work as the assistant of physicist Max Wien. From there, he became faculty at a number of universities over a short period of time, first becoming a junior professor in Stuttgart, then a full professor at Breslau, before joining the University of Zurich as a professor in 1921. Schrödinger's subsequent six years at Zurich were some of the most important in his professional career. At the University of Zurich, Schrödinger developed a theory that significantly advanced the understanding of quantum physics. He published a series of papers, about one per month, on wave mechanics. In particular, the first paper, "Quantization as an Eigenvalue Problem," introduced what would become known as the Schrödinger equation, now a central part of quantum mechanics. Schrödinger was awarded the Nobel Prize for this discovery in 1933.
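For reference only, since the original article never writes the equation out: in modern notation its time-independent form for a single particle of mass m in a potential V is

\[ -\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}) \]

where \(\psi\) is the wavefunction whose squared magnitude gives the probability of finding the particle (such as an electron) at a given point, \(\hbar\) is the reduced Planck constant, and \(E\) is an allowed energy of the system.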
Schrödinger's Equation
Schrödinger's equation mathematically described the wavelike nature of systems governed by quantum mechanics. With this equation, Schrödinger provided a way to not only study the behaviors of these systems, but also to predict how they behave. Though there was much initial debate about what Schrödinger's equation meant, scientists eventually interpreted it as the probability of finding an electron somewhere in space.

Schrödinger's Cat
Schrödinger formulated this thought experiment in response to the Copenhagen interpretation of quantum mechanics, which states that a particle described by quantum mechanics exists in all possible states at the same time, until it is observed and is forced to choose one state. Here's an example: consider a light that can light up either red or green. When we are not looking at the light, we assume that it is both red and green. However, when we look at it, the light must force itself to be either red or green, and that is the color we see. Schrödinger did not agree with this interpretation. He created a different thought experiment, called Schrödinger's Cat, to illustrate his concerns. In the Schrödinger's Cat experiment, a cat is placed inside a sealed box with a radioactive substance and a poisonous gas. If the radioactive substance decayed, it would release the gas and kill the cat. If not, the cat would be alive. Because we do not know whether the cat is alive or dead, it is considered both alive and dead until someone opens the box and sees for themselves what the state of the cat is. Thus, simply by looking into the box, someone has magically made the cat alive or dead, even though that is impossible.

Influences on Schrödinger's Work
Schrödinger did not leave much information about the scientists and theories that influenced his own work. However, historians have pieced together some of those influences, which include:
Louis de Broglie, a physicist who introduced the concept of "matter waves." Schrödinger had read de Broglie's thesis as well as a footnote written by Albert Einstein, which spoke positively about de Broglie's work. Schrödinger was also asked to discuss de Broglie's work at a seminar hosted by both the University of Zurich and another university, ETH Zurich.
Boltzmann. Schrödinger considered Boltzmann's statistical approach to physics his "first love in science," and much of his scientific education followed in the tradition of Boltzmann.
Schrödinger's previous work on the quantum theory of gases, which studied gases from the perspective of quantum mechanics. In one of his papers on the quantum theory of gases, "On Einstein's Gas Theory," Schrödinger applied de Broglie's theory on matter waves to help explain the behavior of gases.

Later Career and Death
In 1933, the same year he won the Nobel Prize, Schrödinger resigned his professorship at the University of Berlin, which he had joined in 1927, in response to the Nazi takeover of Germany and the dismissal of Jewish scientists. He subsequently moved to England, and later to Austria. However, in 1938, Hitler invaded Austria, forcing Schrödinger, now an established anti-Nazi, to flee to Rome. In 1939, Schrödinger moved to Dublin, Ireland, where he remained until his return to Vienna in 1956. Schrödinger died of tuberculosis on January 4, 1961 in Vienna, the city where he was born. He was 73 years old.

Sources

Tuesday, November 5, 2019

SAT High School Codes and Test Center Codes

SAT High School Codes and Test Center Codes

When you register for your SAT, you have to submit codes for your high school and test center, the location where you are going to take your SAT. The codes make it easier for the College Board to keep track of the high school and test center of everyone who takes the SAT. You want to make sure you submit the right codes, since making a mistake can result in your having to take the SAT at a random high school that's far away from where you live or sending your scores to the wrong college. In this article, I will let you know how to look up SAT high school and test center codes and advise you on how to use them properly.

How To Enter Codes During Online Registration

High School Codes: It's very easy to submit your high school code during the online registration process. All you have to do is begin typing the name of your high school, and your high school should appear in a dropdown menu. Just click on the name of your school and your high school code will be automatically entered. If the name of your school doesn't appear, you can search for your school by its zip code; then the name of your school will be automatically entered. If you click "change your school," you can search for your high school by its code, name, city, state, or zip code. Just select your school from the search results and your high school code will be entered.

Test Center Codes: Near the end of the online registration process, you can select your test center location. You can search for test centers in your area, and then you'll be given a list of options. Just select where you want to take the test, and the test center code will be entered.

How To Look Up SAT Codes

You can also search for high school and test center codes before, during, or after the online registration process.

High School Codes: To find your high school code, you can search by country, city, state, and zip code. After you enter the search criteria and click search, you'll be given the school name on the left and the corresponding high school code on the right.

Test Centers: To find your test center code, you can search by your test date, country, state, and city. When you search for test center codes, you'll be given the test center name, address, and code.

Special Situations

Homeschooled: If you're homeschooled, your high school code is 970000.

If Your High School Code Is Not Listed: If you go to high school in the US or in a US territory and your school code is not listed, enter 000003.

If You Go to High School Outside of the US: If you go to high school in a country outside of the US, enter 000004.

Advice for Ensuring Your Codes Are Correct

If you select your high school and test center while registering, make sure the codes on your admission ticket are correct. You can double-check the codes by looking them up on the SAT website. If you manually enter your codes during registration, make sure you've entered the right codes and that the codes you've entered correspond with your high school and test center.

What's Next?

For anyone studying for the SAT, I highly recommend that you check out the ultimate SAT study guide. You'll learn extremely important information like how to beat procrastination in your SAT prep and how to get a perfect score. If you want more information about SAT logistics, read our articles about SAT admission tickets and SAT fees and registration.

Sunday, November 3, 2019

Quality Management Tools & Techniques Essay Example | Topics and Well Written Essays - 1250 words

Quality Management Tools & Techniques - Essay Example

In addition, it is also used in monitoring the effects of process improvement efforts. As the standard pair of control charts, the X-bar and R chart will work in place of the X-bar and s chart or the median and R chart. X-bar and R charts can be created with software such as CHARTrunner and SQCpack.

The X-bar chart shows the mean, or average, of every subgroup and is used to analyze central location. The R chart, on the other hand, depicts how the data are spread and is used to study system variability. X-bar and R charts can be used for any process with a subgroup size greater than one; usually they are used when the subgroup size falls between two and ten, while X-bar and s charts are used for subgroups of eleven or more. In short, the X-bar and R charts are appropriate when you need to assess the stability of a system, the data are in variable form, and the data are collected in subgroups larger than one but smaller than eleven.

To ensure the best results, you should collect as many subgroups as possible before calculating the control limits. With too little data, the X-bar and R chart may not represent the variability of the entire system; the more subgroups used in calculating the limits – usually 20-25 – the more reliable the results (Waite, 2010). In the case of Scott and Larraine, the use of 30 subgroups is recommended. Since Scott noticed that the number of complaints seems to have increased significantly since the new system was installed, the problems may be emanating from the system, hence the need to check whether there is variability in the system. Because the errors increased in the last third of the month, it also appears that the system has been in place for close to a month.

The X-bar and R charts can be of help if you begin to improve the system and later use them to assess the system's stability. After assessing the system's stability, you should determine whether there is a need to stratify the data. Because you may come across variability in the results, you should collect and enter the data in a way that lets you stratify it by location, symptom, lot, time, and operator. Moreover, since the hotel was continuously receiving complaints, the X-bar and R charts can also be used to analyze the results of process improvements. This would curb the rising trend of complaints about inflated bills from the hotel staff. Finally, the X-bar and R charts can be used for standardization, meaning the data should continue to be collected and analyzed throughout ongoing operation. If changes made to the system cause data collection to stop, you are left with only the perception and opinion that the changes improved the system (Waite, 2010).

An X-bar chart monitors the average value of a particular process over time: for every subgroup, the X-bar value is plotted. The lower and upper control limits define the range of inherent variation in the subgroup means when the process is in control. The R chart, meanwhile, is used to monitor the process when the variable of interest is a quantitative measure. With μ the process mean, σ the process standard deviation, and n the subgroup size, the upper and lower control limits for the X-bar chart are given by the formulae (Woodwall, 2011):

UCL = μ + 3σ/√n and LCL = μ − 3σ/√n

To commence with, the R chart is
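To make the calculation concrete, here is a minimal sketch of how X-bar and R chart control limits can be computed from subgroup data. It is not tied to CHARTrunner or SQCpack, the guest-bill data are invented for illustration, and it uses the common shop-floor form of the limits, in which σ is estimated from the average subgroup range via the standard Shewhart constants A2, D3, and D4 (so A2·R-bar plays the role of 3σ/√n above).

# Minimal sketch: X-bar and R chart control limits from subgroup data.
# Standard Shewhart constants (A2, D3, D4) for subgroup sizes n = 2..5.
CONSTANTS = {
    2: (1.880, 0.000, 3.267),
    3: (1.023, 0.000, 2.574),
    4: (0.729, 0.000, 2.282),
    5: (0.577, 0.000, 2.114),
}

def xbar_r_limits(subgroups):
    """Return ((LCL, CL, UCL) for the X-bar chart, (LCL, CL, UCL) for the R chart)."""
    n = len(subgroups[0])
    a2, d3, d4 = CONSTANTS[n]
    xbars = [sum(s) / n for s in subgroups]        # subgroup means
    ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)              # grand mean = X-bar chart center line
    rbar = sum(ranges) / len(ranges)               # average range = R chart center line
    xbar_limits = (xbarbar - a2 * rbar, xbarbar, xbarbar + a2 * rbar)
    r_limits = (d3 * rbar, rbar, d4 * rbar)
    return xbar_limits, r_limits

if __name__ == "__main__":
    # Hypothetical data: 20 subgroups of 5 guest bills each (dollars), for illustration only.
    import random
    random.seed(0)
    data = [[round(random.gauss(100, 5), 2) for _ in range(5)] for _ in range(20)]
    (lcl_x, cl_x, ucl_x), (lcl_r, cl_r, ucl_r) = xbar_r_limits(data)
    print(f"X-bar chart: LCL={lcl_x:.2f}  CL={cl_x:.2f}  UCL={ucl_x:.2f}")
    print(f"R chart:     LCL={lcl_r:.2f}  CL={cl_r:.2f}  UCL={ucl_r:.2f}")

Points plotted outside these limits on either chart would be the signal, in the hotel-billing scenario above, that the new system's behavior is not stable and warrants investigation.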