Wednesday, December 25, 2019

A Database For A Relational System

Database Normalization
Srikanth Karra
Instructor: Dr. Steven Case
Southern New Hampshire University

When we design a database for a relational system, the main objective in developing a logical data model is to create an accurate representation of the data, its relationships, and its constraints. The data should be split into different tables, which can then be joined together based on their relations with each other and the data found in each one of them. These tables should therefore be designed well to save space and to ensure that cases of data inconsistency are eliminated. Another saving is the space that would otherwise be occupied by repetitive data. Normalization is the process used to remove redundancy… It is highly recommended by practitioners that all databases be designed to at most the third normal form, as there is little or no benefit in designing to the higher normal forms. The type of activity or transactions to be performed against the database should determine how normalized the database will be in order to achieve the performance benefits.

Importance of having a normalized database

There are many advantages to normalizing databases. The first is the ability to minimize modification anomalies by reducing redundancy, maintaining data consistency, and protecting data integrity. Data consistency is the act of ensuring that similar data does not appear in different tables or entries in the database. Such duplication is highly discouraged because inconsistency can result in significant losses in terms of space and time, and it can be confusing, especially when updating data that has multiple entries. A database should therefore be well designed to ensure that all the data is well documented and all inconsistency has been eliminated from the tables. Normalization is a process for evaluating and correcting tables to minimize the likelihood of data anomalies. Basically, normalization can help ensure the proper data is entered into any particular field by restricting what can be entered or stored in that field (Kroenke, 2006). The essence of data normalization is to split your data into several tables that are connected to each other based on the data within them.
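As a loose illustration of the idea (not part of the original essay), the sketch below splits a single repetitive table into two related tables joined by a key. It assumes Python's built-in sqlite3 module, and the table and column names are hypothetical.

```python
import sqlite3

# A minimal sketch of normalization: instead of repeating customer details
# on every order row, customer data lives in one table and orders reference
# it by key. Table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized design (customer name/city repeated on every order row).
cur.execute("""
    CREATE TABLE orders_flat (
        order_id INTEGER PRIMARY KEY,
        customer_name TEXT,
        customer_city TEXT,
        item TEXT
    )
""")

# Normalized design: customers stored once, orders point at them by key.
cur.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name TEXT,
        city TEXT
    )
""")
cur.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        item TEXT
    )
""")

cur.execute("INSERT INTO customers VALUES (1, 'Ann', 'Manchester')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(10, 1, 'keyboard'), (11, 1, 'mouse')])

# The tables are joined back together whenever the combined view is needed.
for row in cur.execute("""
        SELECT o.order_id, c.name, c.city, o.item
        FROM orders o JOIN customers c ON o.customer_id = c.customer_id
        """):
    print(row)

conn.close()
```

Updating Ann's city now touches one row in the customers table rather than every order row, which is exactly the kind of inconsistency and wasted space the essay describes.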

Tuesday, December 17, 2019

Personal Experience Camping with my Family Essay

Every summer my dad, brother, and I go camping at least twice a month. We don't have a set campground we go to; we just love camping on warm summer days, and we bring our little fishing boat to the lake and ride the cool, calm water. We would set the boat with our anchor, cast our fishing rods, and relax while catching so many whiskered catfish. We have been doing this for a few years, since we moved here from Massachusetts. The lakes are large with not a lot of people around, which is nice and relaxing, since some people are loud and obnoxious. The lakes are surrounded by tall trees, glittering beaches and old hiking bridges. My favorite part is the smell of the burning campfires and the smoky food we have, and it is a great getaway from people. … The way the sun and moonlight shine through the trees high above is just so beautiful. When we go camping I walk on the hiking trails at least once. The trails are a great way to clear your mind; I just love walking across the old bridges that overlook the sparkling lake and occasionally seeing a fish or two jump out of the water. When you're camping at a lake, or just camping in general, there are smells all over the place. The smoky smell of fresh campfire smoke fills the air, and sometimes the scent of marshmallows and hotdogs being toasted and cooked is everywhere. My brother always insists on getting marshmallows and Hershey bars to make s'mores, even when it's still light out; often he gets his fingers sticky and dirty from making so many of them. The campfires I make are so big and bright, with fiery reds, oranges and yellows. This is always a great time for my dad and me to peacefully talk to each other while looking up at the stars. Camping is also a great way to get away from people who you may not like or who annoy you. Camping is the best thing I like to do with my family; no matter where we may go, I know we are going to have a great time. From driving the boat around the sparkling water to relaxing and catching fish of all shapes and sizes. Also getting away from the people who I don't like and relaxing under the starry sky. The smell of campfire smoke slowly floating up the tall pine trees is one of my favorite things about camping.

Monday, December 9, 2019

Literature Review Paper Corporate Restructuring

Question: Discuss the Literature Review Paper for Corporate Restructuring.

Answer:

Corporate Restructuring in the West

The restructuring literature on companies in the West provides a hint of the failure or success of restructuring actions undertaken by management in determining and creating business value. Ruigrok (1999) argues that the experiences of companies in the West can be of benefit to other businesses found in developing countries. The main aim of enterprise restructuring is to transform businesses into capitalist firms which create value. Western European and North American managers, scholars, and politicians have long regarded restructuring as mainly a temporary phenomenon (Ruigrok, 1999). In this view, restructuring was considered a stage during which a company had to adapt to changes in the environment, such as Asian competition, slower growth rates, and higher input prices. The events of the late 1980s and 1990s have made this perspective unsustainable (Ruigrok, 1999). It seems unsafe to assume that firms will not continue to restructure in the coming years. This is because of the ongoing monetary and economic integration in the EU (European Union), further discussions on economic integration with the North Atlantic Area and the Americas, and the economic crisis in Asia (Onundo Riany, 2012). As a result, this brings up questions about the direction and nature of corporate restructuring efforts, and possible cross-national differences among Western corporates. Through a literature review, this study analyzes the significance of restructuring to organizations, especially Western trading companies.

The Firm's Performance and Organizational Restructuring

For organizations that want to remain relevant in the world of business, organizational restructuring is a vital strategy. Shermon (2012) defines restructuring as the changing of the structure of a firm's operations, governance, financing, and investment structures. Three Sigma Inc (2002) defines restructuring as the process of introducing structural changes in day-to-day business management for one-time transaction activities such as acquisitions, debt swaps, spin-offs, and stock repurchases. The main concern of restructuring is thus to change structures in the quest for long-term and short-term benefits. According to Shermon (2012), restructuring undertakings can be categorized into three main classifications: portfolio, organizational, and financial restructuring. Financial restructuring comprises changes to the firm's capital structure, including debt-equity swaps, leveraged recapitalization, and leveraged buyouts. A common way of restructuring financially is to increase equity by issuing new shares. Portfolio restructuring, however, comprises substantial changes in the firm's asset mix or business lines, including spin-offs, liquidation, and asset sales (Vyacheslav, 2000). Organizational restructuring, on the other hand, encompasses substantial changes in the firm's organizational structure, which include widening spans of control, corporate governance reformation, reducing product diversification, redrawing divisional boundaries, downsizing employment, revising compensation, and flattening hierarchical levels. This paper focuses on organizational restructuring, which comes with changes in human resource policies.
There is a need to change current human resource policies in line with the changing situation, and the human resource department needs to initiate change management. Vyacheslav (2000) demonstrates that in order to maintain employees' external and internal equity, the current pay structure should be streamlined. Andreas Kemper (n.d.) notes that there are signs that can be used to determine the need for organizational restructuring. Such signs include: performance appraisals become subjective; organizational communications are unpredictable; significant staffing increases or decreases are contemplated; accountability for results is not communicated clearly and measurably; retaining personnel and turnover become significant problems; parts of the organization are substantially under- or over-staffed; new skills and capabilities are needed to meet current or expected operational requirements; technology and/or innovation are creating changes in workflow and production processes; processes become fragmented and inefficient; and workforce productivity stagnates or morale diminishes (Jarso, 2016). Organizational restructuring has been shown to be important in a number of ways, particularly in the implementation of well-formulated strategies, and its benefits are not confined to reducing operating costs. A study conducted by Srivastava (2013) on the effect of restructuring on the operations of publicly traded Western companies tested whether restructuring resulted in substantial changes. In their study, they applied profit margin, the ratio of total asset turnover, return on assets, and changes in revenue before and after restructuring as proxies for firm performance (Srivastava, 2013). Through the analysis they were able to determine that there was a substantial increase in return on assets, profit margin, and total revenue after restructuring; however, there was no proof of a substantial effect on the asset turnover ratio. The researchers also found substantial evidence of significant market expectations and over-response to restructuring announcements. They performed a study to explore corporate performance improvements in companies involved in acquisitions and mergers. Similarly, other researchers carried out a study to analyse the relationship between mergers and market power in the US steel industry, using the methodology of New Empirical Industrial Organization and observing the period from 1951 to 1988. According to the study results, there was a slight boost to market power in the steel industry from 1972 to 1984 (Jarso, 2016). In the steel industry, acquisitions and mergers resulted in improvements in solvency, cash flow positions, liquidity, and efficiency. Also, one study conducted in the US in 1998 found that fewer than 20% of firms had considered restructuring as an essential step in integrating an acquired firm into their organization (Jarso, 2016). However, firms with comprehensive integration plans have, on average, managed to create value in their industries. Wambui (2012) conducted research on the impact of restructuring on the operations of organizations in the UK mobile phone industry. The study agreed that all three techniques of restructuring have a positive impact on a firm's market share and growth.
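To make the performance proxies discussed above concrete, here is a small, hypothetical sketch that is not drawn from any of the cited studies: the figures and field names are invented, and the ratios simply follow their standard accounting definitions.

```python
from dataclasses import dataclass

@dataclass
class Financials:
    revenue: float       # total revenue for the period
    net_income: float    # profit after tax
    total_assets: float  # average total assets for the period

def performance_ratios(f: Financials) -> dict:
    """Return the three proxies discussed in the studies above."""
    return {
        "profit_margin": f.net_income / f.revenue,
        "return_on_assets": f.net_income / f.total_assets,
        "asset_turnover": f.revenue / f.total_assets,
    }

# Hypothetical before/after figures for one firm (illustrative only).
before = Financials(revenue=120.0, net_income=6.0, total_assets=200.0)
after = Financials(revenue=150.0, net_income=12.0, total_assets=210.0)

for label, fin in (("before", before), ("after", after)):
    ratios = performance_ratios(fin)
    print(label, {k: round(v, 3) for k, v in ratios.items()})
```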
The findings from their study demonstrated that organizational restructuring had the least effect on a firm's market share, portfolio restructuring the second, whereas financial restructuring had the highest impact. Nonetheless, on the market growth rate, organizational restructuring had the highest impact.

References

Andreas Kemper, F. K., n.d. Corporate Restructuring Dynamics: A Case Study Analysis. Oestrich-Winkel, Germany: s.n.
Jarso, H. A., 2016. Restructuring Strategy and Performance of Major Commercial Banks in Kenya. Nairobi: University of Nairobi, Kenya.
Onundo Riany, G. H. M. O. O., 2012. Effects of Restructuring on Organization Performance of Mobile Phone Service Providers. International Review of Social Sciences and Humanities, 4(1), pp. 198-204.
Ruigrok, W. P. A., 1999. Corporate Restructuring and New Forms of Organizing: Evidence from Europe. [Online] Available at: https://www.freepatentsonline.com/article/Management-International-Review/57645213.html [Accessed 25 April 2017].
Shermon, G., 2012. Creating an Optimized Organisation: Key Opportunities and Challenges. s.l.: s.n.
Srivastava, S. B., 2013. Organizational Restructuring and Social Capital Activation. Berkeley: University of California.
Three Sigma Inc, 2002. Organizational Restructuring. s.l.: s.n.
Vyacheslav, T., 2000. An Investigation into Methods of Restructuring and Reorganizing Industrial Enterprises. Club of Economics in Miskolc TMP, Volume 5, pp. 81-84.
Wambui, N. A., 2012. Corporate Restructuring and Firm Performance in the Banking Sector of Kenya. Nairobi, Kenya: University of Nairobi.

Sunday, December 1, 2019

The Cherokee Removal Book Review

The Cherokees had lived in the interior Southeast for hundreds of years by the nineteenth century. But in the early eighteenth century, settlers of European ancestry started moving into the Cherokees' territory. From then on, the colonial governments in the area began demanding that the Cherokees give up their territory. By the end of the Revolutionary War, the Cherokees had surrendered more than half of their original territory to the state and federal governments. In the late 1700s the US began urging the Cherokees to stop hunting and give up their traditional ways of life, and to instead learn how to live, farm, and worship like Christian Americans. Despite everything, the white people in Georgia and other southern states that abutted the Cherokee Nation refused to accept the Cherokee people as social equals and urged their political representatives to take the Cherokees' land. The purchase of the Louisiana Territory from France in 1803 gave Thomas Jefferson the chance to relocate the eastern tribes beyond the Mississippi River. The War of 1812, with help from General Andrew Jackson, helped the United States to end what he called the absurdity of negotiating with the Indian tribes. From that point forward, Georgia politicians increasingly raised the pressure on the federal government to fulfill the Compact of 1802. In that agreement the federal government had to extinguish the Indian land title and remove the Cherokees from the state. The Cherokee government maintained that they constituted a sovereign nation independent of the American state and federal governments. The Treaty of Hopewell in 1785 established borders between the United States and the Cherokee Nation, offered the Cherokees the right to send a deputy to Congress, and made American settlers in Cherokee territory subject to Cherokee law. With help from John Ross they helped protect the national territory. In 1825 the Cherokees' capital was established near present-day Calhoun, Georgia. The Cherokee National Council advised the United States that it would refuse future cession requests and enacted a law prohibiting the sale of national land upon penalty of death. In 1827 the Cherokees adopted a written constitution, an act further opposed by Georgia. Between 1827 and 1831 the Georgia legislature extended the state's jurisdiction over the Cherokee territory, passed laws purporting to abolish the Cherokees' laws and government, and set in motion a process to seize the Cherokees' lands, divide them into parcels, and offer some of them by lottery to white Georgians. Andrew Jackson was elected president in 1828, immediately calling for the removal of the eastern tribes. In 1830 Congress passed the Indian Removal Act, which authorized the president to negotiate removal treaties. In 1831 combined army, militia, and other volunteer forces began to move the tribes along one of several routes to two forts located in Indian Territory: Fort Gibson and Fort Towson. The last tribe to be moved was the Cherokees, in 1838. During this move some tribes accepted bribes of money and/or land, while others didn't and were forced under the threat of death.
During the move there were several way stations along the route, and whether from bad planning or a lack of concern amounting to malfeasance, the Indians were not given access to proper food, medical supplies, or warm clothing, nor were they allowed to rest for any significant period of time. This resulted in the death of many of the tribal members. The Native Americans began to call the trail the Trail Where They Wept/Cried, and it was later changed to the Trail of Tears in modern translation. There were approximately eleven trails that took different tribes to different locations. They ranged from 200 to 900 miles and went through around fourteen states. There were an estimated 4,000 to 15,000 Cherokee deaths along these trails.

Tuesday, November 26, 2019

The Positive Effects of Gene Altering

positive effects of gene altering essays The Positive Effects of Gene Altering Since the beginning of the human race, we have been looking. We have been looking for ways to make our lives healthier, more comfortable, and happier. In the beginning it was simple rocks, plants, and fires. As our technology advanced so did the comfort of our lives. The wheel, the cure to the plaque, and who can forget the remote control, were all tools that made it possible to improve the quality of life. What tool lies ahead in the future to promote our well being and happiness? Genetic engineering is that tool. Every living thing is made up of genes, and with the capability of altering these genes, the possibilities are endless. Everything from better quality produce to the prevention of cancer is a possibility with genetic engineering, and scientists are just now beginning to understand the complex gene patterns. If you can imagine a world free of diabetes, or male pattern baldness, and genetics has a major role. Genetic engineers might someday have the capabilities to remove these genes or even clone wanted genes, and in the end allowing us to live the healthy, comfortable, happier lives we seek. The numbers of positive outcomes from genetic engineering are inconceivable. Genetic engineering will lead to healthier, more comfortable, and better lives. Genetic engineering will improve every day produce and goods. For producers involved with living organisms as their products, genes play a major role in the quality of their products and amount of profit. If a farmer's cows are not as lean, or their corn is diseased, then the demand for their product is going to be less than the competition. That is where genetics comes in. It is possible, by altering certain genes, to create a leaner cow, or a disease resistant stalk of corn, and it is this fact that makes genetic engineering invaluable to the every day farmer. If their cattle is leaner, or their chickens are engineered to...

Saturday, November 23, 2019

Identify the Ash

An ash tree commonly refers to trees of the genus Fraxinus (from the Latin for "ash tree") in the olive family Oleaceae. The ashes are usually medium to large trees, mostly deciduous, though a few subtropical species are evergreen. Identification of ash during the spring and early summer growing season is straightforward. Their leaves are opposite (rarely in whorls of three) and mostly pinnately compound, but can be simple in a few species. The seeds, popularly known as keys or helicopter seeds, are a type of fruit known as a samara. The genus Fraxinus contains 45-65 species worldwide.

The Common North American Ash Species

Green and white ash trees are the two most common ash species, and their range covers most of the Eastern United States and Canada. Other ash trees that cover significant ranges are black ash, Carolina ash, and blue ash. Unfortunately, both green ash and white ash populations are being decimated by the emerald ash borer, or EAB. Discovered in 2002 near Detroit, Michigan, the boring beetle has spread through much of the northern ash range and threatens billions of ash trees.

Dormant Identification

Ash has shield-shaped leaf scars (at the point where the leaf breaks away from the twig). The tree has tall, pointed buds above the leaf scars. There are no stipules on ash trees, so there are no stipule scars. The tree in winter has pitchfork-like limb tips, and there may be long, narrow clusters of winged seeds, or samaras. Ash has a continuous line of bundle scars inside the leaf scar that looks like a smiley face. Important: the leaf scar is the major botanical feature when keying a green or white ash. The white ash will have a U-shaped leaf scar with the bud inside the dip; the green ash will have a D-shaped leaf scar with the bud sitting atop the scar.

Leaves: opposite, pinnately compound, without teeth.
Bark: gray and furrowed.
Fruit: a single winged key hanging in clusters.

The Most Common North American Hardwood List

ash - Genus Fraxinus
beech - Genus Fagus
basswood - Genus Tilia
birch - Genus Betula
black cherry - Genus Prunus
black walnut/butternut - Genus Juglans
cottonwood - Genus Populus
elm - Genus Ulmus
hackberry - Genus Celtis
hickory - Genus Carya
holly - Genus Ilex
locust - Genus Robinia and Gleditsia
magnolia - Genus Magnolia
maple - Genus Acer
oak - Genus Quercus
poplar - Genus Populus
red alder - Genus Alnus
royal paulownia - Genus Paulownia
sassafras - Genus Sassafras
sweetgum - Genus Liquidambar
sycamore - Genus Platanus
tupelo - Genus Nyssa
willow - Genus Salix
yellow-poplar - Genus Liriodendron

Thursday, November 21, 2019

The Rise and Fall of the Berlin Wall Essay

Other allies entered the war, and it grew to be a world war. Germany wanted Britain, but Britain could not allow Hitler to take it. The war ended with the entrance of America and the subsequent defeat of Japan through the bombing of Nagasaki and Hiroshima. The war had increased in magnitude to such an extent that not even the deaths of Hitler and his ally Benito Mussolini of Italy alone would have ended it, and Germany was left all the more unstable without the rule of the Nazi regime that had held the country together under authoritarian rule. After the war ended, the territory that was left of what could be termed Nazi Germany was divided into four occupation zones according to the Potsdam Agreement. Each of the zones was occupied by one of the Allied powers: the Americans, the British, the French and the Soviets. The former capital of Germany, Berlin, was occupied by the Allied powers and acted as their centre of control for the whole region. It was also subdivided into four zones, although the city lay within the zone occupied by the Soviets.

The intention of the agreement that had led to the division of Germany among the powers was to govern the country together as one. But immediately after the war there was growing tension between the Soviet Union, which was working to occupy the superpower vacuum in the world, and the Allied forces led by America. The era of the Cold War had set in. The advent of the Cold War saw increasing tension between the forces that had divided Germany among themselves. The French, British and American zones were brought together to form the Federal Republic of Germany and West Berlin. On the other hand, the region which was under Soviet rule formed the German Democratic Republic, which included East Berlin. Therefore, the main forces behind the division of Germany were the growing differences emerging between the Allied forces and the Soviet Union, which were mainly based on ideological differences between the two (Maddrell, 2006).

Growing Differences Between West and East

The Cold War was purely based on ideological differences between the USA and the Soviet Union, which was then led by Russia. These were the main differences that had led to the eruption of the Cold War.

Tuesday, November 19, 2019

Female genital mutilation and the practice of midwifery Dissertation

The practices of FGM seem barbaric and cruel to Western society, while in the societies that maintain them such practices are carried out in the belief that there is a benefit to removing the sexual arousal mechanisms from women in order to make them less carnal and more proper. The difficulty comes in trying to honour cultures for their beliefs while motivating them to change those beliefs because of the false and dangerous consequences where female sex organs are concerned. Midwifery requires the acceptance of beliefs in concert with the application of good medical and traditional knowledge where childbirth is concerned. Consulting and caring for women who have had FGM requires sensitivity to their cultural beliefs together with a firm understanding of how such procedures affect women in reference to their procreative lives. Psychological and medical knowledge is necessary to treat women with both respect and dignity despite any converse beliefs on the subject. While ideally it would be beneficial to abolish the act of FGM, at this point in time it is still a potential problem that might arise when caring for patients from certain cultures or who come from a history of traumatic circumstances that ended in FGM. The following research proposal will explore the potential for a project in which the subject of FGM is examined through victims of the procedures, through the medical consequences that midwives face when dealing with patients who are victims of FGM, and through examining the balance between the victimisation of women and the cultural belief systems that must be honoured and respected while finding ways to deal with the consequences and offer reparative solutions where possible.

1.2 Background

Female genital mutilation, also known as female genital cutting and female circumcision, has been defined by the World Health Organisation as "all procedures that involve the partial or total removal of female genitalia, or other injury to the female genital organs for non-medical reasons". Unlike male circumcision, there are no health benefits to female circumcision, and it often contributes to urination difficulties or difficulty in childbirth later in life. The procedure most often occurs between birth and the age of 15 and is considered a violation against women by world organisations across agencies. There are an estimated 100 to 140 million women who live with the consequences of the procedure, 92 million of them on the African continent. There are four major types of FGM. These types are as follows:

• Clitoridectomy: partial or total removal of the clitoris (a small, sensitive and erectile part of the female genitals) and, in very rare cases, only the prepuce (the fold of skin surrounding the clitoris).
• Excision: partial or total removal of the clitoris and the labia minora, with or without excision of the labia majora (the labia are "the lips" that surround the vagina).
• Infibulation: narrowing of the vaginal opening through the creation of a covering seal. The seal is formed by cutting and repositioning the inner, or outer, labia, with or without removal of the clitoris.
• Other: all other harmful procedures to the female genitalia for non-medical purposes, e.g. pricking, piercing, incising, scraping and cauterizing the genital area.

Sunday, November 17, 2019

Battle of Trafalgar Essay

The Battle of Trafalgar was the most significant battle won by the British against the combined forces of the French and Spanish fleets during the Napoleonic Wars. This battle also had a significant impact on the concept of navigation within the naval doctrine of war. It proved that tactical unorthodoxy could win battles: even though you might be outmanned and outgunned by your opponent, you can still win by deviating from the old naval doctrine. This battle was part of a much larger campaign called the Trafalgar campaign, which included several different battles leading up to the final battle at Trafalgar. The campaign was a long and complicated series of fleet maneuvers carried out by the combined French and Spanish fleets, and the opposing moves of the British Royal Navy, during much of 1805. These were the culmination of French plans to force a passage through the English Channel and so achieve a successful invasion of the United Kingdom. The plans were extremely complicated and proved to be impractical. Much of the detail was due to the personal intervention of Napoleon, who was a soldier rather than a sailor, and this was largely why the plans failed to consider the effects of weather, difficulties in communication, and the intervention of the Royal Navy. Despite limited successes in achieving some elements of the plan, the French commanders were unable to follow the main objective through to execution. The campaign, which took place over thousands of miles of ocean, was marked by several naval engagements, most significantly the Battle of Trafalgar on 21 October 1805. The naval doctrine at the time dictated that both sides should line up parallel to each other in a straight line so that they could engage in battle and bring all their guns to bear against the enemy. One of the reasons for the development of the line of battle was to help the admiral control the fleet: if all the ships were in line, signaling in battle became possible. The line also had defensive properties, allowing either side to disengage by breaking away in formation; if the attacker chose to continue combat, their line would be broken as well. This type of warfare allowed each side to fight a battle and then disengage at any time to minimize the losses to their fleet. However, with England under threat of invasion by Napoleon and his grand army, British Admiral Lord Horatio Nelson needed to ensure that the British were in control of the seas. In order to do this, Nelson needed to fight and win a decisive battle that would clearly establish Britain's naval supremacy. To achieve that, he would have to make sure that the combined French and Spanish fleets actually remained in the battle long enough for a decisive victory to be won. What Nelson planned, instead of lining up parallel to the opposing fleet, was to take his navy, charge at the enemy, and deliberately cut their battle line in two. This type of deviation from normal naval warfare in terms of navigation was unheard of at the time. Despite the risk to the British fleet, Nelson believed that this was the best way to engage the enemy in the upcoming battle because it had numerous advantages. The primary advantage was that it would allow the British to cut off half of the enemy fleet, surround it, and force a fight to the end.
This was unlike normal engagements, where the battle was often inconclusive because both fleets would withdraw before a clear winner emerged. The plan had three principal advantages. First, it would allow the British fleet to close with the Franco-Spanish fleet as quickly as possible, reducing the chance that it would be able to escape without fighting. Second, it would quickly bring on a close-quarters battle by breaking the Franco-Spanish line and inducing a series of individual ship-to-ship fights, in which the British were likely to prevail. Nelson knew that the better seamanship, faster gunnery, and higher morale of his crews were great advantages. Third, it would bring a decisive concentration on the rear of the Franco-Spanish fleet. The ships in the front of the enemy fleet would have to turn back to support the rear, and this would take a long time. Additionally, once the Franco-Spanish line had been broken, their ships would be relatively defenseless against powerful broadsides from the British fleet and would take a long time to reposition and return fire. The main drawback of this strategy was that, in sailing into the combined French and Spanish battle line, the British ships would be fully exposed to the enemy's broadsides without the ability to return fire. In order to lessen the time the fleet was exposed to this danger, Nelson would have to drive the fleet straight into the enemy battle line as fast as he could. This was yet another departure from the navigation rules of naval warfare. Nelson was also well aware that French and Spanish gunners were ill-trained and would probably be supplemented with soldiers; these untrained men would have difficulty firing accurately from a moving gun platform. This was in stark contrast to the British gunners, who were well drilled, and the Royal Marines, who were expert marksmen. Another advantage for the British fleet was that the enemy was sailing across a heavy swell, causing their ships to roll heavily and exacerbating these problems. Nelson's plan was indeed a gamble, but a carefully calculated one. The battle itself started exactly as Nelson wanted it to. The British fleet was able to cut the French and Spanish battle line in half, thus forcing a close-quarters battle. Despite the huge risk Nelson was taking, his plan worked. He scored a huge victory against the combined French and Spanish fleet, capturing over twenty enemy ships and inflicting heavy casualties while suffering few himself. Unfortunately, during the battle Nelson was pierced by a musket ball and died from his wounds before he could see the outcome of the victory. Some argue that his loss outweighed any gains made by the British Navy. Following the battle, the Royal Navy was never again seriously challenged by the French fleet in a large-scale engagement. Napoleon had already abandoned his plans of invasion before the battle, and they were never revived. This battle firmly established Britain's naval supremacy over France. In terms of navigation, this battle was very significant. Most importantly, it proved that following standard navigational techniques during an engagement won't always win a battle. The best tactic is to be unpredictable, so that the enemy has to adapt to what you are doing, thus giving you the tactical advantage. This is exactly what Nelson did at the Battle of Trafalgar, and it paid off.
He proved that sometimes in battle deviating from the norms of battle navigation is the best thing to do, and ever since, navies around the world have looked to the strategies employed by Nelson. What is done today is that naval commanders are educated in naval history so that they can learn, and even employ, these types of strategies if they need to in battle. In conclusion, the Battle of Trafalgar was a turning point in how ships would fight naval battles in terms of navigation, due to the tactical unorthodoxy employed by Nelson. The battle has had long-term effects, and even today commanders look back and employ some of the same strategies. The importance of this battle cannot be overstated, because not only was it the turning point in the Napoleonic Wars for the British in terms of establishing naval supremacy at the time, it was a turning point in naval warfare. Navigation would never be the same, thanks to one man and one decisive battle.

Thursday, November 14, 2019

Salmon Farming Essay

If you recently ordered salmon off the menu of your favorite restaurant, or purchased it from your local grocery store, chances are it was farmed. According to "Salmon of the Americas," an organization of salmon-producing companies in Canada, Chile and the United States, 70 percent of the salmon produced in British Columbia and Washington comes from salmon farms. If it weren't for these farms, we would not have the luxury and abundance of this delicious and healthy food available to us year round. Salmon farming represents one very important way to feed the world, and people want to eat more salmon and seafood than can be caught. Salmon farming began over 30 years ago and has become a huge industry; experts say it's the fastest growing segment of agriculture. Salmon farming plays an important role in the economies of many areas as well. Jobs and other economic benefits contribute to the value of salmon as much as its role in good nutrition. Salmon is an oily fish rich in omega-3 fatty acids, a substance that almost certainly helps protect against heart disease and may also reduce the risk of cancer and Alzheimer's. There is one species of Atlantic salmon and five species of Pacific. Atlantic salmon account for almost 95 percent of the farmed salmon produced, and most of them are farm-raised on the Pacific coast. Pacific species account for all of the wild salmon caught in the Americas, and some of them are also farm-raised. No wild Atlantic salmon are fished commercially in North America, as they are an endangered species. Atlantic salmon have become the species of choice to raise on farms because they are more adaptable to the farming techniques and make better use of feed, so they produce more salmon with less feed. Not everybody agrees, however, that farmed salmon raised in net pens are healthy for the environment or for you to eat. Over the years, there have been numerous stories in the media that have pointed out the negatives of farm-raised salmon. These arguments have ranged from waste from salmon farms, the spreading of disease from farmed to wild fish, and the negative impacts of farm-raised fish escaping and interacting with native fish, to, more recently, the effects of farmed salmon consumption on human health. The latest issue that the media got their hands on, and consequently got the public concerned about, was a report that polychlorinated... ...risk for cancer. There is no need to be alarmed about high levels of contaminants when it comes to consuming any kind of salmon. What we do need to be alarmed about is the media reporting and their level of contaminants!

Ronald A. Hites, Jeffery A. Foran, David O. Carpenter, M. Coreen Hamilton, Barbara A. Knuth, Steven J. Schwager (2004) study: Global assessment of organic contaminants in farmed salmon, Science 303:226-229.
Centers for Disease Control and Prevention, National Center for Environmental Health, Health Studies Branch.
Kevin Amos, National Aquatic Animal Health Coordinator, NOAA Fisheries.
Salmon of the Americas. SOTA is an organization of salmon-producing companies in Canada, Chile and the United States whose mission is to improve health, awareness and dining enjoyment of consumers in North America by providing timely, complete, accurate and insightful information about salmon on behalf of the member companies.
Ashley Dean and Mark Shwartz, 2003. Salmon farms pose significant threat to salmon fisheries in the Pacific Northwest, researchers find. Stanford University.
American Journal of Clinical Nutrition, April 2002, 76:608-613.
Pediatric Research, 1998, 44(2):201-209.

Tuesday, November 12, 2019

Is Prejudice and Discrimination a Myth or a Real Life Situation Essay

Prejudice is a cultural attitude that rests on negative stereotypes about individuals or groups because of their cultural, religious, racial, or ethnic background. Discrimination is the active denial of desired goals to a category of persons. A category can be based on sex, ethnicity, nationality, religion, language, or class. More recently, disadvantaged groups also include those based on gender, age, and physical disabilities. Prejudice and discrimination are deeply embedded at both the individual and societal levels. Attempts to eradicate prejudice and discrimination must thus deal with prevailing beliefs or ideologies, and with social structure. Although there is no wide agreement as to the "cause" of prejudice and discrimination, there is a consensus that they constitute a learned behaviour. The internalization of prejudice starts with parents and, later, teachers, the groups primarily responsible for the formation of attitudes within children. The media and social institutions solidify prejudicial attitudes, giving them social legitimacy. In a sense, it is incorrect to speak of "eradicating" prejudice, since prejudice is learned. At best, one can reduce prejudice and discrimination. Society looks most often to education and legislation to alleviate prejudice and discrimination, since, for reasons still not clearly known, inter-group contact alone is not enough to reduce prejudice. On one hand, multicultural education, whether direct or indirect, constitutes the mainstay of educational efforts to eliminate prejudice. On the other hand, the emphasis on civil rights, enlightened immigration policies, and mandates for quota hiring are the cornerstones of legal approaches to alleviating the effects of prejudice and discrimination. The most overlooked area in resolving the problems of prejudice and discrimination lies in the web of close relationships where genuine feelings of love can be fostered and strengthened. The private sphere may indeed be the last frontier where a solution to the problems of prejudice may have to be found.

Saturday, November 9, 2019

Is There Such a Phenomenon as ‘Pilot Error’ in Aviation Accidents?

The term ‘pilot error’ has been attributed to 78%[1] of Army aviation accidents. Despite the technological advances in Rotary Wing (RW) aircraft, i.e. helicopters, accidents attributed to technology failure are decreasing, whilst those attributed to pilot error are increasing. Currently, RW accidents are investigated and recorded using a taxonomy shown to suffer difficulties when coding human error and quantifying the sequence of events prior to an air accident. As Human Factors (HF) attributed accidents are increasing, lessons aren't being identified, nor is the root cause known. Therefore, I propose to introduce the Human Factors Analysis and Classification System (HFACS), a taxonomy untried in the UK military, developed as an analytical framework to investigate the role of HF in United States of America (USA) aviation accidents. HFACS supports organizational structure, precursors of psychological error and actual error, but little research exists to explain the inter-relations between the levels and components, or the application in the military RW domain. Therefore, I intend to conduct post-hoc analysis using HFACS of 30+ air accidents from 1993 to the present. The implications of this research are to develop a greater understanding of how Occupational Psychology (OP) can help pilots understand HF, raise flight awareness and reduce HF-attributed fatalities.

Introduction

"On 2 June 1994 an RAF Chinook Mk2 helicopter, ZD 576, crashed on the Mull of Kintyre on a flight from RAF Aldergrove to Fort George, near Inverness. All on board were killed: the two pilots, the two crewmembers and the 25 passengers. This was to have been a routine, non-operational flight, to take senior personnel of the security services to a conference. The sortie was planned in advance; it was entirely appropriate for these pilots, Flt Lts Jonathan Tapper and Richard Cook, and for the aircraft, ZD576, to have been assigned this mission. An RAF Board of Inquiry (BOI) was convened following the accident and carried out a detailed investigation. BOIs are established to investigate the cause of serious accidents, primarily to make safety recommendations but, at the time of this crash, also to determine if human failings were involved. Their conclusion, after an exhaustive investigation, was that there was not one single piece of known fact that does not fit the conclusion that this tragic accident was a controlled flight into terrain." The BOI found no evidence of mechanical failure, and multiple witnesses stated that the aircraft appeared to be flying at 100 ft at 150 knots; there was no engine note change, the aircraft didn't appear to be in distress, and at the crash scene the throttle controls were still in the cruise position (not at emergency power, as might be expected if collision with the ground was imminent).[2] So the causation moved to Human Factors (HF). But some questions remain unanswered: on that fateful day, why did these seasoned and experienced pilots fly their aircraft and passengers into a hillside at 150 knots? If this accident was attributed to HF, it now appears to some that the aircrew themselves are more deadly than the aircraft they fly (Mason, 1993: cited in Murray, 1997). The crucial issue, therefore, is to understand why pilots Flt Lts Jonathan Tapper and Richard Cook's actions made sense to them at the time the fatal accident happened.

Relevance of Research

So why is this topic relevant to OP research? The British Army's branch of aviation is an organization called the Army Air Corps (AAC) and, in keeping with the trends of the other two services, the Fleet Air Arm of the Royal Navy and the Royal Air Force, it has seen a steep decline in accidents in recent years. However, accidents attributed to Human Factors (HF) have steadily risen and are responsible for 90% of all aviation accidents.[3] This research will depart from the traditional perspective of the label 'pilot error' as the underlying causation of aviation accidents; current theory and research support a 'systemic' approach to the human factors investigation of aviation accidents. This approach is derived from Reason's model of accident causation, which examines the causal factors of organizational accidents across a spectrum of sectors, from the nuclear power industry (e.g. Chernobyl) and off-shore oil and gas production (e.g. Piper Alpha) to transportation (e.g. Charing Cross) (Reason, 1990). This approach recognizes that humans, as components of socio-technical systems, are involved in designing, manufacturing, maintaining, managing and operating aviation systems, including the methods of selecting and assessing potential employees to the aviation industry, from pilots and cabin crew to engineers and baggage handlers. Therefore, our ability to identify, understand and manage these potential issues enables us to develop systems that are more error-tolerant, thus reducing risk and the potential for accidents.
The British Army branch of aviation is an organization called the Army Air Corps (AAC) and in keeping with the trends of the other two services the Fleet Air Arm of the Royal Navy and the Royal Air Force, it has seen a steep decline in accidents in recent years. However, accidents attributed to Human Factors (HF) have steadily risen and are responsible for 90% of all aviation accidents. [3]. This research will depart from the traditional perspective of the label â€Å"pilot error† as the underlying causation of Aviation accidents, whereby current theory and research purport a ‘systemic’ approach to human factors investigation of Aviation accidents. This approach is derived from Reasons Model of Accident Causation, which examines the causal factors of organizational accidents across a spectrum of sectors from; nuclear power industry (e. g. , Chernobyl), off-shore oil and gas production (e. g. Piper Alpha) to transportation (e. g. Charring Cross) (Reason 1990). This approach recognizes that humans, as components of socio-technical systems, are involved in designing, manufacturing, maintaining, managing and operating aviation systems including the methods of selecting and assessing potential employees to the aviation industry from Pilots, Cabin crew, Engineers and Baggage handlers. Therefore, our ability to identify, understand and manage these potential issues enables us to develop systems that are more error-tolerant, thus reducing risk and the potential for accidents. I intend to be able to provide a more consistent, reliable and detailed analysis of HF causal factors that attribute to aviation accidents within the AAC. On average, the AAC experiences around 6 major accidents per year, although a record year was recorded with only two accidents in 1993. However, in 1992 aviation accidents cost over ?10M[4] in taxpayer’s money. Usually the causation of accidents are classified (human error, technical failure or operational hazard). Whilst there was a reduced figure of ?1M for 1993, the satisfaction of this financial success was marred by the fact that one of the two accidents resulted in a fatality. However, it is the concept of human error or pilot error that dominates the outcome of most BOIs particularly when there are fatalities. Current taxonomies used to classify accident causal groups do not extend beyond this distinction although more recently organizational factors have been included to reflect a more systemic view of accident causation. However, the HF domain is extensive and current taxonomies employed by the AAC do not encapsulate this. By using HFACS (currently adopted by the US Navy, Army, Airforce, and Coast Guard), a human error orientated accident investigation and analysis process; I will conduct post-hoc analysis of 30+ category four and five accidents from 1993 to present day. Literature review Before we start to look at any reduction in Air Accidents we need to grasp an understanding of category of accident. How many times when we hear about air accidents, â€Å"it was pilot error†, merely noting HF was responsible doesn’t prevent repetition nor identify any critical lessons, plus the description is far too generic. The term pilot error doesn’t assist us in understanding the processes underlying what leads to a crash, nor does it give us a means to apply remediation or even identify lessons to prevent re-occurrence. The other issue is that it is very seldom one single factor caused the helicopter to crash. 
Professor RG Green (1996) uses a categorization method of Modes of Failure, Aircrew Factors and System Failures, and within each of these exist sub-categories. For example, the Modes of Failure category lists a number of common errors made by an individual or individuals: selective attention, automatic behaviour, forming inappropriate mental models, the effects of fatigue, and perceptual challenges leading to spatial disorientation, which is particularly common in RW flight. Aircrew Factors refers to background factors relevant to individuals: decision-making, personality, problem solving, crew composition, Cockpit Authority Gradient (CAG) and life stress. Finally, the System Factors are those applicable to the organization that we serve under, termed enabling conditions, such as ergonomics, job pressures and organizational culture.

Bodies of Research

Human error does not just happen; usually a sequence of events unfolds prior to the accident. Human error is often a product of deeper problems; errors are systematically connected to features of the individual's tools, tasks and the surrounding media (Dekker, 2001). Therefore, in order to provide remediation through the development of strategies, it is vital that we understand the various perspectives experienced through flight and how these could affect a pilot. These range across the cognitive, ergonomic, behavioural, psychosocial, aeromedical and organizational perspectives (Wiegmann and Shappell, 2003). Within the environment of human performance, error is a unique state of a pilot's operational environment that could be affected by any one of, or all of, these perspectives.

Rasmussen (1982) utilized a cognitive methodology for understanding aircraft accidents. O'Hare et al. (1994) described the system as consisting of six stages: 'detection of stimulus; diagnosis of the system; setting the goal; selection of strategy; adoption of procedure; and the action stage'. The model was found to be helpful in identifying the human errors involved in aviation accidents and incidents (O'Hare et al., 1994). One drawback is that these cognitive models are operator-centric and do not consider other factors such as the working environment, task properties, or the supervisory and work organization (Wiegmann and Shappell, 2001c). Edwards (1972) developed the 'HELS system' model, which was subsequently called the 'SHEL' model, noting that humans do not perform tasks on their own but within the context of a system. Initially SHEL focused on ergonomics and considered the man-machine interface; it is a tool that can be applied to investigate air accidents through the evaluation of human-machine system failures. The 'SHEL' model categorizes failure into software, hardware, liveware and environmental conditions. However, the SHEL model fails to address the functions of management and the cultural aspects of society.

Empirical findings

Bird's Domino Theory (1974) views accidents as a linear sequence of related factors, or series of events, that lead to an actual mishap. The theory covers a five-step sequence. The first domino is safety/loss of control; the second domino, basic causes, identifies the origin of causes, such as human, environmental or task-related factors; the immediate causes include substandard practices and circumstances; the fourth domino involves contact with hazards; and the last domino relates to personal injury and damage to assets (Bird, 1974; Heinrich et al., 1980). It is much like falling dominoes: each step causes the next to occur.
Removing the factors from any of the first three dominoes could prevent an accident. This view has been expanded upon by Reason (1990). Reason's 'Swiss cheese' model (Fig. 1) includes four levels of human failure: organizational factors, unsafe supervision, preconditions for unsafe acts and unsafe acts. The HFACS was developed from this model in order to address some of its limitations. The starting point for the chain of events is the organization: 'fallible decisions' take place at higher levels, resulting in latent defects waiting for enabling factors (Reason, 1990). Management and safe supervision underpin any air operation through flight operations, planning, maintenance and training. However, it is the corporate executives, the decision makers, who make available the resources and finances and set budgets. These are then cascaded down through the tiers of management to the operator. This sounds like an efficient and effective organization, but according to Reason failures in the organization come about through breakdowns in these interactions, and holes begin to form in the cheese. Within an organization, unsafe acts may be manifested by a lack of supervision, attributed to organizational cultures operating within a high-pressure environment, with insufficient training or poor communication. The latent conditions at the unsafe supervision level promote hazard formation and increase the operational risks. Working towards the accident, the third level of the model is preconditions for unsafe acts: the performance of the aircrew can be affected by fatigue, complacency, inadequate design, and their psychological and physical state (USNSC, 2001; Shappell and Wiegmann, 2001a; Wiegmann and Shappell, 2003). Finally, the unsafe acts of the operator are the direct causal factor of the accident; these actions committed by the aircrew could be either intentional or unintentional (Reason, 1990).

The 'Swiss cheese' model sees the aviation environment as a multifaceted system that does not work well when an incorrect decision has been taken at higher levels (Wiegmann and Shappell, 2003). The model depicts thin slices of cheese, each slice symbolizing a defence against aviation accidents, with the holes portraying latent conditions or active failures. A chain of events usually leads to an accident: as errors are made, the holes begin to appear in the cheese, a datum line penetrates the cheese and, if all the holes line up along that line, a catastrophic failure occurs and a crash ensues. Causal attributions of poor management and supervision (organizational perspective) may only be unearthed if equipment is found to be in poor maintenance (ergonomic). If the organizational culture is one of a pressured environment, this could place unnecessary demands on the aircrew, producing fatigue (aeromedical). Or management could ignore pilots' concerns if the CAG was imbalanced (psychosocial perspective). All of these factors could hinder and prevent aircrew from processing and performing efficiently in the cockpit, which could result in pilot error followed later by an air accident. However, Reason's model does not identify what the holes in the cheese actually depict. For any intervention strategy to function and prevent reoccurrence, the organization must be able to identify the causal factors involved. The important issue in a HF investigation is to understand why the pilots' actions made sense to them at the time the accident happened (Dekker, 2002).
HFACS was specifically developed to define the latent and active failures implicated in Reason's Swiss cheese model so that it could be used as an accident investigation and analysis tool (Shappell and Wiegmann, 1997; 1998; 1999; 2000; 2001). The framework was developed and refined by analyzing hundreds of accident reports containing thousands of human causal factors. Although designed originally for use within the context of military aviation, HFACS has been shown to be effective within the civil aviation arena as well (Wiegmann and Shappell, 2001b). Specifically, HFACS describes four levels of failure, each corresponding to one of the cheese slices of Reason's model. These are a) unsafe acts, b) preconditions for unsafe acts, c) unsafe supervision and d) organizational influences (Wiegmann and Shappell, 2001c).

Methodology

I will use a combination of qualitative (i.e. the process of recoding causal factors based on individual and group discussions) and quantitative (causal factor analysis of recoded narratives against the HFACS taxonomy) research methodologies to identify further causal groups to be used in classifying accidents, and to assess the validity of the HFACS framework as a tool to classify and analyze accidents. Data to be used in this study will be derived from the narrative findings of AAC BOIs conducted between 1990 and 2006[5]. This should equate to approximately 30-35 narratives for use in the analysis. Authority to access the Board of Inquiry library has been granted by the Army's Flight Safety and Standards Inspectorate, which is the AAC organization responsible for conducting aviation accident investigations and analysis. Only data comprising category 4 accidents (single fatalities and severe damage to aircraft) and category 5 accidents (multiple fatalities and loss of aircraft) will be used. In addition to the narrative description in the report, the following information will also be collected: the type of mission in which the accident happened (e.g. low-level flying, exercise, HELEARM[6]); the flight phase (e.g. take-off, in the hover, flight in the operational area, approach, and landing); the rank of the pilot(s) involved (to measure CAG and see if this is a contributory factor); and the type and category of aircraft. This study will concentrate on all Army helicopters, including all variants of the Lynx, Gazelle and Squirrel trainer.

Coding frames will be developed and tested for use in the final recoding exercise. An Occupational Psychologist from the Human Factors department of the MOD will supervise the training, and the coders will be a number of RW pilots with a minimum of 1,000 hours flying time at the time of the research. Each pilot will be provided with a workshop in the use of the HFACS framework. This is to ensure parity and that all coders understand the HFACS categories. After the period of training, the raters will be randomly assigned air accidents so that two independent raters can independently code each accident. It is intended to assess the inter-rater reliability on a category-by-category basis. The degree of agreement (the inter-rater reliability) between the two coders will initially be measured using Cohen's Kappa (Cohen, 1960; Landis and Koch, 1977). SPSS v15.0 will be used to quantify the frequency of causal factors across the 30+ narratives. It is also hoped to compare the inter-rater reliability between all the coders using Fleiss's Kappa, an assessment method used to measure the similarity of agreement between observers that treats them symmetrically (Fleiss, 1981).
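As a rough illustration of the chance-corrected agreement statistic described above, the following is a minimal Python sketch of Cohen's Kappa for two coders; the HFACS-style category labels and the ratings themselves are invented purely for illustration, and in the actual study the calculation would be run in SPSS as stated.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters coding the same set of items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each rater's marginals.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal distribution
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten accident causal factors by two coders
coder_1 = ["unsafe_act", "precondition", "unsafe_act", "supervision",
           "organizational", "unsafe_act", "precondition", "unsafe_act",
           "supervision", "unsafe_act"]
coder_2 = ["unsafe_act", "precondition", "precondition", "supervision",
           "organizational", "unsafe_act", "precondition", "unsafe_act",
           "unsafe_act", "unsafe_act"]

kappa = cohens_kappa(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")  # values of 0.41-0.60 are read as moderate agreement
```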
The level of agreement between the raters is statistically measured against what could be achieved by chance. A Kappa value in the range 0.41-0.60 would be classed as achieving moderate inter-rater reliability. Cohen's Kappa is based on the statistical measurement of the level of agreement between raters in excess of that expected by chance (Landis and Koch, 1977).

Discussion

The research intends to apply an untried methodology, not as yet sanctioned by the UK's Ministry of Defence, in order to analyze a number of air accidents within the AAC between 1993 and the present day. Thirty-plus serious category 4 and 5 accidents will be re-classified using the HFACS taxonomy. Where pilot error was the cause, it is intended to identify the HF associated with, and attributed to, each accident. It is also hoped that the HFACS taxonomy can accommodate the HF identified during re-coding and therefore provide tangible evidence that HFACS could be used by the AAC as a reliable tool. It is hoped that a number of comparative analyses can be achieved: are accidents more prevalent when flying in visual meteorological conditions (VMC) or in poor-visibility instrument meteorological conditions (IMC)? This gives two sets of visual conditions: VMC and daylight, or the impoverished visual conditions of IMC or twilight/night-time (Wiegmann and Shappell, 2003). It would also be interesting to examine the causation and aircrew behaviours of fatal and non-fatal accidents, and whether these are more prevalent on operations or during training. The author was in Afghanistan in 2006 and, over a six-month period, there was not a single crash, let alone a fatality; yet the AAC records around six crashes a year, so again this is worthy of investigation. The ranks of the pilots are also of interest with regard to achieving a good CAG: there may be causal evidence to indicate that an imbalance between ranks could have led to an air crash. The organizational hierarchy will also be researched: is it one specific organization that keeps having crashes, and is there an issue with the pressures placed on the pilots by that organization? The inter-rater reliability will also be calculated using Fleiss's Kappa, which works for more than two raters; it is intended that an acceptable level of inter-rater reliability will be recorded. In addition, the intra-rater reliability, as a holistic measurement, is hoped to be high in order to support the credibility of the results. An organization could benefit from gaining a standardized, consistent coding methodology; the data can be used to identify trends, and intervention strategies can then target those trends in accident causation. It is hoped that granularity can be achieved beyond the label "pilot error" to identify the underlying causation of each accident. If successful, and if HFACS is adopted UK military-wide, perhaps the real cause of why ZD576 flew into the Mull of Kintyre could be unearthed. If other military organizations can reap success, then HFACS could be a reliable tool to identify causation and could be used in accident investigation.

Ethics

I will comply fully with the BPS[7] ethical principles when conducting research with human participants. All identifiable information relating to individuals discussed in the narrative findings will be removed, in accordance with the Data Protection Act, for the purposes of analysis and reporting.
All participants will be fully apprised of my research; all of the coders are volunteers who will give informed consent before the research and will understand how the information will be used. The coders will be reviewing material depicting instances of fatalities, so it is important that they do not come to any psychological harm over and above the risk of harm in ordinary life (participants will be invited to contact me at any time if participation causes concern or to ask questions). Maintaining a good rapport, particularly with the coders, is also desirable. Being an Aeronautical Engineer should also help bridge any cultural gaps and maintain a good working relationship.

Thursday, November 7, 2019

Erwin Schrödinger and the Schrödinger's Cat Experiment

Erwin Rudolf Josef Alexander Schrödinger (born on August 12, 1887 in Vienna, Austria) was a physicist who conducted groundbreaking work in quantum mechanics, a field which studies how energy and matter behave at very small length scales. In 1926, Schrödinger developed an equation that predicted where an electron would be located in an atom. In 1933, he received a Nobel Prize for this work, along with physicist Paul Dirac.

Fast Facts: Erwin Schrödinger
Full Name: Erwin Rudolf Josef Alexander Schrödinger
Known For: Physicist who developed the Schrödinger equation, which signified a great stride for quantum mechanics. Also developed the thought experiment known as "Schrödinger's Cat."
Born: August 12, 1887 in Vienna, Austria
Died: January 4, 1961 in Vienna, Austria
Parents: Rudolf and Georgine Schrödinger
Spouse: Annemarie Bertel
Child: Ruth Georgie Erica (b. 1934)
Education: University of Vienna
Awards: 1933 Nobel Prize in Physics, shared with quantum theorist Paul A.M. Dirac
Publications: What Is Life? (1944), Nature and the Greeks (1954), and My View of the World (1961)

Schrödinger may be more popularly known for "Schrödinger's Cat," a thought experiment he devised in 1935 to illustrate problems with a common interpretation of quantum mechanics.

Early Years and Education

Schrödinger was the only child of Rudolf Schrödinger, a linoleum and oilcloth factory worker who had inherited the business from his father, and Georgine, the daughter of Rudolf's chemistry professor. Schrödinger's upbringing emphasized cultural appreciation and advancement in both science and art. Schrödinger was educated by a tutor and by his father at home. At the age of 11, he entered the Akademische Gymnasium in Vienna, a school focused on classical education and training in physics and mathematics. There, he enjoyed learning classical languages, foreign poetry, physics, and mathematics, but hated memorizing what he termed "incidental" dates and facts.

Schrödinger continued his studies at the University of Vienna, which he entered in 1906. He earned his PhD in physics in 1910 under the guidance of Friedrich Hasenöhrl, whom Schrödinger considered to be one of his greatest intellectual influences. Hasenöhrl was a student of physicist Ludwig Boltzmann, a renowned scientist known for his work in statistical mechanics. After Schrödinger received his PhD, he worked as an assistant to Franz Exner, another student of Boltzmann's, until being drafted at the beginning of World War I.

Career Beginnings

In 1920, Schrödinger married Annemarie Bertel and moved with her to Jena, Germany to work as the assistant of physicist Max Wien. From there, he became faculty at a number of universities over a short period of time, first becoming a junior professor in Stuttgart, then a full professor at Breslau, before joining the University of Zurich as a professor in 1921. Schrödinger's subsequent six years at Zurich were some of the most important of his professional career. At the University of Zurich, Schrödinger developed a theory that significantly advanced the understanding of quantum physics. He published a series of papers, about one per month, on wave mechanics. In particular, the first paper, "Quantization as an Eigenvalue Problem," introduced what would become known as the Schrödinger equation, now a central part of quantum mechanics. Schrödinger was awarded the Nobel Prize for this discovery in 1933.
Schrödinger's Equation

Schrödinger's equation mathematically described the wavelike nature of systems governed by quantum mechanics. With this equation, Schrödinger provided a way not only to study the behaviors of these systems, but also to predict how they behave. Though there was much initial debate about what Schrödinger's equation meant, scientists eventually interpreted it as giving the probability of finding an electron somewhere in space.

Schrödinger's Cat

Schrödinger formulated this thought experiment in response to the Copenhagen interpretation of quantum mechanics, which states that a particle described by quantum mechanics exists in all possible states at the same time, until it is observed and is forced to choose one state. Here's an example: consider a light that can light up either red or green. When we are not looking at the light, we assume that it is both red and green. However, when we look at it, the light must force itself to be either red or green, and that is the color we see. Schrödinger did not agree with this interpretation. He created a different thought experiment, called Schrödinger's Cat, to illustrate his concerns. In the Schrödinger's Cat experiment, a cat is placed inside a sealed box with a radioactive substance and a poisonous gas. If the radioactive substance decayed, it would release the gas and kill the cat. If not, the cat would be alive. Because we do not know whether the cat is alive or dead, it is considered both alive and dead until someone opens the box and sees for themselves what the state of the cat is. Thus, simply by looking into the box, someone has magically made the cat alive or dead, even though that is impossible.

Influences on Schrödinger's Work

Schrödinger did not leave much information about the scientists and theories that influenced his own work. However, historians have pieced together some of those influences, which include:

Louis de Broglie, a physicist who introduced the concept of "matter waves." Schrödinger had read de Broglie's thesis as well as a footnote written by Albert Einstein, which spoke positively about de Broglie's work. Schrödinger was also asked to discuss de Broglie's work at a seminar hosted by both the University of Zurich and another university, ETH Zurich.
Boltzmann. Schrödinger considered Boltzmann's statistical approach to physics his "first love in science," and much of his scientific education followed in the tradition of Boltzmann.
Schrödinger's previous work on the quantum theory of gases, which studied gases from the perspective of quantum mechanics. In one of his papers on the quantum theory of gases, "On Einstein's Gas Theory," Schrödinger applied de Broglie's theory on matter waves to help explain the behavior of gases.

Later Career and Death

In 1933, the same year he won the Nobel Prize, Schrödinger resigned his professorship at the University of Berlin, which he had joined in 1927, in response to the Nazi takeover of Germany and the dismissal of Jewish scientists. He subsequently moved to England, and later to Austria. However, in 1938, Hitler invaded Austria, forcing Schrödinger, now an established anti-Nazi, to flee to Rome. In 1939, Schrödinger moved to Dublin, Ireland, where he remained until his return to Vienna in 1956. Schrödinger died of tuberculosis on January 4, 1961 in Vienna, the city where he was born. He was 73 years old.

Tuesday, November 5, 2019

SAT High School Codes and Test Center Codes

When you register for your SAT, you have to submit codes for your high school and test center, the location where you are going to take your SAT. The codes make it easier for the College Board to keep track of the high school and test center of everyone who takes the SAT. You want to make sure you submit the right codes, since making a mistake can result in your having to take the SAT at a random high school that's far away from where you live, or sending your scores to the wrong college. In this article, I will let you know how to look up SAT high school and test center codes and advise you how to use them properly.

How To Enter Codes During Online Registration

High School Codes: It's very easy to submit your high school code during the online registration process. All you have to do is begin typing the name of your high school and your high school should appear in a dropdown menu. Just click on the name of your school and your high school code will be automatically entered. If the name of your school doesn't appear, you can search for your school by its zip code. Then, the name of your school will be automatically entered. If you click "change your school," you can search for your high school by its code, name, city, state, or zip code. Just select your school from the search results and your high school code will be entered.

Test Center Codes: Near the end of the online registration process, you can select your test center location. You can search for test centers in your area, and then you'll be given a list of options. Just select where you want to take the test, and the test center code will be entered.

How To Look Up SAT Codes

You can also search for high school and test center codes before, during, or after the online registration process.

High School Codes: To find your high school code, you can search by country, city, state, and zip code. After you enter the search criteria and click search, on the left you'll be given the school name, and on the right you'll be given the corresponding high school code.

Test Centers: To find your test center code, you can search by your test date, country, state, and city. When you search for test center codes, you'll be given the test center name, address, and code.

Special Situations

Homeschooled: If you're homeschooled, your high school code is 970000.
If Your High School Code Is Not Listed: If you go to high school in the US or in a US territory and your school code is not listed, enter 000003.
If You Go to High School Outside of the US: If you go to high school in a country outside of the US, enter 000004.

Advice for Ensuring Your Codes Are Correct

If you select your high school and test center while registering, make sure the codes on your admission ticket are correct. You can double-check the codes by looking them up on the SAT website. If you do manually enter your codes during registration, make sure you've entered the right codes and that the codes you've entered correspond with your high school and test center.

What's Next?

For anyone studying for the SAT, I highly recommend that you check out the ultimate SAT study guide. You'll learn extremely important information like how to beat procrastination in your SAT prep and how to get a perfect score. If you want more information about SAT logistics, read our articles about SAT admission tickets and SAT fees and registration.

Sunday, November 3, 2019

Quality Management Tools & Techniques Essay Example - 1250 Words

In addition, it is also used in the monitoring of the effects of process improvement theories. The X-bar and R chart is the standard; the X-bar and s chart, or the median and R chart, can be used in its place. In order to create an X-bar and R chart you can use CHARTrunner and SQCpack software. The X-bar chart is used to show the mean or average of every subgroup and to analyze central location. On the other hand, the R chart is used to depict how the data is spread and to study system variability. We can utilize the X-bar and R charts for any process with a subgroup size greater than one; usually they are used when the size of the subgroup falls between two and ten, while the X-bar and s charts are used for subgroups of eleven or more. The X-bar and R charts are utilized only if you need to assess the stability of the system, the data is in variable form, and the data is collected in subgroups larger than one but smaller than eleven. So as to ensure the best results, before calculating the control limits you should collect as many subgroups as possible. This is because, with a small amount of data, the variability of the entire system may not be represented by the X-bar and R chart. Therefore, the more subgroups utilized in the calculation of the limits (usually 20-25), the more reliable the results (Waite, 2010). As in the case of Scott and Larraine, the use of 30 subgroups is recommended.

Since Scott said that he noticed the number of complaints seems to have significantly increased since the new system was installed, it can be diagnosed that the problems may be emanating from the system, hence the need to check whether there is any variability in the system. Since the errors increased in the last third of the month, it is also substantiated that the system has been in place close to a month. The X-bar and R charts can help if you begin to improve the system and later use them to assess the system's stability. After assessing the system's stability, you should determine whether there is a need to stratify the data. Because you may come across variability in the results, you should collect the data and enter it in a way that lets you stratify it by location, symptom, lot, time and operator. Moreover, since the hotel was continuously receiving complaints, the X-bar and R charts can also be used to analyze the results of improvements to the process. This would curb the rising trend of complaints about inflated bills received by the hotel staff. Finally, the X-bar and R charts can be used for standardization. This means the data should continue to be collected and analyzed throughout the operation of the process. If changes have been made to the system but data collection has stopped, then you can only have the perception and opinion that they improved the system (Waite, 2010).

An X-bar chart monitors the average value of a particular process over time: for every subgroup, the X-bar value is plotted. The lower and upper control limits define the range of inherent variation in the means of the subgroups when the process is in control. The R chart is used to monitor the process variability when the variable of interest is a quantitative measure. To find the upper and lower limits we use the formulae (Woodwall, 2011): UCL = μ + 3σ/√n and LCL = μ - 3σ/√n. To commence with, the R chart is
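As a rough sketch of how these limits can be computed in practice, the following Python example uses the tabled control-chart constants A2, D3 and D4 for a subgroup size of five (A2 times the average range is the tabled equivalent of the 3σ/√n term above); the daily billing-error figures are invented for illustration, and in practice software such as CHARTrunner or SQCpack would produce the charts.

```python
# Minimal X-bar and R chart limit calculation for subgroups of size 5.
# The constants A2, D3, D4 come from standard SPC tables for n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical daily samples of billing errors (5 bills checked per day)
subgroups = [
    [2, 3, 1, 4, 2],
    [3, 2, 2, 5, 3],
    [1, 2, 3, 2, 2],
    [4, 5, 3, 6, 4],
    [2, 1, 2, 3, 2],
]

xbars = [sum(s) / len(s) for s in subgroups]   # subgroup means
ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges

xbarbar = sum(xbars) / len(xbars)              # grand average (centre line of the X-bar chart)
rbar = sum(ranges) / len(ranges)               # average range (centre line of the R chart)

# X-bar chart limits: grand mean plus/minus A2 times the average range
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
# R chart limits: D4 and D3 times the average range
ucl_r, lcl_r = D4 * rbar, D3 * rbar

print(f"X-bar chart: CL={xbarbar:.2f}, UCL={ucl_x:.2f}, LCL={lcl_x:.2f}")
print(f"R chart:     CL={rbar:.2f}, UCL={ucl_r:.2f}, LCL={lcl_r:.2f}")
```

In a real study these limits would be recalculated once 20-30 subgroups had been collected, as the excerpt above recommends.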

Thursday, October 31, 2019

The Educational System in the United States Research Paper

From this discussion it is clear that everyday conversational skills such as writing, reading and collaboration can truly solidify the foundation of a student's cognitive and linguistic skills. In the process of learning an additional language, ESL students must keep up with the daily strains that are placed on them by their program of studies. ESL students must learn to excel in time management skills. Indisputably, each student has a unique set of literacy development needs. Most ESL students have a strict schedule and must be accommodated with a personalized program of activities that is managed by the staff itself. One can only imagine the surreal experience of international students, who at times feel hopeless as foreign students in a North American school. The point is to acknowledge that international students are faced with academic, social, and emotional challenges in every aspect of life, which makes learning English much more difficult. This study highlights that learning and applying that knowledge is a fairly non-trivial challenge for the average American. But what of non-natives who are required to learn and master one of the most grammatically complex languages? This is a struggle which is unquantifiable and difficult to overcome. Many non-native English speakers, who often feel confused, frustrated, and pressured to achieve in an environment of native English speakers, are under continuous pressure to excel in academics... Clearly, it is vital that students should ask for help and set goals on how to overcome each issue. Therefore, "procedure" becomes a necessary element to facilitate student growth, as the tutor advises them to set goals and helps them accomplish these goals. In addition, it affords students the opportunity to discuss with the tutor any concerns they have.

For the majority of ELL students, grammar is the biggest focal point. ESL students are often very knowledgeable in grammar yet fall into the trap of superfluity. ESL students are constantly struggling to write like their counterparts, yet they traditionally fall short because they approach the problem with a skewed mentality. ESL students aim for a high status instead of learning from experience. The challenge of writing like a native English student extends beyond writing the 'right' word, as the language itself contains multiple word meanings. In addition, the American standard demands effective argumentation and synthesis at higher levels. This standard challenges students not only to adapt to different writing styles, but to acknowledge other writing methods - a seemingly painless task which is continuously compounded by a lack of familiarity with the language and its intricacies.

As a tutor, it is important to account for the differences in writing style prevalent between ESL and native English speakers. The lack of context and organization present in ESL students' writing stems from an overemphasis on grammar. Nevertheless, these differences do not correspond to a deficiency. Most ESL students get so caught up by wanting to get their papers fixed that they fail to understand the objective of the tutoring session, which

Tuesday, October 29, 2019

Cisco Case Analysis Research Paper Example - 500 Words

As per the 2010 company data, Cisco employed over 60,000 people and earned annual revenue of US$40 billion. Despite the adversities of the 2009 global financial crisis, the company remains an attractive investment option for global investors. The networking industry comprises a number of huge players like IBM, so the degree of competitive rivalry is very high in this sector. In addition, the threat of substitutes is also high in the networking industry since the majority of networking equipment is substitutable. However, the industry is less affected by the threat of new entrants because entry costs are huge in the networking sector. Evidently, suppliers have less bargaining power over networking companies as there are a large number of potential suppliers. Although there are numerous potential suppliers, companies rarely opt to change their suppliers because of the high switching cost. At the same time, buyer power is relatively high in the networking industry since modern customers are well informed of the prevailing market prices and have more provider options. Hence, the Porter's five forces analysis indicates that the networking industry's external environment does not offer potential opportunities to Cisco.

Cisco was founded in 1984 by two Stanford computer scientists, and the company was made public in 1990. As Nolan (2005) reports, from the beginning the company concentrated on dominating the dramatically growing 'internetworking' market. In 1997, Cisco was included in the top five companies listed by Fortune 500 on the basis of return on revenues and return on assets. In the following year, the company's market capitalization crossed the $100 billion mark. The company overtook Microsoft in 2000. In the same year, some surveys reflected that Cisco products played a role in more than 75% of all internet traffic. The mission statement of Cisco is: "shape the future of the

Sunday, October 27, 2019

Information System Business

A system, whether automated or manual, that comprises people, machines, and/or methods organized to collect, process, transmit, and disseminate data that represent user information. The elements of an information system are workload, response time, throughput, resource utilization and resource service time. In other words, an information system is a system in which all the data is stored, analyzed and output, with all the options given to the managerial level to make decisions for the development of the business. It is a system which is helpful at all levels of business. Information systems deal with the development, use and management of an organization's IT infrastructure. In the post-industrial information age, the focus of companies has shifted from being product oriented to knowledge oriented, in the sense that market operators today compete on process and innovation rather than product: the emphasis has shifted from the quality and quantity of production to the production process itself, and the services that accompany the production process. The biggest asset of companies today is their information, represented in people, experience, know-how and innovations (patents, copyrights, trade secrets), and for a market operator to be able to compete, he/she must have a strong information infrastructure, at the heart of which lies the information technology infrastructure. Thus, the study of information systems focuses on why and how technology can be put to best use to serve the information flow within an organization.

Compare and contrast the roles of systems designers from systems builders.
System designers have to collect the information for the system to be designed, then analyze the gathered information and create a document showing how the system is going to function: what the requirements are, who the users are, and what the environment for the system will be.
System builders: based on the system design document, system builders develop a plan to build the system, the resources needed to develop the system, a resource utilization plan, and the time needed to build the system.

What are the similarities and differences between business and data requirements?
Business requirement: A requirement is a description of what a system should do. Systems may have from dozens to hundreds of requirements. A requirement describes a condition to which a system must conform, derived either directly or indirectly from user needs. A requirement for a computer system specifies what you want or desire from a system. Requirements should be: unique in scope (is this the only requirement that defines this particular objective?); precise in wording (are there any vague words that are difficult to interpret?); bounded by concrete expectations (are there concrete boundaries in the objectives?); and irrefutably testable (can you build one or more test cases that will completely verify all aspects of this requirement?).
Data requirement: To build or create the above business requirement, data is needed to analyze the business requirements. Based on the data collected, a report is created to justify the business requirement.

How does the concept of workflow change the focus of a traditional information system?
Workflow can be described simply as the movement of documents and tasks through a business process. Workflow can be a sequential progression of work activities or a complex set of processes, each taking place concurrently and eventually impacting each other according to a set of rules, routes, and roles.
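To make the idea of rules, routes and roles slightly more concrete before continuing, here is a minimal Python sketch of a workflow definition and a routing check; the step names, roles and transitions are invented purely for illustration and are not drawn from any particular workflow product.

```python
# A toy workflow: each step names the role allowed to act on it and the
# possible next steps (the "rules, routes and roles" described above).
WORKFLOW = {
    "submitted":    {"role": "clerk",   "next": ["under_review"]},
    "under_review": {"role": "manager", "next": ["approved", "rejected"]},
    "approved":     {"role": None,      "next": []},   # terminal step
    "rejected":     {"role": None,      "next": []},   # terminal step
}

def advance(current_step, target_step, acting_role):
    """Move a work item to the next step if the route and role are allowed."""
    step = WORKFLOW[current_step]
    if step["role"] != acting_role:
        raise PermissionError(f"{acting_role} cannot act on '{current_step}'")
    if target_step not in step["next"]:
        raise ValueError(f"'{current_step}' cannot route to '{target_step}'")
    return target_step

state = "submitted"
state = advance(state, "under_review", "clerk")   # clerk forwards the document
state = advance(state, "approved", "manager")     # manager approves it
print("Final state:", state)                      # -> approved
```

A real workflow management system adds persistence, worklists, deadlines and reporting around this same core idea of routing the right work to the right role at the right time.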
Workflow is acknowledged in the industry for facilitating powerful and flexible process automation. It is a tool that both business users and IT professionals can use to automate business processes and track work as it moves through the organization, ensuring that the right work gets to the right person at the right time. It can be scaled from a small departmental solution to an enterprise-level Business Process Management solution that supports thousands of concurrent users across multiple sites. In that way, productivity can be increased and managed.

Workflow Management Systems
Workflow Management Systems allow organizations to define and control the various activities associated with a business process. In addition, many management systems also allow a business the opportunity to measure and analyze the execution of the process so that continuous improvements can be made. Such improvements may be short-term (e.g., reallocation of tasks to better balance the workload at any point in time) or long-term (e.g., redefining portions of the workflow process to avoid bottlenecks in the future). Most workflow systems also integrate with other systems used by the organization: document management systems, databases, e-mail, and office automation products.

Describe the major aspects of a feasibility analysis.
The feasibility of a project can be ascertained in terms of technical factors, economic factors, or both. It is a study documented with a report showing all the aspects of the project. The different feasibility studies are as follows:
Technical Feasibility. It refers to the ability of the process to take advantage of the current technology in pursuing further improvement. The technical capability of the personnel as well as the capability of the available technology should be considered. Technology transfer between geographical areas and cultures also needs to be considered.
Managerial Feasibility. It involves the capability of the infrastructure of a process to achieve improvement. Management support, employee involvement, and commitment are key elements required for managerial feasibility.
Economic Feasibility. This involves the capability of the proposed project to generate economic benefits. A cost-benefit analysis is an important aspect of evaluating the economic feasibility of projects. The tangible and intangible aspects of a project should be translated into economic terms to facilitate a consistent basis for evaluation.
Financial Feasibility. It involves the capability of the project organization to raise the appropriate funds needed to implement the proposed project. Project financing can be a major obstacle in large multi-party projects because of the level of capital required. It is done to determine whether it is worth spending that much money according to the profit analysis.
Cultural Feasibility. It deals with the compatibility of the proposed project with the cultural setup of the project environment. As an example, religious beliefs may influence what an individual is willing to do or not do.
Social Feasibility. Social feasibility addresses the influences that a proposed project may have on the social system in the project environment. The ambient social structure may be such that certain categories of workers may be in short supply or nonexistent. The effect of the project on the social status of the project participants must be assessed to ensure compatibility. It should be recognized that workers in certain industries may have certain status symbols within the society.
Safety Feasibility. Safety feasibility is another important aspect that should be considered in project planning. It refers to an analysis of whether the project is capable of being implemented and operated safely, with minimal adverse effects on the environment.
Political Feasibility. Political considerations often dictate the direction of a proposed project. This is particularly true for large projects with national visibility that may have significant government inputs and political implications.
Environmental Feasibility. Concern must be shown and action must be taken to address any and all environmental concerns raised or anticipated. It is mostly done for biotechnological projects.
Market Feasibility. The market needs analysis views the potential impacts of market demand, competitive activities, etc., and the market share available. Price-war activities by competitors, whether local, regional, national or international, must also be analyzed for early contingency funding and debt service negotiations during the start-up, ramp-up, and commercial start-up phases of the project.

What is scope creep? Give an example and describe methods for controlling creep.
Scope creep occurs when an unavoidable or unexpected change arises during project development; it can result in a project team overrunning its original budget and schedule. If the budget or schedule is not increased along with scope, the change is usually considered an unacceptable addition to the project, and this is what is known as scope creep. For example, a client who asks for an extra reporting screen after the design has already been approved is introducing scope creep.

Methods for controlling creep:
Expect that there will be scope creep. Implement Change Order forms early and educate the project drivers on your processes. A Change Order form will allow you to perform a cost-benefit analysis before scheduling changes requested by the project drivers.
Be sure you thoroughly understand the project vision. Meet with the project developers and deliver an overview of the project as a whole for their review and comments.
List the priorities. Make an ordered list for your review throughout the project duration. Items should include budget, deadline, feature delivery, customer satisfaction, and employee satisfaction.
Define your deliverables and have them approved by the project developers. Deliverables should be general descriptions of functionality to be completed during the project.
Divide the approved deliverables into actual work requirements. The requirements should be as detailed as necessary. The larger your project, the more detail you should include. If your project spans more than a month or two, don't forget to include time for software upgrades during development, and always include time for ample documentation.
Break the project into major and minor milestones. A minor milestone's span should not be more than a month. Whatever your method for determining task duration, leave room for error. When working with an unknown staff, schedule 140 to 160 percent of the expected duration. If your schedule is tight, reevaluate your deliverables.
Once a schedule has been created, assign resources and determine your critical path using a PERT chart or Work Breakdown Structure. Your critical path will change over the course of your project, so it's important to evaluate it before development begins. Follow the map to determine which deliverables must be completed on time.

Describe PERT charts. What major elements are tracked?
PERT (Program Evaluation and Review Technique): A PERT chart is a project management tool used to schedule, organize, and coordinate tasks within a project.
It is a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine missile program. A PERT chart looks more like a flow chart and concentrates on the relationships between tasks and less on the timeline; PERT charts emphasize task relationships rather than time.
The major elements tracked are:
Identify the specific activities and their milestones.
Determine the proper sequence of the activities.
Construct a network diagram.
Estimate the time required for each activity. There are three types of timing: optimistic time, most likely time and pessimistic time.
Determine the critical path in the process. It is helpful to determine the ES (Earliest Start Time), EF (Earliest Finish Time), LS (Latest Start Time) and LF (Latest Finish Time) of each activity (a small worked sketch of these calculations is given at the end of this section).
Update the PERT chart as the project progresses.

Define Systems Analysis.
Systems analysis is a deep and thorough study of an existing system or of a new system that has to be created. All aspects are taken into consideration, such as whether the new system will help the business grow or run smoothly, be cost-effective, and improve the overall system process for the business. In systems analysis, more emphasis is given to understanding the details of an existing system or a proposed one, and then deciding whether the proposed system is desirable and whether the existing system needs improvement. Thus, systems analysis is the process of investigating a system, identifying problems, and using the information to recommend improvements to the system. An analysis report is generated and, based on it, a system design document is prepared. Alternatively, systems analysis of an operating system consists of the evaluation of the efficiency, economy, accuracy, and productivity of existing procedures measured against the stated objectives of the library, and the design of new procedures to satisfy the demands of management and users.

What is a use case? What are the elements and how is it used?
A Use Case is a top-level category of system functionality (e.g., Log on, Shut down). A Use Case has a graphical representation and/or a text description. The diagram or description identifies all the actors (outside of the system) involved in the function, as well as an indication of how the Use Case is initiated. The collection of Use Case diagrams provides a 'context diagram' of system interfaces. Each Use Case constitutes a complete list of events initiated by an actor, and it specifies the interaction that takes place between an actor and the system. In a Use Case the system is viewed as opaque, where only the inputs, outputs, and functionality matter.
Components of a Use Case: The Use Case diagram just provides a quick overview of the relationship of actors to Use Cases. The meat of the Use Case is the text description, which will contain the following: Name; Case ID; Brief Description; SRS (Software Requirement Specification) Requirements Supported; Pre- and Post-Conditions; and Event Flow. The requirements in the SRS are each uniquely numbered so that they may be accounted for in the verification testing. These requirements should be mapped to the Use Case that satisfies them for accountability.

What is the purpose of Primary and Foreign Keys on database tables?
A primary key constraint is set on a database table to make each record unique; in other words, a primary key constraint is created to avoid duplicate records. It can be on one field or a combination of more than one field. A foreign key constraint is created to check for a matching entry in the other table to which it refers.
It is useful for linking two tables with different details. The relationship can be one-to-one or one-to-many. A foreign key references the primary key or a unique key field in another table.

BONUS: Describe the similarities that exist between the Project Management, Systems Analysis and Information Systems lifecycles.
Project Management Lifecycle | Systems Analysis and Information Systems Lifecycle
Phase I: Project Initiation | Phase I: System Initiation and Feasibility Study
Phase II: Project Planning | Phase II: Project Planning and Functional Analysis; Phase III: System Design
Phase III: Project Execution | Phase IV: Programming; Phase V: Testing and Implementation
Project Closure | Phase VI: Post-Implementation Evaluation
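To illustrate the PERT timing elements listed earlier (optimistic, most likely and pessimistic times, the ES/EF/LS/LF values and the critical path), here is a minimal Python sketch; the activities, durations and dependencies are invented purely for illustration.

```python
# PERT expected time per activity: te = (optimistic + 4*most_likely + pessimistic) / 6,
# followed by a forward pass (ES/EF), a backward pass (LS/LF) and the critical path
# (the activities with zero slack).

activities = {  # name: (optimistic, most_likely, pessimistic, predecessors)
    "A": (2, 4, 6, []),
    "B": (3, 5, 13, ["A"]),
    "C": (1, 2, 3, ["A"]),
    "D": (2, 3, 4, ["B", "C"]),
}

te = {n: (o + 4 * m + p) / 6 for n, (o, m, p, _) in activities.items()}

# Forward pass: earliest start and finish times
ES, EF = {}, {}
for n, (_, _, _, preds) in activities.items():  # insertion order already respects dependencies here
    ES[n] = max((EF[p] for p in preds), default=0)
    EF[n] = ES[n] + te[n]

project_end = max(EF.values())

# Backward pass: latest finish and start times
LF, LS = {}, {}
for n in reversed(list(activities)):
    successors = [s for s, (_, _, _, preds) in activities.items() if n in preds]
    LF[n] = min((LS[s] for s in successors), default=project_end)
    LS[n] = LF[n] - te[n]

critical_path = [n for n in activities if abs(LS[n] - ES[n]) < 1e-9]
print("Expected durations:", {n: round(t, 2) for n, t in te.items()})
print("Critical path:", " -> ".join(critical_path), "| project length:", round(project_end, 2))
```

Running this sketch gives a critical path of A -> B -> D with an expected project length of 13 time units; activities off that path carry slack and can slip without delaying the project, which is exactly what the PERT chart is used to track as the project progresses.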