Thursday, October 31, 2019

Water clock Research Paper Example | Topics and Well Written Essays - 1250 words

Water clock - Research Paper Example However, this is not the case, and many experiments are underway around the globe to find the best possible product for waking an individual up, since ordinary alarms do not do the trick when it comes to rousing some people from sleep. The product chosen for this paper is a water clock in the form of a top lamp. It aims to spill water onto the sleeper if he does not get up and tries to avoid the alarm in order to get an extra dose of sleep in the early hours of the morning. This paper will discuss the basis of a water clock and how it has been studied from the aspect of the four dimensions of marketing: product, price, place and promotion. The water clock runs on electricity and is basically an alarm clock. It is different from other clocks because it has a water tank fitted inside it. However, this water tank is small and does not take up much space. The water clock is an interesting idea because it helps the sleeper wake up and get on his toes immediately. The basic intention is to make him feel soggy, which wakes him up quickly. Some people find it hard to digest the fact that they would be splashed with water so early in the morning; however, the advantages of such a water clock cannot be denied, especially if the sleeper has deep sleep patterns. The water clock acts as a deterrent to sleeping for long periods of time, thus giving the sleeper a chance to be on his toes instantly. As far as comparison with other products in the market is concerned, the water clock is one step ahead of them. Other products make use of cell (alkaline battery) technology, while this water clock runs on electricity alone. Hence, if the batteries run down in the middle of the night, the sleeper can still be assured that his water clock will wake him early in the morning, at his own designated time. The

Tuesday, October 29, 2019

Beauty of Women Essay Example for Free

Beauty of Women Essay Proposal: I have chosen to write about the comparison between the culture that produced the Venus of Willendorf and the culture that produces the Barbie doll. While writing about the two cultures, I will describe some common themes the Paleolithic culture and the modern culture share: what our modern culture has adopted and what it has rejected of the things the Paleolithic culture held to high standards. Most importantly, through my writing I will show how much our world views have changed from the era of the Venus of Willendorf to the modern view of what the ideal woman should look like, and how the view of beauty should not be as superficial as modern culture makes it out to be.

Outline:
1. Intro
a. "Beauty is in the eye of the beholder": this saying can be proven by looking at the different cultures throughout the ages. The beauty of a woman is much more than what she looks like; it is also what a woman's body can produce and withstand, what is within her. Past cultures show us something that our modern culture tends to forget: that the beauty of a woman is more than what she may look like; a woman's beauty is something that should be adored.
2. Body
b. Common themes between the Paleolithic culture's views on beauty and the modern world's view
i. One common theme in the view of beauty between the Paleolithic culture and the modern world is that both cultures worship a woman's beauty, each in their own way.
c. Some differences in the view of beauty between the Paleolithic culture and modern culture
ii. The most obvious difference between the two cultures is the size and shape of a woman. In the Paleolithic culture the ideal woman's shape is voluptuous and full figured; in the modern culture, the ideal woman's shape is thin, with large breasts, and well preserved.
d. How the view of a woman's beauty has changed between the Paleolithic culture and modern culture
iii. Beauty in the Paleolithic culture largely revolved around reproduction and sustaining life, while beauty in the modern world is about preserving and extending life.
e. Conclusion
iv. Throughout the ages and the different cultures, the regard for a woman's beauty has changed in some ways but stayed the same in others. No matter what the beholder believes beauty to be, the one constant of a woman's beauty in all cultures and throughout time is that it can be a very powerful thing.

Sunday, October 27, 2019

Total Consumption Burner And Premix Chamber Burner Comparison Biology Essay

Total Consumption Burner And Premix Chamber Burner Comparison Biology Essay Atomic emission is a process that occurs when electromagnetic radiation is emitted by excited atoms or ions. In atomic emission spectrometry the sample is subjected to temperatures high enough to cause not only dissociation into atoms, but also significant collisional excitation and ionisation of the sample atoms. Once the atoms and ions are in excited states, they can decay to lower states through thermal or radiative (emission) energy transitions, and electromagnetic radiation is emitted. An emission spectrum of an element contains considerably more lines than the corresponding absorption spectrum. FES (formerly called flame photometry) is in principle similar to emission spectroscopy, with a flame as the source of excitation energy (flame atomiser). A flame provides a high-temperature source for desolvating and vaporizing a sample to obtain free atoms for spectroscopic analysis. In atomic absorption spectroscopy, ground-state atoms are desired. For atomic emission spectroscopy the flame must also excite the atoms to higher energy levels. The table lists temperatures that can be achieved in some commonly used flames.

In atomic spectroscopy, atomization is the conversion of a vaporized sample into atomic components, or the process of obtaining atomic vapour. Liquid samples are first nebulized (converted into a mist or fine spray); the fine mist is transported into the atomization source (flame or plasma), where the solvent evaporates and the analyte is vaporized, then atomized. A flame atomiser is composed of a nebulisation system with a pneumatic aerosol production accessory, a gas-flow regulator and a burner. Flames are produced by means of a burner to which fuel and oxidant are supplied in the form of gases. There are two types of aspirator-burner in use: the total-consumption burner and the premix chamber burner. Nebulisation is the process of converting a liquid into a fine spray.

Total Consumption Burner
In the total-consumption burner, the fuel and oxidant (support) gases are mixed and combust at the tip of the burner. The fuel (usually acetylene), oxidant (usually air) and sample all meet at the base of the flame. The sample is drawn up into the flame by the Venturi effect, created by the support gas. The gas creates a partial vacuum above the capillary barrel, causing the sample to be forced up the capillary. It is broken into a fine spray at the tip, where the gases are turbulently mixed and burned. This is the usual process of nebulisation. The burner is called total consumption because the entire aspirated sample enters the flame; in other words, the sample solution is directly aspirated into the flame. All desolvation, atomization, and excitation occur in the flame. The total consumption burner can be used to aspirate viscous and high-solids samples, such as undiluted serum and urine, with relative ease. Also, this burner can be used for most types of flames, both low- and high-burning-velocity flames.

Figure: surface-mixing total consumption burner.

The Venturi effect is the reduction in fluid pressure that results when a fluid flows through a constricted section of pipe.

Premix Chamber Burner
The second type of burner, most commonly used now, is the premix chamber burner, sometimes called the laminar-flow chamber burner. Premix burners were the first purpose-designed burners, and they can be traced back more than 100 years to the Bunsen and similar laboratory burners.
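As an aside on the Venturi effect defined above (a standard textbook relation added here for illustration, not something drawn from the original essay), the pressure drop that aspirates the sample follows from Bernoulli's equation, treating the support gas as an ideal, incompressible flow:

p_1 + \tfrac{1}{2}\rho v_1^2 \;=\; p_2 + \tfrac{1}{2}\rho v_2^2
\quad\Longrightarrow\quad
p_2 \;=\; p_1 - \tfrac{1}{2}\rho\left(v_2^2 - v_1^2\right)

where \rho is the gas density and v_1, v_2 are the flow speeds before and within the constriction. Because the support gas accelerates as it passes the capillary tip (v_2 > v_1), the local pressure p_2 falls below p_1, producing the partial vacuum that draws the sample solution up the capillary in both burner types described here.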
A premix burner system really consists of two key components: the burner head or nozzle, and the gas-air mixing device that feeds it. The fuel and support gases are mixed in a chamber before they enter the burner head (through a slot), where they combust. The sample solution is again aspirated through a capillary by the Venturi effect, using the support gas for the aspiration. Large droplets of the sample condense and drain out of the chamber. The remaining fine droplets mix with the gases and enter the flame. As much as 90% of the droplets condense out, leaving only 10% to enter the flame. The 90% of the sample that does not reach the flame travels back through the mixing chamber and out through the waste drain. The most significant disadvantage of the premix burner is that it is generally limited to relatively low burning-velocity flames. A burning velocity higher than the flow rate of the gases leaving the burner will cause the flame to travel down into the burner, resulting in an explosion commonly known as flashback. Because of this limitation it is somewhat difficult to use high burning-velocity gases, which include oxygen-based flames. Most commercial instruments use premix burners, with the option of using a total-consumption burner. Premix burners are distinguished as Bunsen, Meker, or slot burners according to whether they have one large hole, a number of small holes, or a slot as the outlet for the gas mixture, respectively. When several parallel slots are present, they are identified as multislot burners (e.g., a three-slot burner). A popular version of the premix burner is the Boling burner. This is a three-slot burner head that produces a broader flame and less distortion of radiation passing through the edges of the flame. This burner warps more easily than others, though, and care must be taken not to overheat it when using organic solvents.

The differences between the total-consumption burner and the premix chamber burner
a) Nebulisation process
In the total-consumption burner, the fuel (usually acetylene), oxidant (usually air) and sample all meet at the base of the flame. The sample is drawn up into the flame by the Venturi effect, created by the support gas. The gas creates a partial vacuum above the capillary barrel, causing the sample to be forced up the capillary. It is broken into a fine spray at the tip, where the gases are turbulently mixed and burned. This is the usual process of nebulisation. In premix burners, by contrast, the fuel and support gases are mixed in a chamber before they enter the burner head (through a slot), where they combust. The sample solution is again aspirated through a capillary by the Venturi effect using the support gas for the aspiration. Large droplets of the sample condense and drain out of the chamber.
b) Size of sample droplet that enters the flame (atomization efficiency) and absorption path length
The total consumption burner obviously uses the entire aspirated sample, but it has a shorter path length and many of the larger droplets are never vaporized. The path length is extremely short, since combustion occurs only at a point above the capillary tube. Although in the total-consumption burner the entire sample is aspirated, vaporization and atomization are poor. Although a large portion of the aspirated sample is lost in the premix burner, the atomization efficiency (efficiency of producing atomic vapour) of the portion of the sample that enters the flame is greater, because the droplets are finer.
Also, the path length is longer. The sample which does reach the flame is efficiently atomized, so sensitivities are comparable with either burner in most cases.
c) Interference in the flame
In the total consumption burner, the larger droplets may vaporize only partially, leaving solid particles in the light path. This may result in light scattering, which is registered as an absorbance. The absorbance by the sample, that is, the atomic vapour population, is generally more dependent on the gas flow rates and the height of observation in the flame than it is with the premix burners. The viscosity of the sample also more strongly affects the atomization efficiency (production of atomic vapour) in the total consumption burner. The resulting drops are relatively large, which causes the flame temperature to fluctuate and scatters the source radiation; this may lead to false readings. This interference does not occur in the premix burner, since fine sample droplets are produced.
d) Flame homogeneity
The total consumption burner is used in flame photometry and is not useful for atomic absorption. The reason is that the resulting flame is turbulent and non-homogeneous, because the burner combines the functions of nebulizer and burner. Here oxidant and fuel emerge from separate ports and are mixed above the burner orifices to produce a turbulent flame. A non-homogeneous flame negates its usefulness in atomic absorption, since the flame must be homogeneous, for the same reason that different sample cuvettes in molecular spectrophotometry must be closely matched: one would not want the absorption properties to change from one moment to the next because of a lack of homogeneity in the flame. In the premix burner, the fuel and oxidant are thoroughly mixed inside the burner housing before they leave the burner ports and enter the primary combustion (inner) zone of the flame. This type of burner usually produces an approximately laminar (streamline) flame, and is commonly combined with a separate unit for nebulizing the sample.
e) Noise
Combustion with the premix burners is very quiet, while with the total-consumption burner it is noisy to the detector as well as to the ear, possibly on a level similar to that of a jet engine.

Summary of the differences between the total-consumption burner and the premix chamber burner:
1. Nebulisation process
Total consumption burner: The fuel and oxidant (support) gases are mixed and combust at the tip of the burner. The sample is drawn up into the flame by the Venturi effect, created by the support gas, which forms a partial vacuum above the capillary barrel and forces the sample up the capillary. It is broken into a fine spray at the tip, where the gases are turbulently mixed and burned.
Premix chamber burner: The fuel and support gases are mixed in a chamber before they enter the burner head (through a slot), where they combust. The sample solution is again aspirated through a capillary by the Venturi effect using the support gas for the aspiration.
2. Size of sample droplet that enters the flame (atomization efficiency)
Total consumption burner: Larger droplets, lower atomization efficiency. Many of the larger droplets are never vaporized, or vaporize only partially, leaving solid particles in the light path (resulting in light scattering that is registered as an absorbance). Sample viscosity more strongly affects the atomization efficiency (production of atomic vapour).
Premix chamber burner: Small droplets, higher atomization efficiency. Although a large portion of the aspirated sample is lost, the atomization efficiency of the portion that enters the flame is greater, because the droplets are finer.
3. Absorption path length
Total consumption burner: Shorter path length.
Premix chamber burner: Longer path length.
4. Interference in the flame
Total consumption burner: The resulting drops are relatively large and may vaporize only partially, leaving solid particles in the light path. This can cause the flame temperature to fluctuate and can scatter light that is registered as an absorbance, which may lead to false readings.
Premix chamber burner: None (fine drops).
5. Flame homogeneity
Total consumption burner: The resulting flame is turbulent and non-homogeneous.
Premix chamber burner: Usually produces an approximately laminar (streamline) flame.
6. Noise
Total consumption burner: Combustion is noisy.
Premix chamber burner: Combustion is very quiet.
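As a supplement to the opening discussion of flame temperature and excitation (this is the standard Boltzmann relation from spectroscopy textbooks, added here for illustration rather than taken from the essay itself), the fraction of atoms thermally promoted to an excited state at absolute temperature T is

\frac{N_j}{N_0} \;=\; \frac{g_j}{g_0}\,\exp\!\left(-\frac{E_j}{kT}\right)

where N_j and N_0 are the excited- and ground-state populations, g_j and g_0 the statistical weights of the two levels, E_j the excitation energy, and k the Boltzmann constant. Because this fraction rises steeply with temperature, emission measurements, which depend on the small excited-state population, are far more sensitive to flame-temperature fluctuations than absorption measurements, which rely on the dominant ground-state population.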

Friday, October 25, 2019

Drivers Essay -- essays research papers

Drivers Trying to sort out what type of driver a person might be is an extremely challenging task. In a person's own mind, they think they are the aggressive type of driver, or the cautious type, but no one will ever admit that they are the "I got my license in a Cracker Jack box" driver. The only thing certain about a person is that they are never always aggressive or always cautious while they drive. A person's driving style varies with time and place, and, most importantly, their attitude affects the way they drive. The roads today are filled with all types of drivers, but one of the worst is the hotshot, or aggressive, driver. All the time, day after day, people encounter these maniacs, who drive like they think they are invincible. In the short period of time that I have been driving, I have encountered some crazy drivers. The first hotshot drivers I saw were actually two teens who were drag racing down the street going ninety mph or faster. While they were racing, one of the two cars was actually driving in the opposite lane. Another experience involved a high-performance bike that was weaving in and out of cars just because the rider didn't want to stop. This biker was going really fast for what he was doing; he reminded me of a cop-chasing-a-robber movie, that is just how fast the bike was ...

Thursday, October 24, 2019

Globalization and Environmental Effects on our planet Essay

Globalization and the Environmental Effects on our Planet
We live on a very fortunate planet that allows the human race not only to survive on it, but also to thrive in its consistent temperatures, natural resources, and prosperous ecosystem. But the effects of globalization, pollution, global warming and other environmental problems threaten our survival as a species in this ecosystem. Many believe that through technology, commerce, and travel, globalization will lead us to economic prosperity, while conservationists and scientists are working hard to preserve the priceless resources that our earth has to offer us. The widespread changes brought about by globalization have a lasting impact on our environment and threaten our survival. These global changes make understanding our world both challenging and necessary, since our future depends on understanding these concepts in all their various forms. Our ecosystems are altered by the financial decisions we make today, and the energy we use and the pollution we create will affect the lives of our children and our children's children, if we don't destroy ourselves by then. Globalization is a very real phenomenon, and a concept that most people do not grasp fully enough to understand its ramifications. Globalization does not just affect our societies economically, but politically and socially as well. The media does an extensive job of portraying the ideologies and opinions surrounding globalization through politicians and activist groups, but it does not accurately portray the arguments, the ever-expanding inequality gap between rich and poor, or the lack of evidence demonstrating the achievement of the "trickle down" effect. Globalization is most commonly defined as "the increasing interconnectedness of people and places through converging processes of economic, political, and cultural change" (Rowntree, Lewis, Price, & Wyckoff, 2003). This means that once-distant regions and cultures are now linked together through commerce, travel, and communications, causing an economic reorganization of our world's systems. Early forms of globalization have been seen since the early years of our societies, including a first era of globalization before World War I that seemed to shrink the world under global finance capitalism. The inventions of the steamship, the telegraph, and eventually the telephone are all examples of increasing globalization in earlier societies that had a huge impact on our political, cultural, and economic systems. But this "new era of globalization," as described by Thomas Friedman, is not only different in degree from the previous era of globalization, but is also driven differently and is increasing at a pace never witnessed before (Friedman, 2000). Many argue that contemporary globalization is the most fundamental reorganization of the socioeconomic structure since the Industrial Revolution, but few agree on whether the benefits actually outweigh the costs. In previous eras, inventions such as the railroad, steamships, and automobiles increased globalization, and falling transportation costs allowed people to get to more places more cheaply and quickly than ever before. Now, the falling cost of telecommunications allows today's era of globalization to link the world together even more tightly than before.
Microchips, the internet, satellites, and cellphones allow societies and cultures across great distances to connect quickly and cheaply in order to conduct business, form relationships, and transfer information from one geographic location to another. Travel has become faster and more cost effective, communication with other countries has become easier, and people are able to offer and exchange services globally. This is why Friedman defines globalization as "the inexorable integration of markets, nation-states and technologies to a degree never witnessed before - in a way that is enabling individuals, corporations and nation-states to reach around the world farther, faster, deeper and cheaper than ever before, and in a way that is enabling the world to reach into individuals, corporations and nation-states farther, faster, deeper, cheaper than ever before" (Friedman, 2000, p. 9). Not everyone has profited from economic globalization, nor have the benefits been felt equally across world regions. The multitude of economic changes due to increases in communication, travel, and financial decision-making has triggered fundamental cultural changes in many populations, threatening local cultural diversity. Globalization, especially in its economic form, is one of the most contentious issues today. Economic globalization is often applauded by those who believe that economic efficiency will result in rising prosperity for the entire world, but in actuality it largely benefits those who are already prosperous, increasing the gap between rich and poor while reducing cultural and ecological diversity around the world. Globalization is not a natural process; instead it promotes free-market and export-oriented economies at the expense and exploitation of localized activities and resources. The inequality between rich and poor produced by this "trickle down" effect is actually increasing the percentage of poor people in most world regions. To put this into perspective, the richest 20 percent of the world's people consume 86 percent of the world's resources; likewise, the wealthiest countries have grown much richer (Rowntree, Lewis, Price, & Wyckoff, 2003). While the rich seem to be getting richer, the poor grow more and more impoverished while consuming the least of these global resources. The poorest 80 percent use only about 14 percent of global resources, with the poorest 10 percent seeing their incomes decline over the past couple of decades (Rowntree, Lewis, Price, & Wyckoff, 2003). Economic globalization is an unavoidable phenomenon that holds both promise and drawbacks. At certain levels, we can use globalization to reduce some economic inequalities and protect the natural environment. In order to make globalization work for our future generations and our planet, there needs to be a kind of openness in education and social cohesion that stresses the need for strong, efficient governments that can connect networks of environmental and human rights groups with government policies. With these interrelations between the two extremes of the pro-globalization and anti-globalization wings, we can create opportunities for profit and growth through complementary institutions, such as government and social assurance. Although these economic activities seem to be the driving force behind globalization, the consequences affect every aspect of life and land in our day and age.
Our ecosystems are affected by the demand for natural resources as global commodities, and our planet's physical environment is at risk. As Rowntree et al. point out, "our local ecosystems are altered by financial decisions made thousands of miles away... these activities have profound and detrimental implications for the world's climates, oceans, and forests" (Rowntree, Lewis, Price, & Wyckoff, 2003). Unfortunately for our global environment, the pace of destruction has worsened, and our reaction to the climate crisis is much too weak if we plan on inhabiting this planet for the next 50, 60, 70 or more years. Our earth is a beautiful and magnificent place for life to form and grow, but our time on this planet is not going to last much longer if we do not do something about our impact on the planet's natural resources. In a biological sense, our environment is defined as "the complex of climatic, biotic, and social factors that acts upon an organism and determines its form and survival" (Class lecture, week 6). Nature is the basis of our well-being, and biodiversity declined globally by 30 percent between 1970 and 2008. As Al Gore discusses in his book An Inconvenient Truth, many people still rely on our planet as if it were big enough to sustain our habits forever. Some still assume that the earth is so big that we could never use up all its resources. Due to globalization and population growth, we are influencing many parts of our earth's environments, especially the most vulnerable part, the atmosphere (Gore, 2006). Humans see themselves as apart from nature, instead of as a part of nature. How we live and what we consume all impact our environment. The earth's atmosphere is so thin that we are actually capable of changing its composition through the massive amount of carbon dioxide we have pumped into it. High-income countries, such as the U.S., have a footprint five times greater than that of low-income countries, leading to the loss of biodiversity and impacting the ecosystem (WWF Global, 2012). The world is undergoing major changes: glaciers are melting, species are on the verge of extinction, sea levels are rising, and temperatures are climbing. Global warming is a direct result of humans living in disharmony with the planet and its natural resources. We are living in an increasingly human-created environment, with carbon dioxide levels rising because of the burning of fossil fuels. The problem we face now is that every living system in our biosphere is declining, and as a part of nature we cannot afford to lose these valuable resources. As temperatures increase all over the world, we are putting ourselves and our fellow species at risk of extinction. This begins to affect our storm systems, because the warmer the oceans get, the stronger the storms get. These consequences all come down to a basic understanding of our earth's atmosphere and its most important agents, the greenhouse gases. Due to huge quantities of human-caused carbon dioxide, we are thickening this atmospheric layer, causing the gases to trap the sun's radiation inside and causing the planet to "heat up" (Gore, 2006). Just a subtle increase of a few degrees can have a dangerous effect on our planet's ecological system. Some areas around the world are experiencing undesirable amounts of rain in short periods of time, while others are facing immeasurable droughts.
This shift in temperature not only causes glaciers to melt and oceans to rise, but it also disrupts migration patterns, how and where certain plants grow, and the species that depend on those climates. Our ever-growing demand for resources is putting an immense burden on biodiversity. The continued provision of ecosystem resources, our future security, and our health and well-being are all in jeopardy due to the current rate of consumption of non-renewable resources. According to the Living Planet Report, as of 2012 the Earth would need 1.5 years to produce and replenish the natural resources that we consume in only a single year (WWF Global, 2012). And this number has only increased since the last report. The technology and consumption of resources in the United States alone contribute 30.3 percent to global warming. That is more than South America, Canada, Africa, the Middle East, Australia, Japan, and Southeast Asia combined (Gore, 2006). There is no doubt that the U.S. is the biggest contributor not only to globalization, but also to the poor environmental quality of this planet. We have rightfully earned our name as the biggest polluter in the world, but it may not be too late. The first step in reducing our global footprint is accepting and understanding the consequences our actions have on the planet's environment, and recognizing that there are ways we can reverse some of the negative impacts we have had on our planet. We can no longer turn a blind eye to the effects we cause on our ecosystems. Al Gore proposes many solutions for how we as individuals can help with this climate crisis. Considering that this problem is vast and complicated, we can each do our part to help reduce our carbon footprint, and together we can make a difference. Saving energy at home by using energy-efficient light bulbs, turning off lights when we do not need them, and heating and cooling our houses efficiently are just a few ways we can help individually. In the community, driving less, taking public transportation, reducing emissions from our cars, and being conscious of our daily consumption are all ways we can reduce pollution in our air. And most importantly, consuming less, reusing water bottles and bags, buying things that last, buying local, and modifying our diets are all important changes that we can make to ensure our health and that of future generations. Globalization and our environmental impact are very important factors that we must always consider if we plan on existing in this environment with other species. We are fortunate to live on a planet that can sustain life and allow it to thrive, but if we are not careful, we will use up any and all resources that Earth provides. Our consumption, and the effects it has on the environment, is detrimental to our survival and the survival of our ecosystems. These global changes make understanding our world both challenging and necessary, since our future depends on understanding these concepts in all their various forms.

Tuesday, October 22, 2019

Battle of Monterrey in the Mexican-American War

Battle of Monterrey in the Mexican-American War
The Battle of Monterrey was fought September 21-24, 1846, during the Mexican-American War (1846-1848) and was the first major campaign of the conflict conducted on Mexican soil. Following the initial fighting in southern Texas, American troops led by Major General Zachary Taylor crossed the Rio Grande and pushed into northern Mexico with the goal of taking Monterrey. Nearing the city, Taylor was forced to launch assaults against its defenses as he lacked the artillery to conduct a siege. The resulting battle saw American troops capture the city after taking heavy casualties as they fought through Monterrey's streets.

American Preparations
Following the Battles of Palo Alto and Resaca de la Palma, American forces under Brigadier General Zachary Taylor relieved the siege of Fort Texas and crossed the Rio Grande into Mexico to capture Matamoros. In the wake of these engagements, the United States formally declared war on Mexico and efforts began to expand the U.S. Army to meet wartime needs. In Washington, President James K. Polk and Major General Winfield Scott commenced devising a strategy for winning the war. While Taylor received orders to push south into Mexico to capture Monterrey, Brigadier General John E. Wool was to march from San Antonio, TX to Chihuahua. In addition to capturing territory, Wool would be in a position to support Taylor's advance. A third column, led by Colonel Stephen W. Kearny, would depart Fort Leavenworth, KS and move southwest to secure Santa Fe before proceeding on to San Diego. To fill the ranks of these forces, Polk requested that Congress authorize the raising of 50,000 volunteers, with recruitment quotas assigned to each state. The first of these ill-disciplined and rowdy troops reached Taylor's camp shortly after the occupation of Matamoros. Additional units arrived through the summer and badly taxed Taylor's logistical system. Lacking in training and overseen by officers of their choosing, the volunteers clashed with the regulars, and Taylor struggled to keep the newly arrived men in line.

General Winfield Scott. Photograph source: public domain.

Assessing the avenues of advance, Taylor, now a major general, elected to move his force of around 15,000 men up the Rio Grande to Camargo and then march 125 miles overland to Monterrey. The shift to Camargo proved difficult as the Americans battled extreme temperatures, insects, and river flooding. Though well positioned for the campaign, Camargo lacked sufficient fresh water, and it proved difficult to maintain sanitary conditions and prevent disease.

The Mexicans Regroup
As Taylor prepared to advance south, changes occurred in the Mexican command structure. Twice defeated in battle, General Mariano Arista was relieved from command of the Mexican Army of the North and ordered to face a court-martial. Departing, he was replaced by Lieutenant General Pedro de Ampudia. A native of Havana, Cuba, Ampudia had started his career with the Spanish but defected to the Mexican Army during the Mexican War of Independence. Known for his cruelty and cunning in the field, he was ordered to establish a defensive line near Saltillo. Ignoring this directive, Ampudia instead elected to make a stand at Monterrey, as defeats and numerous retreats had badly damaged the morale of the army.

Battle of Monterrey
Conflict: Mexican-American War (1846-1848)
Dates: September 21-24, 1846
Armies and Commanders:
Americans: Major General Zachary Taylor, 6,220 men
Mexico: Lieutenant General Pedro de Ampudia, approx. 10,000 men
Casualties:
Americans: 120 killed, 368 wounded, 43 missing
Mexicans: 367 killed and wounded

Approaching the City
Consolidating his army at Camargo, Taylor found that he only possessed wagons and pack animals to support around 6,600 men. As a result, the remainder of the army, many of whom were ill, was dispersed to garrisons along the Rio Grande while Taylor began his march south. Departing Camargo on August 19, the American vanguard was led by Brigadier General William J. Worth. Marching towards Cerralvo, Worth's command was forced to widen and improve the roads for the men following. Moving slowly, the army reached the town on August 25 and, after a pause, pressed on to Monterrey.

A Strongly Defended City
Arriving just north of the city on September 19, Taylor moved the army into camp in an area dubbed Walnut Springs. A city of around 10,000 people, Monterrey was protected to the south by the Rio Santa Catarina and the mountains of the Sierra Madre. A lone road ran south along the river to Saltillo, which served as the Mexicans' primary line of supply and retreat. To defend the city, Ampudia possessed an impressive array of fortifications, the largest of which, the Citadel, lay north of Monterrey and was formed from an unfinished cathedral. The northeast approach to the city was covered by an earthwork dubbed La Teneria, while the eastern entrance was protected by Fort Diablo. On the opposite side of Monterrey, the western approach was defended by Fort Libertad atop Independence Hill. Across the river and to the south, a redoubt and Fort Soldado sat atop Federation Hill and protected the road to Saltillo. Utilizing intelligence gathered by his chief engineer, Major Joseph K. F. Mansfield, Taylor found that while the defenses were strong, they were not mutually supporting, and that Ampudia's reserves would have difficulty covering the gaps between them.

Attacking
With this in mind, he determined that many of the strong points could be isolated and taken. While military convention called for siege tactics, Taylor had been forced to leave his heavy artillery at the Rio Grande. As a result, he planned a double envelopment of the city, with his men striking at the eastern and western approaches. To carry this out, he re-organized the army into four divisions under Worth, Brigadier General David Twiggs, Major General William Butler, and Major General J. Pinckney Henderson. Short on artillery, he assigned the bulk of it to Worth and the remainder to Twiggs. The army's only indirect-fire weapons, a mortar and two howitzers, remained under Taylor's personal control.

Major General William J. Worth. National Archives and Records Administration.

For the battle, Worth was instructed to take his division, with Henderson's mounted Texas Division in support, on a wide flanking maneuver to the west and south with the goal of severing the Saltillo road and attacking the city from the west. To support this movement, Taylor planned a diversionary strike on the city's eastern defenses. Worth's men began moving out around 2:00 PM on September 20. Fighting began the next morning around 6:00 AM when Worth's column was attacked by Mexican cavalry. These assaults were beaten off, though his men came under increasingly heavy fire from Independence and Federation Hills. Resolving that these would need to be taken before the march could continue, he directed troops to cross the river and attack the more lightly defended Federation Hill.
Storming the hill, the Americans succeeded in taking the crest and capturing Fort Soldado. Hearing the firing, Taylor advanced Twiggs' and Butler's divisions against the northeastern defenses. Finding that Ampudia would not come out and fight, he began an attack on this part of the city.

A Costly Victory
As Twiggs was ill, Lieutenant Colonel John Garland led elements of his division forward. Crossing an open expanse under fire, they entered the city but began taking heavy casualties in street fighting. To the east, Butler was wounded, though his men succeeded in taking La Teneria in heavy fighting. By nightfall, Taylor had secured footholds on both sides of the city. The next day, the fighting focused on the western side of Monterrey as Worth conducted a successful assault on Independence Hill, which saw his men take Fort Libertad and an abandoned bishop's palace known as the Obispado.

U.S. Army troops attack through the streets of Monterrey, 1846. Public domain.

Around midnight, Ampudia ordered the remaining outer works, with the exception of the Citadel, to be abandoned. The next morning, American forces began attacking on both fronts. Having learned from the casualties sustained two days earlier, they avoided fighting in the streets and instead advanced by knocking holes through the walls of adjoining buildings. Though a tedious process, they steadily pushed the Mexican defenders back towards the city's main square. Arriving within two blocks, Taylor ordered his men to halt and fall back slightly, as he was concerned about civilian casualties in the area. Sending his lone mortar to Worth, he directed that one shell be fired at the square every twenty minutes. As this slow shelling began, the local governor requested permission for noncombatants to leave the city. Effectively surrounded, Ampudia asked for surrender terms around midnight.

Aftermath
In the fighting for Monterrey, Taylor lost 120 killed, 368 wounded, and 43 missing. Mexican losses totaled around 367 killed and wounded. Entering surrender negotiations, the two sides agreed to terms that called for Ampudia to surrender the city in exchange for an eight-week armistice and for his troops to go free. Taylor consented to the terms largely because he was deep in enemy territory with a small army that had just taken significant losses. Learning of Taylor's actions, President James K. Polk was irate, stating that the army's job was to "kill the enemy" and not to make deals. In the wake of Monterrey, much of Taylor's army was stripped away to be used in an invasion of central Mexico. Left with the remnants of his command, he won a stunning victory at the Battle of Buena Vista on February 23, 1847.