Thursday, October 31, 2019

Analysis of Heroism of Olympic Athletes in Olympic Advertising from the Semiotic Perspective - Research Paper

Analysis of Heroism of Olympic Athletes in Olympic Advertising from the Semiotic Perspective - Research Paper Introduction Olympism is a philosophy of life, exalting and combining in a balanced whole the qualities of body, will and mind. Blending sport with culture and education, Olympism seeks to create a way of life based on the joy of effort, the educational value of good example and respect for universal ethical principles. ---The Olympic Charter (IOC, 2004:9) The Olympic Games are an international sports festival that began in ancient Greece. The Olympic Games, considering the fascination of viewers and spectators worldwide, are unmatched among cultural events (Alkemeyer & Richartz, 1993). Every four years, elite athletes from all over the world, together with coaches, officials, media representatives and hundreds of thousands of spectators, have gathered for around two weeks for a sporting event that is followed via mass media, including television, radio, print media and the Internet, by billions of people around the world. With the modernization of the Olympic Games, they have been enriched as a cultural, political and economic phenomenon, no longer just a sporting event. Particular interests see them as a media event, a tourism attraction, a marketing opportunity, a catalyst for urban development and renewal, a city image creator and booster, a vehicle for 'sport for all' campaigns, an inspiration for youth and a force for peace and international understanding. The report will focus on the role that the Olympic Games play in inspiring the audience in terms of mass communication, particularly in Olympic advertising. Dating back to ancient Greece, the term "hero" was defined as "a superior man", the "embodiment of a composite idea" (Fishwick, 1985). The gods imbued the hero with exceptional human characteristics such as strength, power, and courage (Fishwick, 1985). However, as a historically and culturally delineated construct, "heroism" has evolved across time and national boundaries (Fishwick, 1985). While the ancient hero was admired for his extraordinary physical strength and skills, the modern hero is also described in terms of social accomplishment: attractive, victorious, charismatic, individualistic, skillful, down-to-earth, a realistic role model, and a risk taker (Fishwick, 1985). Whereas the ancient hero was generally a warrior, the modern hero is often a sports figure. As Ryan notes: "Every culture has its gods, and ours hit baseballs, make baskets, and score touchdowns" (Ryan, 1995). The Olympic Games have a rich, storied reputation based on athletic competition at its highest level, not as a one-time event, but literally for thousands of years. Over the millennia, athletes have become heroes and icons, inspiring generations of fans and future athletes to work hard in pursuit of their dreams. The Olympic athletes are carrying on a tradition that has deep meaning across cultures, offering inspiration to millions of people around the world. Every Olympics has had its heroes from whom many fans and observers draw inspiration. Olympic heroes succeed in capturing people's imagination through their athletic prowess, determination, and personality. They often represent both individual and collective

Tuesday, October 29, 2019

Suffering of the Woman Protagonist in a Male Hegemony Society Within The Yellow Wallpaper

Suffering of the Woman Protagonist in a Male Hegemony Society Within The Yellow Wallpaper The role of women in society has been changing since prehistoric times, so it cannot be defined accurately. However, there has always been a stereotypical male figure in society, and that figure has hardly altered since the very beginning. The role of women has also differed between religions and civilisations. For instance, in early Native American tribes women were somewhat deified; this, however, shifted, and women were no longer thought to be superior; quite the opposite, they came to be considered inferior. By the 19th century, with the influence of the Civil War and all of the social protests demanding improvements to women's rights, many women began to question the inferior role the patriarchal society had cast for them. The fact that the North won the war and slavery was prohibited led women to claim their own rights in the United States. During the end of the 19th century many women writers wrote various works to show their gender's suffering while living in a male-dominant society. In 1890 Florence Fenwick Miller, a midwife turned journalist, described woman's position succinctly: Under exclusively man-made laws women have been reduced to the most abject condition of legal slavery in which it is possible for human beings to be held, under the arbitrary domination of another's will and dependent for decent treatment exclusively on the goodness of heart of the individual master. (from a speech to the National Liberal Club) "The Yellow Wallpaper" was also written during the late 19th century, by Charlotte Perkins Gilman, to indicate female suffering under male hegemony in America. The writer demonstrates a common female figure who remains passive in all the decisions she should take for herself; instead, her husband John decides everything she should do, even her everyday schedule, as the female protagonist states: "I have a schedule prescription for each hour in the day; he takes all care from me, and so I feel basely ungrateful not to value it more." The narrator also stereotypes all of the male characters she addresses during the short story. She emphasises that they are all the same as, or worse than, each other in treating women. She informs the reader that her husband, who is a physician, does not believe that she is sick; instead he assures friends and relatives that she has nothing except temporary nervous depression. The narrator compares her husband's opinion about her with her brother's: "my brother is also a physician, and also of high standing, and he says the same thing" (13). Another male figure she compares with her husband is Weir Mitchell, who is a physician too: "I had a friend who was in his hands once, and she says he is just like John and my brother, only more so!" (85). From all the parallels she draws about the opposite gender within "The Yellow Wallpaper" we comprehend that the writer is feeling under oppression. Moreover, the narrator writes in her secret diary, since she was prohibited from writing or reading anything. In fact, the writer was prohibited from doing anything other than meeting the essential needs of a human being. It was forbidden for her to write, to imagine and to work, but she does not share their opinion, as she states: "personally I believe that congenial work, with excitement and change, would do me good" (13). She is unhappy with all the things they insist she do, and she is sure that these things are not curing her.
There is also repetition of certain questions, "but what is one to do?" (15), "and what is one to do?" (9), "what is one to do?" (10), which indicates that even though the writer does not agree with them on numerous points, she is not able to change either her marriage or her life, since she knows that women are valued only as long as they are supportive of their male companions. Another point is that John belittles and ridicules her fears, opinions and beliefs. When she told him her opinion about the yellow wallpaper in their room, she says, "he laughs at me so about this wallpaper!" (50). The writer thinks that the yellow wallpaper's colour is "repellent" and "revolting". Moreover, we witness that the woman has worse ideas about this wallpaper: "there are things in the wallpaper that nobody knows but me, or ever will. Behind that outside pattern the dim shapes get clearer every day. It is always the same shape, only very numerous. And like a woman stooping down and creeping about behind that pattern. I do not like it a bit. I wonder - I begin to think - I wish John would take me away from here!" (122). We comprehend that John did not care about her feelings concerning the wallpaper in their room, and this caused her to see unreal things and believe in them. The strangest fact is that John's attitude towards her forced her not to tell him all that she believes and thinks: "I had no intention of telling him it was because of the wallpaper - he would make fun of me" (169). Besides, we understand that John treats her as if she were still small, because he calls her "little girl"; perhaps that is the reason why he wants to control her and make all the decisions about her. It could also be the reason why she does not tell him about her secret thoughts, because he would think that she needs to be controlled. The short story has a fascinatingly dramatic end which affects its reader and demonstrates that the writer wants to be freed from the male hegemony oppressing her in her every action. The narrator, who was annoyed by the wallpaper in her room and believed that women were trapped beneath it, slowly went insane because of that idea and peeled off all the paper, intending to free the trapped women. She ends the story with these words: "I have got out at last, in spite of you and Jane. And I have pulled off most of the paper, so you can not put me back" (265). We gather that the women she speaks about are herself and all the women trapped in a male-dominant society. She believes she can free them from this pressure by peeling off the wallpaper. Probably she sees the wallpaper as the society and her husband. Moreover, we observe that she is tired of the role she was given by society and wants to get rid of it. We witness a similar ending when we read Virginia Woolf's To the Lighthouse, where we find a perfect example of gender roles: Mrs Ramsay. Mrs Ramsay is a wonderful actor in the novel, playing her role of "angel in the home" with laudable diligence: "... she had the whole of the other sex under her protection; for reasons she could not explain, for their chivalry and valour, for the fact that they negotiated treaties, ruled India, controlled finance" (11). Such a view is what we have seen in our houses since our childhood and what is going on in other houses. Being an angelic mother and wife who stays at home and does what her husband says is the only thing that is expected from the female figure; nothing more than that is awaited.
To get rid of it, in order to gain a footing in life as neither the supporter nor the supported, requires a sharp break from the past description of what and who a woman is. That is what Virginia Woolf does by killing Mrs Ramsay towards the end of the novel. This example shows that many women writers touched on the same matter in their writings. In the situation given in "The Yellow Wallpaper" we have a woman figure who suppresses her feelings, imaginations and thoughts because she knows that is the only way she can be accepted by her husband and the society she dwells in. However, this female protagonist struggles to gain a footing instead of submitting to society's rules and the dominance of masculine hegemony. And finally, when she thinks she has got free from the place where they trapped her, we witness that she has gone mad. So we can observe the sufferings of the writer due to the male hegemony under which she is oppressed.

Sunday, October 27, 2019

Electromagnetic Radiation Features

Electromagnetic Radiation Features 2.1 Electromagnetic radiation Electromagnetic radiation consists of waves of electric and magnetic energy oscillating through space at the speed of light (OET, 1999). The electromagnetic spectrum is an arrangement of the various forms of electromagnetic energy, which behave both as particles and as waves. These forms of energy are characterized by frequency and wavelength. The wavelength is the distance covered by an electromagnetic wave during one complete oscillation, while the frequency is the number of oscillations of the wave per second. Figure 2.0.1 below shows the electromagnetic spectrum. Figure 2.0.1. Electromagnetic spectrum The electromagnetic spectrum shows the arrangement of electromagnetic sources based on their frequency and wavelength. Table 2.0.1 below describes common radiofrequency sources and their allocated bands and frequency ranges. Table 2.0.1. Characteristics and frequency bands of radiofrequency field sources (frequency ranges in MHz): FM - Frequency Modulation, 88-108; TV/DAB - Television (analogue) and DAB (Digital Audio Broadcasting), 174-223; TETRA - Terrestrial Trunked Radio, 380-400; TV - Television (analogue and digital), 470-830; GSM DL - Global System for Mobile Communications, base station to mobile phone, 925-960; DCS DL - Digital Cellular System, 1,805-1,880; UMTS DL - Universal Mobile Telecommunications Service, 2,110-2,170; Wi-Fi - Wireless Fidelity, IEEE 802.11 standards, 2,400-2,500. The most important applications of electromagnetic energy are radio broadcasting, mobile telephony, microwave applications and satellite communication, as reported by Kelly (2011). Others include magnetic resonance imaging (MRI), microwave ovens, radar, industrial heating and sealing (Kelly, 2011). 2.2 Radio waves Radio frequency (RF) refers to the part of the electromagnetic spectrum with frequencies from 3 kilohertz (3 kHz) to 300 gigahertz (300 GHz) (Kelly, 2011). Radio transmitters are devices that serve as transducers converting electrical current into electromagnetic waves. The existence of electromagnetic fields was first demonstrated in 1887, when the physicist Heinrich Hertz proved experimentally that electromagnetic fields can be produced and detected in space. This phenomenon had been predicted earlier by James Clerk Maxwell (1831-1879). A radio transmitter communicates with a receiver via radio waves: electric charges move up and down the transmitter's antenna, and the waves are detected when electric charges oscillate up and down a receiver's antenna. As the charges move, they produce changing electric and magnetic fields. The resulting electromagnetic waves are able to travel long distances through empty space (vacuum). The ability of a transmitter to send a signal to a receiver or to another transmitter nearby depends on the oscillation of the charges up and down its antenna at a particular resonant frequency. 2.3 Characteristics of radiofrequency (RF) antennas There are a number of physical parameters and principles that define the type of wave and the intensity of the radio waves generated and broadcast into the environment. These parameters are relevant in understanding the behavior of antennas. They are the antenna element, the element array, gain or directivity, radiation pattern, radiation intensity, beam width and power density. 2.3.1 Antenna Elements The antenna element is the basic unit of the antenna. Elements may exist individually or as a group.
The three most common types are the dipole, the monopole and the loop. A dipole antenna is most commonly a linear metallic wire or rod with a feed point at the center; it has two symmetrical radiating arms. A monopole antenna, on the other hand, has a single radiating arm. A number of authors have performed calculations and measurements of the patterns generated by these antennas on mobile handsets in air and also against the head (Jensen & Rahmat-Samii, 1995); (Okoniewski & Stuchly, 1996) and (Lazzi, 1998). Other work on wireless devices such as cellular telephones using monopole antennas has also been reported in the literature (Luebbers, 1992). An interesting application of loops is wireless telemetry for medical devices, as used for the first pacemaker (Greatbatch & Holmes, 1991). 2.3.2 Antenna Arrays To yield highly directive patterns, multiple antennas or elements can be arranged in space in various geometrical configurations (Stutzman & Thiele, 1998); (Bucci, Ella, Mazzarella, & Panariello, 1994); (Balanis, 2005); (Elliott, 2003) and (Mailloux, 1994). These antenna configurations are called arrays. The fields from the individual elements of an array can add constructively in some directions and destructively in others. When well engineered, the array can be used to steer the beam by changing the phase of the excitation currents of the individual elements (Elliott, 2003); (Dolph, 1946); (Safaai-Jazi, 1994) and (Shpak & Antoniou, 1992). In this way, an optimum radiation beam can be generated. The geometry of the arrangement of the elements also affects performance, as do the distance between the elements, the amplitude of the excitation currents, the phase excitation and the radiation pattern of the individual elements. 2.3.3 Directivity and Gain Another parameter used to describe the directional properties of an antenna is the directivity or gain. The directivity of an antenna is a figure of merit that quantifies the antenna's directive properties by comparing them with those of a hypothetical isotropic antenna that radiates the same total power as the antenna being characterized. Antennas such as dipoles and loops generate omnidirectional patterns; (McDonald, 1978) and (Pozar, 1993) derived formulas for the directivity of such antennas. The gain of an antenna is a measure that takes into account the efficiency of the antenna as well as its directional capabilities. The total antenna efficiency accounts for losses at the input terminals and in the structure of the antenna due to reflection, conduction and dielectric losses. 2.3.4 Radiation Pattern Besides the parameters described above, the radiation pattern is the property used to describe the resulting shape of the generated beam. A radiation or antenna pattern is a mathematical function that describes the radiation properties of the antenna as a function of the space coordinates (Balanis, 2005). The main beam is the region where the radiation is strongest, and the other directions form the side lobes. The beam width, or half-power beam width (HPBW), is measured about the direction of maximum radiation: it is the width of the power pattern at the locations where the beam is 3 dB below its maximum value (the half-power points), or equivalently where the field is 1/√2 of its peak. There is often a trade-off between the beam width and the side-lobe level (the ratio of the radiation intensity of the largest side lobe to the maximum radiation intensity); the HPBW varies inversely with the side-lobe level.
The most common resolution criterion states that the resolution capability of an antenna to distinguish between two sources is equal to half the first-null beam width (FNBW/2), which is usually approximated by the half-power beam width (HPBW) (Kraus, 1996) and (Kraus & Marhefka, Antennas, 2002). 2.3.5 Polarization Furthermore, the generated wave can oscillate up and down, left and right, or in some combination of the two. This behavior describes the kind of polarization the wave exhibits. Polarization of a radiated wave is defined as the property describing the time-varying direction and relative magnitude of the electric field vector. In general, when the tip of the electric field vector traces an ellipse, the polarization is described as elliptical; when the trace is linear or circular, the polarization is described accordingly. The polarization of the wave radiated by an antenna can also be represented on the Poincaré sphere (Balanis, 1989); (Poincaré, 1892); (Deschamps, 1951) and (Bolinder, 1967). 2.3.6 Radiation Intensity Another important property of the antenna is the radiation intensity. The radiation intensity is the power radiated per unit solid angle subtended at the antenna. It is a far-field property, obtained by multiplying the power density by the square of the distance. The power pattern is also a measure of radiation intensity. To obtain the total radiated power, one needs to integrate the radiation intensity over the full solid angle. 2.3.7 Power Density Finally, the radiation power density describes the power associated with an electromagnetic wave. The total power crossing a closed surface is obtained by integrating the normal component of the Poynting vector (the power density) over the entire surface. 2.4 Electromagnetic field around an antenna An electromagnetic field is the region created around a source of electromagnetic radiation. An antenna is a device which converts electrical charges or currents into electromagnetic waves radiated into space. The distribution of RF energy from an antenna is found in the literature to follow a directional pattern and to vary with distance from the antenna. The fields created around an antenna can be grouped into two regions: 2.4.1 Near Field The near field is the region around an antenna in which the electric and magnetic fields are decoupled, quasi-static and non-uniform, and in which the wave impedance and the power associated with the field vary with distance. 2.4.2 Far Field The far field, on the other hand, has plane wave fronts whose shape does not depend on the source. The field strength decreases inversely with distance from the antenna, so the radiated power density falls off with the square of the distance. The electric and magnetic fields are related by the approximately constant impedance of the medium. Figure 2.0.2 below illustrates the field regions around an antenna. Figure 2.0.2. Electromagnetic field regions around a typical antenna 2.5 Advances in field modeling A model is a good approximation of a real-world problem and its solution. Various mathematical modeling methods are available in the literature to date (Sarkar, Ji, Kim, Medouri, & Salazar-Palma, 2003); (COST-231, 1999) and (Correia, 2001). Extensive theoretical and experimental research on electromagnetic field levels has been carried out and reported in the literature (Lin, 2002); (Cicchetti, 2004) and (Nicolas, Lautru, Jacquin, Wong, & Wiart, 2001).
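Before turning to the propagation models themselves, the near-field/far-field boundary described in Section 2.4 can be made concrete with a short numerical sketch. The Python snippet below uses the commonly quoted Fraunhofer criterion 2D^2/lambda for the start of the far field; this criterion and the example antenna size and frequency are assumptions introduced for illustration, not values taken from the text.

# Far-field (Fraunhofer) boundary sketch: r_ff = 2 * D^2 / wavelength.
# Assumptions: the 2D^2/lambda rule of thumb applies; D and f below are illustrative only.
C = 299_792_458.0  # speed of light in m/s

def far_field_distance(d_antenna_m, freq_hz):
    """Approximate distance (m) beyond which far-field conditions hold."""
    wavelength = C / freq_hz
    return 2.0 * d_antenna_m ** 2 / wavelength

# Hypothetical 1.2 m GSM panel antenna at 900 MHz: boundary of roughly 8.6 m.
print(round(far_field_distance(1.2, 900e6), 1))

Beyond this distance the plane-wave, inverse-square behaviour assumed by the propagation models in the next paragraphs is a reasonable approximation.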
Currently, studies of electromagnetic field propagation can be grouped into two dominant channel modelling approaches: theoretical and empirical (Rappaport, 2002). While theoretical models depend on knowledge of the physical laws governing the wireless channel, such as the electrical properties of the ground, empirical models are based on actual radio frequency (RF) measurements of wireless channels. One can also regroup them into Monte Carlo, empirical and physical models (Rappaport, 2002). Monte Carlo methods are statistical in nature and make use of statistical distribution functions describing channel characteristics of radio transmitters, together with ray optics. (Okumura, Ohmori, Kawano, & Fukuda, 1968) found from measurements that, in situations where there is no line of sight to the transmitter, the fading (attenuation) of the received voltage approximates a Rayleigh distribution. Okumura also developed a correction factor to be used together with the data to correct the field strength. When Okumura's measured results were averaged, they showed the properties of a lognormal distribution (Okumura, Ohmori, Kawano, & Fukuda, 1968) and (Mogensen, Eggers, Jensen, & Andersen, 1991). The style of settlement and the nature of buildings also affect the propagation of radio waves travelling from a source into the environment, and random variations in buildings contribute to propagation loss. Some earlier work suggested that radio waves propagate over buildings and are diffracted down to street level (Parsons, 1992); to obtain reliable statistics, much more measured data was required. Diffraction occurs when the path of the wave is obstructed by a surface with irregularly shaped edges. Diffraction methods were developed and used to account for diffraction at rooftops (Ikegami, Yoshida, Takeuchi, & Umehira, 1984). Variations in building height contribute to the shadow loss of propagation over low buildings. The most general approach uses numerical integration of physical optics integrals (Walfisch & Bertoni, 1988) and (Bertoni, 2000). Measurements have shown that Monte Carlo methods also need to consider the effect of trees (Mogensen, Eggers, Jensen, & Andersen, 1991), (Rizk, Mawira, Wagen, & Gardiol, 1996), (Vogel & Goldhirsh, 1986) and (LaGrone, 1977); trees are able to attenuate the signal by the order of 10 dB (Vogel & Goldhirsh, 1986). Monte Carlo methods, even though they perform well when adequate measured data is used, are sensitive to modifications of the buildings and terrain and are very expensive to carry out. Empirical methods make use of information gathered through systematic experimentation rather than relying on logic or mathematics alone. An empirical model uses extensive measured data and analysis tools to formulate relationships between the parameters of interest. Measurements have shown that a simple two-ray model consisting of the direct and the ground-reflected ray is sufficient to predict the path gain (loss) for propagation over a flat earth (Rustako, Jr., Owens, & Roman, 1991) and (Xia, Bertoni, Maciel, Lindsay-Stewart, & Rowe, 1993). Reflection occurs when the wave from a source hits an object whose dimensions are large compared to the wavelength of the wave. The path loss represents the signal attenuation in decibels (dB); it is the difference between the effective transmitted power and the received power.
Most published work concerning outdoor propagation depends on free-space and two-ray models (Pande, Choudhari, & Pathak, 2012), (Willis & Kikkert, 2007), (Neto, Neto, Yang, & Glover, 2010). The free-space model assumes that both transmitter and receiver use line-of-sight communication with no obstruction or reflection of any form. The free-space model obeys relation (2.1), where f is the frequency in MHz and d is the separation distance between the transmitting and receiving antennas in meters. The received power falls off as the square of the transmitter-receiver separation distance; that is, it decays at a rate of 20 dB per decade. When the effect of the ground-reflected ray is considered, a plane earth model is used, given by relation (2.2), where d is the distance as above and ht and hr are the heights of the transmitter and receiver in meters, respectively. The separation distance d in this model is assumed to be much larger than ht and hr. In the real environment there are obstructions everywhere, and the propagation of electromagnetic waves is affected by them (Mao, Anderson, & Fidan, 2007). Radio signals in the environment are attenuated by reflection, diffraction and scattering. Scattering occurs when objects in the medium are small compared to the wavelength of the incoming wave. To account for location characteristics and the impact of vegetation, it has been found in the literature that the average signal power decreases logarithmically with distance (Rappaport, 2002). To estimate the path loss under real-world conditions, a log-distance model was developed. The average path loss for a typical transmitter-receiver separation can be expressed as a function of distance using an exponent n, and is given as (Liao & Sarabandi, 2005) in relation (2.3), where PL(d0) is the path loss in dB at a reference distance d0 and n is the path loss exponent representing the rate at which the path loss increases with distance. The value of n also characterizes the propagation environment. Table 2.0.2 below summarizes typical values of the exponent n. Table 2.0.2. Characteristics of typical propagation environments (path loss exponent n): free space, 2.0; urban area cellular radio, 2.7 to 3.5; shadowed urban cellular radio, 3.0 to 6.0; in-building line-of-sight, 1.6 to 1.8; obstructed in buildings, 4.0 to 6.0; obstructed in factories, 2.0 to 3.0. The reference distance is typically taken to be between 100 m and 1 km, depending on the height of the transmitter. The International Telecommunication Union (ITU) recommends that, in situations where the majority of the signal propagates through trees or vegetation, the ITU-R model, relation (2.4), can be used (Rappaport, 1996) for frequencies between 200 MHz and 95 GHz. One of the most important fully empirical prediction methods was developed by (Okumura, Ohmori, Kawano, & Fukuda, 1968). Okumura's method was based entirely on extensive measurements in the city of Tokyo. Okumura produced a set of curves giving the median attenuation relative to free space in urban areas over quasi-smooth terrain, and from these curves he deduced a simple power law as a function of the environment and its characteristics. The model is applicable to the frequency range between 200 MHz and 2 GHz and covers distances up to 100 km. Okumura's data was further developed by (Hata, 1980), who fitted the curves into a set of formulas. A short numerical sketch of these standard path loss expressions is given below.
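Since relations (2.1)-(2.3) are referenced above but not written out, the Python sketch below implements the standard textbook forms of the free-space, plane earth and log-distance path loss models that these equation numbers normally denote. The constant 27.55, the example frequency and distances, and the chosen exponent are assumptions for illustration rather than values taken from the text.

import math

# Free-space path loss (dB), standard form for f in MHz and d in metres
# (assumed here, since relation (2.1) itself is not reproduced in the text).
def free_space_loss_db(d_m, f_mhz):
    return 20 * math.log10(d_m) + 20 * math.log10(f_mhz) - 27.55

# Plane earth model (dB) with transmitter/receiver heights ht, hr in metres,
# valid when d is much larger than ht and hr (standard form assumed for relation 2.2).
def plane_earth_loss_db(d_m, ht_m, hr_m):
    return 40 * math.log10(d_m) - 20 * math.log10(ht_m) - 20 * math.log10(hr_m)

# Log-distance model of relation (2.3): PL(d) = PL(d0) + 10 * n * log10(d / d0).
def log_distance_loss_db(d_m, d0_m, pl_d0_db, n):
    return pl_d0_db + 10 * n * math.log10(d_m / d0_m)

# Illustrative comparison at 947.5 MHz (hypothetical GSM downlink channel).
f_mhz, d0 = 947.5, 100.0
pl_d0 = free_space_loss_db(d0, f_mhz)
for d in (200.0, 500.0, 1000.0):
    print(d,
          round(free_space_loss_db(d, f_mhz), 1),
          round(plane_earth_loss_db(d, 30.0, 1.5), 1),        # 30 m mast, 1.5 m handset
          round(log_distance_loss_db(d, d0, pl_d0, 3.0), 1))  # n = 3.0, urban value from Table 2.0.2

Running the loop shows the free-space loss growing at 20 dB per decade while the log-distance model with n = 3.0 grows at 30 dB per decade, which is the behaviour the exponent table above summarizes.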
However, other methods disagree with the predictions of Okumura's method. Others have tried to improve the method by incorporating building density (Kozono & Watanabe, 1977), but this was rejected by the scientific community. The Okumura-Hata model, together with related corrections, has been found to be one of the most commonly used single models for designing real systems. Lee in 1982 proposed a power law model which was based on measurement and takes into account the variation in terrain (Lee, 1982). The model is environment specific, because it is based on assumptions about the characteristics of the environment; it is very difficult to tell which environment characteristics one needs to use, since environments vary from one country to another. Even though empirical methods are easy to implement and are able to include all environment-related factors that affect the propagation of radio waves in practice (Rappaport, 2002), they are limited by their parameter ranges; the environment must be classified, and classifications may vary from one place to another. These methods also do not provide insight into the propagation mechanisms or analytical explanations. Physical models attempt to produce deterministic field strengths at specified points (Ikegami, Takeuchi, & Yoshida, 1991). Such a model makes use of the characteristics of the environment, physical optics and other theories to account for the parameter of interest. A careful assessment of the exposure of urban populations to electromagnetic fields requires the use of deterministic models that take into account the interference caused by buildings in the propagation of the field. Deterministic models were first developed to account for terrain in the absence of buildings, based on the geometric theory of diffraction (Bullington, 1977), (Luebbers, 1984) and (Lampard & Vu-Dinh, 1993). Other methods, such as the parabolic equation method (Janaswamy & Andersen, 1998) and (Levy, 1990), take the detailed terrain profile into account. This approach uses a detailed map of an area, taking building configurations into consideration, and uses ray optics to trace the waves. There are 3-D (three-dimensional) ray tracing models that are able to accurately estimate site-specific propagation situations (Catedra, Perez, Saez de Adana, & Guiterrez, 1998). Although this approach accounts reasonably well for close-in variation of the field strength, it suffers from unrealistic assumptions and underestimates the field in some cases (Saunders, 1999). Other works use numerical methods such as the method of moments (MoM) to analyze the electromagnetic field of antennas (Johnson, Shin, & Eidson, 1997), (Wanzheng, Yan, & Anmin, 2000), (Povinelli & DAngelo, 1991), (Lou & Jin, 2005) and (Tofani, dAmore, & Fiandino, 1995). However, these methods require higher mathematical and programming skills, such as large sparse matrix solutions, as well as more computer resources, such as larger memory and multiple CPUs, than analytical methods (Johnson, Shin, & Eidson, 1997). A semi-analytical treatment has been carried out for cases where the horizontal separation between the base station and the first row of buildings is known and all the buildings are of the same height (Xia, Bertoni, Maciel, Lindsay-Stewart, & Rowe, 1993), (Bertoni & Maciel, 1992). From the above analysis, it is evident from the literature to date that there is no single method that predicts accurately and at the same time helps us understand and make sense of the physics involved in the process under study.
This research work therefore focuses on the need for a hybrid (semi-empirical) model which will achieve a good level of accuracy, help us understand the physical interaction of the parameters involved, and serve as an advancement in this field. 2.6 Advances in measurements Natural electromagnetic energy comes from terrestrial and extra-terrestrial sources such as electrical discharges during thunderstorms in the atmosphere and radiation from the sun and space. It is of interest to note that the blackbody radiation from a person in the RF band is approximately 3 mW/m2. Man-made sources originate mainly from telecommunication and broadcasting services in the environment. Several methods have been developed in the literature to assess electromagnetic field (EMF) exposure levels. One of them is the use of personal exposure measurement methods (Viel, Cardis, Moissonnier, Seze, & Hours, 2009), (Urbinello, Joseph, & Huss, 2014), (Bolte & Eikelboom, 2012), (Urbinello, Huss, Beekhuizen, Vermeulen, & Röösli, 2014), (Radon, Spegel, & Meyer, 2006) and (Frei, Mohler, & Bürgi, 2009). Another is the stationary measurement approach (Bürgi, Frei, & Theis, 2010), (Calin, Ursachi, & Helerea, 2013), (Pachón-García, Fernández-Ortiz, & Paniagua-Sánchez, 2015), (Ozen, Helhel, & Colak, 2007), (Korpinen & Pääkkönen, 2015) and (Verloock, Joseph, & Goeminne, 2014), where measurements are made over a defined period of time, such as 6-minute averaging. The 6-minute averaging time comes from the time constant for the thermoregulation of the body (ICNIRP, 1998). FM and TV broadcast transmitters and GSM and UMTS base stations are important sources of RF EMF in terms of exposure levels in the environment. In general, FM and TV broadcast transmitters were installed far from city centers in the past, but today they are installed within our communities. In 1980, Tell and Mantiply published a study of RF fields measured at 486 sites across 15 major metropolitan areas in the USA, which at that time accounted for nearly 20 % of the nation's population of 226.5 million people (Tell & Mantiply, 1980). The measurements covered the low VHF TV (54-88 MHz), FM radio (88-108 MHz), high VHF TV (174-216 MHz) and UHF TV (470-806 MHz) bands. They reported a median wideband time-averaged field level of 0.005 mW cm-2, with an estimated 1 % of the population exposed to fields with power densities of 1 mW cm-2. In addition, the fields from FM radio broadcasts were clearly dominant over the fields from the other three bands. Typically, for high-power broadcast transmitters, the effective radiated power (ERP) was 250 kW per channel for FM radio and 500 kW per channel for television, with the antennas mounted towards the top of a 300 m mast. For medium-power broadcast and telecommunications transmitters, the transmitted powers were in the region of 100-200 W per channel. The exposure of the general public was very small relative to that of people living in the immediate neighborhood of medium- and short-wave stations (Jokela, Puranen, & Gandhi, 1994). People working on FM and TV towers, near to high-power FM or TV broadcast antennas, were exposed to high levels in the range of 50 to 800 MHz (Jokela & Puranen, Occupational RF exposures, 1999) and (Hansson-Mild, 1981). Other studies have been carried out in the domain of exposure field measurement by (Viel, et al., 2009a) and
(Viel, Cardis, Moissonnier, R., & Hours, 2009b), together with studies of the possible consequences of human exposure to such fields (Hossmann & Hermann, 2003). A study of ambient RF fields conducted mostly outdoors in Gothenburg, Sweden reported average wideband power densities of between 0.04 and 0.05 mW cm-2 (Ahlbom, Feychting, Hamnerius, & Hillert, 2012). European studies reported that, in a five-country analysis, total exposures were generally lowest in the urban residential environment (range of means 8.5E-03 to 1.45E-02 µW cm-2). The results for a set of African countries were qualitatively and quantitatively similar to the results of RF measurement surveys conducted in the Americas, Europe and Asia (Rowley & Joyner, 2012), where the global weighted average was 0.073 mW cm-2; the mean for the selected South African data set was 0.016 mW cm-2. One of the conclusions drawn was that the signal strengths for the cellular bands were stable both over time and across countries. Even though the introduction of 3G and 4G services is increasing, the field levels are log-normally distributed, and with more data points the FM signal strengths remain relatively constant. In addition to these findings, several studies have reported that residential (and outdoor) fields from broadcast and cell downlink sources are lower in rural areas compared with fields in urban and suburban areas (Breckenkamp, et al., 2012), (Viel, et al., 2009a) and (Joseph, Vermeeren, Verloock, Heredia, & Martens, 2008). Cancer has been the primary concern among populations in the immediate vicinity of broadcast transmitters. Scientific evidence points toward heating effects from high levels of exposure, and most safety limits are based on them. Among these are the exposure limits proposed by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) (ICNIRP, 1998) and the Institute of Electrical and Electronics Engineers (IEEE) (IEEE, 2005) to prevent such effects (WHO, 2006). There is little scientific evidence on the risks associated with long-term exposure to low levels of RF EMF (ICNIRP, 1996). In 2012, the International Agency for Research on Cancer classified RF EMF as possibly carcinogenic (Group 2B), based on studies of mobile phone usage (IARC, 2012). Mobile phone usage has increased tremendously, with about 6.8 billion subscriptions by the end of 2013 (ITU, 2013) and nearly 7 billion cell phone subscribers in 2014 (ITU). Statistics show that as of May 2008 the number of mobile phone users in Ghana was well over 8 million, but by the end of January 2016 it had risen to 26.09 million, according to the latest figures from the National Communications Authority (NCA). Urban areas are most affected by the proliferation of Base Transceiver Stations (BTSs), and their closeness to homes and schools is raising concern about health risks that might be associated with them (Khurana, et al., 2010). Numerous studies have demonstrated that a very significant part of human exposure in the radiofrequency (RF) band is due to mobile communications radiation (Bornkessel, Schubert, Wuschek, & Schmidt, 2007), (Genc, Bayrak, & Yaldiz, 2010), (Joseph, Verloock, Goeminne, Vermeeren, & Martens, 2010), (Kim & Park, 2010), (Rufo, Paniagua, Jimenez, & Antolín, 2011), (Joseph, Verloock, Goeminne, Vermeeren, & Martens, 2012a), (Joseph, Verloock, Goeminne, Vermeeren, & Martens, 2012b), (Rowley & Joyner, 2012).
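Exposure surveys like those cited above typically record the RMS electric field strength and convert it to an equivalent plane-wave power density for comparison with guideline limits. The sketch below shows that conversion using the standard far-field relation S = E^2/Z0, with Z0 of about 377 ohms, and compares the result against the ICNIRP (1998) general-public reference level, taken here as f/200 W/m^2 in the 400-2000 MHz range; this relation, the reference-level formula and the example field value are standard assumptions introduced for illustration rather than numbers drawn from the text.

Z0 = 377.0  # approximate impedance of free space, ohms

def power_density_w_m2(e_rms_v_per_m):
    """Equivalent plane-wave power density (W/m^2) from an RMS E-field; far-field only."""
    return e_rms_v_per_m ** 2 / Z0

def icnirp_public_reference_w_m2(f_mhz):
    """ICNIRP (1998) general-public power density reference level for 400-2000 MHz."""
    return f_mhz / 200.0

# Hypothetical spot measurement of 1.5 V/m near a 947.5 MHz GSM downlink channel.
s = power_density_w_m2(1.5)                   # about 0.006 W/m^2
limit = icnirp_public_reference_w_m2(947.5)   # about 4.74 W/m^2
print(round(s, 4), round(100.0 * s / limit, 2), "% of the reference level")

Survey results of the kind quoted above are usually only small fractions of such reference levels, which is why they are reported in microwatts or milliwatts per square centimetre.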
The maximum output powers of a radio channel used in GSM and UMTS networks are 10-40 W and 20-60 W, respectively (Koprivica, Neskovic, Neskovic, & Paunovic, 2014). It has been shown t

Friday, October 25, 2019

Exploring Dyslexia and its Implications

Exploring Dyslexia and its Implications Introduction Imagine yourself in a crowded room. You are sitting at a table with other people your age, reading a book out loud, and it is your turn. You look up at the other people, terrified because nothing is coming out of your mouth. You can't manage to force even one word out because you don't know how to read. Now, imagine yourself as a teenager. This is what it was like for fourteen-year-old Anita, a dyslexic. Life was horrible for her. She said that "Dyslexia makes you an outcast, and people think you are dumb...It's like racism; people are just prejudiced" (McConville, 2000). Feeling useless, she got herself into a lot of trouble: drinking, smoking and two attempts at suicide. Dyslexia seems like such a minimal disorder, but what is it really, what causes it, and how can it be treated? What is dyslexia? Dyslexia is a reading disorder. It is something that affects not only the reader's life, but also the lives of everyone around him or her. It is a very random condition, but it is four times more common in males than in females. Race, culture, and society are not considered when dyslexia decides whom it will attack, but when it does, it causes symptoms that range from difficulty in spelling to lack of self-confidence, and from difficulty in pronunciation to a bad short-term memory (Bee, 2000). There are many theories of how dyslexia is caused. One is that it is inherited. Another is the lack of certain nutrients obtained by eating some foods. Whatever the cause, it is still a serious condition that needs to be treated. Symptoms If someone were to have dyslexia, how would it be recognized? Here are some common characteristics of dyslexia. 1. There are no hearin... ...f the research in the causes and many treatments of dyslexia. References Bee, P. (2000, June 27). Early warning system to detect dyslexia. The Times (London). pylons, and yet another miracle cure for dyslexia. The Guardian (London), pp. 15. Connor, S. (2002, January 4). Cause of dyslexia narrowed down to single chromosome. The Independent (London), pp. 5. Ellis, R. (2002, January 21). Lessons leart [sic] in treating dyslexia. Courier Mail, pp. 6. Fraser, L. (2001, April 15). Fish oils 'help to improve dyslexics' concentration'. Sunday Telegraph, pp. 10. Hagin, R. & Silver, A. A. (1993). "Dyslexia". Collier's Encyclopedia. Kaluger, G. & Kolson, C. J. (1978). Reading and learning disabilities. Ohio: Bell and Howell Company. McConville, B. (2000, March 21). Hope for dyslexics. The Times (London). (1999, September 11). Dyslexia gene. The Lancet.

Thursday, October 24, 2019

Examples in “The Brutal Business of Boxing”

â€Å"The Brutal Business of Boxing† written by author John Head, uses all three forms of examples: the extended example, the sentence length example, and the single example. The extended example lies in the form of the entire essay. The entire essay is a description of one person is specific, where physical and personality characteristics are developed paragraph by paragraph. Due to the fact that the descriptions are centered on one person, this is an extended example.However, sentence length examples are included in every paragraph. Every paragraph in the essay is quite short and centers on providing information around the central topic of the essay, Muhammad Ali. The author uses single examples most often in the essay. Some examples of this are: â€Å"confident, articulate, charismatic† (par. 2); â€Å"lightning quick jabs† (par. 4); and â€Å"slow shuffle† (par. 6). These single examples highlight the character traits that the author would like the rea der to envision.â€Å"The Brutal Business of Boxing† uses all three types of examples throughout the essay to develop it. The entire essay is an extended example; each paragraph contains sentence level examples, and each sentence contains colorful single examples. The essay is a wonderful and multi leveled model of an example essay. Reference Head, John. â€Å"The Brutal Business of Boxing. † Found in Wordsmith:A Guide to Writing. 3rd ed. by Pamela Arlov. Prentice Hall: NJ. 2006. p. 589-90.

Tuesday, October 22, 2019

History of Film: Film Distribution

There were many changes in the marketing and distribution of films from the end of the silent period to the modern digital period. A studio system existed at the end of the silent period and began to collapse with a 1948 court ruling. During this same time a sales era of marketing existed. After the Second World War the sales era was replaced with a new way of thinking, and sales and marketing were no longer synonymous. Marketing after World War II meant finding out what consumers' needs and wants were and providing them with products to satisfy those needs and wants. Globalization began to occur rapidly in the 1990s, and expansion into foreign markets meant marketers had to concentrate on those markets more than they had in the past. The digital period also meant changes to first runs and second runs of films. The studio system was a means of film production and distribution dominant in Hollywood from the early 1920s through the early 1950s. The term studio system refers to the practice of large motion picture studios (a) producing movies primarily on their own filmmaking lots with creative personnel under often long-term contract and (b) pursuing vertical integration through ownership or effective control of distributors and movie theaters, guaranteeing additional sales of films through manipulative booking techniques. A 1948 Supreme Court ruling against those distribution and exhibition practices hastened the end of the studio system. In 1954, the last of the operational links between a major production studio and a theater chain was broken, and the era of the studio system was officially dead. The period lasted from the introduction of sound to the court ruling and the beginning of the studio breakups, about 1927 to 1954, when the studios no longer participated in the theatre business. During the Golden Age, eight companies comprised the so-called major studios responsible for the studio system. Of these eight, five were fully integrated conglomerates, combining ownership of a production studio, a distribution division, and a substantial theater chain, and contracting with performers and filmmaking personnel: Fox (later 20th Century-Fox), Loew's Incorporated (owner of America's largest theater circuit and parent company to Metro-Goldwyn-Mayer), Paramount Pictures, RKO (Radio-Keith-Orpheum), and Warner Bros. Two majors, Universal Pictures and Columbia Pictures, were similarly organized, though they never owned more than small theater circuits. The eighth of the Golden Age majors, United Artists, owned a few theaters and had access to two production facilities owned by members of its controlling partnership group, but it functioned primarily as a backer-distributor, loaning money to independent producers and releasing their films. The ranking of the Big Five in terms of profitability (closely related to market share) was largely consistent during the Golden Age: MGM was number one eleven years running, 1931 to 1941, with the exception of 1932, when all the companies but MGM lost money. One of the techniques used to support the studio system was block booking, a system of selling multiple films to a theater as a unit. Such a unit, frequently twenty films, typically included no more than a few quality movies, the rest perceived as low-grade filler to bolster the studio's finances. On May 4, 1948, in a federal antitrust suit known as the Paramount case but brought against the entire Big Five, the U.S.
Supreme Court specifically outlawed block booking. Holding that the conglomerates were indeed in violation of antitrust law, the justices refrained from making a final decision as to how that fault should be remedied, but the case was sent back to the lower court from which it had come with language suggesting that divorcement, the complete separation of exhibition interests from producer-distributor operations, was the answer. The Big Five, though, seemed united in their determination to fight on and drag out legal proceedings for years, as they had already proven adept at doing; after all, the Paramount suit had originally been filed on July 20, 1938. The sales era is so called because many companies' main priority was to move their products out of the factory using a variety of selling techniques. The sales era lasted from the early 1920s to the end of World War II. Compare this to the cinema, and the sales era and the studio system era align closely in time period. During the sales era, companies felt that they could enhance their sales by using a variety of promotional techniques designed to inform potential customers about their products and/or persuade them to buy them. This type of thinking was initiated by the economic climate of the time. The selling concept related to markets that already existed, where globalization had not yet occurred and creating profit pools had not even been thought of yet. However, October 29, 1929, "Black Tuesday," marked the beginning of the Great Depression. This was the single most devastating financial day in the history of the New York Stock Exchange. Within the first few hours that the stock market was open, prices fell so far as to wipe out all the gains that had been made in the previous year. Since the stock market was viewed as the chief indicator of the American economy, public confidence was shattered. Between October 29 and November 13 (when stock prices hit their lowest point), more than $30 billion disappeared from the American economy, an amount comparable to the total the United States had spent on its involvement in World War I (Schultz, 1999). The amount of disposable and discretionary income that consumers had to spend on necessities and luxuries also decreased dramatically as the unemployment rate approached 25 percent. Companies found that they could no longer sell all the products that they produced, even though prices had been lowered via mass production. Firms now had to get rid of their excess products in order to convert those products into cash. In order to get rid of products, many firms developed sales forces and relied on personal selling, advertising signs, and singing commercials on the radio to "move" the product. Theodore Levitt (1960), a prominent marketing scholar, has noted that these firms were not necessarily concerned with satisfying the customer, but rather with selling the product. This sales orientation dominated business practice through the 1930s until World War II, when most firms' manufacturing facilities were adapted to making machinery and equipment for the war effort. Of course, the war dramatically changed the environment within which business was conducted. This also changed companies' philosophies of doing business. The marketing concept era reflected a crucial change in management philosophy that can be linked to the shift from a seller's market, where there were more buyers than goods and services, to a buyer's market, where there were more goods and services than people willing to buy them.
When World War II ended, factories stopped manufacturing war supplies and started turning out consumer products again, an activity that had practically stopped during the war. The relationship marketing era follows the marketing concept era; however, most firms are still practicing the marketing concept. The advent of a strong buyer's market created the need for consumer orientation by businesses. Companies had to market goods and services, not just produce them and then sell them. This realization has been identified as the emergence of the marketing concept. Marketing would no longer be regarded as a supplemental activity performed after completion of the production process. Instead, the marketer would play a leading role in product planning. Marketing and selling would no longer be synonymous terms. Today's fully developed marketing concept is a companywide consumer orientation with the objective of achieving long-run success. All facets and all levels of management of the organization must contribute first to assessing and then to satisfying customer wants and needs. Even during tough economic times, when companies tend to emphasize cutting costs and boosting revenues, the marketing concept focuses on the objective of achieving long-run success instead of short-term profits. The firm's survival and growth are built into the marketing concept; companywide consumer orientation should lead to greater long-run profits. Gone With the Wind, released December 15, 1939, was no doubt a cash cow. By the film's eighth week it had already earned $5,567,000, at which point it began to see a profit. By June 1, 1940 the film had already reached its year-and-a-half goal of over $20 million, a very sizeable profit for the producers of the film. It did, however, require a large investment from its producer David O. Selznick, of almost $4 million in production costs and another million in marketing expenses. Adjusted for inflation, that would have been nearly $50 million in production costs alone. David Selznick must have known his film was going to be a big hit. He paid $50,000 for the rights to a New York Times bestselling book; if the film was going to do as well as the book, he knew he was going to see a large profit from his cash cow. It was not common to have a worldwide release during the studio system era as it is today. Typically films would be released in their native country first, and then, a few months later, in countries speaking the same language as the country of origin. In North America the first run of a film refers to the set of theatres it would play in. A first-run film would only play in the major cities, in the downtown areas, in the "de luxe" first-run film theatres. These theatres would seat anywhere between 1,500 and 5,000 people in one room with one screen. This was, of course, before the days of digitization, when people can view the film on DVD, and before the days of multiplexes. First-run films carried a higher ticket premium than second or subsequent runs of the film. Gone With the Wind is said to have charged $0.50 for a matinee viewing and up to $2.20 at Manhattan's Astor in its first run. Compared to the $0.23 average ticket price in that year, the price was very high. Gone With the Wind's first run lasted two and a half years and was seen by 203 million people. It played in 156 theatres in 150 cities domestically.
Gone With the Wind was eventually released around the world. Box office revenue for a foreign release is much harder to calculate. Gone With the Wind made $30 million in domestic revenue and $19 million in foreign revenue in its first run. Adjusted for inflation, that amount would total about $755,821,500.00 today (Dollar Times). Most of Gone With the Wind's revenue came from the domestic market, about 63.3 percent. Enter 2009. Many things have changed. Firstly, a new marketing era is now in place. The studio system has collapsed. Globalization is not a competitive advantage, as it was in the studio system period; it is a competitive necessity. Films that do not compete in the global market do not compete at all. First runs last only weeks, or months if the film is a really big hit. First runs are not only in the downtown theatres but also in the neighborhood theatres, and now in the multiplex theatres. A second run, in today's language, is when the film hits the new-release section of the rental shop. In its third month Avatar is a big hit. At the time of this writing it is still playing in its first run. How does it compare to Gone With the Wind? Avatar is currently being seen in 3,452 theatres around the world. Estimated to cost $280 million to make, Avatar is much more expensive, even after adjusting for inflation, than Gone With the Wind. Currently its domestic box office revenue is $710,842,764, and its foreign box office revenue amounts to $1,839,000,000. This is proof of the globalization of the cinema industry. The majority of the box office revenue no longer comes from the domestic market but rather from the foreign market. Avatar is not only seen on the traditional 2D screens that Gone With the Wind was, but also on 3D screens and IMAX screens, allowing for price differences between the formats in which the film is viewed. It will be interesting to see how Avatar does when it ends its first run and enters its second run. A film that has ended both its first and second run is much more accurate to compare with Gone With the Wind, since that film would still have been showing at neighborhood theatres two and a half years after it was first released. Titanic was released in 1997 and has ended both its first and second run. How do these two films compare? Titanic's production budget was $200 million compared to Gone With the Wind's inflation-adjusted budget of $50 million. Total gross revenue for Titanic has reached $1,843,201,268, while Gone With the Wind has reached $400,176,459. Adjusted for inflation, Titanic would have reached nearly $3 billion in total gross revenue, at $2,996,049,690; if Gone With the Wind's total gross revenue were adjusted for inflation, it would reach $3,099,918,548. Total gross revenue includes first run, second run, and all other revenue that comes from the film, including TV rights, rentals, and VHS and DVD sales. It can be concluded that globalization in the film industry is more important now than it was during the studio system period. The way in which films are exhibited today is very different from the studio period. First-run theatres do not exist in the same way they did during the studio system period. Second runs of films used to be in theatres; now they are a way in which the audience may view the film on their own terms, following the marketing concept idea. Consumers choose the way in which they consume products.
The industry adapts to this and finds new ways to market their ideas and invents new products for the consumer to consume.

Works Cited

"'Avatar' Passes 'Titanic's' Overseas Record." The Hollywood Reporter, 24 Jan. 2010. Web.
Boone, Louis E., and David L. Kurtz. Contemporary Marketing. [Mason, Ohio]: Thomson South-Western, 2006. Print.
Box Office, Associated Publications. "What If the Government Wins Its Suit?" Editorial. Boxoffice 1 June 1940. Print.
Crane, Fredrick G., Roger A. Kerin, Steven W. Hartley, Eric N. Berkowitz, and William Rudelius. Marketing, 6th Canadian Edition. Toronto: McGraw-Hill Ryerson, 2006. Print.
"Frankly, My Dear: 'Gone with the Wind' Revisited." Yale University Press, 9 Feb. 2009. Web.
HBrothers. "Inflation Calculator: The Changing Value of a Dollar." Web.
IMDb.com, Inc. "Avatar, Titanic, Gone With the Wind." IMDb.com, Inc., 4 Mar. 2010. Web.
King, Clyde Lyndon, Frank A. Tichenor, and Gordon S. Watkins. The Motion Picture in Its Economic and Social Aspects. New York: Arno, 1970. Print.
Keegan, Rebecca. "How Much Did Avatar Really Cost?" Vanity Fair 22 Dec. 2009: 112. Print.
Shindler, Colin. Hollywood in Crisis: Cinema and American Society, 1929-1939. London: Routledge, 1996. Print.
TIME. "SHOW BUSINESS: Record Wind." TIME

Melting Point Vs. Freezing Point

Melting Point Vs. Freezing Point

You may think the melting point and freezing point of a substance occur at the same temperature. Sometimes they do, but sometimes they don't. The melting point of a solid is the temperature at which the vapor pressure of the liquid phase and the solid phase are equal and at equilibrium. If you increase the temperature, the solid will melt. If you decrease the temperature of a liquid past the same temperature, it may or may not freeze! This is called supercooling, and it occurs with many substances, including water. Unless there is a nucleus for crystallization, you can cool water well below its melting point and it won't turn to ice (freeze). You can demonstrate this effect by cooling very pure water in a freezer in a smooth container to as low as −42 degrees Celsius. Then if you disturb the water (shake it, pour it, or touch it), it will turn to ice as you watch. The freezing point of water and other liquids may be the same temperature as the melting point. It won't be higher, but it could easily be lower.

Sunday, October 20, 2019

Cameron Greer Essays - United States, Free Essays, Term Papers

Cameron Greer Essays - United States, Free Essays, Term Papers Cameron Greer 03 Oct 2016 Intro to Political Science Professor Baptist Presidential Debate 26 Sep 2016: Analysis Throughout the first presidential debate this year there was a lot of information covered. There were also a lot of questions avoided. To me, Mr. Trump was not prepared for the debate. He tended to answer around a lot of questions that asked for specific policy. The ones that were most obvious were how he planned to stop police shootings of African-Americans, and how to improve the black community. He solely responded by stating we need "Law and Order", which is an extremely vague policy that provides little to no help for the black community. When Mr. Trump was asked about how he will increase jobs, as well as about his entire economic plan, he stated some policies that were questionable. First, he said that he would stop companies from leaving the U.S. He believes that if we tax companies hard when they import their goods into the United States they will not want to leave, thus creating more jobs for Americans. In my opinion, it is a good idea to tax companies when they import goods into the U.S., so this policy is not a bad one. Trump then mentioned that he wants even more tax cuts for the wealthy so that they can expand their companies and provide more jobs. Now, I do not believe this policy is smart, because greed is a factor and the company will most likely keep the money at the top. Over many years, it has been shown that the majority of the money stays within the heads of the companies and not with the working and middle class. To move on to the next point, Mr. Trump was asked about cyber security and our national security as a whole. He stated that, as far as cyber security goes, we should be better than anyone else at technology and that we need to use our technology to take threats out. I am not sure what he meant by this, but it is just another slew of vague statements made by Mr. Trump during the debate. He also talked about how we need to use NATO to take out ISIS and that other countries in NATO need to pay us. I agree that NATO can be used as a coalition force to take out ISIS, and that it would be much easier this way. As far as other countries paying us, at this point in time I don't believe it is that big of a deal. Lastly, he mentioned the Iran deal and how bad it was. I personally believe it was a good deal because, although it was temporary, something needed to be done. Mrs. Clinton to me was very poised and prepared for all questions and rebuttals from Mr. Trump. When asked about her policy for creating jobs and improving the economy she was more detailed in her plan. She started by saying that we need to have an economy that works for everyone, not just the upper class, and that we need to focus on infrastructure, energy, small business and raising the minimum wage the most to create more jobs. As it pertains to the economy, she has a plan that closes corporate loopholes and focuses on making an investment in the economy where everyone can and will have the opportunity to grow. I generally agreed with Mrs. Clinton's policies about the economy. She really talked about improving and placing a re-emphasis on the middle class. Now, when she was asked about cyber security and our national security she had similar answers to Mr. Trump. She also stated that we should have a greater capacity online to defeat ISIS and other hackers around the world attacking our databases.
Also referring to ISIS, she believes we should use airstrikes and put a focus on taking out their leaders. I could not agree more with Mrs. Clinton that we need to take out ISIS leadership and make it our priority. As it pertains to NATO, she agrees with Mr. Trump that they should put more focus on terror. However, she

Saturday, October 19, 2019

Financial Performance of Apple Inc from 2002 to 2011 Research Paper

Financial Performance of Apple Inc from 2002 to 2011 - Research Paper Example The current ratio position of the company, which is "greater than 1," offers a high leeway of liquidity (Kennon p 1). The Quick Ratio of Apple, which is 1.35, indicates their liquidity in terms of readily available finance. Similarly, the leverage ratio is 1.52, which indicates a sound debt and equity position, where investors can expect high returns. Moreover, a proportionate increase occurred in the Earnings Per Share. The EPS has risen from $0.09 to $27.68 during the period between 2002 and 2011. This amounts to an appreciation of roughly three hundred times over the ten-year period. Thus, the company stands to attract a lot of investors, which, in turn, will further escalate the price of its shares and add further value to the company. The price/sales ratio was 4.23, compared to companies such as HP, Google, and Samsung. Apple, a leading manufacturer of computer hardware and software, iPods, mobile phones and other gadgets, was founded by Steve Jobs and Steve Wozniak, with the former as the CEO. Tim Cook was later appointed as the new CEO. From 2011, Dr. Arthur D Levinson was the director of Apple. The SVP heads are Mr. Jeffery E Williams (SVP Operations), Mr. Peter Oppenheimer (SVP and CFO), Mr. Guy Tribble (VP Software Technology), Mr. John Browett (SVP Retail), Mr. Eduardo H. Cue (SVP Internet Software and Services), Mr. Craig Federighi (SVP Software Engineering), Mr. Scott Forstall (SVP iOS Software) and Mr. Jonathan Ive (SVP Industrial Design).
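As a quick arithmetic check on the EPS growth quoted above, using only the two figures given in the excerpt, the overall multiple and the implied compound annual growth rate can be worked out as follows; a small Python sketch:

```python
# Check of the EPS growth quoted above, using only the figures in the excerpt.
eps_2002, eps_2011 = 0.09, 27.68
years = 2011 - 2002

overall_multiple = eps_2011 / eps_2002               # roughly 308x over the decade
cagr = (eps_2011 / eps_2002) ** (1 / years) - 1      # roughly 89% per year, compounded

print(f"Overall multiple: {overall_multiple:.0f}x, CAGR: {cagr:.1%}")
```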

Friday, October 18, 2019

The benefits of mechanical improvements in cardiopulmonary bypass Thesis Proposal

The benefits of mechanical improvements in cardiopulmonary bypass - Thesis Proposal Example This allows the cardiac operation to be performed in a less chaotic and stationary environment, thus reducing chances of error. During the procedure, the blood is gravity-drained to a reservoir; it is then oxygenated and returned to the arterial system via a pump. One of the main concerns for physicians is the damage that is sustained by the blood and blood cells through friction as it is being propelled from the pump. The commonly used roller pumps utilize a basic mechanism with tubing lined along a raceway, with rollers massaging the tubing to propel the blood forward. This style of pump requires the clinician to keep the occlusion balanced at a level that ensures adequate forward blood flow with minimal damage to the fragile blood cells inside the tubing. Roller pumps have been found to cause shear stress in blood that can lead to haemolysis, release of vasoactive substances and spallation, which is a breakdown of the tubing wall. Haemolysis and the corresponding increase in plasma free hemoglobin are severely detrimental to the patient outcome and prolong the patients' recovery following cardiac surgery utilizing cardiopulmonary bypass. However, there has been some decrease in the occurrence of these risks through the utilization of centrifugal pumps that were specifically developed to eliminate intermittent tubing occlusion. Research has been done to provide evidence that the damage to red blood cells, platelets, and plasma proteins is minimized with the use of centrifugal pumps as compared to the common roller pumps.

The Themes and Purposes of Art Essay Example | Topics and Well Written Essays - 500 words

The Themes and Purposes of Art - Essay Example However, as to the purpose of a work of art, I have often been confused whether art is for art's sake or art is for man's sake. Now, the online visit to the National Gallery of Art (NGA) helped me a lot to resolve much of the conundrum of the purpose of art. I found that whereas Benton's visionary appeals value a human being's sake, or art's aesthetic purpose, its static dynamism is for art's sake, which is for eyes that are more skilled. After reviewing Thomas Hart Benton's Trail Riders, I realized that his iconography is so simple that its naturalistic majesty can in no way be exaggerated. Amid the three-dimensional landscape of heartland America, the iconographic presence of the horse riders, who are seen from a remote panorama, conveys the static dynamism of his theme. The vantage point of the artist is such that it turns the remote objects and horse riders almost into abstraction with the use of contours in implied lines. Though the use of light and shadow clearly contributes to the realism of Benton's work, the glow of the light surpasses the reality of its atmosphere and adds to its surrealism to a great extent. It is the surrealism that evokes motion of spirit in the minds of the viewers. Remoteness is also a prevailing theme and perspective of this piece of Benton's artwork. It has thoroughly been maintained through the manipulation of shapes of the contents within the works. Even the nearest objects, such as the flowers, the bush, and the stones, do not have individual clarity. Remoteness as well as the zenith of the mountain contributes to the silence of the artwork, in which the motion of the riders refers to the fourth dimension, Time. Benton's work can also be interpreted from an atmospheric perspective. From this perspective, a human being's kinetic smallness has been contrasted with the vastness of the universe. Though the overlapping of the

Why has United Nations been more successful than League of Nations Essay

Why has United Nations been more successful than League of Nations - Essay Example Any comparison between the two international bodies, the League of Nations and the United Nations, can be done only by tracing their origin. In order to amply answer the thesis question, one needs to analyze in detail the prevailing world situation when these two bodies were formed. One needs to appreciate that both the bodies were formed in the aftermath of World Wars that ravaged a large part of the developed world, when nations tired of war were thinking of some permanent solution to banish war forever from the face of the earth. The nations thought of creating some international supervisory body that would mediate and diffuse tensions that might brew between nations and ensure that such tensions never spill over into full-fledged armed conflict. The bane of war was very much realized by all the combatants, what with the European economy in tatters after the savagery and mindless destruction that was unleashed during the two World Wars. It seemed that all parties concerned had come to their senses and had realized the hard way that war can never be a solution; one war inevitably leads to other wars more savage and more ferocious than the previous one. The stage was set, as one would assume, for the creation of one such international body at the end of the First World War. This body would, or at least those who took a leading part in its formation thought so, be an international mediator that would diffuse the glowing embers of a potential armed confrontation before it turned into an uncontrollable inferno (Knock, 1995). Inception of League of Nations By mid-December 1918 World War I was practically over, the shooting part, that is, and USS George Washington was approaching the French coastline with US President Woodrow Wilson on board. The President was buoyant with notions of setting up a world order that would usher in everlasting peace in the world. The idea and mission was surely a laudable one, but little did the President know of the pitfalls that lay ahead in implementing his grandiose and eminently lofty plans that would prevent forever any war from erupting into a frenzy of genocide and destruction. This effort of his earned him the Nobel Prize for peace in 1919, but Wilson was perhaps not aware that his allies were determined that Germany atone for her sins by paying heavily and were in no mood to forgive and forget and start afresh. But why blame only the European nations? Many Americans also feared that the birth of any multinational body like the League of Nations would take on the role of a global monitor and prevent member nations from pursuing their independent foreign policies. This strain of isolationism had pervaded the foreign policy relations of the United States right from its arrival on the international scene as a power of consequence. This trait perhaps had a direct link with its geographical location, being bound on either side by oceans and thus not having to share boundaries with equally powerful nations as most European countries had to. Canada on the north and Mexico on the south were so inferior in military and economic strength as compared to itself that the United States had never faced the predicament of dealing with a prickly and potentially dangerous neighbor. Hence, the general feeling among the American public was not favorable towards the formation of an international body.
They, instead, felt their independence in charting their foreign policy course to be much more important than engaging in some sort of understanding and compromise with fellow developed countries so that a world war would never be repeated. The League of Nations thus started its journey amid many misgivings and mutual distrust, and was doomed perhaps even before it was formally brought into being (Lerner, 2004).

Thursday, October 17, 2019

Source evaluation Essay Example | Topics and Well Written Essays - 750 words - 4

Source evaluation - Essay Example study were subjected to writing as well as reading assessments, and the parents were given a set of questionnaires to fill in information regarding their views on video games. The parents were to do some analysis as well as to take up roles in monitoring the study behavior of their children, so as to see if what they had been instructed to undertake was actually true, over a period of four months (Weis, Robert, and Brittany C. Cerankosky, 12-17). They then filled in another set of questionnaires at the end of the four months, and those results were heavily relied upon by the two psychologists to come up with their final analysis and conclusion. The study revealed that young boys who did not own video games put their parents under pressure to buy them such items. Upon receiving them, these become their main source of addiction. They noted that boys who had acquired the system began to register low academic performance in school. Their research proved that video games were not appropriate for school-going boys, who could no longer concentrate on their studies, thus having lower reading and writing scores. Video games have caused a displacement of after-school activities such as artistic games that had a lot of positive impact on the academic performance of students. This book is of great importance to this research owing to the fact that it expands on the role of video games in academic performance among children. The article was published in Sage on 18th February 2010, which makes it very relevant for the purpose of this study. They developed an understanding of the correlation between the playing of violent video games and violent antisocial behavior in society. The book considers how playing video games may lead to children and the youth acquiring violent attitudes. The relationship between the two variables is most evident among children who spend a lot of time playing the games. There are fewer factors to cause alarm in the video gaming industry provided that adequate steps

New London Airport Research Proposal Example | Topics and Well Written Essays - 2000 words

New London Airport - Research Proposal Example In the UK, the latest government forecasts predict a 239% change on 1995 levels by 2015 in terminal passenger numbers. It shows a requirement equivalent to an extra 3-4 airports the size of Heathrow. The country needs to follow the sustainable development policy of its own and of the EU. The required framework of aviation should reduce impact, increase growth and protect the environment (DETR 1997 as cited in Whitelegg 2000). A few recommendations suggested are: putting an environment charge based on emissions, ending all subsidies and tax exemptions, and more stringent noise and emission standards (Whitelegg 2000). Environmental data and criteria: The London mayor is particularly optimistic about environmental safety by moving the airport into the Thames estuary. It would cut noise since planes could approach the airport over the North Sea. Moreover, the Heathrow expansion would put pressure on dense west London while there is an alternative to the east. The noise has been the complaint of many residents, and the levels of global-warming emission gases have gone beyond EU and Environment Agency norms (Katz 2008). Ben Stewart of Greenpeace argues that an increase in the number of flights from a four-runway airport would negate the environmental benefits. He feels that new runways are new runways and we should rather think about bringing emissions down by funding railways and other low-carbon-emission transport (Murray 2008). Unite, Britain's biggest union, feels that the Thames is not the best place due to tidal and storm surges, which can increase the sea level by several metres. The noise problem would not be solved when the airport is moved; it will only shift to another area. The resort towns of North Kent and South Essex would suffer the noise pollution instead of the population of west London. The area is also a bird sanctuary, raising worries of bird strikes causing aircraft engines and windscreens to fail (PR News, 2008). The environmental data must clearly indicate the levels of CO, SO2, NOX, O3, Particulate Matter and Lead generated and their effects on coastal resources, fish and wildlife, and wading birds. The scientific study must also provide data on light emission and visual impacts on people around the airport (Halcrow Group 2003). In the light of these suggestions and protests, the data needed by the minister for the environment are: NOISE: Noise damages health and quality of life. It can cause sleep disturbances, psychological and mental disturbances, annoyance, and can make one hearing impaired (WHO 1993 as cited in Whitelegg 2000). The idea of a new airport in the Thames estuary is attractive to some planners because planes could fly over the North Sea, alleviating concerns about noise pollution and allowing it to operate 24 hours a day (Katz 2008). How many are already living in the 57-decibel or higher (for a restful life it is up to 55 dB) noise contour, and how many more would be added in coming years? This aspect seems in favour of the Thames estuary airport, as zero population would be added by 2015. While expansion of Heathrow may add another 107,000, Stansted 3,000 and Gatwick a further 9,000, who would be living in this noise contour if further expansion of the latter three airports is allowed. Expansion of Heathr

Wednesday, October 16, 2019


Tuesday, October 15, 2019


The Impact of Pre-Cooling as an Intervention Strategy to Minimize Cardiovascular system Essay Example for Free

The Impact of Pre-Cooling as an Intervention Strategy to Minimize Cardiovascular system Essay The aim of this report was to investigate whether the utilization of pre-cooling (a cooling vest) prior to a 10,000 m road-race run within a hot and humid environment would result in improved performance. The report also aimed to examine any performance-related effects and their underlying physiological mechanisms. Fourteen (n = 14) well-trained adult runners participated in two 10,000 m time trials, spaced 72 hours apart. Ambient conditions of the control and experimental conditions were T = 32.5 °C, rel. humidity = 65% and T = 32.8 °C, rel. humidity = 63% respectively. The procedure consisted of a 30-minute warm-up (20 minutes of steady-state running at RPE 13, 10 minutes of individualized stretching activity). During the warm-up, the control condition required participants to wear a normal tee shirt, with the experimental condition requiring participants to wear a commercially available gel-based cooling vest. At the conclusion of the 30-minute warm-up the tee shirt or ice vest was replaced with the race singlet, before commencing the 10,000 m time trial. Time, pre and post body mass, heart rate, skin temperature and core temperature were all variables measured and recorded. Participants were able to complete the 10,000 m road run in less time following the pre-cooling condition, suggesting that pre-cooling as an intervention strategy improved endurance performance. Results indicate this occurrence was due to significantly lower starting core and skin temperatures and a reduced starting heart rate, as well as an overall lower sweat rate. These factors allowed for a greater capacity of heat storage, minimizing thermoregulatory and cardiovascular strain and therefore allowing the body to operate at a higher level of performance before reaching the critical limiting temperature.

Results

Figure 1 displays the difference between time trials obtained in the control and pre-cooling conditions. The pre-cool time trial was significantly shorter than the control time trial (p < 0.05). The difference between baseline and post body mass (BM) was recorded to calculate sweat rate (L/hr). Figure 2 displays the difference in sweat rate between the control and pre-cool conditions. The control sweat rate was significantly higher than the sweat rate recorded for the pre-cool condition. Figure 3 depicts the mean heart rates and standard deviations for both the control and pre-cool conditions. HR was recorded and displayed over three phases of the time trial (start, mid and end). Statistical analysis determined that there was a significant difference in HR between the three phases of the time trial (p < 0.05). Statistical significance also occurred between the control start HR and the pre-cool start HR, with the control start HR 5.10% greater than the pre-cool start HR. Skin temperature was also recorded and statistically analysed. Figure 4 displays the means and standard deviations for skin temperature (Tsk) over the three phases of the time trial for both the control and pre-cool conditions. Significant differences between the control and pre-cool conditions were found (p < 0.05). Significant statistical differences were also discovered between each of the phases of the time trial (p < 0.0167). Figure 5 depicts the means and standard deviations for core temperature (Tc). Significant statistical differences occurred between the three different stages of the time trial (p < 0.05).
When compared separately, significant differences were found between all stages of the time trial (start vs. mid, start vs. end, mid vs. end) (p < 0.0167, i.e. 0.05 adjusted for the three pairwise comparisons).

Discussion

The purpose of this study was to investigate whether pre-cooling through the utilization of a cooling vest would augment endurance performance undertaken in the heat. Findings obtained from the study indicate that pre-cooling did improve performance, as the pre-cooling condition time trials were significantly shorter than the control condition (p < 0.05). This ability to perform at a higher intensity, decreasing the time taken to complete the 10,000 m run, can be explained by the physiological mechanisms behind pre-cooling. The ability to exercise under hot and humid conditions is significantly impaired (Nielsen, Hales, Strange, Christensen, Warberg & Saltin, 1993) when ambient temperature exceeds skin temperature. Reduced heat loss that would normally occur through convection and radiation results in an increase in body temperature (Marino & Booth, 1998). By lowering pre-performance body temperature, the body's capacity to accommodate metabolic heat production is increased (Siegel & Laursen, 2012), therefore increasing the time to reach the critical limiting temperature, at which exercise performance deteriorates or can no longer be maintained (Marino et al. 1998). Sweat rate was lower following pre-cooling compared to the control condition. A number of studies have also obtained similar results, finding greater heat storage capacities and subsequently lower sweat rates as a result of precooling (Olschewski & Bruck, 1988; Lee & Haymes, 1995; White, Davis & Wilson, 2003). This can be explained by the greater heat storage, stimulated by precooling, delaying the onset of heat dissipation and the subsequent sweat threshold (White et al. 2003). Furthermore, by minimizing sweat rate, the flow of blood to the skin surface is also reduced. This allows more blood to be distributed to the active muscles, reducing cardiovascular strain (White et al. 2003). Another physiological mechanism stimulated through pre-cooling that aids in reducing cardiovascular strain is heart rate (HR) (Kay, Taaffe & Marino, 1999). Recorded data over both conditions showed an increase in HR from the start to the end of the time trial. However, the only significant difference between control and precooling was found between the starting HR recordings. The precooling start HR was 5.10% lower than the control start HR (p < 0.0167). This significant difference was not maintained throughout the mid and end recordings, with both the control and precooling end HR reaching approximately 191 bpm. Kay et al. (1999) found similar results, with HR slightly reduced following precooling within the first 20 minutes of exercise; however, this difference was not maintained at 25 and 30 minutes of exercise. A review of relevant literature by Marino (2002) also indicated a lower HR during the start of exercise that was not seen throughout the rest of the exercise bout. These findings can be explained by a greater central blood volume, a result of reduced body temperature and therefore no need to distribute blood flow to the skin to lose heat. A greater central blood volume produces an increase in stroke volume, ultimately reducing HR and cardiovascular strain (Marino, 2002). Skin temperature results were also recorded over three phases of the time trial.
Similar to HR, a significant difference between the control and precooling start skin temperature recordings was found (p < 0.0167), but this diminished throughout the remaining two phases of the time trial. Through the use of precooling and the consequent lower skin temperature recordings, blood flow was not required at the skin, centrally withholding blood volume and assisting in reducing cardiac strain (Drust, Cable & Reilly, 2000). The final variable assessed in this study was core temperature. According to Nielsen et al. (1993), high core temperature is the most important factor leading to exhaustion and impaired performance during exercise under hot and humid conditions. This may be due to brain and core body temperature having a corresponding relationship. Therefore, an increase in core temperature may result in an increase in brain temperature, resulting in central fatigue and affecting motor performance (Nybo, 2012). Core temperature results showed similarities to the findings for HR and skin temperature. Statistically significant differences were found between each phase at which core temperature was recorded (p < 0.05) (start, mid and end of the time trial), showing a gradual rise from the start of the time trial to the end. A comparison of means via a t-test between the start core temperature (control) and the start core temperature (precool) showed a significant difference (p < 0.0167), which was not seen between samples during the mid and end of the time trial. The findings from this study indicate and present the benefits precooling has for improving endurance performance in hot and humid environments. A number of studies and reviews examining precooling as an intervention strategy (Kay et al. 1999; Marino, 2002; Marino et al. 1998) have all shown the positive physiological mechanisms that arise from precooling. Time trials were significantly shorter following precooling, showing an improvement in performance. The significantly lower heart rate, skin temperature and core temperature stimulated by precooling at the start of the time trial all contribute to a greater capacity for metabolic heat production. This greater capacity provides precooled subjects with the ability to work at a higher intensity for longer before the critical limiting temperature is reached, ultimately improving endurance performance.
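For reference, the sweat-rate variable reported in the Results is normally derived from the pre- and post-trial body-mass difference; a minimal Python sketch with hypothetical numbers (the excerpt does not give the raw masses or trial durations):

```python
def sweat_rate_l_per_hr(pre_mass_kg: float, post_mass_kg: float, duration_min: float) -> float:
    """Approximate sweat rate in L/hr, treating 1 kg of body-mass loss as roughly 1 L of fluid."""
    return (pre_mass_kg - post_mass_kg) / (duration_min / 60.0)

# Hypothetical runner: 1.2 kg lost over a 40-minute 10,000 m trial -> about 1.8 L/hr.
print(round(sweat_rate_l_per_hr(72.0, 70.8, 40.0), 2))
```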

Monday, October 14, 2019

Indias Foreign Exchange System: An Analysis

Indias Foreign Exchange System: An Analysis

CHAPTER-2 LITERATURE REVIEW

2.1 Introduction:
It is a fact that the currencies of different countries have different values that are based upon their actual economic and monetary strength. It is from this difference that the genesis of foreign exchange occurs. Foreign exchange can be termed as the act of matching the different values of the goods and services that are involved in the international business transaction process in order to attain the exact value that is to be transferred between the parties of an international trading transaction in monetary terms. Foreign exchange as an activity had started the day civilization and independent principalities got established in the world. But in those days it was a case of exchanging value in the form of transfer of goods and services of identical value, which is commonly identified with the barter system. Moreover, the transactions were done on a one-to-one basis, and the terms and conditions were determined by the parties entering into such transactions. There was no universal system or rule that determined these transactions. In that way, foreign exchange and the international monetary system are a modern-day development that gained an institutional form in the first half of the twentieth century and has been developing since then.

2.2 Foreign Exchange:
According to the International Monetary Fund (IMF), foreign exchange is defined as different forms of financial instruments, like foreign currency notes, deposits held in foreign banks, debt obligations of foreign banks and foreign governments, monetary gold and Special Drawing Rights (SDR), that are used to make payments in lieu of business transactions that are done by two business entities or otherwise, of nations that have currencies having different inherent monetary value (www.imf.org). Leading economist Lipsey Richard G. (1993) has mentioned that foreign exchange transactions are basically a form of negotiable instrument that is used to deliver the cost of goods and services that form a part of trading transactions and otherwise, between business and public entities of nations of the global economy. Sarno, Taylor and Frankel (2003) give the definition of foreign exchange as denoting the act of purchase and sale of currencies of different economies that is performed over the counter for various purposes, including international payments and deliverance of the cost of various business transactions, where the value is usually measured by tallying the value of the currencies involved in the foreign exchange transaction with that of the U.S. Dollar. According to Clark and Ghosh (2004), foreign exchange denotes transactions in international currency, i.e. currencies of different economies. In such transactions the value of a currency of one country is tallied and exchanged with a similar value of the currency of the other country in order to exchange the cost of a business transaction or public monetary transfer that is taking place between two entities of these economies.

2.2.1 Foreign Exchange Transactions:
Transactions in foreign exchange are done through various types and various modes between different countries of the world.
According to information mentioned in the Reuters Financial Training Series (1999), TOD transactions, TOM transactions, swap rates, spot rates, forward rates, margin trading and buy/sell on fixed rate orders are some of the methods widely used by global managers for their foreign exchange transaction activities.

2.2.1.1 TOD Operations:
TOD operations are foreign exchange transaction methods where the trader uses the exchange rate of the day on which the foreign exchange transaction order is to be executed. In other words, TOD operations are commonly used in intra-day foreign exchange transactions. As a result they are commonly resorted to by speculators in foreign exchange transactions and those who generally speculate on the rates of different foreign exchange markets of the globe.

2.2.1.2 TOM Operations:
In this type of transaction the process is carried forward to the next day instead of being an intra-day trade. A TOM transaction is signed on the current day, but the rate of exchange is agreed upon to be that of the next day.

2.2.1.3 SPOT Transactions:
SPOT transactions can be compared with TOM transactions because here also the exchange rate is fixed at a value that prevails over the intra-day trading rate. But SPOT transactions have been separated as a different category because, unlike TOM transactions, SPOT transaction contracts are executed on the third day after the signing of the agreement between the bank and the client.

2.2.1.4 Forward Contract:
Forward contracts are those exchange rate contracts where the currency conversion exchange rate is decided at a certain rate at a time that is well before the date of execution of the exchange contract. In that way they are similar to TOM transactions. They only differ from them in the fact that these transactions are made for a long term, i.e. generally for one year, and the parties involved in making this foreign exchange transaction deposit five percent of the contract value with the bank facilitating the transaction at the time of entering into the contract, which is then returned to the client after execution of the exchange transaction. The need for depositing this amount is to secure the transaction against any loss due to market fluctuations.

2.2.1.5 SWAP:
The greatest advantage of SWAP transactions is that the clients involved in the foreign exchange get prior information about the exchange rate of the currencies that are part of the transaction. In this type of transaction the bank first buys the transaction amount from the client and resells it to the client a few days later, after disclosing the exchange rate of the currencies involved in the transaction process. SWAP transactions are much sought after by traders because here they get to know beforehand the exchange rate of the currencies involved in the transaction process, which helps them in avoiding fluctuations in the market rate and gives them the advantage of determining the prices of goods, the nature of the currency market notwithstanding.

2.2.1.6 Margin Trading:
The key element of margin trading is that any trader can opt for SPOT trading round the clock by going through the margin trading mode. The other key element of margin trading is that traders can make deals with a minimal spread for a huge amount of funds by projecting only a fraction of the needed amount.
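A trivial Python sketch of the five-per-cent security deposit described above for forward contracts (similar deposit mechanics reappear for margin trading below); the contract value used here is a made-up example, not a figure from the literature cited:

```python
# Security deposit lodged with the bank when a forward contract is signed,
# per the ~5% described above; the contract value is a hypothetical example.
DEPOSIT_RATE = 0.05

def forward_contract_deposit(contract_value: float, rate: float = DEPOSIT_RATE) -> float:
    """Deposit held by the bank and returned to the client after the contract is executed."""
    return contract_value * rate

print(forward_contract_deposit(250_000))  # 12500.0
```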
Margin trading is, in that way, a unique form of global financial transaction where the threshold value that can be transacted through the margin trading mode is $100,000, with bigger deals being multiples of $100,000. But in order to deal in margin trading the trader has to make a security deposit of five percent of the contract value, which has to be replenished from time to time in order to maintain the amount from which the probable losses from margin trading transactions are accommodated.

2.2.1.7 Buying/Selling on Fixed Rate Order:
This is a mutual agreement between the buyer and seller of foreign exchange. Neither its rate nor its other terms and conditions are based upon actual market conditions. Rather, the deal is struck keeping the mutual profitability of the buyer and seller intact, where both of them get their desired amount.

2.3 Global Foreign Exchange Market:
According to the table depicting the Triennial Central Bank Survey of Foreign Exchange and Derivatives Market Activity done by the Bank for International Settlements (BIS) in 2007, as shown below, the global foreign exchange market has an average daily turnover of over $2 trillion, which is an increase of around forty percent in terms of volume. This rise in foreign exchange transactions, it is observed, has been due to a rise in the volume of trading in the spot and forward markets. This is indicative of increasing volatility in foreign exchange markets around the world (www.bis.org).

Global Foreign Exchange Market Turnover
Daily averages in April (in billions of $)

                                        1989   1992   1995   1998   2001   2004
Spot Transactions                        317    394    494    568    387    621
Outright Forwards                         27     58     97    128    131    208
Swaps in Foreign Exchange                190    324    546    734    656    944
Gaps in Reporting (Estimated)             56     44     53     60     26    107
Total Turnover (Traditional)             590    820  1,190  1,490  1,200  1,880
Memo: Turnover (at April 2004 rates)     650    840  1,120  1,590  1,380  1,880

(BIS Triennial Central Bank Survey, 2004)

As observed by Jacque Laurent L. (1996), studies in foreign exchange point to the fact that the volume involved in foreign exchange transactions in the total markets around the globe has the potential to affect the overall functioning of the global financial system due to the systematic risks that are part and parcel of the foreign exchange transaction system. Most of the transactions occur in the major markets of the world, with the London exchange, followed by the New York and Tokyo exchanges, accounting for over sixty percent of the foreign exchange transactions done around the globe. Among these transactions the largest share is carried out by banks and financial institutions, followed by other business transactions, i.e. exchange of value for goods and services, as well as dealers involved in securities and financial market transactions. According to the studies by Levi Maurice D. (2005), most foreign exchange transactions happen in the spot market in the realm of OTC derivative contracts. This is followed by hedging and forward contracts, which are done in large numbers. The central banks of different countries of the world and the financial institutions operating in multiple markets are the main players that operate in the foreign exchange market and provide the risk exchange control mechanism to the players of the exchange market and the system, where around $3 trillion is transacted in 300,000 exchanges located around the globe. The largest amount of transactions takes place at the spot rate, and that too in the liquidity market.
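As a quick check on the table above, the share of 2004 turnover and the growth since the 2001 survey implied by those figures can be computed directly; a small Python sketch using only the values in the table:

```python
# Instrument shares and 2001-2004 growth, computed from the BIS table above (billions of USD).
turnover_2001 = {"Spot": 387, "Outright forwards": 131, "FX swaps": 656, "Reporting gaps": 26}
turnover_2004 = {"Spot": 621, "Outright forwards": 208, "FX swaps": 944, "Reporting gaps": 107}

total_2004 = sum(turnover_2004.values())  # 1,880 billion, matching the table's traditional total
for instrument, value in turnover_2004.items():
    share = value / total_2004
    growth = (value - turnover_2001[instrument]) / turnover_2001[instrument]
    print(f"{instrument:18s} share of 2004 turnover {share:5.1%}, growth since 2001 {growth:+.0%}")
```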
Price quotations in these markets sometimes reach around two thousand in a single day, with the maximum quotations being done in the Dollar and the Deutschemark, with rates fluctuating every two to three minutes and the volume of transactions for a dealer in foreign exchange, i.e. both individuals and companies, going to the range of $500 million in normal times. In recent years the derivative market has also been gaining popularity in OTC dealings with regard to the foreign exchange market.

2.4 Global Foreign Exchange Market Management Risks:
According to the researcher Kim S. H. (2005), foreign exchange transactions are identified by their connection with some financial transactions occurring in some overseas market or markets. But this interconnectivity does not affect the inherent value of the currency of the country, which is determined by the economic strength of that country. This means that the inherent value of each currency of the world is different and unequal. So when the need arises to exchange the value of some goods or services between countries engaged in such activity, it becomes imperative to exchange the exact value of goods and services. Considering the complexity and volume of such trading and exchange activity occurring in the global market between countries, it is but natural that the currencies of individual countries are subject to continual readjustment of value against the currency with which their value has to be exchanged. This gives rise to the importance of foreign exchange transactions as a separate area of study and thereby the need for much focus on its understanding (Frenkel, Hommel and Rudolf, 2005). In addition to this, it is to be realized that with the growing pace of globalization and the integration of the global economic order there has been a tremendous increase in international business transactions and closer integration of the economic systems of countries around the world, especially between the members of the WTO, which has led to the increase in economic transactions and consequent activity in the international foreign currency exchange system (Adams, Mathieson and Schinasi, 1998). Added to this is the fact that the exchange value of currencies in the transactions is not determined by the respective countries but by the interplay of the value of the currencies engaged in an international foreign exchange transaction and the overall value of each currency in the transaction prevailing at that time. In fact, each country in the global economic order would want to determine the value of its currency to its maximum advantage, which was possible a few years ago when countries used to determine the value of their currency according to the existing value of their economy. The individual countries till the early nineties used to follow a policy of total or partial control over the exchange value of their currency in the global market. At the same time there was also a group of countries that followed the policy of leaving the exchange value of their currency to the interplay of global economic activity, where the value was determined by its economic performance. The currencies of countries that exercise full or partial control over the international exchange value of their currency are known to follow a Fixed Rate, whereas the currencies of countries that allow their currency to seek its inherent value through its performance in the global economic system are termed as following the Floating Rate of foreign exchange conversion mechanism.
Though logically both types of mechanism of foreign exchange face the effect of exchange rate fluctuations and consequent volatility in rates, it is the currencies having a floating rate that are continually affected by the fluctuations in exchange rates in the global market, whereas in the case of currencies with a fixed rate it is more of a controlled and regulated affair (Chorafas Dimitris N., 1992).

2.5 Foreign Exchange Risks Prevailing in the Global Market:
Risks related to the exchange rate of a currency in the global market, as has been mentioned, occur due to the interplay of the inherent value of each currency of the respective countries that are part of the global financial mechanism. Risks related to foreign exchange come into the picture and are also inevitable in this world marching towards increased interaction due to globalization. The risks will occur due to business interaction and the consequent exchange of value for goods and services. According to Kodres Laura E. (1996), the risks related to foreign exchange occur when there is increased interaction of the currency of a country with that of other countries in the international market, and that too if the currency has a floating exchange rate. In that case the value of the currency is continually affected by its business and financial performance. This relation with other currencies in the market affects it when the need arises to exchange it with another currency for the settlement of financial transactions for some business or financial purpose, and gives rise to various types of risks. The prominent risks associated with this situation are Herstatt Risk and Liquidity Risk.

2.5.1 Herstatt Risk:
Herstatt risk is a risk that is named after a German bank that got liquidated by the German government in the seventies of the last century and was made to return all the claims accruing to its customers. This is because its creditworthiness was affected and it could not pay the settlement claims to its customers and also, on behalf of its customers, to their clients. It is basically connected to the time aspect of foreign exchange value claim settlements, in which the foreign exchange transactions do not get realized as the bank loses its ability to honour the transaction in the intervening period due to some causes. In the particular case the German bank failed to honour the financial settlement claims of its clients to their counterparties that were to be paid in U.S. Dollars. The main issues that arose were regarding quantifying the amount to be delivered and the time of the transaction process, due to the two countries' financial systems being located and working according to different or separate time zones. This case has established a phenomenon in the foreign exchange market where there may erupt situations in which the working hours of banks located in different time zones may never match with each other, leading to foreign exchange settlement transactions getting affected during the mismatch of the two banks' closing and opening times. In fact the Alsopp Report, which studied this phenomenon in detail, said that though the foreign exchange transactions are made on paper on a single day, the actual transfer of value takes place within three to four days. And with the exchange value of currencies operating in the international market always remaining in a state of flux, they either get jacked up or devalued.
In either case it affects the terms of transactions that were decided at an intra-day rate, as the value of both the currencies in the international market has changed during these days.

2.5.2 Risks related to Liquidity:
Different problems can crop up related to the banking system's operations and dynamics, i.e. in both technical and management systems, as well as inability in terms of the volume of available liquidity or a mismatch in the tallying of time, etc., that can affect the capacity of banks to honour foreign exchange transactions in terms of transfer of liquidity. These types of risks are commonly witnessed in newly emerging economies that are unable to cope with the sudden surge in the volume of global business transactions, thereby leading to exchange rate settlement and payment delays, outstanding payments and the dishonouring of financial commitments in the exchange rate transaction market.

2.5.3 Financial Repercussions:
Studies in foreign exchange related risks by Dumas and Solnik (1995) aver that risks related to transactions in foreign exchange have increased with globalization and the rise of the global economic integration process, with countries getting affected in relation to the volume of their transactions in the global financial and business marketplace. This is because the market is now more oriented towards market-value-driven convertibility of currencies that is influenced by global financial movements and transactions, and any independent transaction, especially of transnational and multinational companies, will automatically affect other transactions happening in the global financial marketplace (Klopfenstein G., 1997). However, according to another study by Gallati Reto R. (2003), these multinational and transnational companies are simultaneously being affected by the fluctuations in the exchange rates of different currencies of the global market, which is exposing their business operations in different global markets to exchange rate related risks, especially due to the difference in spot and forward rates and the inevitable fluctuations (Choi, 2003) that give rise to foreign exchange settlement related problems.

2.5.4 Remedies to Foreign Exchange Settlement Risks:
Just as risks have cropped up in foreign exchange transactions due to the increase in volume and frequency of transactions, mainly as a result of globalization, so also remedies have come up to minimize the risks related to adverse conditions in foreign exchange transactions. The Bank for International Settlements (BIS) in one of its studies in 1999 has said that settlement of claims is the most predominant risk related to foreign exchange transactions, especially the speed with which these transactions are materialized and the roadblocks that they may face in the process due to the tremendous increase in the volume of foreign exchange transactions, which cannot be cleared in the expected times. The solution to these risks, according to the study, is to simultaneously clear transactions on either side, i.e. for both parties, so that they simultaneously give and receive payments at the agreed rate of exchange. This would solve the problem of the extended time of actual payment, during which the rate of exchange fluctuates, thereby creating problems for both the parties. This arrangement is related to deals being processed simultaneously, which requires the concurrence and common cause of both the parties.
Such concurrence is not always forthcoming, because the party that expects its currency to appreciate may not agree to such a proposal. In that case there needs to be some law or arrangement that makes it mandatory for both parties to settle their intra-day payments on the same day, leaving no scope for speculation. According to the study, such arrangements have been made in the USA and Europe, where systems like Fedwire and the Trans-European Automated Real-Time Gross Settlement Express Transfer (TARGET) have been established. Fedwire facilitates payments in foreign exchange transactions on a Real-Time Gross Settlement (RTGS) basis, and TARGET facilitates intra-day transfer of foreign exchange between parties in member countries of Europe on the same day. But for simultaneous release of funds by both parties and intra-day settlement of claims to succeed, the member countries of the global economic system must come together and reach concurrence on these issues, because, all said and done, the rules and laws governing foreign exchange transactions are still set by the respective countries. Most of these countries are reluctant to link their currency systems to a global currency system for speedy disposal of foreign exchange transactions, for fear that such a move would expose their currency and financial systems to the baneful effects of the risks and volatility of the global foreign exchange market (Hagelin and Pramborg, 2004). At the level of international trading corporations some steps have been initiated through a private arrangement known as the Group of Twenty, a group of twenty internationally acclaimed global clearing banks that have formed a system called the Global Clearing Bank. It acts as a connection between the payment systems of different countries and verifies international foreign exchange transactions so as to satisfy both parties simultaneously regarding the authenticity of the transaction process. The difficulty is that this system puts a heavy strain on the financial and foreign exchange systems and reserves of individual countries, while also requiring them to bring about some commonality between their financial rules and regulations, which is easier said than done. All the same, the establishment of bilateral and multilateral netting systems, as well as of the Exchange Clearing House (ECHO), is helping to facilitate foreign exchange transactions and minimize the risks involved (McDonough, 1996).

2.6 Indian Foreign Exchange System:

2.6.1 Historical Background:
The history of the foreign exchange system in India was one of excessive control and monitoring, with even minor transactions subjected to the rigorous scrutiny of the concerned government authorities in order to avoid the risks associated with such transactions and to keep scarce foreign exchange reserves from being frittered away on transactions the government considered unimportant or anti-national. The Foreign Exchange Regulation Act (FERA), enacted in 1947 and made more stringent in 1973, embodied the prevailing sentiment of the governments of those days, which was to regulate and control all foreign exchange transactions completely and protect the foreign currency reserves (Mehta, 1985).
All this changed in the nineties with the opening up of the Indian economy in 1991, in keeping with the recommendations of the High Level Committee on Balance of Payments set up under the chairmanship of Dr C. Rangarajan by the Ministry of Finance, Government of India, and India's subsequent entry into the World Trade Organization (WTO) in 1994. This was preceded by the liberalization of current account transactions and the establishment of full current account convertibility in 1993. In 1994 the Government of India also accepted Article VIII of the Articles of Agreement of the International Monetary Fund, which formalized current account convertibility; the exchange value of the rupee came to be determined by market rates, with only capital account convertibility remaining under government control (Krueger, 2002), since the Tarapore Committee on Capital Account Convertibility of 1997 (Panagariya, 2008) advised the government to put adequate safeguards in place before allowing capital account convertibility to be determined by market forces, given the need first to consolidate the financial system and adopt an accepted inflation target. The Tarapore Committee also suggested that the legal framework governing foreign exchange transactions in India needed to be modernized before moving to full capital account convertibility, in consequence of which the Government repealed the FERA Act of 1973 and promulgated the Foreign Exchange Management Act (FEMA), which came into force in 2000. The new act did away with the system of regulation and control and established one of facilitation and management of foreign exchange transactions, thereby promoting all activities related to them. The most important change made by FEMA was to treat violations or mistakes in foreign exchange transactions as civil offences rather than criminal offences as under FERA. FEMA also shifted the responsibility of proving a violation of the foreign exchange rules from the accused, where FERA had placed it, onto the prosecuting agency; and if the accused was found guilty, he or she was to pay only a monetary fine or compensation instead of being jailed, as the earlier provision under FERA required. FEMA further simplified many of the rules, notified specific time frames for delivering judgments on violations of foreign exchange rules and regulations, and provided for the establishment of special tribunals and forums to deal with such cases. The compounding rules were also made less stringent, and all matters relating to compounding were notified to be dealt with by the Reserve Bank of India (RBI) instead of the previously assigned Enforcement Directorate; RBI was made the designated Compounding Authority in all related matters, with only cases involving hawala transactions kept out of its purview. As per Mecklal and Chand