Thursday, April 4, 2019
Performance Measure of PCA and DCT for Images
Performance Measure of PCA and DCT for Images

Generally, in image processing a transformation is the basic technique applied in order to study the characteristics of the image under study. Here we present a method in which we analyze the performance of two such methods, namely PCA and DCT. In this thesis we analyze the system by first training the set for a particular number of eigenvectors and then evaluating the performance of the two methods by calculating the error each of them produces. This thesis describes and tests the PCA and DCT transformation techniques.

PCA is a technique which involves a procedure that mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform or proper orthogonal decomposition (POD).

DCT expresses a series of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. Such transforms are important to numerous applications in science and engineering, from lossy compression of audio and images (where small high-frequency components can be discarded) to spectral methods for the numerical solution of partial differential equations.

CHAPTER 1
INTRODUCTION

1.1 Introduction

Over the past few years, several face recognition systems have been proposed based on principal components analysis (PCA) [14, 8, 13, 15, 1, 10, 16, 6]. Although the details vary, these systems can all be described in terms of the same preprocessing and run-time steps. During preprocessing, they register a gallery of m training images to each other and unroll each image into a vector of n pixel values. Next, the mean image for the gallery is subtracted from each image, and the resulting centered images are placed in a gallery matrix M. Element ij of M is the ith pixel from the jth image. A covariance matrix W = MM^T characterizes the distribution of the m images in R^n. A subset of the eigenvectors of W is used as the basis vectors for a subspace in which to compare gallery and novel probe images. When sorted by decreasing eigenvalue, the full set of unit-length eigenvectors represents an orthonormal basis where the first direction corresponds to the direction of maximum variance in the images, the second to the next largest variance, and so on. These basis vectors are the principal components of the gallery images. Once the eigenspace is computed, the centered gallery images are projected into this subspace. At run time, recognition is accomplished by projecting a centered probe image into the subspace, and the nearest gallery image to the probe image is selected as its match. There are many differences among the systems referenced. Some systems assume that the images are registered prior to face recognition [15, 10, 11, 16]; among the rest, a variety of techniques are used to identify facial features and register them to each other.
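The preprocessing and run-time steps just described are compact enough to sketch in code. The following MATLAB fragment is a minimal illustration, not any particular cited system: the stand-in data, the choice of k, and the use of the small m-by-m eigenproblem (working with M'M rather than the much larger MM') are all assumptions made for the sketch.

    % Gallery of m training images, one unrolled image of n pixels per column.
    % All names, sizes, and the value of k are illustrative assumptions.
    gallery = rand(10304, 40);               % stand-in for real image data
    probe   = rand(10304, 1);                % stand-in for an unrolled probe image
    k = 20;                                  % number of eigenvectors kept

    meanImage = mean(gallery, 2);            % mean image of the gallery
    M = gallery - repmat(meanImage, 1, size(gallery, 2));   % centered gallery matrix

    % Eigenvectors of W = M*M' obtained via the small m-by-m problem M'*M
    [V, D] = eig(M' * M);
    [~, order] = sort(diag(D), 'descend');   % sort by decreasing eigenvalue
    U = M * V(:, order(1:k));                % top-k basis vectors (eigenfaces)
    U = U ./ repmat(sqrt(sum(U.^2)), size(U, 1), 1);   % normalize to unit length

    galleryCoeffs = U' * M;                  % project the centered gallery images

    % Run time: project the centered probe and select the nearest gallery image
    probeCoeff = U' * (probe - meanImage);
    d = sqrt(sum((galleryCoeffs - repmat(probeCoeff, 1, size(M, 2))).^2, 1));
    [~, match] = min(d);                     % index of the best matching image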
Different systems may use different distance measures when matching probe images to the nearest gallery image. Different systems also select different numbers of eigenvectors (usually those corresponding to the largest k eigenvalues) in order to compress the data and to improve accuracy by eliminating eigenvectors corresponding to noise rather than meaningful variation. To help evaluate and compare individual steps of the face recognition process, Moon and Phillips created the FERET face database and performed initial comparisons of some common distance measures for otherwise identical systems [10, 11, 9]. This work extends theirs, presenting further comparisons of distance measures over the FERET database and examining alternative ways of selecting subsets of eigenvectors.

Principal Component Analysis (PCA) is one of the most successful techniques used in image recognition and compression. PCA is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is all that is needed to describe the data economically. This is the case when there is a strong correlation among the observed variables. The jobs which PCA can do are prediction, redundancy removal, feature extraction, data compression, and so on. Because PCA is a classical technique that operates in the linear domain, applications having linear models are suitable, such as signal processing, image processing, system and control theory, and communications.

Face recognition has many applicable areas. It can be categorized into face identification, face classification, or sex determination. The most useful applications include crowd surveillance, video content indexing, personal identification (e.g. driver's licenses), mug shot matching, and entrance security. The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D facial image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors). The details are described in the following section.

PCA computes the basis of a space which is represented by its training vectors. These basis vectors, actually eigenvectors, computed by PCA are in the direction of the largest variance of the training vectors. As said earlier, we call them eigenfaces. Each eigenface can be viewed as a feature. When a particular face is projected onto the face space, its vector in the face space describes the importance of each of those features in the face. The face is expressed in the face space by its eigenface coefficients (or weights). We can handle a large input vector, a facial image, simply by taking its small weight vector in the face space. This means that we can reconstruct the original face with some error, since the dimensionality of the image space is much larger than that of the face space.

This chapter also describes a face recognition system using the PCA algorithm. Automatic face recognition systems try to find the identity of a given face image according to their memory. The memory of a face recognizer is generally simulated by a training set.
In this project, our training set consists of features extracted from known face images of different persons. Thus, the task of the face recognizer is to find the feature vector in the training set that is most similar to the feature vector of a given test image. Here, we want to recognize the identity of a person when an image of that person (a test image) is given to the system. PCA serves as the feature extraction algorithm.

In the training phase, a feature vector is extracted for each image in the training set. Let a training image of person A have a pixel resolution of M x N (M rows, N columns). To extract its PCA features, the image is first converted into a pixel vector by concatenating each of the M rows into a single vector; the length (or dimensionality) of this vector is M x N. The PCA algorithm is then used as a dimensionality reduction technique that transforms this vector into a feature vector of dimensionality d, where d is much smaller than M x N. For each training image i, the feature vector w_i is calculated and stored.

In the recognition phase (or testing phase), a test image j of a known person is given. Let j also denote the identity (name) of this person. As in the training phase, the feature vector w_j of this person is computed using PCA. To identify the test image, the similarities between w_j and all of the feature vectors w_i in the training set are computed; the similarity between feature vectors can be measured using the Euclidean distance. The identity of the most similar w_i is the output of the face recognizer. If i = j, the person j has been correctly identified; otherwise, if i ≠ j, the person j has been misclassified.

1.2 Thesis structure

This thesis work is divided into five chapters as follows.

Chapter 1, Introduction: This introductory chapter briefly explains the role of transformations in face recognition and its applications, describes the scope of this research, and gives the structure of the thesis.

Chapter 2, Basics of Transformation Techniques: This chapter gives an introduction to the transformation techniques. It introduces the two transformation techniques for which we perform the analysis whose results are used for the face recognition task.

Chapter 3, Discrete Cosine Transform: This chapter continues the discussion of transformations from Chapter 2. The other method, the DCT, is introduced and analyzed.

Chapter 4, Implementation and results: This chapter presents the simulated results of the face recognition analysis using MATLAB. It explains each step of the face recognition analysis and gives the tested results of the transformation algorithms.

Chapter 5, Conclusion and future work: This is the final chapter of the thesis. Here we conclude the research, discuss the achieved results, and suggest future work.

CHAPTER 2
BASICS OF IMAGE TRANSFORM TECHNIQUES

2.1 Introduction

Nowadays image processing has gained so much importance that it is applied in every field of science, both for security purposes and to meet the increasing demand for it. Here we apply two different transformation techniques in order to study their performance, which will be helpful for detection purposes.
The computation of the performance on the image given for testing is carried out with two methods:

PCA (Principal Component Analysis)
DCT (Discrete Cosine Transform)

2.2 Principal Component Analysis

PCA is a technique which involves a procedure that mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform or proper orthogonal decomposition (POD).

PCA is now mostly used as a tool in exploratory data analysis and for making predictive models. PCA involves the calculation of the eigenvalue decomposition of a data covariance matrix or the singular value decomposition of a data matrix, usually after mean-centering the data for each attribute. The results of this analysis technique are usually described in terms of component scores and loadings.

PCA is a true eigenvector-based multivariate analysis. Its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate data set is visualized as a set of coordinates in a high-dimensional data space, PCA supplies the user with a lower-dimensional picture, a "shadow" of the object when viewed from its most informative viewpoint.

PCA is very closely related to factor analysis; indeed, some statistical software packages deliberately conflate the two techniques. True factor analysis makes different assumptions about the underlying structure and solves for the eigenvectors of a slightly different matrix.

2.2.1 PCA Implementation

PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system, such that the greatest variance under any projection of the data comes to lie on the first coordinate, the second greatest variance on the second coordinate, and so on. PCA is theoretically the optimal transform for given data in least-squares terms.

For a data matrix X^T with zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), where each row represents a different repetition of the experiment and each column gives the results from a particular probe, the PCA transformation is given by

    Y^T = X^T W = V Σ^T

where Σ is an m-by-n diagonal matrix whose diagonal elements are non-negative, and W Σ V^T is the singular value decomposition of X.

Given a set of points in Euclidean space, the first principal component corresponds to the line that passes through the mean and minimizes the sum of squared errors with those points. The second principal component corresponds to the same notion after all correlation with the first principal component has been subtracted from the points. Each eigenvalue indicates the part of the variance that is correlated with each eigenvector. Thus, the sum of all the eigenvalues is equal to the sum of the squared distances of the points from their mean, divided by the number of dimensions. PCA rotates the set of points around its mean in order to align it with the first few principal components. This moves as much of the variance as possible into the first few dimensions. The values in the remaining dimensions tend to be very highly correlated and may be dropped with minimal loss of information. PCA is used in this way for dimensionality reduction.
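As a small illustration of the relationship just stated, the following MATLAB sketch (our own, with illustrative variable names) computes the principal component scores of a mean-centered m-by-n data matrix X via the singular value decomposition and, equivalently, via the eigenvectors of the covariance matrix:

    X = randn(5, 200);                           % stand-in data: m = 5 variables, n = 200 samples
    X = X - repmat(mean(X, 2), 1, size(X, 2));   % remove the empirical mean

    [W, S, V] = svd(X, 'econ');        % X = W*S*V', the singular value decomposition
    Y = W' * X;                        % principal component scores (Y' = X'*W)

    % Equivalently, W holds the eigenvectors of the covariance matrix X*X':
    [U, D] = eig(X * X');
    [~, order] = sort(diag(D), 'descend');
    U = U(:, order);                   % same directions as the columns of W, up to sign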
PCA is the optimal linear transformation technique for keeping the subspace that has the largest variance. This advantage, however, comes at the price of greater computational requirements when compared, for example, with the discrete cosine transform. Non-linear dimensionality reduction techniques tend to be even more computationally demanding than PCA.

Mean subtraction is necessary in performing PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component will instead correspond to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data.

Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the first principal component w_1 of a data set x can be defined as

    w_1 = arg max_{||w|| = 1} E{ (w^T x)^2 }

Given the first k-1 components, the kth component can be found by subtracting the first k-1 principal components from x,

    x̂_{k-1} = x - Σ_{i=1}^{k-1} w_i w_i^T x

and then substituting this as the new data set in which to find a principal component:

    w_k = arg max_{||w|| = 1} E{ (w^T x̂_{k-1})^2 }

The Karhunen-Loève transform is therefore equivalent to finding the singular value decomposition of the data matrix X,

    X = W Σ V^T

and then obtaining the reduced-space data matrix Y by projecting X down into the reduced space defined by only the first L singular vectors W_L:

    Y = W_L^T X = Σ_L V_L^T

The matrix W of singular vectors of X is equivalently the matrix W of eigenvectors of the matrix of observed covariances C = X X^T:

    X X^T = W Σ Σ^T W^T

The eigenvectors with the largest eigenvalues correspond to the dimensions that have the strongest correlation in the data set (see Rayleigh quotient).

PCA is equivalent to empirical orthogonal functions (EOF), a name which is used in meteorology.

An auto-encoder neural network with a linear hidden layer is similar to PCA. Upon convergence, the weight vectors of the K neurons in the hidden layer will form a basis for the space spanned by the first K principal components. Unlike PCA, this technique will not necessarily produce orthogonal vectors.

PCA is a popular primary technique in pattern recognition, but it is not optimized for class separability. An alternative is linear discriminant analysis, which does take this into account.
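Before moving on to the properties and limitations, the iterative definition given earlier in this section can be checked numerically. The following MATLAB sketch (illustrative names, assuming X is a mean-centered data matrix with variables in rows) finds w_1, deflates the data, and recovers w_2 from the residual:

    X = randn(5, 500);                       % stand-in mean-centered data (variables in rows)
    X = X - repmat(mean(X, 2), 1, size(X, 2));

    [V, D] = eig(X * X');                    % covariance eigenproblem
    [~, i1] = max(diag(D));
    w1 = V(:, i1);                           % first principal direction

    Xhat = X - w1 * (w1' * X);               % subtract the w1 component from the data
    [V2, D2] = eig(Xhat * Xhat');
    [~, i2] = max(diag(D2));
    w2 = V2(:, i2);                          % second principal direction

    disp(abs(w1' * w2))                      % effectively zero: the directions are orthogonal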
2.2.2 PCA Properties and Limitations

PCA is theoretically the optimal linear scheme, in terms of least mean square error, for compressing a set of high-dimensional vectors into a set of lower-dimensional vectors and then reconstructing the original set. It is a non-parametric analysis, and the answer is unique and independent of any hypothesis about the probability distribution of the data. However, the latter two properties are regarded as weaknesses as well as strengths: being non-parametric, no prior knowledge can be incorporated, and PCA compression often incurs a loss of information.

The applicability of PCA is limited by the assumptions [5] made in its derivation. These assumptions are:

The observed data set is assumed to be a linear combination of a certain basis. Non-linear methods such as kernel PCA have been developed without assuming linearity.

PCA uses the eigenvectors of the covariance matrix, so it only finds the independent axes of the data under the Gaussian assumption. For non-Gaussian or multi-modal Gaussian data, PCA merely de-correlates the axes.

When PCA is used for clustering, its main limitation is that it does not account for class separability, since it makes no use of the class label of the feature vector. There is no guarantee that the directions of maximum variance will contain good features for discrimination.

PCA simply performs a coordinate rotation that aligns the transformed axes with the directions of maximum variance. It is only when we believe that the observed data has a high signal-to-noise ratio that the principal components with larger variance correspond to interesting dynamics while those with lower variance correspond to noise.

2.2.3 Computing PCA with the covariance method

The following is a detailed description of PCA using the covariance method. The goal is to transform a given data set X of dimension M into an alternative data set Y of smaller dimension L. Equivalently, we are seeking the matrix Y, where Y is the Karhunen-Loève transform (KLT) of matrix X.

Organize the data set. Suppose you have data comprising a set of observations of M variables, and you want to reduce the data so that each observation can be described with only L variables, L < M. Suppose further that the data are arranged as a set of N data vectors, each representing a single grouped observation of the M variables. Write the data as N column vectors, each of which has M rows, and place the column vectors into a single matrix X of dimensions M x N.

Calculate the empirical mean. Find the empirical mean along each dimension m = 1, ..., M, and place the calculated mean values into an empirical mean vector u of dimensions M x 1.

Calculate the deviations from the mean. Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data. Hence we proceed by centering the data: subtract the empirical mean vector u from each column of the data matrix X, and store the mean-subtracted data in the M x N matrix B,

    B = X - u h

where h is a 1 x N row vector of all 1s.

Find the covariance matrix. Find the M x M empirical covariance matrix C from the outer product of matrix B with itself,

    C = E[B ⊗ B] = E[B · B*] = (1/N) B · B*

where E is the expected value operator, ⊗ is the outer product operator, and * is the conjugate transpose operator. (Strictly speaking, outer products apply to vectors; the covariance matrix in PCA is a sum of outer products between its sample vectors, which is why it can be written compactly as B · B*.)

Find the eigenvectors and eigenvalues of the covariance matrix. Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C,

    V^{-1} C V = D

where D is the diagonal matrix of eigenvalues of C. This step will typically involve a computer-based algorithm for computing eigenvectors and eigenvalues. These algorithms are readily available as sub-components of most matrix algebra systems, such as MATLAB [7][8], Mathematica [9], SciPy, IDL (Interactive Data Language), GNU Octave, and OpenCV. Matrix D will take the form of an M x M diagonal matrix whose mth diagonal element is the mth eigenvalue of the covariance matrix C. Matrix V, also of dimension M x M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired.
The mth eigenvalue corresponds to the mth eigenvector.

Rearrange the eigenvectors and eigenvalues. Sort the columns of the eigenvector matrix V and the eigenvalue matrix D in order of decreasing eigenvalue, making sure to maintain the correct pairings between the columns of each matrix.

Compute the cumulative energy content for each eigenvector. The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g[m] for the mth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m:

    g[m] = Σ_{q=1}^{m} D[q,q],  for m = 1, ..., M

Select a subset of the eigenvectors as basis vectors. Save the first L columns of V as the M x L matrix W. Use the vector g as a guide in choosing an appropriate value for L. The goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent; in this case, choose the smallest value of L such that

    g[L] / g[M] ≥ 0.9

Convert the source data to z-scores. Create an M x 1 empirical standard deviation vector s from the square root of each element along the main diagonal of the covariance matrix C, and calculate the M x N z-score matrix

    Z = B / (s · h)   (divide element-by-element)

Note: while this step is useful for various applications, as it normalizes the data set with respect to its variance, it is not an integral part of PCA/KLT.

Project the z-scores of the data onto the new basis. The projected vectors are the columns of the matrix

    Y = W* · Z

where W* is the conjugate transpose of the eigenvector matrix. The columns of matrix Y represent the Karhunen-Loève transforms (KLT) of the data vectors in the columns of matrix X.
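Gathered together, the covariance-method recipe above fits in a few lines of MATLAB. This is a sketch under the same assumptions as the description (data arranged as an M-by-N matrix X with observations in columns; the 1/N covariance normalization and the 90 percent energy threshold are the example choices discussed above):

    X = randn(8, 100);              % stand-in data: M = 8 variables, N = 100 observations
    [M, N] = size(X);

    u = mean(X, 2);                 % empirical mean along each dimension (M-by-1)
    B = X - u * ones(1, N);         % deviations from the mean (h = ones(1, N))
    C = (B * B') / N;               % empirical covariance matrix (M-by-M)

    [V, D] = eig(C);                % eigenvectors and eigenvalues of C
    [lambda, order] = sort(diag(D), 'descend');
    V = V(:, order);                % decreasing eigenvalue order, pairings kept

    g = cumsum(lambda);             % cumulative energy content
    L = find(g / g(end) >= 0.90, 1);% smallest L that captures 90 percent of the energy
    W = V(:, 1:L);                  % basis vectors: the first L eigenvectors

    s = sqrt(diag(C));              % empirical standard deviations (M-by-1)
    Z = B ./ (s * ones(1, N));      % z-scores (element-by-element division)
    Y = W' * Z;                     % KLT of the data vectors (columns of Y)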
2.2.4 PCA Derivation

Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean. We want to find an orthonormal transformation matrix P such that

    Y = P X

with the constraint that cov(Y) is a diagonal matrix and P^{-1} = P^T.

By substitution and matrix algebra, we obtain

    cov(Y) = E[Y Y^T] = E[(P X)(P X)^T] = P E[X X^T] P^T = P cov(X) P^T

Multiplying both sides by P on the right, we now have

    cov(Y) P = P cov(X)

Rewrite P in terms of its d row vectors P_1, ..., P_d, and write cov(Y) as the diagonal matrix of values λ_1, ..., λ_d; substituting into the equation above, we obtain

    λ_i P_i = P_i cov(X),  for i = 1, ..., d

Notice that each P_i (taken as a column vector P_i^T) is an eigenvector of the covariance matrix of X. Therefore, by finding the eigenvectors of the covariance matrix of X, we find a projection matrix P that satisfies the original constraints.

CHAPTER 3
DISCRETE COSINE TRANSFORM

3.1 Introduction

A discrete cosine transform (DCT) expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in engineering, from lossy compression of audio and images to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical in these applications: for compression, it turns out that cosine functions are much more efficient, whereas for differential equations the cosines express a particular choice of boundary conditions.

In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.

The most common variant of the discrete cosine transform is the type-II DCT, which is often called simply "the DCT"; its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data.

3.2 DCT forms

Formally, the discrete cosine transform is a linear, invertible function F : R^N -> R^N, or equivalently an invertible N x N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers x_0, ..., x_{N-1} are transformed into the N real numbers X_0, ..., X_{N-1} according to one of the following formulas.

DCT-I

    X_k = (1/2)(x_0 + (-1)^k x_{N-1}) + Σ_{n=1}^{N-2} x_n cos[ π n k / (N-1) ],  k = 0, ..., N-1

Some authors further multiply the x_0 and x_{N-1} terms by √2, and correspondingly multiply the X_0 and X_{N-1} terms by 1/√2. This makes the DCT-I matrix orthogonal, if one further multiplies by an overall scale factor of √(2/(N-1)), but breaks the direct correspondence with a real-even DFT.

The DCT-I is exactly equivalent to a DFT of 2N-2 real numbers with even symmetry. For example, a DCT-I of N = 5 real numbers abcde is exactly equivalent to a DFT of the eight real numbers abcdedcb, divided by two. Note, however, that the DCT-I is not defined for N less than 2. Thus, the DCT-I corresponds to the boundary conditions: x_n is even around n = 0 and even around n = N-1; similarly for X_k.

DCT-II

    X_k = Σ_{n=0}^{N-1} x_n cos[ π (n + 1/2) k / N ],  k = 0, ..., N-1

The DCT-II is probably the most commonly used form, and is often simply referred to as "the DCT". This transform is exactly equivalent to a DFT of 4N real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the 4N inputs y_n, where y_{2n} = 0, y_{2n+1} = x_n for 0 ≤ n < N, and y_{4N-n} = y_n for 0 < n < 2N.

Some authors further multiply the X_0 term by 1/√2 and multiply the resulting matrix by an overall scale factor of √(2/N). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. The DCT-II implies the boundary conditions: x_n is even around n = -1/2 and even around n = N-1/2; X_k is even around k = 0 and odd around k = N.

DCT-III

    X_k = (1/2) x_0 + Σ_{n=1}^{N-1} x_n cos[ π n (k + 1/2) / N ],  k = 0, ..., N-1

Because it is the inverse of DCT-II (up to a scale factor, see below), this form is sometimes simply referred to as "the inverse DCT" (IDCT). Some authors further multiply the x_0 term by √2 and multiply the resulting matrix by an overall scale factor of √(2/N), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: x_n is even around n = 0 and odd around n = N; X_k is even around k = -1/2 and even around k = N-1/2.

DCT-IV

    X_k = Σ_{n=0}^{N-1} x_n cos[ π (n + 1/2)(k + 1/2) / N ],  k = 0, ..., N-1

The DCT-IV matrix becomes orthogonal if one further multiplies by an overall scale factor of √(2/N). A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT) (Malvar, 1992). The DCT-IV implies the boundary conditions: x_n is even around n = -1/2 and odd around n = N-1/2; similarly for X_k.
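To make the DCT-II definition concrete, the following MATLAB sketch evaluates the formula above directly for a small example signal. The nested loop mirrors the definition rather than an efficient implementation, and no normalization is applied; note that MATLAB's built-in dct function uses the orthogonal scale factors discussed above, so its output differs from this form by exactly those factors.

    x = [1 2 3 4 5 6 7 8];                   % small example signal
    N = numel(x);
    X = zeros(1, N);
    for k = 0:N-1
        for n = 0:N-1
            X(k+1) = X(k+1) + x(n+1) * cos(pi * (n + 0.5) * k / N);
        end
    end
    disp(X)                                  % unnormalized DCT-II coefficients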
DCT V-VIII

DCT types I-IV are equivalent to real-even DFTs of even order, since the corresponding DFT is of length 2(N-1) (for DCT-I), 4N (for DCT-II/III), or 8N (for DCT-IV). In principle, there are actually four additional types of discrete cosine transform, corresponding essentially to real-even DFTs of logically odd order, which have factors of N ± 1/2 in the denominators of the cosine arguments.

Equivalently, DCTs of types I-IV imply boundaries that are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary. However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased complexity carries over to the DCTs as described below.

Inverse transforms

Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N-1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N, and vice versa.

As for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by √(2/N) so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal.

Multidimensional DCTs

Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension. For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2-D DCT-II is given by the formula (omitting normalization and other scale factors, as above)

    X_{k1,k2} = Σ_{n1=0}^{N1-1} Σ_{n2=0}^{N2-1} x_{n1,n2} cos[ π (n1 + 1/2) k1 / N1 ] cos[ π (n2 + 1/2) k2 / N2 ]

Two-dimensional DCT frequencies

Technically, computing a two- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order. The inverse of a multi-dimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs, e.g. the one-dimensional inverses applied along one dimension at a time in a row-column algorithm.

The usual figure accompanying this discussion shows the combination of horizontal and vertical frequencies for an 8 x 8 (N1 = N2 = 8) two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by half a cycle. For example, moving right one square from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data (8 x 8) is transformed into a linear combination of these 64 frequency squares.
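The row-column algorithm is short to write down. Here is a sketch, assuming MATLAB's dct (Signal Processing Toolbox) and dct2 (Image Processing Toolbox) are available:

    A = magic(8);                 % any real matrix, e.g. an 8-by-8 image block
    X1 = dct(dct(A).').';         % dct works down the columns; transpose to reach the rows
    X2 = dct2(A);                 % direct two-dimensional DCT-II
    disp(max(abs(X1(:) - X2(:)))) % the two agree to within rounding error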
CHAPTER 4
IMPLEMENTATION AND RESULTS

4.1 Introduction

In the preceding chapters (Chapter 2 and Chapter 3) we covered the theoretical background of Principal Component Analysis and the Discrete Cosine Transform. In this thesis work we analyze both transforms. To execute these tasks we chose a platform called MATLAB, which stands for "matrix laboratory"; it is an efficient language for digital image processing. The Image Processing Toolbox in MATLAB is a collection of MATLAB functions that extend the capability of the MATLAB environment for the solution of digital image processing problems. [13]

4.2 Practical implementation of performance analysis

As discussed earlier, we are going to perform the analysis for the two transform methods, applied to the images as,
Wednesday, April 3, 2019
Microsoft's Structure And Culture
Microsoft's Structure And Culture

This assignment covers six outcomes of the Organisations and Behaviour subject. The scenario company is Microsoft, founded in 1975, which is the worldwide leader in software, services and solutions that help people and businesses realize their full potential (Microsoft, n.d.).

The other company to compare with Microsoft is Federal Express Corporation (FedEx), the largest company providing a portfolio of transportation, e-commerce and business services under the FedEx brand. FedEx Express is an express transportation company, offering time-certain delivery within one to three business days and serving its markets. FedEx Ground Package System, Inc. (FedEx Ground) is a provider of small-package ground delivery services. FedEx Freight Inc. (FedEx Freight) is a provider of less-than-truckload (LTL) freight services. FedEx Corporate Services, Inc. (FedEx Services) provides the Company's other companies with sales, marketing, information technology, communications and back-office support (FedEx, n.d.).

This assignment explains and compares the organizational structures, cultures, leadership styles and performance of these two companies, to find out about the organizational theories that underpin the practice of management.

1.1 Compare and contrast different organizational structures and culture

1.1.1 Microsoft's structure and culture

Microsoft's Organizational Chart (The Official Board, 2012)

According to the chart above, Microsoft has a flat structure. We can see that Microsoft has five product groups: Windows and Windows Live, Server Software, Online Services, Microsoft Business, and Entertainment and Devices. Each product group, which focuses on a specific line of goods and services, has one executive who reports directly to the CEO. Each group has its own R&D, sales, and customer service staff (Daft, 2009). This structure allows larger spans of control. Microsoft also has a matrix structure which operates alongside the flat structure. The matrix structure is a structure where project teams are made up of workers with various specialisms from different functions of a business (BPP, 2004). The legal structure of Microsoft is a limited liability company, because the company went public on March 13, 1986 (Time, n.d.).

Microsoft has a task culture because it is a huge company, with 94,420 employees around the world, 56,934 of them in the USA alone (Microsoft, n.d.). It is impossible to manage a firm with that huge a number of workers through a person culture or a power culture.

The two-time award-winning journalist Kurt Eichenwald described Microsoft's work culture as "the cannibalistic culture": a management system known as stack ranking, a program that forces every unit to declare a certain percentage of employees as top performers, good performers, average, and poor, effectively crippled Microsoft's ability to innovate, leading employees to compete with each other rather than competing with other companies (Vanity Fair, 2012).
1.1.2 FedEx's structure and culture

FedEx's Organizational Chart (The Official Board, 2012)

FedEx Corporation introduced express delivery to the world in 1973 and is the world's top express delivery service. The organizational structure of FedEx is flat. According to Organizational Behavior: A Strategic Approach, FedEx Corporation adopted a multi-divisional structure (Hitt, Miller & Colella, 2005). The corporation grants significant authority to the subsidiaries. Operating independently, each subsidiary manages its own specialized network of services. FedEx employs over 280,000 people worldwide (FedEx, n.d.), so they evidently have a task culture. The culture of FedEx is also a market-oriented culture. All they care about is the customers; their culture centers on the customer. They have a strong customer-service organizational culture (McNeal, 2011).

In short, both Microsoft and FedEx Corp. have a new style of management, with a flat structure and a task culture. However, FedEx is flatter than Microsoft in organizational structure. Looking deeper, we can see differences in their cultures: one cares about money, the other cares about the customer.

1.2 Explain how the relationship between an organization's structure and culture can impact the performance of the business

1.2.1 Microsoft

Microsoft has a flat organizational structure and a task culture, which is considered the new method of management. It is believed to be the right way to manage a company. This seems to work well, as Microsoft's 2011 revenue reached $69 billion (Microsoft, 2011): they make a very large amount of money. The flat structure creates a lower hierarchy of power in Microsoft. It also allows the CEO's direct involvement, making the decision process speedy and less time-consuming. Microsoft topped the ranking of the world's 25 best multinational workplaces released by the Great Place to Work Institute (Industry Week, 2011). It is noticeable that employees enjoy their working environment at Microsoft. However, the stack-ranking program can kill Microsoft's creativity. This destructive management technique can be seen as the key problem in Microsoft's management system (Frederick Allen, 2012). It can lead to the danger of losing a big sum of money.

1.2.2 FedEx

FedEx increased revenue 12% in the February-to-May quarter and 13% in the fiscal year that ended May 31, reporting total annual revenue of $39.3 billion (William Cassidy, 2011). That is a large amount of money. The culture of FedEx influences its employees to work more effectively. It encourages them not only to work hard but also to work smart. FedEx's managers also make the right decisions, catching up with market trends and changing business needs.

1.3 Discuss the factors which influence individual behaviour at work

The factors which influence individual behaviour at work are personality, perception, attitude, ability and aptitude, conflict, stress, and change.

For the people of Microsoft, their personality is highly competitive. As discussed above, Microsoft obviously has a culture of competition, so the people who work in a competitive culture will become competitive. If they are not yet competitive, the culture will itself make them competitive. Because they are competitive, they know how to get money from customers, and they do it very well.

Microsoft had always been characterized by a culture that was extremely competitive. When the company introduced new products and sales rocketed, the people responsible for the products did not meet to celebrate. Instead, they looked for what could have been done better. Therefore, the company had always been a leading competitor, and Gates often sent out memos to remind employees about the competitive threats ahead. Gates truly drove a culture of innovation and vision (Microsoft People Problems, 2003).
Thus, people were pushed to reach for the highest standards. However, when Gates left, Steve Ballmer became the new CEO. Steve has been driving a culture of production rather than innovation (Kurt Eichenwald, 2012). For example, two ex-employees wrote in reviews on Glassdoor (a website that collects information about workplaces and companies) that stack ranking made Microsoft a less preferable place to work and a higher-stress workplace (Julie Bort, 2012).

One more thing is that Microsoft has had discrimination between black and white workers in the corporation. In 2001, a group of current and former employees accused Microsoft of racism. The seven African American plaintiffs demanded $5 billion in compensation, claiming they were paid less than their fellow employees and repeatedly passed over for promotions given to less-qualified white workers. The workers also claimed to have been subjected to racial harassment and retaliation when they complained. Willie Gary, a lawyer, pointed to 1999 government statistics showing that only 2.6% of Microsoft's 21,429 employees, and only 1.6% of the company's 5,155 managers, were black (BBC, 2001).

2.1 Compare the effectiveness of different leadership styles in different organizations

2.1.1 Microsoft

Bill Gates's leadership styles are the participative style and the authoritative style. The reason is that Gates involved his subordinates in decision making, so they were good at delegating. He is a flexible person, and he recognized that his role was to be the visionary of the company. Whenever needed, he brought in professional managers for managing. Gates is a strong and energizing person. His enthusiasm, hard-working nature and judgment reflect his personality. His motivating power and his habit of involving his friends in working with him became the success of Microsoft (Dip Kumar Dey, n.d.). Besides, Gates paid special attention to recruiting and retaining the best talent. He believed that the recruitment of talented software engineers was one of the most critical elements in the software industry. Gates looked for recruits who had the capacity to grasp new knowledge quickly and a deep familiarity with programming structures. Although a great number of potential recruits applied for jobs at Microsoft, Gates assumed that the best talent would never apply directly. Consequently, Microsoft's HR managers had to hunt for the best talent and offer them a job. Giving autonomy to his managers, Gates delegated authority to managers to run their independent departments. Gates also showed a little of the autocratic style, because control is basic to his nature and his management practice. He had an obsession with detail and with checking up. He tried to monopolize the World Wide Web software market and had legal problems with the Department of Justice. Also, he did not like complaints (Dhananjay Kumar, n.d.). Microsoft used these styles of leadership very well, as the company has performed greatly, with a net income of $14.569 billion (2009).

2.1.2 FedEx

FedEx has a complex leadership style, combining the affiliative style, the participative style and the democratic style. Because FedEx has a flat structure, the managers give their subordinates authority, so they are good at delegating. Also, to be able to give subordinates authority, they must trust their workers. Workers at FedEx are smart people, so they do not want to be told what to do and how to do it.
FedEx Corp., under the guidance of CEO Fred Smith, has been named the Top Corporation of the Decade by Fortune magazine (Dumaine, 2004). Smith was determined to make employees an integral part of the decision-making process, due to his belief that when people are placed first they will provide the highest possible service, and profits will follow (FedEx, n.d.).

Microsoft and FedEx have different leadership styles, so they apply them differently to create different working environments for their workers. However, they both earn a huge amount of profit and manage their companies well. FedEx seems to apply its leadership style to its employees in a better way than Microsoft does.

2.2 Explain how organisational theory underpins the practice of management

2.2.1 Theory X and Y

It can be clearly seen that Microsoft and FedEx use Theory Y, because both companies care about how their employees feel. Furthermore, workers at Microsoft and FedEx are smart people, so they do not want to be told things. Workers at Microsoft and FedEx are very ambitious, passionate and committed to their work. The workload at Microsoft brings a lot of pressure, but there are still many people who wish to work at Microsoft, because the salary it pays is high: $87,965 for normal employees, and much higher for managers, engineers or directors, who all have a regular salary of over $100,000 each (Salary List, 2011). Theory Y is about trust. Both Microsoft and FedEx have a flat structure, and authority is given down the chain. Therefore, they must trust the workers. This creates not only the trust of managers in workers, but also the trust of workers in managers. This theory helps to build a strong relationship between workers and managers, and that leads to a strong organization. Microsoft and FedEx are obviously both strong in structure, culture and finances.

2.2.2 Scientific management

FedEx doesn't apply this theory in its management, because, under this approach, each job is broken down into its smallest and simplest component parts, or motions (BPP, 2004). Although the theory improves productivity, it creates de-humanization in the organization. Moreover, everyone at FedEx is smart and talented. Therefore, it is wasteful to hire smart people and tell them to just do the same job day after day. Scientific management doesn't work in an organization that needs innovation and ideas, like FedEx.

2.2.3 Bureaucracy

During the decade dominated by CEO Steve Ballmer, Microsoft applied this theory in its management. For this reason, Microsoft was said to offer a toxic environment and bad managers to anyone who wanted to join the corporation. Current and former employees at Microsoft were seriously affected by the bureaucracy and management of the company for years (Matt Rosoff, 2011). According to one article, employees at Microsoft were more concerned with impressing bosses than with creating things (Rebecca Greenfield, 2012). They have no incentive to innovate. Nothing has changed at all since the departure of former CEO Bill Gates. It seems that Steve Ballmer applied an inefficient management system, and things have not worked out.

2.3 Evaluate the different approaches to management used by different organisations

2.3.1 Human relations approach

Both Microsoft and FedEx use this management approach to manage their organizations. As analyzed above, Microsoft cares about its employees in the wrong way. Steve Ballmer applied a management system which deteriorates people's creativity, making them bored with their work.
Now the position of dominant tech company belongs to Apple.

FedEx cares about its employees in a different way. They give employees passion and convenient facilities that allow workers to develop further. Both Microsoft and FedEx know that how workers feel affects how well they work. However, this method is about what the workers think; it does not matter how the leader actually thinks about the workers. What matters is that the leader can create an image in the workers' minds that they are what the leader wants them to believe they are.

2.3.2 The contingency approach

"It all depends" is how we can define this theory. Managers at both Microsoft and FedEx have to find out what is the suitable way to manage, not the one right way to manage. This is considered the new management style. Microsoft and FedEx are new organizations: everything is international, everything is new, everything is faster and everything is turbulent (BPP, 2004). This managing method fits these two organizations because organizations change all the time.

This method worked very well for FedEx, as the leader of FedEx led the company through the economic crisis of 2008 to survive (The New York Times, 2012). In total, the contingency approach is the correct choice for their management.

CONCLUSION

How an organization achieves its goals and becomes prosperous is the managers' and leaders' concern. Therefore, leaders and managers should build good relationships with their subordinates, as well as a good organizational structure, a good culture and a good leadership style.
What Is A Topographic Map English Language Essay
What Is A Topographic Map English Language Essay

A topographic map is a map that shows topography and features found on the earth's surface. Like any map, it uses symbols to represent these features. Let's look at a section of a topographic map showing the area around Spruce Knob in West Virginia. Spruce Knob is the highest point in West Virginia.

This section of a topographic map illustrates many of the common symbols used on topographic maps. The map is reproduced below with many of these symbols labeled. Some of the more common and important topographic map symbols have been pointed out by the purple arrows. More details are given in the text below.

MAP SYMBOLS

First let's note that map symbols are color coded. Symbols in green indicate vegetation, symbols in blue represent water, brown is used for topographic symbols, and man-made features are shown in black or red. Let's look at the symbols labeled in the map above.

Contour Lines

Contour lines are lines that indicate elevation. These are the lines that show the topography on the map. They are discussed in more detail in the next section. Contour lines are shown in brown. Two types of contour lines are shown: regular contour lines are the thinner brown lines, and index contour lines are the thicker brown lines. The numbers written in brown on the contour lines indicate the elevation of the line. For this map, elevation is in feet above sea level.

Forests and Clearings

Forested areas are represented by areas shaded green; for Spruce Knob this means most of the area. Areas that are not forest are left unshaded (white). Note that not all topographic maps show forests. Also note that this information is not always up to date or accurate; I have struggled to walk across densely wooded areas in places that have been mapped as clearings.

Streams

Streams and other water features are shown in blue.

Roads and Trails

Man-made features are shown in black or red. Trails are represented as thin single dashed lines. Roads are represented as double lines or thicker red lines. A series of symbols is used to indicate road quality, from double dashed lines for dirt roads to thick red lines for major highways. In the example of the Spruce Knob area we have two types of road: the thin double black lines and the thin dashed double lines.

Buildings

Like other man-made features, buildings are shown in black. Solid squares normally indicate buildings that would be inhabited by people (i.e. a house); hollow shapes normally indicate uninhabited buildings (for example, a barn). (Note: this may not hold for maps in the future, because it is not possible to determine what a building is used for from the aerial photos used to make the maps.) Other man-made features shown in black on our example include the lookout tower at the summit of Spruce Knob and the radio tower.
Though not seen on our map, larger buildings, like factories, are shown by larger shapes that outline the shape of the building, and cities with closely spaced houses are shaded pink instead of showing individual houses.

Boundaries

Even though these are not physical features you can see on the ground, boundaries are shown on topographic maps by black or red lines. Boundaries are usually represented by broken lines (combinations of dots and dashes of different sizes). Different patterns are used for different types of boundaries (i.e., state, county, city, etc.). On our example, the boundary that is shown marks the edge of a National Forest.

Bench Marks

Bench marks indicate places where the elevation has actually been surveyed. These locations are indicated on the map by a triangle if a marker has been placed in the ground, or an x if no marker was left behind. Near either symbol are the letters BM and a number which represents the elevation of that particular location. Bench marks are shown in black on topographic maps.

CONTOUR LINES

Contour lines are lines drawn on a map connecting points of equal elevation. If you walk along a contour line you neither gain nor lose elevation. Imagine walking along a beach exactly where the water meets the land (ignoring tides and waves for this example). The water surface marks an elevation we call sea level, or zero. As you walk along the shore your elevation will remain the same: you will be following a contour line. If you stray from the shoreline and start walking into the ocean, the elevation of the ground (in this case the seafloor) is below sea level. If you stray in the other direction and walk up the beach, your elevation will be above sea level (see diagram at right).

The contour line represented by the shoreline separates areas that have elevations above sea level from those that have elevations below sea level. We refer to contour lines in terms of their elevation above or below sea level. In this example the shoreline would be the zero contour line (it could be 0 ft, 0 m, or something else, depending on the units we were using for elevation).

Contour lines are useful because they allow us to show the shape of the land surface (topography) on a map. The two diagrams below illustrate the same island. The diagram on the left is a view from the side (cross-profile view) such as you would see from a ship offshore. The diagram at right is a view from above (map view) such as you would see from an airplane flying over the island.

The shape of the island is shown by the location of the shoreline on the map. Remember, this shoreline is a contour line. It separates areas that are above sea level from those that are below sea level. The shoreline itself is right at zero, so we will call it the 0 ft contour line (we could use m, cm, in, or any other measurement for elevation).

The shape of the island is more complicated than the outline of the shoreline shown on the map above. From the profile it is clear that the island's topography varies (that is, some parts are higher than others). This is not obvious on a map with only one contour line. But contour lines can have elevations other than sea level. We can picture this by pretending that we can change the depth of the ocean. The diagram below shows an island that is getting flooded as we raise the water level 10 ft above the original sea level.

The new island is plainly smaller than the original island.
All of the land that was less than 10 ft above the original sea level is now under water. Only land where the elevation was greater than 10 ft above the original sea level remains out of the water. The new shoreline of the island is a contour line, because all of the points along this line have the same elevation, but the elevation of this contour line is 10 ft above the elevation of the original shoreline. We repeat this process in the two diagrams below. By raising the water level to 20 ft and then 30 ft above the original sea level, we can find the locations of the 20 ft and 30 ft contour lines. Notice that our island gets smaller and smaller.

Fortunately we do not really have to flood the world to make contour lines. Unlike shorelines, contour lines are imaginary; they only exist on maps. If we take each of the shorelines from the maps above and draw them on the same map, we will get a topographic map (see map below). Taken all together, the contour lines supply us with much information on the topography of the island. From the map (and the profile) we can see that this island has two high points. The highest point is above 30 ft in elevation (inside the 30 ft contour line). The second high point is above 20 ft in elevation, but does not reach 30 ft. These high points are at the ends of a ridge that runs the length of the island, where elevations are above 10 ft. Lower elevations, between the 10 ft contour and sea level, surround this ridge.

With practice we can picture topography just by looking at the map, even without the cross profile. That is the power of topographic maps.

READING ELEVATIONS

A common use for a topographic map is to determine the elevation at a specified locality. The map below is an enlargement of the map of the island from above. Each of the letters from A to E represents a location for which we wish to determine the elevation. Use the map and determine (or estimate) the elevation of each of the five points. (Assume elevations are given in feet.)

Point A = 0 ft. Point A sits right on the 0 ft contour line. Since all points on this line have an elevation of 0 ft, the elevation of point A is zero.

Point B = 10 ft. Point B sits right on the 10 ft contour line. Since all points on this line have an elevation of 10 ft, the elevation of point B is 10 ft.

Point C ≈ 15 ft. Point C does not sit directly on a contour line, so we cannot determine the elevation precisely. We do know that point C is between the 10 ft and 20 ft contour lines, so its elevation must be greater than 10 ft and less than 20 ft. Because point C is midway between these contour lines, we can estimate the elevation is about 15 feet. (Note: this assumes that the slope is constant between the two contour lines, which may not be the case.)

Point D ≈ 25 ft. We are even less sure of the elevation of point D than of point C. Point D is inside the 20 ft contour line, indicating that its elevation is above 20 ft. Its elevation has to be less than 30 ft, because there is no 30 ft contour line shown. But how much less? There is no way to tell. The elevation could be 21 ft, or it could be 29 ft. There is no way to tell from the map. (An eight-foot difference in elevation doesn't seem like much, but remember these numbers are just an example. If the contour lines were spaced at 100 ft intervals instead of 10 ft, the difference would be a more significant 80 ft.)

Point E ≈ 8 ft.
Just as with point C above, we need to estimate the elevation of point E, which lies somewhere between the 0 ft and 10 ft contour lines. Because this point is closer to the 10 ft line than the 0 ft line, we estimate an elevation closer to 10. In this case 8 ft. seems reasonable. Again, this estimate makes the assumption of a constant slope between these two contour lines.

CONTOUR INTERVAL and INDEX CONTOURS

Contour Intervals

Contour lines can be drawn for any elevation, but to simplify things only lines for certain elevations are drawn on a topographic map. These elevations are chosen to be evenly spaced vertically. This vertical spacing is referred to as the contour interval. For example, the maps above used a 10 ft contour interval: each contour line was a multiple of 10 ft. (i.e. 0, 10, 20, 30). Other common intervals seen on topographic maps are 20 ft (0, 20, 40, 60, etc.), 40 ft (0, 40, 80, 120, etc.), 80 ft (0, 80, 160, 240, etc.), and 100 ft (0, 100, 200, 300, etc.). The contour interval chosen for a map depends on the topography in the mapped area. In areas with high relief the contour interval is usually larger to prevent the map from having too many contour lines, which would make the map difficult to read.

The contour interval is constant for each map. It will be noted in the margin of the map. You can also determine the contour interval by looking at how many contour lines are between labeled contours.

Index Contours

Unlike the simple topographic map used above, real topographic maps have many contour lines. It is not possible to label the elevation of each contour line. To make the map easier to read, every fifth contour line vertically is an index contour. Index contours are shown by darker brown lines on the map. These are the contour lines that are usually labeled.

The example at right is a section of a topographic map. The brown lines are the contour lines. The thin lines are the normal contours; the thick brown lines are the index contours. Notice that elevations are only marked on the thick lines.

Because we only have a piece of the topographic map, we cannot look at the margin to find the contour interval. But since we know the elevation of the two index contours, we can calculate the interval ourselves. The difference in elevation between the two index contours (800 − 700) is 100. We cross five lines as we go from the 700 line to the 800 line (note: we don't include the line we start on, but we do include the line we finish on). Therefore, if we divide the elevation difference (100) by the number of lines (5), we get the contour interval. In this case it is 20. We can check ourselves by counting up by 20 for each contour from the 700 line. We should reach 800 when we cross the 800 line.

One piece of important information we cannot determine from the contour lines on this map is the units of elevation. Is the elevation in feet, meters, or something else? There is a big difference between an elevation change of 100 ft. and 100 m (328 ft). The units of the contour lines can be found in the margin of the map. Most topographic maps in the United States use feet for elevation, but it is important to check because some do use meters.
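To make the arithmetic concrete, here is a minimal sketch in Python of the contour-interval calculation just described. The function name and the worked values (index contours at 700 and 800, five lines crossed) are illustrative, not part of any standard tool.

def contour_interval(lower_index_elev, upper_index_elev, lines_crossed):
    """Contour interval from two labeled index contours.

    lines_crossed counts every contour line from the lower index
    contour to the upper one, excluding the line you start on but
    including the line you finish on.
    """
    return (upper_index_elev - lower_index_elev) / lines_crossed

# The example from the map piece above: index contours at 700 and 800,
# with five lines crossed between them.
print(contour_interval(700, 800, 5))  # 20.0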
Once we know how to determine the elevation of the unmarked contour lines, we should be able to determine, or at least estimate, the elevation of any point on the map. Using the map below, estimate the elevation of the points marked with letters.

Point A = 700. An easy one. Just follow along the index contour from point A until you find a marked elevation. On real maps this may not be this easy. You may have to follow the index contour a long distance to find a label.

Point B = 740. This contour line is not labeled. But we can see it is between the 700 and 800 contour lines. From above we know the contour interval is 20, so if we count up two contour lines (40) from 700 we reach 740.

Point C ≈ 770. Point C is not directly on a contour line. But by counting up from 700 we can see it lies between the 760 and 780 contour lines. Because it is in the middle of the two, we can estimate its elevation as 770.

Point D = 820. Point D is outside the interval between the two labeled contours. While it may seem obvious that it is 20 above the 800 contour, how do we know the slope hasn't changed and the elevation has started back down? We can tell because if the slope started back down we would need to repeat the 800 contour. Because the contour under point D is not an index contour it cannot be the 800 contour, so it must be 820.

DETERMINING CONTOUR INTERVALS

Most contour lines on topographic maps are not labeled with elevations. Instead, the reader of the map needs to be able to figure out the elevation by using the labeled contour lines and the contour interval (see previous page for explanation). On most maps, determining the contour interval is easy: just look in the margin of the map and find where the contour interval is printed (i.e. Contour Interval 20 ft).

For the maps on this web site, however, the contour interval is not listed because we show only parts of topographic maps, not the whole map which would include the margin notes. However, we usually don't need to be given the contour interval. We can calculate it from the labeled contours on the map, as is done below.

This method works if we don't have any topographic complications: areas where the elevation is not consistently increasing or consistently decreasing. With practice these areas can usually be easily recognized. Also, this method does not tell us the units for the contour interval. In the United States most topographic maps, but not all, use feet for elevation; however, it is best to check the margin of the map to be sure.

READING ELEVATIONS

Let's go back to the Spruce Knob area and practice reading elevations. On the map below are 10 squares labeled A through J. Estimate the elevation for the point marked by each square (make sure to use the point under the square, not under the letter). Compare your answers to the answers below. Recall that we determined the contour interval on the previous page.

ELEVATION of Points

A. 4400 ft. Point A sits right on a labeled index contour. Just follow along the contour line until you reach the label.

B. 4720 ft. Point B sits on a contour line, but it is not an index contour and its elevation is not labeled. First let's look for a nearby index contour. There is one to the south and east of point B. This contour is labeled as 4600 ft. Next we need to determine if point B is above or below this index contour. Notice that if we keep going to the southeast we find contour lines of lower elevations (i.e. 3800 ft.). This means as we move away from the 4600 ft. contour line toward point B we are going uphill, so point B is above 4600 ft. Counting the contour lines from 4600 ft to point B, there are three. Each contour line is 40 ft. (from our previous discussion of the contour interval), so point B is 120 ft. above 4600 ft; that is, it is 4720 ft.
C. 4236 ft. Point C sits right on a labeled bench mark, so its elevation is already written on the map.

D. 4360 ft. Point D is on an unlabeled contour line. From our discussion of point B above, you can see that point D is on the slope below Spruce Knob. Just above point D is an index contour. If we trace along this contour line, we see its elevation is 4400 ft. Since point D is the next contour line downhill, it is 40 ft lower.

E. 3800 ft. Point E is on an index contour. Follow along this contour line until you come to the 3800 label.

F. ≈ 4780 ft. Point F does not sit on a contour line, so we can only estimate its elevation. The point is circled by several contour lines, indicating it is a hill top (see the later discussion of depression contours to see why we know this is a hill). First let's figure out the elevation of the contour line that circles point F. Starting from the nearest index contour line (4600 ft), we count up by 40 for the four contour lines. This gives us 4760 ft (4600 ft + 40 ft × 4). Because point F is inside this contour line it must have an elevation above 4760 ft., but its elevation must be less than 4800 ft, otherwise there would be a 4800 contour line, which is not there. We don't really know the elevation, just that it is between 4760 ft. and 4800 ft.

G. 4480 ft. In order to determine the elevation of point G, we first must recognize it is on the western slope of Spruce Knob. Looking at the index contours, we see that point G is between the 4400 ft and 4600 ft contours. (It is a good idea to check the elevations by counting by 40 for each of the contour lines between 4400 and 4600. If the numbers do not work out, it may mean that the contour lines, and therefore the topography, are more complicated than a simple slope. That is not the case here.) Counting up two contour lines from 4400 ft. gives our elevation of 4480 ft.

H. ≈ 4100 ft. Point H is circled by a contour line, indicating it is the top of a small hill. Its elevation is determined the same way we determined the elevation of point F. Find the index contour below point H (4000 ft) and count up for the two contour lines (4080 ft). Point H is above this elevation but below 4120 ft, because that contour line is not present.

I. ≈ 3980 ft. Point I is also not on a contour line. It is also not on the top of a hill, because a contour line does not encircle it. Instead it is in between two contour lines on the side of a hill. One of the contour lines is the 4000 ft index contour. The other contour is the 3960 ft contour (40 ft lower; you can tell it is lower because you are moving toward the stream, which is in the bottom of the valley). The elevation of point I is between 3960 ft and 4000 ft. Since point I is midway between these two contours, we can estimate its elevation as midway between 3960 and 4000.

J. ≈ 3820 ft. The elevation of point J is found the same way as the elevation of point I.
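As a rough illustration, the two estimation steps used repeatedly above (counting contour lines up or down from a labeled index contour, and taking the midpoint between two bounding contours) can be scripted. This is a minimal sketch; the function names are illustrative, and the values come from the worked answers above.

def elevation_by_counting(index_elev, lines_up, interval):
    """Elevation of an unlabeled contour reached by counting
    lines_up contour lines uphill from a labeled index contour
    (use a negative lines_up to count downhill)."""
    return index_elev + lines_up * interval

def midpoint_estimate(lower_contour, upper_contour):
    """Estimate for a point midway between two contours.
    Assumes a constant slope between the two lines."""
    return (lower_contour + upper_contour) / 2

# Point B above: three lines uphill from the 4600 ft index contour.
print(elevation_by_counting(4600, 3, 40))  # 4720
# Point I above: midway between the 3960 ft and 4000 ft contours.
print(midpoint_estimate(3960, 4000))       # 3980.0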
Gradient (Slope)

Topographic maps are not just used for determining elevation; they can also be used to help visualize topography. The key is to study the pattern of the contour lines, not just the elevation they represent. One of the most basic topographic observations that can be made is the gradient (or slope) of the ground surface. High (or steep) gradients occur in areas where there is a large change in elevation over a short distance. Low (or gentle) gradients occur where there is little change in elevation over the same distance. Gradients are obviously relative. What would be considered steep in some areas (like Ohio) might be considered gentle in others (like Montana). However, we can still compare gradients between different parts of a map.

On a topographic map the amount of elevation change is related to the number of contour lines. Using the same contour interval, more contour lines over the same distance indicates a steeper slope. As a result, areas of a map where the contour lines are close together indicate steeper slopes. Areas with widely spaced contour lines are gentle slopes. The map below shows examples of areas with steep and gentle gradients. Note the difference in contour line spacing between the two areas.

Compare the slope of the west side of Spruce Knob with the slope of the east side. Which side is steeper? The east side. Notice the spacing between the contour lines. Contour lines on the east side of Spruce Knob are closer together than the contour lines on the west side, indicating steeper slopes.
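The relationship between contour spacing and steepness can be put in numbers: the average gradient is the elevation change (lines crossed times the contour interval) divided by the horizontal distance. A minimal sketch follows; the distances and line counts are invented for illustration.

def gradient(lines_crossed, contour_interval, horizontal_distance):
    """Average gradient between two points: elevation change
    divided by horizontal distance (in the same length units)."""
    return lines_crossed * contour_interval / horizontal_distance

# Two hypothetical walks of the same 1000 ft map distance on a map
# with a 40 ft contour interval: crossing 8 contour lines is four
# times as steep as crossing 2.
print(gradient(8, 40, 1000))  # 0.32 (steep)
print(gradient(2, 40, 1000))  # 0.08 (gentle)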
Map Scale

Topographic maps are drawn to scale. This means that distances on a map are proportional to distances on the ground. For example, if two cities 20 miles apart are shown 2 inches apart on a map, then any other locations that are two inches apart on the map are also 20 miles apart. This proportion, the map scale, is constant for the map, so it holds for any points on the map. In our example the proportion between equivalent distances on the map and on the ground is expressed as a scale of 1 inch = 10 miles; that is, 1 inch on the map is equal to 10 miles on the ground. Map scales can be expressed in three forms. We will look at all three.

VERBAL SCALE

The simplest form of map scale is a VERBAL SCALE. A verbal scale just states what distance on a map is equal to what distance on the ground, i.e. 1 inch = 10 miles from our example above. Though verbal scales are easy to understand, you usually will not find them printed on topographic maps. Instead, our second type of scale is used.

FRACTIONAL SCALE

Fractional scales are written as fractions (1/62500) or as ratios (1:62500). Unlike verbal scales, fractional scales do not have units. Instead, it is up to the map reader to provide his/her own units. Allowing the reader of the map to choose his/her own units provides more flexibility, but it also requires a little more work. Basically, the fractional scale needs to be turned into a verbal scale to make it useful.

First let's look at what a fractional scale means. A fractional scale is just the ratio of map distance to the equivalent distance on the ground, using the same units for both. It is very important to remember when we start changing a fractional scale to a verbal scale that both map and ground units start the same. The smaller number of the fractional scale is the distance on the map. The larger number in the scale is the distance on the ground.

So if we take our example scale (1:62500), we can choose the units we want to measure distance in. Let's choose inches. We can rewrite our fractional scale as a verbal scale: 1 inch on the map = 62500 inches on the ground.

We can do the same thing with any unit of length. Some examples of verbal scales produced using various units from a 1:62500 fractional scale are given in the table:

UNITS    VERBAL SCALE
Inches   1 inch on the map = 62500 inches on the ground
Feet     1 foot on the map = 62500 feet on the ground
cm       1 cm on the map = 62500 cm on the ground
m        1 m on the map = 62500 m on the ground

Notice the pattern. The numbers are the same; only the units are changed. Note that the same units are used on both sides of each verbal scale.

While these verbal scales are perfectly accurate, they are not very convenient. While we may want to measure distance on a map in inches, we rarely want to know the distance on the ground in inches. If someone asks you the distance from Cleveland to Columbus, they do not want the answer in inches. Instead we need to convert our verbal scale into more useful units.

Let's take our example (1 inch on the map = 62500 inches on the ground). Measuring map distance in inches is OK, but we need a better unit for measuring distance on the ground. Let's change 62500 inches into the equivalent in feet (I choose feet because I remember that there are 12 inches in 1 foot). If we multiply 62500 inches by the fraction (1 ft / 12 in), inches in the numerator and denominator cancel, leaving an answer in feet. Remember, since 1 ft = 12 inches, multiplying by (1 ft / 12 in) is the same as multiplying by 1. The result of this multiplication gives:

62500 inches × (1 ft / 12 in) = 5208.3 ft

So we can rewrite our verbal scale as 1 inch on the map = 5208.3 feet on the ground.

This is also a perfectly valid verbal scale, but what if we wanted to know the distance in miles instead of feet? We just need to change 5208.3 feet into miles (we could change 62500 inches into miles, but I never remember how many inches are in 1 mile). Knowing that there are 5280 feet in a mile:

5208.3 ft × (1 mi / 5280 ft) = 0.986 mi

So our verbal scale would be 1 inch on the map = 0.986 miles on the ground. For most practical purposes we can round this off to 1 inch on the map ≈ 1 mile on the ground, making this scale much easier to deal with.

We can do the same type of conversions using metric units. One of the ways to express a fractional scale of 1:62500 as a verbal scale using metric units is 1 cm on the map = 62500 cm on the ground (see table above). As with inches, we really do not want ground distances in centimeters. Instead we can convert them into more convenient units.

Let's convert our ground distance from centimeters into meters. Recall that there are 100 cm in a meter. So:

62500 cm × (1 m / 100 cm) = 625 m

So we can write a verbal scale of 1 cm on the map = 625 m on the ground.

What if we want our distance in kilometers (km)? We just change 625 m into km by multiplying by (1 km / 1000 m). The result is a verbal scale of 1 cm on the map = 0.625 km on the ground.

So for any fractional scale we can choose the same units to assign to both sides and then convert those units as we see fit to produce a verbal scale. Given all of the possible map scales and all of the possible combinations of units that can be used, it may seem that scales on topographic maps are very complicated. In fact, there are only a few scales commonly used, and each is chosen to allow at least one simple verbal scale. The most common fractional scales on United States topographic maps and equivalent verbal scales are given in the table below.

FRACTIONAL SCALE    SIMPLE VERBAL SCALE
1:24000             1 in = 2000 ft
1:62500             1 in ≈ 1 mi
1:100000            1 cm = 1 km
1:125000            1 in ≈ 2 mi
1:250000            1 in ≈ 4 mi

After all this, why would anyone in their right mind want to deal with fractional scales? Well, first, as the table above shows, it's not that bad, and second, fractional scales allow us to get the most precise measurements off a topographic map. If we are not that concerned about being precise, we can use the third type of scale, discussed below.
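The fractional-to-verbal conversions worked out above are mechanical enough to script. This is a minimal sketch; the function name and unit tables are illustrative, not a standard library.

# Inches (or centimeters) per target unit, used to convert a ground
# distance measured in map units into a more convenient unit.
INCHES_PER = {"in": 1, "ft": 12, "mi": 12 * 5280}
CM_PER = {"cm": 1, "m": 100, "km": 100 * 1000}

def verbal_scale(denominator, table, unit):
    """Ground distance per 1 map unit for a 1:denominator scale."""
    return denominator / table[unit]

# 1:62500, with one inch on the map expressed in feet and in miles.
print(verbal_scale(62500, INCHES_PER, "ft"))  # 5208.33... ft
print(verbal_scale(62500, INCHES_PER, "mi"))  # 0.986... mi
# 1:62500, with one cm on the map expressed in meters and kilometers.
print(verbal_scale(62500, CM_PER, "m"))       # 625.0 m
print(verbal_scale(62500, CM_PER, "km"))      # 0.625 km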
BAR SCALE

A bar scale is just a line drawn on a map of known ground length. There are usually distance marks along the line. Bar scales allow for quick visual estimation of distances. If more precision is needed, just lay the edge of a piece of paper between points on the map you want to know the distance between and mark the points. Shift the paper edge to the bar scale and use the scale like a ruler to measure the map distance.

Bar scales are easy to use, but there is one caution. Look at the typical bar scale drawn below. Note that the left end of the bar is not zero. The total length of this bar is FIVE miles, not four miles. A common error with bar scales is to treat the left end of the line as zero and the whole bar as four miles long. Pay attention to where the zero point on the bar actually is when you measure with a bar scale.

In addition to their simplicity of use, there is one other advantage of a bar scale. If a map is being enlarged or reduced, a bar scale will remain valid because it is enlarged or reduced by the same amount. Fractional and verbal scales will not be valid (unless they are adjusted for the enlargement or reduction, more fun calculations we will not worry about). This is an issue with the maps you are looking at on this web site. The actual scale of the map will vary depending on your computer monitor and its settings. For the maps on this site, only bar scales are included, since the size of the bar will also change with the size of the map.

Latitude and Longitude

It is important when using topographic maps to have some way to express location. You may want to tell someone where you are (i.e. help, we are sinking at this location), or where to go (meet me at this location), or even just what map to look at (look at the map showing this location). In each case you need to be able to express your location as precisely as possible.

There are many systems for expressing location. We will start by looking at one you are already familiar with: latitude and longitude.

Latitude and longitude lines form a grid on the earth's surface. Latitude lines run east to west; longitude lines run north to south. Latitude lines run parallel to the equator and measure the distance north or south of the equator. Values for latitude range from 0° at the equator to 90°N or 90°S at the poles. Longitude lines run parallel to the Prime Meridian (arbitrarily set to run through Greenwich, England) and measure distance east and west of this line. Values of longitude range from zero degrees at the Prime Meridian to 180°E or 180°W.

The basic unit of latitude and longitude is the degree (°), but degrees are a large unit, so we often have to deal with subdivisions of a degree. Sometimes we just use a decimal point, such as 35.789°N. This format is referred to as decimal degrees. Decimal degrees are often found as an option on Global Positioning Systems (GPS) or with online topographic maps, but decimal degrees are not used on printed maps. On these topographic maps the latitude and longitude units are expressed in degrees, minutes, and seconds. Each degree is subdivided into 60 minutes (′). Each minute is divided into 60 seconds (″). Note the similarity to units of time, which makes these relationships easy to remember. If we are interested in a general location we may just use degrees. For more precision we specify minutes, or even seconds. Note that we always need to specify the larger unit. You can't specify your latitude or longitude with just minutes or seconds. A coordinate such as 25′ is meaningless unless the degrees are also given, such as 45° 25′.
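Converting between the two formats is simple arithmetic. Here is a minimal sketch of the degrees-minutes-seconds to decimal-degrees conversion; the function name and the sample coordinates are illustrative.

def dms_to_decimal(degrees, minutes=0, seconds=0):
    """Convert degrees, minutes, seconds to decimal degrees.
    There are 60 minutes in a degree and 60 seconds in a minute."""
    return degrees + minutes / 60 + seconds / 3600

# 45° 25′ expressed in decimal degrees.
print(dms_to_decimal(45, 25))        # 45.4166...
# 35.789° rebuilt from 35° 47′ 20.4″.
print(dms_to_decimal(35, 47, 20.4))  # 35.789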
The area covered by a quadrangle depends on the spacing of the latitude and longitude lines used in the grid. For maps of roughly the same size, closer spaced lines produce maps that cover less area but show more detail. Lines that are spaced further apart produce maps that cover much larger areas but are not as detailed. Quadrangles are often
Tuesday, April 2, 2019
Factors affecting customer perception
Factors affecting customer perception

CHAPTER 1: INTRODUCTION

Introduction

This is a report on the study of the factors affecting customer perception in choosing a mobile service provider. The use of customer perception is to let the company figure out what its customers think. Customers constantly evaluate the perceived benefits before they decide to purchase a particular product. They also consider costs of usage, the lost opportunity to use other offerings, potential switching costs, etc. Consumers always weigh these added benefits when making a purchase decision. Therefore, it is important for a company to understand the customers' needs when marketing to them.

Recently, the hottest topic in Malaysia's mobile industry, Mobile Number Portability (MNP), is being discussed by everyone. In simple terms, Mobile Number Portability (MNP) is taking our mobile number from one mobile phone network to another. It enables us to keep our existing mobile phone numbers when changing from one mobile network operator to another. This removed one of the major restrictions on changing mobile network operators and allows users to freely select from among the mobile network operators on offer. In Malaysia, there are four main mobile service providers: Celcom, Maxis, DiGi and U Mobile. A desire for cheaper calls is the reason most consumers cite for possibly changing to a new network. Everybody looking to change networks gives a heavy weighting to four major factors: cost, coverage, technology and service options. This indicates that service providers need to take a multidimensional approach to managing their collective customer bases.

MNP will allow further flexibility in the mobile phone market, as a result of which mobile network operators will be subject to more competition. This will cause mobile network operators to differentiate their features in various policies such as fees and services. We have seen some of the mobile network operators introducing measures such as fixed prices for calls to users of other mobile network operators.

Five factors have been identified as influencing customers' selection of a particular telecommunication service provider: peer influence, product quality, customer service quality, promotion and network coverage. Consumers are getting the benefits of this fierce competition among the telecommunication service providers in Malaysia, because these companies will keep improving and offering more attractive promotions in order to retain existing customers and attract new ones.

Background of the research

Malaysia's telecommunication infrastructure market was opened in 1989 when a second mobile operator, Celcom, launched service. From 1993 to 1995, the market was further liberalized when three additional companies were granted various operating licenses (fixed, long distance, mobile cellular), allowing them to compete as full service operators. The telecommunication companies are competing among each other, creating a competitive environment in the telco industry. Several of the companies have merged with other big companies, and some of them went bankrupt.
Today, four companies make up the major telecommunication market segment. The companies are DiGi, Maxis, Celcom and U Mobile.

Celcom (Malaysia) Berhad is the oldest mobile telecommunications company in Malaysia. It was established in 1988, and Celcom transformed itself into the market leader by offering quality services to customers. It continues to spread its wings and is undeterred by the dynamic nature of the mobile communications industry. Currently, Celcom offers its mobile prepaid and postpaid services under the access codes 019 and 013, serving a combined customer base in excess of 5 million with network coverage spanning over 95 per cent of the populated areas in the country. Furthermore, businessmen were the major users of the Celcom service due to its stable network coverage.

According to the Maxis website, Maxis Mobile Sdn Bhd started operations in 1995. Maxis has steadily built up its position to become a leading telecommunications service provider in Malaysia by focusing on its core business, adding 600 base stations during 2003. In 2003, the company acquired an additional 25 MHz of spectrum in the 1800 MHz band along with a 3G license, and the 3G service launched by 2006. Being the leader within the telecommunication industry, the corporation is the fifth largest public company in Malaysia, with total subscribers of 6.4 million, providing a wide range of sophisticated mobile, fixed and international network services to its customers. Maxis Mobile Sdn Bhd first applied the Cardax System (CC Unix) in 1998 when it moved its operations into Menara Maxis, owned and managed by Tanjong City Centre Property Management (Tanjong Plc Group of Companies).

DiGi Telecommunications, the smallest of the major mobile service providers, is majority controlled by Telenor and is holding its own in the face of its two bigger rivals. DiGi is supported by Telenor with financial and technical stability. As the smallest of the remaining mobile cellular companies, DiGi has benefited from the sustained growth in market demand for cell phones in Malaysia. DiGi tends to serve its customers with high quality services and products by offering affordable prices and convenient, easy access to broadband services in order to enrich customers' lives. DiGi was the first mobile service provider to launch the prepaid concept for mobile services in Malaysia, and to this day DiGi Prepaid remains the market leader. To achieve quality and diversity of services, DiGi is placing a lot of emphasis on backend systems, an efficient billing system and a customer relationship management system. It has now rolled out a broadband package with a higher speed, so-called 3.5G.

U Mobile Sdn Bhd is Malaysia's newest established mobile service provider, offering value added services such as 3G video calls to attract the youth market. U Mobile uses the 018 prefix and charges calls on a per-second block basis. This uniqueness has become its strength and a point of attraction for light users of mobile phones: they only need to pay for exactly as many seconds as they use. In April 2007, U Mobile signed Malaysia's first ever nationwide roaming memorandum of understanding as a precursor to an agreement with Celcom (Malaysia) Bhd. This initiative allows U Mobile's customers to experience nationwide coverage from day one of service availability, whilst U Mobile continues to progressively roll out its own unique HSDPA-driven mobile network.
KT Freetel of South Korea and NTT DOCOMO of Japan's combined investment of USD $200 million in December 2007 marked an exciting new chapter for U Mobile. This strategic partnership supports U Mobile's rapid go-to-market and its product enhancement and diversification plans. U Mobile introduced 3G mobile phone bundling packages for its U38, U68, and U98 postpaid plans at attractive prices in August 2008.

These few companies' core businesses are segmented across the Malaysian telecommunication market, the mobile market and the broadband market. Besides this, the telecommunication companies also provide mobile services such as Short Message Service (SMS), Wireless Application Protocol (WAP), subscription services, General Packet Radio Service (GPRS), and Third Generation service, known as 3G, which enables customers to connect with a video call. These companies offer price promotions in order to attract customers. Now there is aggressive competition among these companies, so each company should figure out which factors play a vital role in how customers choose among telecommunication service providers.

The Malaysian mobile industry is entering a new era of competition. Therefore, all these mobile service providers need to differentiate themselves from the others and present themselves well to become one of the market leaders in the telecommunication industry. They can differentiate themselves by delivering more value added services, such as better call charges and quality, and by improving their network coverage to maintain their market position, generating more innovation in their performance to meet customer expectations.

Problem Statement

To what extent do peer influence, customer service quality, product quality, promotion and network coverage affect customer perception in choosing a mobile service provider?

Research Objectives

The overall goal of this study is to examine and identify the factors affecting customers' perception in choosing their mobile service provider. The following objectives are built to achieve the goals of this study:

To assess how peer influence, customer service quality, product quality, promotion and network coverage become factors for customers in using a specific telecommunication service provider.

To determine whether the customers like the mobile service providers' marketing activities.

To determine the mobile service providers' positioning strategies in serving their customers.

Justification for the research

The telecommunication industry is undergoing dramatic changes. The value of this paper is that it will indicate consumer behavior in the competitive market. This study provides insights into the factors affecting customer perception in choosing a mobile service provider nowadays. This research is made for the contribution it will bring to families, society and the country, and it may also lead us to a better living environment with advanced technology.

The results of this research will be beneficial for telecommunication operators, serving as a guideline in implementing their business strategies. With this information, those telecommunication providers will be able to design packages that satisfy consumers, improve their company performance and maintain their market share. This research is important because it can outline the factors affecting consumers' perception in choosing their mobile service provider.

Also, this research is able to identify the factors that cause switching behavior.
When service providers understand the wants and needs of the consumer, it helps to reduce their cost in research and development. Service providers can then focus on increasing the product features or quality that serve consumers. Superior customer service and product quality can affect customers' perception in choosing their mobile service provider.

Through this study, service providers can focus on delivering the best quality products and services to consumers in order to maintain a lifelong relationship and create maximum lifetime value for the company itself. This research can establish the relationship between product and service quality and customers' perception in choosing a mobile service provider. Therefore, a telecommunication company should place emphasis on its product quality and customer service aspects in order to improve customer satisfaction.

Before taking any actions to change customer perception, the most important thing is to understand what factors influence customer satisfaction, and then try to make improvements in these critical areas so that the company can gain more satisfied and loyal customers.

Methodology

The methodology used to collect data in this research is a survey conducted through a questionnaire. The population of this study is individuals who are mobile users in Malaysia. It is impossible to get all mobile users to take part in the survey; therefore, the survey will be conducted on selected samples to gather the data. A non-probability purposive sampling method will be used, as this is an exploratory study. The questionnaire is self-administered and built from secondary data obtained from other researchers' journals, due to the lack of local research on the topic. The methods used for this research also include a review of literature and books from the internet, as this is more time saving and less costly.

Limitations of the research

Despite the useful findings of this study, this empirical study has several limitations to be acknowledged. First, the findings in this study depend on the honesty of the respondents. It is known that individuals agree more with socially desirable answers and disagree more with socially undesirable answers, rather than fully and truly expressing their feelings and opinions. Next, a limitation of this research is that the data is collected through surveys, so there is some probability of inaccurate information. The sample size of 300 is not enough to determine the actual factors. There are too many factors that can affect customers' perception; more research needs to be conducted on a larger population in order to identify the actual factors.

Outline of the research project report

This research paper is divided into five chapters.

Chapter 1: Introduction

The background of the study is presented in this chapter. The discussion of the overall question and the relevant topic is carried out. This chapter includes the objectives and the problem statement of this study. Besides that, an explanation of who is expected to benefit from this study is included. Lastly, important terms are clearly defined to avoid confusion among readers.

Chapter 2: Literature Review

This chapter is the part that cites the relevant studies by author and year of study. Both dependent and independent variables will then be identified and highlighted as the foundation on which to build the theoretical framework and hypothesis development.
Arguments and opinions from different authors are included to support the study. In this chapter, readers will get a clear idea about the problems and the possible solutions that can be made to solve them.

Chapter 3: Research Methodology

The theoretical framework and hypotheses of the study will be stated. The theoretical framework shows the relationships between variables. Next, testable hypotheses will be developed based on their relationships. These hypotheses test whether or not the framework is valid, using appropriate statistical analysis.

Chapter 4: Data Analysis

Before proceeding to this chapter, data collection is needed from respondents through various methods. The results will then be tested to analyze the answers in order to get a clearer and more concrete result.

Chapter 5: Discussions and Conclusion

Chapter 5 contains the conclusions and the justification of the hypotheses constructed in the research. The chapter also summarizes the research findings, and suggestions for future research are given, supported by assumptions made from the research. Figure 1 below shows the outline of the research.

Definitions

Peer Influence

Past research shows that peer influence has emerged over the last 50 years as the chief source of values and behavioral influence in adolescence, replacing the influence of adults. By examining the peer influence on consumer perception, we can learn why peer influence is a factor that affects consumer perception in choosing a mobile service provider.

Customer Service Quality

Customer service quality includes the trust, dependability and responsiveness of the company in the telecommunication industry. This study will establish how this factor affects consumers' perception.

Product Quality

Product quality is the set of characteristics of a product that bear on its ability to satisfy stated or implied needs. Customers always focus on product quality when they purchase a product or service.

Promotion

Promotion is one of the four elements of the marketing mix. Promotion is able to attract customers by disseminating information about a product. This study will discuss how promotion works as one of the factors affecting consumer perception.

Network Coverage

Network coverage is the range of mobile network signal provided by the telecommunication mobile service provider. This study will also discuss how network coverage affects consumer perception.

Scope

This research is particularly interested in investigating consumers' behaviors and perceptions, such as motivations for changing or remaining with mobile operators after the introduction of MNP. This paper aims to find out the factors affecting customer perception in choosing a service provider. All respondents are assumed to have basic mobile knowledge. There are many factors that cause consumers to choose their mobile service provider. The study will also include the implications of switching costs for the telecommunication industry, service providers and consumers.

Conclusion

This research is expected to be completed successfully within the time frame, so that the results will be accurate and will achieve the research objectives. This research is expected to confirm the significant positive relationship of peer influence, customer service quality, product quality, promotion and network coverage with consumers' perception.
It is expected to provide a broader understanding of mobile service providers in Malaysia and to explore the real factors affecting consumers' perception in choosing their mobile service provider. It is also hoped that the successful completion of the survey will have a positive impact on mobile service providers' strategies for attracting customers and maintaining customer relationships.

Chapter 2: Literature Review

Dependent Variable

Customers' perception in choosing their mobile service provider

The understanding of consumer perception in a virtual environment is limited. It is important to develop an understanding of the factors that affect consumer perception in this market space. This will enable mobile service providers to develop more effective and focused strategies for optimizing the visibility of their product offerings and attracting more customers. There are various factors that can affect consumer perception when making a buying decision on a product.

Independent Variables

Peer Influence

Peer influence is commonly defined as the extent to which peers exert influence on the attitudes, thoughts, and actions of an individual (Bristol and Mangleburg, 2005). Some people will affect the perception of the customer when they decide to buy a product. When deciding which mobile service provider to use, most customers will think about which mobile service providers are currently used by their friends and family. Peer influence includes the spread opinions of friends, family, colleagues and reference groups. Mostly, individuals will be influenced into following the trend and take these opinions as a standard for their purchase decisions. Peers can influence each other in either a positive or a negative way.

The potential power of WOM (word of mouth) as a form of promotion is generally accepted (Arndt, 1967; Buttle, 1998; Dye, 2000). WOM is a powerful factor affecting customers' perception. If a customer understands the product less well, he or she will refer to the people around them. WOM can be negative or positive, so the company should utilize the effectiveness of WOM as a good promotional tool and build goodwill for the product in order to enhance the company's reputation.

The influence that a source's word-of-mouth information exerts on the receiver has traditionally been explained by models of interpersonal influence (Bansal and Voyer, 2000; Bone, 1995; Cohen and Golden, 1972). Within this stream of research, it has often been suggested that interpersonal or social influence can be categorized as either informational or normative influence (e.g. Deutsch and Gerrard, 1955). Word of mouth can operate through both channels. Informational influence occurs when information is accepted as evidence of reality (e.g. Burnkraut and Cosineau, 1975). In contrast, normative influence operates through compliance, which means that the individual conforms to the expressed expectations of referent others (Kelman, 1961).

Customer service quality

Service quality has become an important focus for companies due to its strong impact on business performance. Customer service is a series of activities designed to enhance the level of customer satisfaction; that is, the feeling that a product or service has met the customer's expectations.

Customer service quality is the perceived quality of service obtained by a customer when using their current mobile service provider.
Customer service is what an organization provides to its customers and is relatively easy to measure. Typically the measures include response time, time required to provide service, ability to handle a customer's issues on the first call, and procedures for handling customer complaints. Customer service is always important, and companies should enhance all the ways in which they touch their customers, the service they provide, and their measures to continuously improve that service.

Superior service quality measurably increases a firm's overall profitability. Mobile users always expect a fast connection, whether calling or sending a short message, from their mobile service provider. They will make a judgment based on the service quality given by their mobile service provider when deciding whether to change provider or remain loyal to their current one. If customers maintain loyalty to their mobile service provider, it will bring continuous revenue to the company.

Service quality is associated with the relationship between server and customer. Customers will consider the politeness, helpfulness, speed of delivery, and pleasantness of the service (Berry, 1987) when they receive services from their mobile service provider. Customer service plays an important role for a mobile service provider: it helps to maintain consumer loyalty toward the company. According to Taylor (1992), service quality enhancement differentiates a service provider from its competitors. Consumers often compare the service quality of different mobile service provider companies. There is a huge amount of support in the service quality literature for a link with customer loyalty and future purchases. Customers always treat the service quality they receive as a factor affecting their purchase of a product. If the service is bad, the customer will tend to change mobile service provider.

Service quality is so important that companies have gone to great efforts to evaluate and keep records of service quality levels. Service quality is the consumer's judgment about the overall excellence or superiority of a service (Zeithaml, 1988). If a mobile service provider provides bad service in handling a customer's issues, it will leave a bad impression of the brand name in the customer's mind. In order to have a better understanding of service quality, there are a few attributes of services:

services are intangible;

services are heterogeneous, meaning that their performance often varies with respect to the provider and the customer;

services cannot be placed in a time capsule and thus be tested and re-tested over time; and

the production of services is likely to be inseparable from their consumption (Gronroos, 1990).

The service evaluation can be associated with the service delivery process, along with its output (Cody and Hope, 1999). Two underlying processes generally explain the contribution of service quality to profitability. First, service quality is regarded as one of the few means of service differentiation and competitive advantage that attracts new customers and contributes to market share (Venetis and Ghauri, 2000, p. 215). Second, service quality enhances customers' intention to purchase again, to buy more, to buy other services, to become less price-sensitive and to tell others about their favorable experiences (Venetis and Ghauri, 2000, p. 215).
Reichheld and Sasser (1995) proposed that a high level of satisfaction leads to increased customer loyalty. There is growing evidence that customers' perception of the service quality they receive when using a mobile service provider will affect their behavioral intentions.

Nowadays, the telecommunication industry has become more competitive, and there are new entrants among other small mobile service providers grabbing at the market. According to Melody (2001), the notion of public utilities is derived from the law in any country where the demand for a good or service is considered a common necessity for the public at large, and the supply conditions are such that the public may not otherwise be provided with reasonable service at reasonable prices. Service is a form of attitude which is related to satisfaction and also leads to consumer loyalty (Johnson and Sirikit, 2002) and future purchases. In particular, consumers choose by service quality when the price and other cost elements are held constant (Boyer and Hult, 2005). It has become a distinct and important aspect of the product and service offering (Wal et al., 2002). According to Leisen and Vance (2001), service quality helps to create the necessary competitive advantage by being an effective differentiating factor. Service quality was initiated in the 1980s as a worldwide trend when marketers realized that a quality product alone could not guarantee a competitive advantage (Wal et al., 2002). Competitive advantage is a value-creating strategy which is not simultaneously implemented by any existing or potential competitors (Barney, 1991). Service quality is essential and important for a telecommunication service provider company to ensure quality service for establishing and maintaining loyal and profitable customers (Zeithaml, 2000; Leisen and Vance, 2001). Similarly, Johnson and Sirikit (2002) stated that service delivery systems have the ability to allow managers of a company to identify real customer feedback and satisfaction with their telecommunication service, since quality reflects the customers' expectations about a product or service. Lovelock (1996) stated that this customer-driven quality replaced the traditional marketing philosophies, which were based on products and processes.

Product quality

Parasuraman et al. (1988) conceptualize service quality as the relative perceptual distance between customer expectations and evaluations of service experiences, measuring service quality with a multi-item scale called the SERVQUAL model. The SERVQUAL model includes five dimensions: tangibles, which is the physical facilities and the appearance of personnel; reliability, the ability to perform the promised service dependably and accurately; responsiveness, the willingness to help customers and provide prompt service; assurance, the employee knowledge base which induces customer trust and confidence; and empathy, the care and individualized attention provided to customers by the service provider.

Getting products or services of good quality is a must in customers' perception. Especially in the telecommunication industry, customers cannot touch the physical product before they make their decision. To gain consumers' trust, it is necessary for the mobile service provider to provide its identity and complete company information, such as its physical location, track record, and product quality approvals.
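Stepping back to the SERVQUAL model cited above: gap scores under this model are commonly computed as the perception rating minus the expectation rating on each dimension. The sketch below illustrates that calculation; all of the ratings are invented for illustration and are not data from this study.

# Hypothetical 1-7 ratings for a single respondent on each SERVQUAL
# dimension. All numbers are made up for illustration.
expectations = {"tangibles": 6, "reliability": 7, "responsiveness": 6,
                "assurance": 6, "empathy": 5}
perceptions = {"tangibles": 5, "reliability": 5, "responsiveness": 6,
               "assurance": 7, "empathy": 4}

# Gap score per dimension: perception minus expectation. A negative
# gap means the service fell short of what the customer expected.
gaps = {dim: perceptions[dim] - expectations[dim] for dim in expectations}
print(gaps)  # {'tangibles': -1, 'reliability': -2, ...}

# An overall (unweighted) service quality score.
print(sum(gaps.values()) / len(gaps))  # -0.6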
Besides, a telecommunication company has to ensure it provides only good quality products or services to consumers, because doing so gains word of mouth; once it fails to do so, it will suffer for it.

According to the lemons model (Akerlof, 1970), product quality is the basic idea in a competitive market. Products are only differentiated by their exogenous quality. If product quality is indistinguishable beforehand by the buyer, then there is one price. If costs are increasing in quality, then at that price the highest quality products may not be offered, and as a result buyers become reluctant to pay a high price. They learn to expect low-quality products, which means the price must fall.

Product quality is always an important aspect of a purchasing decision and of market behavior, since consumers regularly face the task of estimating product quality under conditions of imperfect knowledge about the underlying attributes of the various product offers, with the aid of personal, self-perceived quality criteria (Bedeian, 1971; adapted by Sjolander, 1992). According to Sjolander (1992), consumer behavior in a modern market differs from the theoretical case of consumer decision making in free markets.

Promotion

In order to promote a company's product, a company often uses advertising to create brand and/or product differentiation, in order to soften the price competition. To the extent that persuasive advertising creates customer loyalty through perceived differentiation over basically identical products, it creates market power in the sense that consumers may be willing to pay more for preferred brands, thus allowing a telco company to raise prices above marginal costs. The outcome most preferred by firms is where one advertises while its competitors don't, leading to market share and profitability gains at the expense of its rivals.

In price promotion, the telco companies are using a game theoretic model (Axelrod and Hamilton, 1981) to provide a consistent product and have sufficient capacity to serve the market demand. It is a non-cooperative game, as there aren't any enforceable agreements between them as they compete in the marketplace. It is a repeated one-shot simultaneous game, as the firms are driven by quarterly performance accountable to shareholders. As such, they decide on their pricing strategies independently, aware of rivals' prices in the market while forming certain expectations about rivals' pricing strategies. The actions available are Maintain Price and Undercut Price. Payoffs are ranked in order of preference (a higher number is preferred). The outcome most preferred by firms is where one undercuts price while its competitors maintain price, leading to market share gain at the expense of its rivals. When all firms maintain prices, there is no change in market share or profitability. When all firms undercut prices, market share remains the same with reduced profitability. Price plays a vital role in the telecommunication market, especially for the mobile telecommunication service providers (Kollmann, 2000). It includes not only the purchase price but also the call and rental charges. Generally, a price dominated mass market leads to customers having more choice and the opportunity to compare the pricing structures of different providers. Therefore, the more a company offers lower charges, the more customers will commit themselves to its telephone network, and so more call minutes will be achieved.
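As a rough illustration of the Maintain Price / Undercut Price game described above, here is a minimal sketch with an invented ordinal payoff matrix (a higher number is preferred); the specific numbers are assumptions chosen only to match the rankings given in the text.

# Ordinal payoffs (row player, column player) for the one-shot pricing
# game. 4 = most preferred outcome, 1 = least. Only the ranking matters.
MAINTAIN, UNDERCUT = "maintain", "undercut"
payoffs = {
    (MAINTAIN, MAINTAIN): (3, 3),  # no change in share or profitability
    (MAINTAIN, UNDERCUT): (1, 4),  # rival gains share at our expense
    (UNDERCUT, MAINTAIN): (4, 1),  # we gain share at the rival's expense
    (UNDERCUT, UNDERCUT): (2, 2),  # share unchanged, profits reduced
}

def best_response(rival_action):
    """The row player's best action given the rival's action."""
    return max([MAINTAIN, UNDERCUT],
               key=lambda a: payoffs[(a, rival_action)][0])

# Undercutting is the best response to either rival action, so both
# firms undercutting is the equilibrium of the one-shot game.
print(best_response(MAINTAIN), best_response(UNDERCUT))  # undercut undercut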
It will be interesting to study the impact of two or more mobile service providers engaging in price competition. One of the impacts will be that customers keep switching from one mobile service provider to another; for example, a new customer will think of decreasing their switching cost. Customers keep changing their provider because of the price promotions offered by the mobile service providers.
Monday, April 1, 2019
The Evolution Of The Piano Music Essay
The Evolution Of The Piano Music Essay

Since the dawn of man, music has been played, enjoyed and practiced, and through its practice the instruments used have changed greatly. Throughout the years the piano has changed greatly, and as a result many different types have been created. With the creation of new pianos, famous artists have chosen to use them for their styles of music, and pianos have been used in many genres of music as a result of their various types and the sounds produced.

The first instrument that relates to the piano is called the dulcimer; it is considered to be the ancestor of early versions of the piano due to its similar qualities. The dulcimer is played by striking hammers on a series of strings tuned over a flat soundboard. The first actual piano was called the clavichord; the first models were built around 1400, but it wasn't made popular until three centuries later in the music of Bach. When a key is pressed, a vertical brass strip is lifted toward a pair of strings. The virginal, which is a small harpsichord (an instrument whose strings are plucked), was the next advent in the piano; it was louder than the clavichord but lacked its dynamic variety.

Although originally created in Italy, the spinet was improved in Britain around the late seventeenth century. The jack mechanism plucks the strings just as in the virginal, but the wing shape permits longer strings, increasing the size and expanding the range to as much as five octaves. First created in the fifteenth century, the harpsichord reached its peak in the period of Bach and Handel. The harpsichord has longer strings and sounds louder than the clavichord, and it has the shape of the modern grand piano. Around 1709, Bartolommeo Cristofori built several instruments in the harpsichord shape but with hammer mechanisms surprisingly like the modern piano action. Because players could control soft and loud, which was impossible on plucked keyboard instruments, Cristofori named this instrument the pianoforte. During the time of Beethoven, around the eighteenth century, piano builders began to extend the keyboard. Two other important developments were the escapement action for faster repetition of notes and the damper and soft pedals. Pedals were often added to produce exotic effects. During the eighteenth century, many builders tried to apply the upright form to the piano. In 1800 the first satisfactory uprights were invented. The square grand piano was originally designed by German builders around the seventeenth century; they tried to apply Cristofori's pianoforte design to the traditional rectangular shape of the clavichord. The square piano was popular until the late eighteen hundreds and early nineteen hundreds. During the nineteenth century the piano continued to become more powerful and responsive.

Some of the greater improvements were the double-repetition action of Sebastien Erard, which allows very rapid repetition, and the full cast-iron frame developed by Alphaeus Babcock. These developments are the basis for today's modern pianos. The pianos of today incorporate the best qualities of the early instruments. Cross stringing is a way to achieve greater richness of tone by passing more strings over the center of the soundboard.
The sostenuto, or middle pedal, was introduced in the late nineteenth century, permitting greater melodic coloring.

During the Baroque period, the time between 1600 and 1700 when new styles of art and architecture flourished in Europe, there were several great musical artists who became known as famous composers. Several of these composers were George Frideric Handel, Antonio Vivaldi and Johann Sebastian Bach. George Frideric Handel was born the son of a barber; Handel ditched a career in law to pursue his love of music. Skilled at the organ, he wrote several church pieces before he acquired a job from Prince Ernst of Hanover as a court musician. Not much is known about Antonio Vivaldi, but it is known that he wrote many pieces for the church, about 640 all together. He also taught music at several schools. Near the end of his life he lost support and contacts and died a poor man. Johann Sebastian Bach came from a long line of musicians, but he was the first to become famous outside of his home town. Bach was extremely particular about his music. Often he would destroy compositions that he didn't find worthy; because of this, many of his pieces have been lost. His work was truly unique, and his use of intertwining melodies and the fugue are trademarks of his genius.

During the classical years, many advancements were made in the art of music, and as these advancements were made, new great composers rose to take advantage of them. Joseph Haydn began his career in music at a very young age; he had a wonderful singing voice. Eventually his voice broke and his singing career ended; afterwards he switched his focus to writing music. In 1790 he moved to Vienna, which he made into the nexus of classical music, and taught young composers such as Mozart and Beethoven.

Wolfgang Amadeus Mozart is considered to be the prince of classical music, with countless pieces of literature, movies, musicals and plays dedicated to him. He wrote his first music at the age of 5. He toured Europe with his father and sister as a novelty act, but eventually he grew up and the novelty was gone. From then on his natural skill at music carried him, and as a student of Haydn he blossomed into a top notch opera writer, which was his only source of income for many years. His later teaching career was not enough to support him, and he died a poor man.

Ludwig van Beethoven was not nearly as gifted as Mozart, but he became known as another child prodigy. At the age of 14 he was given the role of assistant teacher and organist. He received tutoring from both Haydn and Mozart. Although he had a good childhood, at the age of 19 his life quickly took a turn for the worse with the death of his mother; he was then left with the role of supporting his entire family. As a result of his troubled youth he became a very disturbed and angry person, and this was portrayed in his music, which was fiery and emotional.

Felix Mendelssohn was born into a rich and musically talented family and as a result was introduced to music at a young age. He was very talented, and his music was very popular at the time of its writing. His Piano Concerto in G minor is considered to be among the most played concertos of all time. His music for the play A Midsummer Night's Dream, in addition to its popularity during its time, is still played at modern weddings. Mendelssohn often played pieces by Bach and is credited with bringing back popularity to his music.
Johannes Brahms lived during the romantic age but wrote classical music. At the beginning of his career he played mostly in bars, and as a result he gained the knowledge of many dance tunes. Once he graduated from playing in bars and brothels he started serious composing, and he was considered a genius for his music.

During the romantic era many changes were made to the way the piano was used. Frédéric Chopin was one of the artists behind those changes; his pieces for the piano were groundbreaking and pushed the limits of what the piano was thought to be capable of. His innovations completely changed how the piano was treated in music. Franz Liszt was another; he is considered to be one of the most talented pianists the world has ever seen. His concerts were often completely sold out, and because of the way he played he often had a second piano prepared in case he broke the first one.

The piano in its various forms has been used throughout time as entertainment for many cultures. As a result of its popularity and interest from innovative artists it has changed greatly, and more modern variations are used in many genres of music.
Childhood Obesity in UAE
Mind Map

Sources

The sources being used in this research are online survey software, along with academic books and other journals that support the research work. The closed-ended questionnaire would be part of the report; it is asked for scientific reasons and to keep the work on pace, and through this approach the researcher comes to know the scientific reasons behind the problem while keeping the work moving. These sources are used to collect the exact amount of data and to keep the work more authentic and organized. Throughout the structured work these sources help to systematise the study, provide a comfortable environment in which to collect the actual facts and figures, and portray the thinking in the minds of the people (Haboubi & Shaikh, 2009).

The data is collected in the shape of facts and figures, which maintains the authenticity of the work. The sources are entirely up to date, as the data has been taken from students and working ladies. This preparation provides the information needed to maintain the work and to order its structure (Gupta, Goel, Shah & Misra, 2012).

Research Proposal

Q) What are some of the factors that contribute to childhood obesity?

Researching Topic

The research is about childhood obesity in the UAE. It examines the matters that have contributed greatly to unhealthy food habits, and it covers the effects and causes of obesity in children in the UAE and ways to overcome it.

Significance of the Research

This research would be done to make people aware of the dangerous effects of obesity, especially in children. The study also defines a structured approach to reducing this factor. It provides information on how much support could be made available to make children more active and maintain their lifestyle. It shares knowledge about the harms of junk foods and other dietary issues, and it urges parents to take care of the health of their children to overcome this problem.

Interested Readers

Children and parents who are health aware and want to maintain their health would have a keen interest in reading this research.

Outcome of the Research

Provision of awareness, efforts toward a healthy life, engaging children in exercise and maintaining a diet plan are the intended outcomes of this research program.

Context

This research was conducted on the 25th of May 2014. The targeted audience were females whose ages range from 20 to 30; some of them are studying at Abu Dhabi Women's College and others are working in different companies. All participants were asked to give answers to the 10 questions. The researcher targeted an audience from the education sector and from among employees, as this better serves the work; through this process people could take part comfortably within the planned sources, and the whole study follows a more systematic approach to this concern.

Collecting Data

Survey Tools

The survey tools we are going to use would be systematic in order to maintain the work; they help to systematise the study and maintain its structure.
The online software is the more reliable tool in our context for taking in the answers and fitting them into the systematic approach, which makes the system more productive and useful in our study.

Reason for Choosing Closed-Ended Questions

Closed-ended questions do not require long definitions to keep the work on track. In this style, respondents do not have the chance to provide irrelevant answers, so the study is more focused and more organized through this method.

Problems in Gathering the Qualitative Data

The qualitative data would be more difficult to collect, and only after facing these problems can the researcher establish the factual basis of the research work in a systematic way. Not all of the females tell their right age, and they conceal their weight, which was the biggest obstacle in this concern. This makes the research work problematic and affects the status of the work.

Ethical Consideration

In covering causes of obesity such as fast foods and soft drinks, we are not supposed to name any brand, but instead consider the whole panorama of these kinds of products in general.

Two Main Sources

Two main sources are prominent in maintaining the work and bringing this research to the completion of its task. The online software would be used to conduct the interviews, and the other source, the internet and other cyber technologies, makes the work more precise and more fundamental. These are the planned sources which pave the way to conducting factual research in a more authentic way. These sources are highly relevant to the research; they not only make the research more authentic but also help to keep it on the exact path that is relevant to the given topic.

Time Framework

This research requires almost 3 months. The systematic approach to the tools and respondents, needed to make the research more factual and qualitative, consumes more time. It runs roughly from March 2014 to June 2014: the idea was initiated in the month of March and presented in its final shape in the month of June. The closed-ended questions and the answers from the respondents take almost one and a half months. The writing up of the data, taking data from other sources, making the reports, analysing the data and providing the findings for a factual research work consume more than 1 month.

Presenting Research

This research is presented in both soft and hard copy. The soft copy serves users of cyber technologies, and the hard copy provides the information to people who are used to reading in black-and-white form.
This is the progressive form of the system; people who have a keen interest in maintaining their health and who are more health conscious would be keen to read our research.

Limitations

To cover the research work in this vast topic, the survey of only a few students and working ladies would not be enough to provide an effective result. The time is too short to provide qualitative work and embrace the views of many people (Trainer, 2010). This research was conducted in a women's college, so the girls were hesitant to disclose their names and information such as weight and age in the UAE. A cultural barrier also arises when details about the girls become part of the research. The researcher could not easily travel outside the emirate and could not take information from all the emirates of the UAE, as the research is more focused on the capital (Trainer, 2010).

Appendix

References

Gupta, N., Goel, K., Shah, P., & Misra, A. (2012). Childhood obesity in developing countries: epidemiology, determinants, and prevention. Endocrine Reviews, 33(1), 48-70. Retrieved from http://press.endocrine.org/doi/abs/10.1210/er.2010-0028

Bin Zaal, A. A., Musaiger, A. O., & D'Souza, R. (2009). Dietary habits associated with obesity among adolescents in Dubai, United Arab Emirates. Nutr Hosp, 24(4), 437-444. Retrieved from scielo.isciii.es/pdf/nh/v24n4/original1.pdf

Musaiger, A. O. (2011). Overweight and obesity in eastern Mediterranean region: prevalence and potential causes. Journal of Obesity, 2011. Retrieved from downloads.hindawi.com/journals/jobes/2011/407237.pdf

Gupta, N., Shah, P., Nayyar, S., & Misra, A. (2013). Childhood obesity and the metabolic syndrome in developing countries. The Indian Journal of Pediatrics, 80(1), 28-37. Retrieved from http://link.springer.com/article/10.1007/s12098-012-0923-5#page-1

Ng, S. W., Zaghloul, S., Ali, H. I., Harrison, G., & Popkin, B. M. (2011). The prevalence and trends of overweight, obesity and nutrition-related noncommunicable diseases in the Arabian Gulf States. Obesity Reviews, 12(1), 1-13. Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/j.1467-789X.2010.00750.x/abstract

Badran, M., & Laher, I. (2011). Obesity in Arabic-speaking countries. Journal of Obesity, 2011. Retrieved from http://www.hindawi.com/journals/jobe/2011/686430/abs/

Al Junaibi, A., Abdulle, A., Sabri, S., Hag-Ali, M., & Nagelkerke, N. (2012). The prevalence and potential determinants of obesity among school children and adolescents in Abu Dhabi, United Arab Emirates. International Journal of Obesity, 37(1), 68-74. Retrieved from http://www.nature.com/ijo/journal/v37/n1/full/ijo2012131a.html

Berger, G., & Peerson, A. (2009). Giving young Emirati women a voice: participatory action research on physical activity. Health & Place, 15(1), 117-124. Retrieved from http://www.sciencedirect.com/science/article/pii/S1353829208000397

Haboubi, G. J., & Shaikh, R. B. (2009). A comparison of the nutritional status of adolescents from selected schools of South India and UAE: A cross-sectional study. Indian Journal of Community Medicine, 34(2), 108. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781115/

Trainer, S. S. (2010). Body image, health, and modernity: Women's perspectives and experiences in the United Arab Emirates. Asia-Pacific Journal of Public Health, 22(3 suppl), 60S-67S.
Retrieved from aph.sagepub.com/content/22/3_suppl/60S.short

Al-Raees, G. Y., Al-Amer, M. A., Musaiger, A. O., & D'Souza, R. (2009). Prevalence of overweight and obesity among children aged 2-5 years in Bahrain: a comparison between two reference standards. International Journal of Pediatric Obesity, 4(4), 414-416. Retrieved from http://informahealthcare.com/doi/abs/10.3109/17477160902763325