What is the difference between factor analysis and discriminant analysis?
Each of these measures assesses the degree of classification error. One of the vectors may also be treated as the mean vector for a given category, in which case the Mahalanobis distance can be used to assess the distances within and across groups in a pairwise manner. The quality of the discriminant function is then gauged by computing the ratio of the average distance across groups to the average distance within groups.
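As a rough sketch of how that ratio might be computed (the simulated groups, the variable names, and the pooled-covariance choice are illustrative assumptions, not from the original text):

```python
import numpy as np

def mahalanobis(u, v, cov_inv):
    """Mahalanobis distance between two vectors, given an inverse covariance."""
    diff = u - v
    return float(np.sqrt(diff @ cov_inv @ diff))

# Two illustrative groups of observations (rows = observations, cols = variables).
rng = np.random.default_rng(0)
g1 = rng.normal(loc=0.0, scale=1.0, size=(30, 3))
g2 = rng.normal(loc=1.5, scale=1.0, size=(30, 3))

# Pooled within-group covariance and its inverse.
pooled_cov = ((len(g1) - 1) * np.cov(g1, rowvar=False)
              + (len(g2) - 1) * np.cov(g2, rowvar=False)) / (len(g1) + len(g2) - 2)
cov_inv = np.linalg.inv(pooled_cov)

# Distance across groups: between the two mean vectors.
d_between = mahalanobis(g1.mean(axis=0), g2.mean(axis=0), cov_inv)

# Average distance within groups: each observation to its own group mean.
d_within = np.mean([mahalanobis(x, g.mean(axis=0), cov_inv)
                    for g in (g1, g2) for x in g])

print(d_between / d_within)  # a higher ratio indicates better separation
```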
The confusion matrix is a cross-tabulation of the actual versus predicted classifications; we will examine it in more detail shortly. A matrix whose counts concentrate on the diagonal shows some classification ability. Now we ask: if the model had no classification ability at all, what would the average confusion matrix look like?
In this case, since the row and column totals are all 32, the confusion matrix of no classification ability spreads observations in proportion to the margins: each cell's expected count is its row total times its column total, divided by the grand total, as in the sketch below.
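A minimal sketch of this calculation; the observed counts here are made up, chosen only so that the row and column totals are 32:

```python
import numpy as np

# Illustrative confusion matrix (rows = actual, cols = predicted).
confusion = np.array([[20, 12],
                      [12, 20]])

row_totals = confusion.sum(axis=1, keepdims=True)
col_totals = confusion.sum(axis=0, keepdims=True)
grand_total = confusion.sum()

# Expected matrix under no classification ability:
# each cell = row total * column total / grand total.
expected = row_totals @ col_totals / grand_total
print(expected)  # [[16. 16.], [16. 16.]]
```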
We assume two groups first, for simplicity, labeled 1 and 2. The Fisher linear discriminant approach is to maximize between-group variation while minimizing within-group variation, i.e., to choose the projection direction that maximizes the ratio of between-group scatter to within-group scatter in the projected data. The idea generalizes: when we have 4 groups, we project each observation in the data into a 3-D space, which is then separated by hyperplanes that demarcate the 4 groups. A sketch of the two-group case follows below.
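Below is a minimal sketch of Fisher's two-group discriminant. The closed-form direction w proportional to S_w^{-1}(m1 - m2) is the standard solution to the scatter-ratio problem; the simulated data and names are illustrative assumptions:

```python
import numpy as np

# Simulated observations for groups 1 and 2 (rows = observations).
rng = np.random.default_rng(1)
X1 = rng.normal(0.0, 1.0, size=(40, 2))
X2 = rng.normal(2.0, 1.0, size=(40, 2))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Within-group scatter: sum of the two groups' scatter matrices.
S_w = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Fisher's solution: w proportional to S_w^{-1} (m1 - m2) maximizes the
# ratio of between-group to within-group variation along the projection.
w = np.linalg.solve(S_w, m1 - m2)

# Classify a new point by which projected group mean it lies closer to.
def classify(x):
    z, z1, z2 = x @ w, m1 @ w, m2 @ w
    return 1 if abs(z - z1) < abs(z - z2) else 2

print(classify(np.array([0.1, -0.2])))  # expected: group 1
```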
We now move on to understanding some properties of matrices that may be useful in classifying data or deriving its underlying components. Understanding eigenvalues and eigenvectors is best done visually. The line from the origin through an eigenvector is mapped onto itself by the matrix, and all points on such eigenspaces are themselves eigenvectors. So suppose we have calculated the eigenvalues and eigenvectors for the covariance matrix of the data. What does this really mean? Think of the covariance matrix as a summary of the connections between the rates of different maturities in our data set. What we do not know is how many dimensions of commonality there are in these rates, and what the relative importance of these dimensions is.
For each dimension of commonality, we wish to ask (a) how important that dimension is (the eigenvalue), and (b) what the relative influence of that dimension on each rate is (the values in the eigenvector). The most important dimension is the one with the highest eigenvalue, known as the principal eigenvalue, corresponding to which we have the principal eigenvector. It should be clear by now that an eigenvalue and its eigenvector come as an eigen pair.
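A short sketch of extracting the principal eigen pair, assuming simulated rate data in place of the data set discussed in the text:

```python
import numpy as np

# Simulated rates for four maturities (rows = dates, cols = maturities),
# driven by one common "level" factor plus noise; purely illustrative.
rng = np.random.default_rng(2)
level = rng.normal(0, 1, size=200)
rates = np.column_stack([level + rng.normal(0, 0.3, 200) for _ in range(4)])

cov = np.cov(rates, rowvar=False)

# eigh is the right routine for a symmetric matrix such as a covariance
# matrix; it returns eigenvalues in ascending order with matching vectors.
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# The principal eigen pair: largest eigenvalue and its eigenvector.
principal_value = eigenvalues[-1]
principal_vector = eigenvectors[:, -1]
print(principal_value, principal_vector)
```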
It should also be intuitive why we call this the eigenvalue decomposition of a matrix. The determinant of a matrix is another quantity that is difficult to get an intuition for. Loosely, it relates to the volume of the space defined by the matrix. But not exactly, because it can also be negative, though its absolute size still gives some sense of volume. We see immediately that when we multiply a 2x2 matrix by 2, we get a determinant four times the original, because volume in two-dimensional space is area, and the area has scaled by 4.
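A quick numerical check, using an arbitrary 2x2 matrix chosen purely for illustration:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

print(np.linalg.det(A))      # 5.0
print(np.linalg.det(2 * A))  # 20.0 -> scaling both dimensions by 2 scales area by 4

# Distorting just one dimension: double only the first row.
B = A.copy()
B[0] *= 2
print(np.linalg.det(B))      # 10.0 -> scaling one row by 2 doubles the determinant
```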
As the sketch above shows, we may also distort just one dimension and see what happens: scaling a single row by 2 merely doubles the determinant. Factor analysis is the use of eigenvalue decomposition to uncover the underlying structure of the data. Given a data set of observations and explanatory variables, factor analysis seeks a decomposition with two properties: (a) the factors must be orthogonal, i.e., uncorrelated with one another; and (b) it achieves data reduction, i.e., the variation in the data is explained by a small number of factors, each loading mainly on a subset of the variables. Each such subset is a manifestation of an abstract underlying dimension. PCA, by contrast, decomposes total variance, including error variance, whereas factor analysis models only the variance that is common across variables. The contrast is sketched below.
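The following sketch compares the two decompositions; it assumes scikit-learn is available, and the simulated data, sizes, and names are illustrative assumptions rather than anything from the original text:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Simulated data: two latent dimensions driving six observed variables, plus noise.
rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 2))
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + rng.normal(scale=0.5, size=(500, 6))

# PCA decomposes total variance (common + error).
pca = PCA(n_components=2).fit(X)
print("PCA explained variance ratio:", pca.explained_variance_ratio_)

# Factor analysis models only the common variance, estimating a separate
# noise (uniqueness) term for each observed variable.
fa = FactorAnalysis(n_components=2).fit(X)
print("FA loadings:\n", fa.components_)
print("FA noise variances:", fa.noise_variance_)
```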
Principal component analysis and factor analysis are like donkeys and zebras: they seem to differ only by color until you try to ride one. The former summarizes the total variation in many variables; the latter only the variation they share.