The International Journal of Digital Accounting Research
Vol. 1, No. 1, pp. 69-85
ISSN: 1577-8517
Prediction of Corporate Financial Distress: An Application of the Composite Rule Induction System
Li-Jen Ko. Durham Technical Community College. USA
[email protected]
Edward J. Blocher. University of North Carolina at Chapel Hill. USA
[email protected]
P. Paul Lin. Wright State University. Dayton, OH. USA
[email protected]
Abstract. The economic consequences of corporate failure are enormous, especially for the
stakeholders of publicly held companies. Prior to a corporate failure, the firm’s financial
status is frequently in distress. Consequently, finding a method to identify corporate financial
distress as early as possible is clearly a matter of considerable interest to investors, creditors,
auditors and other stakeholders. This paper uses a composite rule induction system (CRIS;
Liang 1992) to derive rules for predicting corporate financial distress in Taiwan. In addition,
this paper compares the prediction performance of CRIS, neural computing and the logit model.
The empirical results indicate that both CRIS and neural computing outperform the logit
model in predicting financial distress. Although both CRIS and neural computing perform
rather well, CRIS has the advantage that the derived rules are easier to understand and interpret.
Keywords: corporate financial distress, machine learning, neural computing, composite
rule induction system.
1. INTRODUCTION
The economic consequences of corporate failure are enormous, especially for
the stakeholders of publicly held companies. Prior to a corporate failure, the firm’s
financial status is frequently in distress. Consequently, finding a method to identify
corporate financial distress as early as possible is clearly a matter of considerable
interest to investors, creditors, auditors and other stakeholders. The significance
of this issue has stimulated considerable research on the prediction of corporate
bankruptcy or financial distress. These studies often used the statistical approach
or the iterative learning approach1 to develop prediction models.
The statistical approach includes discriminant analysis, regression analysis,
logit analysis or probit analysis and usually requires that the data follow certain
distributional assumptions to generate robust results (Beaver 1966, Altman 1968,
Beaver 1968, Deakin 1972, Aharony et al. 1980, Ohlson 1980, Zmijewski 1983,
Platt & Platt 1990, Hill et al. 1996, Clark et al. 1997, Mossman et al. 1998).
Although financial data and ratios rarely follow a normal distribution, rank
transformation of the data has been shown to make models less sensitive
to non-normal distributions. Kane et al. (1998) apply rank transformation to
financial ratios and the results indicate an improvement in predicting corporate
failure. Iterative learning models, on the other hand, are free from distribution
constraints because they are based on criteria other than sample mean and variance
(Frydman et al. 1985, Messier & Hansen 1988, Odom & Sharda 1990, Liang
1992, Tam & Kiang 1992, Hansen et al. 1993, Wilson & Sharda 1994, Lee et al.
1996).
Iterative learning models refer to the process of training computers to derive
rules or to develop algorithms from existing cases. Several iterative learning
methods have been developed, including neural computing and inductive learning
systems. Neural computing uses artificial neural networks (ANN) to emulate a
human’s biological neural network and to develop algorithms from the given
samples (cases), whereas an inductive learning system derives rules. Prior research
indicates that neural computing outperforms statistical models in predicting business failure (Odom & Sharda 1990, Tam & Kiang 1992).
ID3 (Quinlan 1979) is an inductive learning system and is more effective than
discriminant analysis in predicting bankruptcy and loan default (Messier & Hansen
1988). However, ID3 did not outperform statistical models in other studies (Tam
& Kiang 1992, Liang et al. 1992, Hansen et al. 1993). The rule induction
mechanisms of ID3 process nominal and non-nominal variables in the same way
without considering their different characteristics. In addition, the probability
assessments for the rules are typically based on the frequency of occurrence in the
training data set. Consequently, Liang (1992) proposed a composite rule induction
system (CRIS) to overcome these drawbacks.

1 Some researchers also used statistical models along with iterative models for performance comparison purposes.
This study uses CRIS to derive rules for predicting corporate financial distress
in Taiwan. In addition, the study compares the prediction performance of CRIS,
neural computing and the logit model. The remainder of the paper is organized as
follows. The next section discusses prior research dealing with the prediction of
corporate financial distress and motivates the methods used in the present study.
The methodology section defines the operational terms and explains data collection,
models and model validation. The ensuing section presents the empirical results
and data analyses. Finally, conclusions and limitations of the study are presented.
2. PREDICTION OF CORPORATE FINANCIAL DISTRESS
Statistical models
Numerous research projects have been conducted to identify early warning
indicators of corporate financial distress. In the 1960s, researchers used statistical
models to identify financial ratios that could classify companies into failure or
non-failure groups. The statistical approach includes univariate and multivariate
models. In his pioneering work, Beaver (1966) used a dichotomous classification
test to identify financial ratios for corporate failure prediction. He used 30
financial ratios and 79 pairs of companies (failure/non-failure). The best
discriminant factor was the working capital/debt ratio, which correctly identified
90 percent of the firms one year prior to failure. The second best discriminant
factor was the net income/total assets ratio, which had 88 percent accuracy.
Subsequently, there have been relatively few studies using the univariate model
for bankruptcy prediction, and researchers overwhelmingly used multivariate
models instead.
Altman (1968) was the first researcher to develop a multivariate statistical
model to discriminate failure from non-failure firms. He used multivariate
discriminant analysis (MDA), and the initial sample was composed of 66 firms
with 33 firms in each of the two (failure/non-failure) groups. The five financial
ratios used in his MDA model were working capital/total assets, retained earnings/
total assets, earnings before interest and taxes/total assets, market value of equity/
book value of total liabilities, and sales/total assets. The model correctly classified
95% of the total sample one year prior to failure
(-1 year), but misclassification of failed firms increased significantly as the
prediction time increased (28% at –2 years, 52% at –3 years, 71% at –4 years).
Martin (1977) used the logit model for bank failure prediction. Subsequently,
Ohlson (1980) also used the logit model to predict business failure with a sample of
105 bankrupt firms and 2,058 non-failing firms. The nine financial ratios included
in the model were the firm size (log of a price-level deflated measure of total assets),
total liabilities/total assets, working capital/total assets, current liabilities/current
assets, a dummy variable indicating whether total assets were greater or less than
total liabilities, net income/total assets, funds from operation/total liabilities, another
dummy variable indicating whether net income was negative for the last two years
and the change in net income. Ohlson used a relatively unbiased sampling procedure
because the failure/non-failure ratio in his study was more realistic. However, the
model did not perform as well as MDA, which suggested that previous researchers
might have overstated the discriminatory power of their models (Morris 1997).
Zmijewski (1984) examined the “choice-based” sample bias and “sample
selection” bias typically faced by financial distress researchers. Contrary to the
common 1:1 failure/non-failure matching, he used the probit model on six sets of
data where the ratio of failure/non-failure varied from 1:1 to 1:20. The results
indicated that the choice-based sample bias decreased as the failure/non-failure
ratio approached the population probability. In addition, with regard to the sample
selection bias, the results indicated a significant bias existed in the majority of the
tests conducted. However, for both issues, the results did not indicate significant
changes in overall classification and prediction rates.
Neural computing
Neural computing has generated considerable research interest and has been
applied in various areas, including the prediction of corporate bankruptcy or financial
distress. Neural computing is a computer system that consists of a network of
interconnected units called artificial neurons (AN), which are organized in layers
inside the network. The first layer is the input layer, and the last is the output layer.
Hidden layers exist between the input and output layers, and there can be several
hidden layers for complex applications. Computer programs process the training
sample to identify the relationships between input and output data. Neural computing
is more adaptive to real-world situations because it is not subject to distribution
constraints. This advantage makes neural computing an appealing tool for developing
prediction models because the variance-covariance matrices of failed/non-failed firms
are often not equal, and financial data seldom follow the multivariate normal
distribution, each of which is a violation of the MDA assumptions.
Odom and Sharda (1990) used the same financial ratios employed by Altman
(1968) and applied ANN to a sample of 65 failed and 64 non-failed firms. The
training sample comprised 38 failed and 36 non-failed firms. A three-layer neural
network was created with five hidden nodes. Their model correctly identified all
failed and non-failed firms in the training sample, compared to 86.8% accuracy
by MDA. Regarding the performance with holdout samples, ANN had an accuracy
rate of 77% or higher, whereas MDA achieved only 59% to
70% accuracy. Subsequently, several studies also revealed that ANN outperformed other
prediction models (Hansen & Messier 1991, Salchenberger et al. 1992, Tam &
Kiang 1992, Coats & Fant 1993, Hansen et al. 1993, Altman et al. 1994, Wilson &
Sharda 1994).
Inductive learning systems
ID3 is a relatively simple mechanism for discovering a classification rule from
a collection of objects belonging to two classes (Quinlan 1979). Each object must
be described in terms of a fixed set of attributes, each of which has its own set of
possible values. An object is classified by starting at the root of the decision tree,
finding the value of the tested attribute in the given object, taking the branch
appropriate to that value, and continuing the process until a leaf is reached. ID3
uses entropy to measure the values of each attribute and then derives rules through
a repetitive decomposition process that minimizes the overall entropy. Messier
and Hansen (1988) used ID3 to derive prediction rules from loan default and
corporate bankruptcy cases. The loan default training sample contained 32 firms
with 16 in each group (default or non-default). In the corporate bankruptcy case,
the training sample contained 8 bankrupt and 15 non-bankrupt firms. For the
holdout samples, the rules derived by ID3 classified the bankrupt/non-bankrupt firms with perfect accuracy and the loan default cases with 87.5% accuracy.
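To make the entropy criterion concrete, the following Python sketch computes the information gain of a candidate split. It illustrates only the attribute-evaluation step that ID3 repeats at each node; the class labels and counts are hypothetical rather than taken from Messier and Hansen (1988).

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels (e.g., "default" / "ok").
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in Counter(labels).values())

def information_gain(labels, partitions):
    # Reduction in entropy obtained by splitting `labels` into `partitions`.
    total = len(labels)
    remainder = sum(len(p) / total * entropy(p) for p in partitions)
    return entropy(labels) - remainder

# Hypothetical example: 8 default and 8 non-default firms split on one attribute.
labels = ["default"] * 8 + ["ok"] * 8
left = ["default"] * 6 + ["ok"] * 2    # firms above some cut-off value
right = ["default"] * 2 + ["ok"] * 6   # the remaining firms
print(information_gain(labels, [left, right]))   # about 0.19 bits

At each node, ID3 would select the attribute whose split yields the largest gain (equivalently, the smallest remaining entropy) and then recurse on each branch.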
The recursive partitioning algorithm (RPA) is a non-parametric classification
technique. The method starts with the sample data, their financial characteristics,
the actual group classification, the prior probabilities, and the misclassification
costs. A binary tree is built where a rule is derived for each node. The rule is
based on a single variable, which can easily explain the failure of a firm (Cronan
et al. 1991). Frydman et al. (1985) used a sample of 58 failed firms along with
142 non-failed firms to derive prediction rules. The empirical results of their
study indicated the RPA model outperformed MDA. The goal of their study was
to minimize the expected cost of misclassification, whereas the objective of Messier
and Hansen’s (1988) study was to minimize the number of misclassifications.
However, RPA has two disadvantages (Zopounidis & Dimitras 1998). First, it is a
forward selection method and the same variable can be used again in the
classification rule at a later stage with a different cut-off value. Second, continuation
of partitioning processes can result in a tree where every single firm is correctly
classified by one terminal node and may have the problem of overfitting.2
2 Overfitting refers to a model that fits exceptionally well to the data from which it is derived, but far less well to other data from a holdout sample (Morris 1997).
Although prior research indicated that inductive learning systems outperformed
statistical models, Liang (1992) pointed out that the algorithms had several
limitations. The limitations included lower accuracy for real number data, lower
efficiency for larger samples, difficulty in assessing the probability associated
with rules, and a single algorithm for both nominal and non-nominal attributes.
Consequently, Liang proposed a composite rule induction system (CRIS) to
overcome these drawbacks and applied CRIS to a bankruptcy data set containing
50 cases. Each case included four nominal and five non-nominal attributes. Twelve
experiments were conducted and the data set was randomly divided into a training
sample and a holdout sample. The results indicated that CRIS had the highest
accuracy (80.8%) followed by ANN (78.3%) in bankruptcy prediction. Both CRIS
and ANN outperformed MDA (75.8%).
This paper applies CRIS to derive rules for predicting corporate financial
distress in Taiwan. In addition, it performs an empirical comparison of predictive
capability among CRIS, neural computing and the logit model.
3. METHODOLOGY
Financially distressed companies
Although numerous (especially small) business firms in Taiwan have gone bankrupt
or experienced financial distress, their financial data are often unavailable. Consequently,
we use data from the Taiwan Stock Exchange (TSE). Trading Code #46
of the TSE specifies the conditions that define a financially distressed company, which include
bankruptcy, reorganization and default. The financial statements of financially
distressed companies under the TSE Trading Code are listed in a special database, so these
companies can be easily identified.
Classification accuracy
Table 1 explains the determination of classification accuracy in this paper. P11
denotes the probability of a normal company being correctly classified, whereas
P22 denotes the probability of a financially distressed company being correctly
classified. A (classification) Type I error refers to the probability
that a financially distressed company is mistakenly classified as normal (1 – P22),
and a Type II error is the probability that a normal company is mistakenly
classified as financially distressed (1 – P11). The overall classification accuracy
(P) is the probability that companies are correctly classified as either normal or
financially distressed. To our knowledge, there has been no research dealing with
the misclassification cost of Type I and Type II errors in Taiwan. Consequently,
similar to previous studies (Messier & Hansen 1988, Liang 1992), the objective of
this paper is to minimize the number of misclassifications.
                                        Actual normal          Actual financially
                                        company                distressed company

Classified as normal company            P11                    1 – P22
                                                               (Type I error)

Classified as financially
distressed company                      1 – P11                P22
                                        (Type II error)

Table 1. Classification accuracy table
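As a minimal illustration of the measures in Table 1, the sketch below computes P11, P22, the two error rates and the overall accuracy P from a hypothetical list of actual and predicted labels; the example sample mimics a 1:1 matched sample of 19 normal and 19 distressed firms.

def classification_summary(actual, predicted):
    # Accuracy measures as defined in Table 1; labels are "normal" or "distressed".
    n_normal = sum(1 for a in actual if a == "normal")
    n_distressed = len(actual) - n_normal
    correct_normal = sum(1 for a, p in zip(actual, predicted)
                         if a == "normal" and p == "normal")
    correct_distressed = sum(1 for a, p in zip(actual, predicted)
                             if a == "distressed" and p == "distressed")
    p11 = correct_normal / n_normal            # normal firms correctly classified
    p22 = correct_distressed / n_distressed    # distressed firms correctly classified
    return {"P11": p11,
            "P22": p22,
            "Type I error": 1 - p22,           # distressed firm classified as normal
            "Type II error": 1 - p11,          # normal firm classified as distressed
            "Overall": (correct_normal + correct_distressed) / len(actual)}

# Hypothetical 1:1 matched sample: 19 actual normal firms followed by 19 distressed firms.
actual = ["normal"] * 19 + ["distressed"] * 19
predicted = ["normal"] * 18 + ["distressed"] * 18 + ["normal"] * 2
print(classification_summary(actual, predicted))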
Data collection
There have been only a few financially distressed companies on the TSE since
1985; most financially distressed companies on the TSE had financial problems during
the period between 1981 and 1985. Twenty-eight companies were in financial
distress during this period, but complete financial data are available for only
19 firms. To control unwanted bias, each distressed firm was matched with a
normal one according to industry and firm size. In addition to the 1:1 matched
sample (38 firms), this study also attempted to create a 1:2 matched sample. Due
to the high concentration of firms in some industries, four distressed firms could not be matched
with a second normal firm of similar size. Therefore, 53 firms were included in
the second training sample (19:34).
Financial ratios
The usefulness of financial ratios and cash flow data for bankruptcy prediction
is substantial in comparison with the use of market return data (Mossman et al.
1998). Furthermore, a model using financial ratios from the year immediately
preceding bankruptcy produces the best results. Consequently, this paper uses financial
data of one year prior to financial distress to induce rules. Prior researchers have
identified financial ratios (of corporate America) for bankruptcy prediction or
financial distress prediction. However, due to the possible differences of firm
characteristics between corporate America and TSE firms, this paper uses fourteen
financial ratios commonly included in financial filings with the TSE. This paper
then examines the explanatory capability of these financial ratios using step-wise
regression and selects five variables for the final model. These five variables are
total liabilities/total assets, quick assets/current liabilities, sales/fixed assets, margin/
sales and cash dividend per share.
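The paper does not detail the step-wise procedure, so the following is only a rough sketch of one common variant: greedy forward selection scored by AIC, here with a logit fit via statsmodels. The library choice, the data file and the column names are assumptions for illustration, not the authors' actual procedure.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_select(data, candidates, target, max_vars=5):
    # Greedy forward selection: at each step add the ratio whose inclusion
    # gives the lowest AIC; stop when no candidate improves the fit.
    selected, current_aic = [], np.inf
    candidates = list(candidates)
    while candidates and len(selected) < max_vars:
        trials = []
        for var in candidates:
            X = sm.add_constant(data[selected + [var]])
            fit = sm.Logit(data[target], X).fit(disp=0)
            trials.append((fit.aic, var))
        best_aic, best_var = min(trials)
        if best_aic >= current_aic:
            break
        selected.append(best_var)
        candidates.remove(best_var)
        current_aic = best_aic
    return selected

# Hypothetical usage: a data frame with the fourteen ratio columns and a 0/1
# distress indicator (file name and column names are assumptions).
# ratios = pd.read_csv("tse_ratios.csv")
# print(forward_select(ratios, [c for c in ratios.columns if c != "distressed"], "distressed"))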
Models
The rule induction mechanism of CRIS is composed of three major
components: 1) a hypothesis generator that determines hurdle values and the proper
relationship between dependent and independent attributes, 2) a probability
calculator that determines the probability associated with each rule, and 3) a rule
scheduler that determines how candidate rules should be organized to form a
structure. The construction process includes the following steps:
1. The training data containing non-nominal independent variables and nominal dependent variables are entered.
2. Different algorithms are used for hypothesis generation based on the different properties of nominal and non-nominal attributes.
3. The hypotheses are converted to candidate rules by assessing their probabilities and making necessary modifications.
4. The resulting candidate rules are evaluated and selected to form a decision structure that can interpret the existing cases and facilitate future prediction.
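Liang (1992) gives the full algorithms; the fragment below is only a loose sketch of the hurdle-value idea for a single non-nominal attribute, searching for the cut-off that classifies the training cases best. The probability attached to the rule here is simply the training-set frequency, which is a simplification relative to CRIS's probability calculator, and the attribute values are hypothetical.

def induce_threshold_rule(values, labels, positive="distressed"):
    # Try each observed value as a hurdle (cut-off) for the rule
    # "attribute >= cut-off -> positive" and keep the cut-off that
    # classifies the most training cases correctly.
    best = None
    for cut in sorted(set(values)):
        hits = sum((v >= cut) == (lab == positive) for v, lab in zip(values, labels))
        covered = [lab for v, lab in zip(values, labels) if v >= cut]
        prob = covered.count(positive) / len(covered) if covered else 0.0
        if best is None or hits > best[0]:
            best = (hits, cut, prob)
    _, cut, prob = best
    return "If attribute >= %.4f Then class = %s with probability = %.2f" % (cut, positive, prob)

# Hypothetical liability-ratio values one year before the prediction date.
ratios = [72.1, 69.5, 80.2, 55.0, 48.3, 61.2]
labels = ["distressed", "distressed", "distressed", "normal", "normal", "normal"]
print(induce_threshold_rule(ratios, labels))
# -> If attribute >= 69.5000 Then class = distressed with probability = 1.00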
This paper also uses PCNeuron to construct the ANN. The input layer has five
process units and the output layer has two process units. Although the optimal
number of process units for the hidden layer is determined by trial and error, a
general rule of thumb is to use the average of the number of input and output process units.
Therefore, in the present study, there are three process units in the hidden layer.
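PCNeuron itself is not shown here; as an approximation of the same 5-3-2 architecture, a comparable feed-forward network could be specified with scikit-learn as follows. The library, parameter choices and variable names are assumptions for illustration, not the authors' actual tool.

from sklearn.neural_network import MLPClassifier

# A feed-forward network comparable in shape to the one described above:
# five input ratios, one hidden layer of three units, two output classes.
ann = MLPClassifier(hidden_layer_sizes=(3,),   # three hidden process units
                    activation="logistic",     # sigmoid activation
                    solver="lbfgs",
                    max_iter=2000,
                    random_state=0)

# Hypothetical usage: X_train holds the five selected ratios per firm and
# y_train is 1 for financially distressed firms, 0 otherwise.
# ann.fit(X_train, y_train)
# holdout_predictions = ann.predict(X_holdout)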
Model validation
If the training sample is not a fair representation of the problem domain, the
resulting classification error rate can be misleading (Hansen et al. 1993). Some
studies used the same training data for model validation after the models were
constructed. As a consequence, the misclassification rate would be very low and
the problem of overfitting may arise. One solution to this problem is to
construct the model from the training sample and use a holdout sample for
validation. Furthermore, the researcher can use a holdout sample from a later
period to test how robust the discriminatory power is over time (Joy & Tollefson
1975). Unfortunately, such a post-period holdout sample is not available in Taiwan because the
TSE has had only a few financially distressed firms since 1985.
The validation is performed in three steps. First, all data are used for both
model construction and model validation (i.e., no holdout sample). The results
facilitate examining the possibility of overfitting. Second, similar to prior research
(Liang 1992), this study repeats the experiment 20 times and the average accuracy
percentage is computed. In each experiment, thirteen financially distressed firms
are randomly selected into the training sample and the remaining six firms are
treated as the holdout sample. Accordingly, the matched normal firms are assigned
into the training and holdout samples, respectively. Finally, the jackknife method
(Lachenbruch 1967) is also used for model validation. The following hypotheses
are tested.
H1: There is no significant difference in classification accuracy between CRIS and the logit model.
H2: There is no significant difference in classification accuracy between ANN and the logit model.
H3: There is no significant difference in classification accuracy between CRIS and ANN.
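As an illustration of the two resampling schemes described above (20 random training/holdout splits and the jackknife), the sketch below applies them to a generic classifier. The use of scikit-learn, the logistic regression stand-in, the NumPy-array inputs and the assumed ordering of firms (19 distressed firms followed by their 19 matched normal firms) are all assumptions for this sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

def jackknife_accuracy(X, y):
    # Leave-one-out (jackknife) validation: each firm is held out once and
    # classified by a model trained on all remaining firms.
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)

def repeated_split_accuracy(X, y, n_repeats=20, n_holdout=6, seed=0):
    # Average holdout accuracy over repeated random splits: in each repetition
    # six distressed firms and their matched normal firms form the holdout
    # sample. Assumes the 19 distressed firms come first and the matched
    # normal firm of distressed firm i sits at index i + 19 (an assumption).
    rng = np.random.default_rng(seed)
    distressed = np.where(y == 1)[0]
    scores = []
    for _ in range(n_repeats):
        held = rng.choice(distressed, size=n_holdout, replace=False)
        holdout = np.concatenate([held, held + len(distressed)])
        train = np.setdiff1d(np.arange(len(y)), holdout)
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        scores.append(model.score(X[holdout], y[holdout]))
    return float(np.mean(scores))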
4. EMPIRICAL RESULTS
Classification accuracy
The comparison of classification accuracy across models of various studies
may not be meaningful if the financial ratios and data are different. Table 2 presents
classification accuracy among CRIS, neural computing and the logit model when
all data are used in the training sample (i.e., no holdout sample). All three models
perform well with accuracy of at least 89%. The overall accuracy consistently
improves for all three models when the sample size is increased from 38 firms
(19:19) to 53 firms (19:34). The logit model has the best performance (94.34%)
with 53 firms in the training sample. Nonetheless, this might reflect overfitting
resulting from using the same data for model validation.
Matched sample (distressed firms: normal firms) = 19:19

Model          Financially distressed companies    Normal companies    Overall
CRIS           89.47%                              94.74%              92.11%
ANN            89.47%                              89.47%              89.47%
Logit model    89.47%                              89.47%              89.47%

Matched sample (distressed firms: normal firms) = 19:34

Model          Financially distressed companies    Normal companies    Overall
CRIS           89.47%                              94.12%              92.45%
ANN            89.47%                              94.12%              92.45%
Logit model    94.74%                              94.12%              94.34%

Table 2. Classification accuracy with no holdout sample
Table 3 indicates that both CRIS and neural computing outperform the logit
model when holdout samples are used. Again, all three models consistently improve
their overall accuracy when the sample size is increased. The increase in sample
size significantly improves the accuracy of classifying normal firms. It appears
that CRIS is the only model whose Type I error probability is consistently lower
than its Type II error probability.
Matched sample (distressed firms: normal firms) = 19:19

Model          Financially distressed companies    Normal companies    Overall
CRIS           95.83%                              83.33%              89.58%
ANN            86.67%                              89.17%              87.92%
Logit model    79.19%                              86.67%              82.92%

Matched sample (distressed firms: normal firms) = 19:34

Model          Financially distressed companies    Normal companies    Overall
CRIS           94.17%                              94.12%              94.13%
ANN            95.00%                              94.57%              94.72%
Logit model    83.33%                              93.21%              89.74%

Table 3. Classification accuracy with 20 experiments
Table 4 presents the results using the jackknife method. Again, both CRIS
and neural computing outperform the logit model. Although the logit model has
the highest accuracy in predicting corporate financial distress, it also has the highest
Type II error. The results also indicate a positive effect on the overall classification
accuracy as the sample size increases.
Matched sample (distressed firms: normal firms) = 19:19

Model          Financially distressed companies    Normal companies    All companies
CRIS           89.47%                              84.21%              86.84%
ANN            84.21%                              89.47%              86.84%
Logit model    78.95%                              84.21%              81.58%

Matched sample (distressed firms: normal firms) = 19:34

Model          Financially distressed companies    Normal companies    All companies
CRIS           89.47%                              94.12%              92.45%
ANN            89.47%                              94.12%              92.45%
Logit model    94.74%                              88.24%              88.68%

Table 4. Classification accuracy using the jackknife method
Comparison among models
The Wilcoxon rank test is performed to examine any significant difference
between the models. The results (Tables 5, 6 and 7) indicate that CRIS outperforms
the logit model in both cases (the 19:19 sample and the 19:34 sample), while neural
computing outperforms the logit model only with the 19:34 sample. It
appears that there is no significant performance difference between CRIS and
neural computing.
                                Distressed firms:           Distressed firms:
                                normal firms = 19:19        normal firms = 19:34

Σ Ri                            -84                         -46
Σ Ri²                           1430.5                      343
T = Σ Ri/SQRT(Σ Ri²)            -2.22                       -2.48
                                Pr = 0.0132 *               Pr = 0.0066 *

Table 5. Comparison of CRIS and the logit model. * CRIS significantly outperforms the logit model.
                                Distressed firms:           Distressed firms:
                                normal firms = 19:19        normal firms = 19:34

Σ Ri                            -38                         -55
Σ Ri²                           1695.5                      343
T = Σ Ri/SQRT(Σ Ri²)            -0.92                       -2.97
                                Pr = 0.1788                 Pr = 0.0015 *

Table 6. Comparison of ANN and the logit model. * ANN significantly outperforms the logit model.
                                Distressed firms:           Distressed firms:
                                normal firms = 19:19        normal firms = 19:34

Σ Ri                            -14                         3
Σ Ri²                           267                         4.5
T = Σ Ri/SQRT(Σ Ri²)            -0.86                       1.42
                                Pr = 0.1949                 Pr = 0.9222

Table 7. Comparison of CRIS and ANN
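The statistic in Tables 5 to 7 is the signed-rank form T = Σ Ri/SQRT(Σ Ri²), where the Ri are the signed ranks of the paired accuracy differences across experiments; the reported Pr values are consistent with one-sided normal probabilities for T (for example, the standard normal probability below -2.22 is 0.0132). A minimal sketch, with hypothetical accuracy pairs, follows.

import numpy as np
from scipy.stats import rankdata, norm

def wilcoxon_t(acc_a, acc_b):
    # Signed-rank statistic T = sum(Ri) / sqrt(sum(Ri^2)) for paired accuracy
    # differences; ties get average ranks and zero differences are dropped.
    d = np.asarray(acc_a, dtype=float) - np.asarray(acc_b, dtype=float)
    d = d[d != 0]
    ranks = rankdata(np.abs(d)) * np.sign(d)   # signed ranks Ri
    t = ranks.sum() / np.sqrt((ranks ** 2).sum())
    return t, norm.cdf(t)                      # one-sided normal Pr(T <= t)

# Hypothetical per-experiment accuracies for two models over 20 splits:
# t_stat, p_value = wilcoxon_t(logit_accuracies, cris_accuracies)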
Figure 1. Comparison of accuracy among models. Sample Size: 38 Firms
Overall, both CRIS and neural computing outperform the logit model (Figure 1
and Figure 2). Although there is no significant performance difference between CRIS
and neural computing, it is easier to follow the (decision) rules derived by CRIS. The
hidden layer of ANN is difficult to interpret. In addition, larger training samples
yield better results in predicting corporate financial distress (Figure 3).
Figure 2. Comparison of accuracy among models. Sample Size: 53 Firms
Figure 3. Comparison of accuracy with different sample sizes
5. CONCLUSIONS
It is important to understand the early warning indicators and implications of
corporate financial distress. If stakeholders can predict that a company is heading
toward financial distress, they can take the necessary actions in time. Similarly, it is
vitally important for an auditor to be able to assess whether or not a company is a
going concern in preparing the audit report (Morris 1997). The significant
consequences of corporate financial distress have generated considerable research interest
and numerous methods have been applied to develop prediction models.
This paper used CRIS to derive rules for the prediction of corporate financial
distress in Taiwan. Using step-wise regression, this study identified five financial
ratios (total liabilities/total assets, quick assets/current liabilities, sales/fixed assets,
margin/sales and cash dividend per share) that could effectively predict financial
distress. It then applied CRIS and ANN to develop financial distress prediction
models using these five financial ratios. The empirical results indicate that both
CRIS and ANN outperform the logit model. Although both CRIS and ANN perform
rather well, CRIS has the advantage that the derived rules are easier for humans to
understand and interpret (the rules derived by CRIS are shown in the Appendix).
Due to the limitations of TSE data, this paper used only 19 financially distressed
firms. Accordingly, the results may be qualified as a consequence of the small
sample size (Liang et al. 1992, Bhattacharyya & Pendharkar 1998). Furthermore,
since the best prediction model of machine learning is identified through iterative
cycles, the results reported in this paper do not provide any conclusive statements
regarding the performance of CRIS and ANN. As explained earlier, the comparison
of classification accuracy across models may not be meaningful if the financial
ratios and data are different. The purpose of this paper is not to identify “the”
model. Instead, this paper attempts to apply an effective tool to assist stakeholders
in predicting corporate financial distress in Taiwan. More studies are needed to
continue this learning process. In many cases, the prediction accuracy can be
improved by inventing a more appropriate set of features to describe the available
data (Mitchell 1999).
We do have reason to search for machine learning programs that will avoid
the inefficiencies of human learning (Simon 1983). Indeed, humans do not
outperform machine learning when adequate historical data are available (Kattan
et al. 1993). The process of knowledge acquisition using interviews or protocol
analysis can be time-consuming and ineffective. An effective rule induction system
can assist knowledge engineers in identifying knowledge by collecting previous
cases solved by experts and identifying attributes that are relevant for decision-making. Advances in machine learning have made it possible to apply effective
tools to a variety of business problems in order to extract extra information from
existing data. Researchers can apply an effective inductive system such as CRIS
to solve other business problems. Subsequent studies can also incorporate the
prior probability of financial distress and misclassification costs to improve the
generalization of the research results.
APPENDIX: RULES DERIVED BY CRIS
FROM 1:1 MATCHING
If Gross Margin/Sales >= 31.8818
Then BANKRUPTCY = YES with probability = 0.86
If Liability Ratio >= 69.3450
Then BANKRUPTCY = YES with probability = 0.84
..
If FA Turnover >= 3.2517
Then BANKRUPTCY = NO with probability = 0.76
..
If Gross Margin/Sales >= 27.5881
Then BANKRUPTCY = YES with probability = 0.70
..
If Liability Ratio < 61.0398
Then BANKRUPTCY = NO with probability = 0.88
..
If Liability Ratio < 67.0905
Then BANKRUPTCY = NO with probability = 0.82
..
If Liability Ratio >= 67.0905
Then BANKRUPTCY = YES with probability = 0.82
..
The structure misclassifies the following in 38 input cases: #6 17 36
FROM 1:2 MATCHING
If Liability Ratio < 64.7217
Then BANKRUPTCY = NO with probability = 0.85
..
If Per Cash dividend >= 0.2549
Then BANKRUPTCY = NO with probability = 0.88
..
If Current Ratio < 18.4393
Then BANKRUPTCY = YES with probability = 0.61
..
If Gross Margin/Sales < 0.1514
Then BANKRUPTCY = YES with probability = 0.65
..
If Liability Ratio >= 71.3438
Then BANKRUPTCY = YES with probability = 0.88
..
If Liability Ratio < 66.4588
Then BANKRUPTCY = NO with probability = 0.83
..
If Liability Ratio >= 66.4588
Then BANKRUPTCY = YES with probability = 0.83
..
The structure misclassifies the following in 53 input cases: #10 11 37 51
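To show how a rule structure like the one above could be applied, the fragment below encodes a few of the 1:1-matching rules as tuples and evaluates them in order for a hypothetical firm. The first-match evaluation and the rule subset are simplifications of CRIS's rule scheduler, not a reproduction of it; the attribute keys and the example firm are assumptions.

# Each rule: (attribute, comparison, cut-off, predicted class, probability).
# Cut-offs and probabilities are taken from the 1:1 matching rules above.
RULES_1TO1 = [
    ("gross_margin_sales", ">=", 31.8818, "YES", 0.86),
    ("liability_ratio",    ">=", 69.3450, "YES", 0.84),
    ("fa_turnover",        ">=", 3.2517,  "NO",  0.76),
    ("liability_ratio",    "<",  61.0398, "NO",  0.88),
]

def classify(firm, rules=RULES_1TO1):
    # Return the first matching rule's prediction for a dict of ratios.
    for attr, op, cutoff, outcome, prob in rules:
        value = firm[attr]
        if (op == ">=" and value >= cutoff) or (op == "<" and value < cutoff):
            return outcome, prob
    return "UNDECIDED", None

# Hypothetical firm one year before the prediction date.
firm = {"gross_margin_sales": 12.4, "liability_ratio": 73.2, "fa_turnover": 2.1}
print(classify(firm))   # -> ('YES', 0.84): flagged as likely distressed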
6. REFERENCES
Aharony, J., C. P. Jones and I. Swary. (1980): An analysis of risk and return
characteristics of corporate bankruptcy using capital market data. Journal of Finance
(September): 1001-1016.
Altman, E. I. (1968): Financial ratios, discriminant analysis and the prediction of
corporate bankruptcy. Journal of Finance (September): 589-609.
______, G. Marco and F. Varetto. (1994): Corporate distress diagnosis: comparisons
using linear discriminant analysis and neural networks (the Italian experience).
Journal of Banking and Finance 18: 505-529.
Beaver, W. H. (1966): Financial ratios as predictors of failure. Empirical Research
in Accounting, Supplement to Journal of Accounting Research: 71-111.
______. (1968): Market price, financial ratios and the prediction of failure. Journal
of Accounting Research (Autumn): 179-192.
Bhattacharyya, S. and P. C. Pendharkar. (1998): Inductive, evolutionary, and
neural computing techniques for discrimination: a comparative study. Decision
Sciences (Fall): 871-899.
Clark, C. F., P. L. Foster, K. M. Hogan and G. H. Webster. (1997): Judgment
approach to forecasting bankruptcy. Journal of Business Forecasting Methods &
Systems (Summer): 14-18.
Coats, P. K. and L. F. Fant. (1993): Recognizing financial distress patterns using a
neural network tool. Financial Management (Autumn): 142-155.
Cronan, T. P., L. W. Glorferd and L. G. Perry. (1991): Production system
development for expert systems using a recursive partitioning induction approach:
an application to mortgage, commercial, and consumer lending. Decision Sciences
Vol. 22: 812-845.
Deakin, E. B. (1972): A discriminant analysis of predictors of business failure.
Journal of Accounting Research (Spring): 167-179.
Frydman, H., E. I. Altman and D-L. Kao. (1985): Introducing recursive partitioning
for financial classification: the case of financial distress. Journal of Finance
(March): 269-291.
Hansen, J. V. and W. F. Messier. (1991): Artificial neural networks: foundations
and application to a decision problem. Expert Systems with Applications (Vol. 3):
135-141.
Hansen, J. V., G. J. Koehler, W. F. Messier Jr. and J. F. Mutchler. (1993): Developing
knowledge structure: a comparison of a qualitative-response model and two
machine-learning algorithms. Decision Support Systems (September): 235-243.
Hill, N. T., S. E. Perry and S. Andes. (1996): Evaluating firms in financial distress:
an event history analysis. Journal of Applied Business Research (Summer): 60-71.
Joy, O. M and J. O. Tollefson. (1975): On the financial application of discriminant
analysis. Journal of Financial and Quantitative Analysis (December): 723-739.
Kane, G. D., F. M. Richardson and N. L. Mead. (1998): Rank transformations and
the prediction of corporate failure. Contemporary Accounting Research (Summer):
145-166.
Kattan, M. W., D. A. Adams and M. S. Parks. (1993): A comparison of machine
learning with human judgment. Journal of Management Information Systems
(Spring): 37-57.
Lachenbruch, P. A. (1967): An almost unbiased method of obtaining confidence
interval for the probability of misclassification in discriminant analysis. Biometrics
(December): 639-645.
Lee, K. C., I. Han and Y. Kwon. (1996): Hybrid neural network model for
bankruptcy prediction. Decision Support Systems (September): 63-72.
Liang, T. P. (1992): A composite approach to inducing knowledge for expert systems
design. Management Science (January): 1-17.
______., J. S. Chandler, I. Han and J. Roan. (1992): An empirical investigation of
some data effects on the classification accuracy of probit, ID3, and neural networks.
Contemporary Accounting Research (Fall): 306-328.
Martin, D. (1977): Early warning of bank failure: a logit regression approach.
Journal of Banking and Finance 1: 249-276.
Messier, W. F. Jr. and J. V. Hansen. (1988): Inducing rules for expert system
development: an example of using default and bankruptcy data. Management
Science (December): 1403-1415.
Mitchell, T. M. (1999): Machine learning and data mining. Communications of
the ACM (November): 31-36.
Morris, R. (1997): Early Warning Indicators of Corporate Failure. (Aldershot,
England: Ashgate Publishing Ltd.).
Mossman, C. E., G. G. Bell, L. M. Swartz and H. Turtle. (1998): An empirical
comparison of bankruptcy models. Financial Review (May): 35-53.
Odom, M. D. and R. Sharda. (1990): A neural network model for bankruptcy
prediction. Proceedings of the International Joint Conference on Neural Networks
II:163-167.
Ohlson, J. A. (1980): Financial ratios and the probabilistic prediction of bankruptcy.
Journal of Accounting Research (Spring): 109-131.
Platt, H. P. and M. B. Platt. (1990): Development of a class of stable predictive
variables: the case of bankruptcy prediction. Journal of Business Finance and
Accounting 17 (1):31-51.
Quinlan, J. R. (1979): Discovering rules by induction from large collections of
examples. in D. Michie (Ed.) Expert Systems in the Microelectronic Age.
(Edinburgh, England: Edinburgh University Press): 168-201.
Salchenberger, L. M., E. M. Cinar and N. A. Lash. (1992): Neural networks: a
new tool for predicting thrift failure. Decision Sciences Vol. 23: 899-916.
Simon, H. A. (1983): Why should machines learn? In R. S. Michalski, J. G.
Carbonell and T. M. Mitchell (Ed.) Machine Learning: An Artificial Intelligence
Approach. (Palo Alto, CA: Tioga Publishing Co.).
Tam, K. Y. & M. Y. Kiang. (1992): Managerial applications of neural networks:
the case of bank failure predictions. Management Science (July): 926-947.
Wilson, R. L. and R. Sharda. (1994): Bankruptcy prediction using neural networks.
Decision Support Systems (June): 545-557.
Zmijewski, M. (1983): Methodological issues related to the estimation of financial
distress prediction models. Journal of Accounting Research, Supplement on Current
Econometric Issues in Accounting Research: 59-82.
Zopounidis, C. and A. I. Dimitras. (1998): Multicriteria Decision Aid Methods for
the Prediction of Business Failure. (Dordrecht, The Netherlands: Kluwer Academic
Publishers).