Research Article
A New Semisupervised-Entropy Framework of Hyperspectral Image Classification Based on Random Forest
Table 1
The Generate_decision_tree algorithm (using the given training data to produce a decision tree).
| Algorithm: Generate_decision_tree (using the given training data to produce a decision tree) |
| Input: the training data set samples, whose attributes take discrete values, and the set of candidate attributes attribute_list |
| Output: a decision tree |
| Method: |
| create a node N; |
| if the samples are all in the same class C then |
| return N as a leaf node labeled with class C; |
| if attribute_list is empty then |
| return N as a leaf node labeled with the most common class in samples; // majority voting |
| select test_attribute, the attribute in attribute_list with the highest information gain (information gain is used as the attribute selection measure); |
| label node N with test_attribute; |
| for each known value a_i of test_attribute // partition the samples |
| grow a branch from node N for the condition test_attribute = a_i; |
| let s_i be the set of samples for which test_attribute = a_i; // a partition |
| if s_i is empty then |
| attach a leaf labeled with the most common class in samples; // majority voting |
| else attach the node returned by Generate_decision_tree(s_i, attribute_list - test_attribute); |
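The recursive procedure in Table 1 can be sketched in Python. This is a minimal ID3-style illustration, not the paper's implementation: the helper names (`entropy`, `information_gain`, `generate_decision_tree`) and the dict-based tree representation are assumptions made for the example, and samples are assumed to be dicts mapping attribute names to discrete values.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, attr):
    """Reduction in entropy obtained by splitting the samples on attribute attr."""
    n = len(labels)
    split = {}
    for s, y in zip(samples, labels):
        split.setdefault(s[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

def generate_decision_tree(samples, labels, attribute_list):
    """ID3-style tree construction mirroring Table 1.

    Internal nodes are dicts {"attribute": ..., "branches": {value: subtree}};
    leaves are class labels.
    """
    # If the samples are all in the same class C, return a leaf labeled C.
    if len(set(labels)) == 1:
        return labels[0]
    # If attribute_list is empty, return a leaf labeled with the
    # most common class in samples (majority voting).
    if not attribute_list:
        return Counter(labels).most_common(1)[0][0]
    # Select test_attribute, the attribute with the highest information gain.
    test_attribute = max(
        attribute_list, key=lambda a: information_gain(samples, labels, a)
    )
    node = {"attribute": test_attribute, "branches": {}}
    # Partition the samples by each known value a_i of test_attribute;
    # values absent from the data yield no branch here (the "s_i is empty"
    # case in Table 1 arises only when values are enumerated in advance).
    partitions = {}
    for s, y in zip(samples, labels):
        partitions.setdefault(s[test_attribute], []).append((s, y))
    remaining = [a for a in attribute_list if a != test_attribute]
    for value, part in partitions.items():
        sub_samples = [s for s, _ in part]
        sub_labels = [y for _, y in part]
        # Recurse on the partition with test_attribute removed.
        node["branches"][value] = generate_decision_tree(
            sub_samples, sub_labels, remaining
        )
    return node
```

On a toy data set where one attribute perfectly separates the classes, the procedure picks that attribute at the root (its information gain is maximal) and the recursion terminates immediately at pure leaves.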