Evaluating Multiple Models with Looped X-Validation (Loops + Macros)
This process shows how several different models can be evaluated with cross-validation. It allows comparing, for example, three prediction model types (e.g., ANN, SVM, and DT) at once under the same x-fold cross-validation.
More precisely, the process performs the same cross-validation several times, each time with a different modeling scheme. It makes use of loops, collections, subprocess selection, and macros, and is therefore also an interesting showcase for more complex process designs in RapidMiner.
The process begins by defining the number of models to be evaluated (Set Macro), which sets the macro "max" to the value 3 in this example. The next step is the generation of data, which would normally be replaced by loading your own data from the repository or by an ETL subprocess.
The interesting part begins now: the Loop operator iterates the defined number of times over its inner process, which consists of a macro definition for the current iteration followed by the cross-validation. The cross-validation is configured with a local random seed in its parameters to ensure that exactly the same data split is performed in every iteration. In each iteration, the training subprocess selects a different learner according to the value of the macro "current_iteration". Please make sure that the number of learners defined here matches the value of the macro "max"!
The results are automatically collected into a Performance Vector Collection by the Loop operator and delivered as the final result, so the models can be compared side by side.
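Since the process itself is built graphically in RapidMiner, the looped-evaluation logic can be hard to picture from prose alone. The sketch below mimics it in plain Python under stated assumptions: the two toy "learners", the fold count, and the seed value are hypothetical stand-ins for the actual operators (ANN, SVM, DT), not the process itself. The key point it demonstrates is computing the folds once with a fixed seed, so every model is scored on identical splits.

```python
import random
import statistics

def kfold_indices(n, k, seed):
    """Fixed-seed shuffle so every model sees identical folds, mirroring the
    local random seed set on the cross-validation operator."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

# Hypothetical stand-ins for the learners (e.g., ANN, SVM, DT): each "model"
# is a function that trains on one partition and returns accuracy on the other.
def sign_of_mean(train, test):
    m = statistics.mean(y for _, y in train)
    return statistics.mean(1.0 if (y > 0) == (m > 0) else 0.0 for _, y in test)

def majority_class(train, test):
    pos = sum(1 for _, y in train if y > 0) >= len(train) / 2
    return statistics.mean(1.0 if (y > 0) == pos else 0.0 for _, y in test)

models = {"model_1": sign_of_mean, "model_2": majority_class}  # "max" = 2 here

# Synthetic data standing in for the generated ExampleSet: (features, label).
rng = random.Random(0)
data = [((rng.random(),), rng.choice([-1, 1])) for _ in range(100)]

folds = kfold_indices(len(data), k=10, seed=1992)  # computed once: same split everywhere

results = {}
for name, model in models.items():            # outer loop over the model types
    scores = []
    for fold in folds:                        # inner x-validation loop
        held_out = set(fold)
        train = [data[i] for i in range(len(data)) if i not in held_out]
        test = [data[i] for i in fold]
        scores.append(model(train, test))
    results[name] = statistics.mean(scores)   # the collected "performance vector"
```

As in the RapidMiner process, the fairness of the comparison comes entirely from fixing the seed before the loop; if each iteration reshuffled the data, differences between models would be confounded with differences between splits.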