Tag Results

Items tagged with "rapidminer" (55)

Note: some items may not be visible to you due to viewing permissions.


Files (2)

Blob Datasets for the pack: RCOMM2011 recommender systems...

Created: 2011-05-05 21:18:51 | Last updated: 2011-05-06 12:13:22

Credits: Matko Bošnjak, Ninoaf

License: Creative Commons Attribution-Share Alike 3.0 Unported License

Dataset description: items: This is a concatenated train and test set from the ECML/PKDD Discovery Challenge 2011. Only the ID and name attributes were kept; the other attributes were discarded because of the size of the dataset. This example set represents the content information for each of the items, identified by an ID. user_history: This is an example set consisting of randomly sampled IDs from the items dataset. It represents the user's history, i.e. all the items (in this case lectures) the user has viewed. u...

File type: ZIP archive

Comments: 0 | Viewed: 766 times | Downloaded: 448 times



Blob Experimental user to item score matrix Excel file

Created: 2011-11-26 20:07:02 | Last updated: 2011-11-26 20:07:04

Credits: Matko Bošnjak

License: Creative Commons Attribution-Share Alike 3.0 Unported License

A test file for the "Collaborative filtering recommender" workflow.

File type: Excel workbook

Comments: 0 | Viewed: 327 times | Downloaded: 203 times


Groups (1)

Group RapidMiner Demo

Unique name: rapidminer_demo
Created: Saturday 01 May 2010 09:49:28 (UTC)

This group was created for demo processes used in RapidMiner documentation, manuals, or training. Feel free to add workflows if you consider them instructive examples.

5 shared items | 1 announcement


Latest announcement: RCOMM 2011

Packs (10)

Pack RapidMiner plugin for Taverna videos and descriptions


Created: 2011-06-06 10:17:52 | Last updated: 2011-12-13 16:02:04

This pack contains videos that show how to use various parts of the RapidMiner plugin for Taverna. The videos demonstrate how to build a Taverna workflow that collects a GEO dataset, uploads it to RapidAnalytics, trains a classifier on one half of the data, and tests it on the other half. This classification process can be used to gauge how well mutant and control assays agree across experimental repeats.

5 items in this pack

Comments: 0 | Viewed: 150 times | Downloaded: 62 times



Pack Creating a focused corpus of factual outcomes from b...


Created: 2011-06-28 11:19:04 | Last updated: 2011-12-13 16:02:16

This pack contains resources and supplementary files for the submission to the MIND2011 workshop titled "Creating a focused corpus of factual outcomes from biomedical experiments" by James Eales, George Demetriou, and Robert Stevens.

1 item in this pack

Comments: 0 | Viewed: 69 times | Downloaded: 40 times



Pack RapidAnalytics Video Series Demo Processes


Created: 2011-11-02 15:02:21 | Last updated: 2011-11-02 18:00:41

This pack contains RapidMiner processes created for the RapidAnalytics Video Series.

3 items in this pack

Comments: 0 | Viewed: 156 times | Downloaded: 73 times



Pack Who Wants to be a Data Miner?


Created: 2011-11-02 17:54:07 | Last updated: 2013-09-09 16:22:11

One of the most fun events at the annual RapidMiner Community Meeting and Conference (RCOMM) is the live data mining process design competition "Who Wants to be a Data Miner?". In this competition, participants must design RapidMiner processes for a given goal within a few minutes. The tasks are related to data mining and data analysis, but are rather uncommon. In fact, most of the challenges ask for things RapidMiner was never supposed to do. This pack contains solutions for these...

12 items in this pack

Comments: 0 | Viewed: 258 times | Downloaded: 138 times


Pack e-LICO recommender workflows


Created: 2011-03-15 15:33:48 | Last updated: 2012-01-28 19:39:06

This pack contains recommender system workflows created for the e-LICO project.

6 items in this pack

Comments: 0 | Viewed: 294 times | Downloaded: 159 times


Pack RCOMM2011 recommender systems workflow templates


Created: 2011-04-07 14:59:37 | Last updated: 2012-01-28 19:37:47

No description

6 items in this pack

Comments: 0 | Viewed: 478 times | Downloaded: 162 times


Pack Online update experiment pack


Created: 2012-01-29 16:29:09 | Last updated: 2012-01-29 22:06:46

This pack contains experimentation workflows and datasets for testing online updates in item recommendation and rating prediction.

12 items in this pack

Comments: 0 | Viewed: 93 times | Downloaded: 63 times


Pack Experimentation for recommender extension templates


Created: 2012-01-28 21:54:16 | Last updated: 2012-01-31 16:01:43

This is an experimentation pack for the recommender extension.

6 items in this pack

Comments: 0 | Viewed: 97 times | Downloaded: 50 times



Pack Sudoku solving with RapidMiner (Who Wants to be a Da...


Created: 2012-09-04 16:56:44 | Last updated: 2012-09-04 16:58:26

A fun event at the annual RapidMiner conference RCOMM is the live data mining challenge "Who Wants to be a Data Miner?", where contestants solve data analysis tasks within a few minutes. In 2012 the task was to (partially) solve a Sudoku puzzle. Processes 1 to 3 in this pack correspond to the three tasks, whereas process 0 loads the initial data and task 4 is a bonus process that solves the entire Sudoku. Make sure the processes are saved under the name they have on myExperimen...

5 items in this pack

Comments: 0 | Viewed: 108 times | Downloaded: 76 times



Pack RCOMM 2013 Data Mining Challenge


Created: 2013-09-09 16:20:45 | Last updated: 2013-09-09 16:24:00

This pack contains the solution and the input data generator for one of the tasks of the RCOMM 2013 data mining challenge "Who Wants to be a Data Miner?". Participants had to solve the task within 10 minutes. The task was this: Given
- a variant of the Golf data set (found in the //Samples/data folder) where the attribute Outlook is missing,
- a decision tree model built on the complete Golf data set, and
- a utility data set containing only the three distinct values of Golf,
create an exampl...

2 items in this pack

Comments: 0 | Viewed: 60 times | Downloaded: 31 times


Workflows (42)

Workflow Image Mining with RapidMiner (1)

This is an image mining process using the image mining Web service provided by NHRF within e-LICO. It first uploads a set of images found in a directory, then preprocesses the images and visualizes the result. Furthermore, references to the uploaded images are stored in the local RapidMiner repository so they can later be used for further processing without uploading images a second time.

Created: 2010-04-28 | Last updated: 2012-01-16

Workflow Looping over Examples for doing de-aggrega... (1)

This process is based on (artificially generated) data that looks like it has been aggregated before. The integer attribute Qty specifies the quantity of the given item that is represented by the rest of the example. The process loops over every example and performs another loop on each example, appending the current example to a new example set. This example set was created as an empty copy of the original example set, so that the attributes are equal. To get access to and rem...

Created: 2010-04-29
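The de-aggregation idea above is easy to sketch outside RapidMiner. A minimal Python version, assuming a hypothetical Qty attribute as in the description (an illustration, not the actual process):

```python
# De-aggregation sketch: repeat each example Qty times, dropping Qty.
# The data below is hypothetical, standing in for the generated example set.
rows = [
    {"Item": "apple", "Qty": 3},
    {"Item": "pear",  "Qty": 1},
]

deaggregated = []
for row in rows:                       # outer loop over examples
    for _ in range(row["Qty"]):        # inner loop appends Qty copies
        deaggregated.append({k: v for k, v in row.items() if k != "Qty"})

print(deaggregated)
# [{'Item': 'apple'}, {'Item': 'apple'}, {'Item': 'apple'}, {'Item': 'pear'}]
```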

Workflow Using Remember / Recall for "tunneling" re... (1)

This process shows how the Remember and Recall operators can be used for passing results from one position in the process to another when a direct connection is impossible. This process introduces another advanced RapidMiner technique: macro handling. We have used the predefined macro a, accessed by %{a}, which gives the apply count of the operator. So we are remembering each application of the models that are generated in the learning subprocess of the Split validation. Af...

Created: 2010-04-29 | Last updated: 2012-01-16

Workflow CamelCases (1)

This process splits up CamelCase words.

Created: 2010-06-02
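For readers who want the same behavior outside RapidMiner, a minimal Python sketch of CamelCase splitting (a regex-based stand-in, not the actual process):

```python
import re

def split_camel_case(token: str) -> list[str]:
    # Insert a space at lowercase/digit-to-uppercase boundaries ("camelCase")
    # and before an uppercase letter followed by lowercase ("HTTPServer").
    spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])", " ", token)
    return spaced.split()

print(split_camel_case("RapidMinerProcess"))  # ['Rapid', 'Miner', 'Process']
print(split_camel_case("HTTPServer"))         # ['HTTP', 'Server']
```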


Workflow Connect to twitter and analyze the key words (1)

Hi All, This workflow connects RapidMiner to Twitter and downloads the timeline. It then creates a wordlist from the tweets and breaks them into the key words mentioned in the tweets. You can then visualize the key words mentioned in the tweets. This workflow can be further modified to review various key events that have been talked about in Twitterland. Do let me know your feedback and feel free to ask me any questions that you may have. Shaily web: http://advanced-analyti...

Created: 2010-07-26 | Last updated: 2010-07-26

Workflow 2. Getting Started: Retrieve and Apply a M... (1)

This getting started process demonstrates how to load (retrieve) a model from the repository and apply it to a data set. The result is a data set (at the "lab" output for "labeled data") which has a new "prediction" attribute that indicates the prediction for each example (i.e. row/record). You will need to adjust the path of the retrieve data operator to the actual location where the model was stored by a previous execution of the "1. Getting Started: Learn and Store a...

Created: 2011-01-17 | Last updated: 2011-01-19

Workflow 1. Getting Started: Learn and Store a Model (1)

This getting started process shows the first step of learning and storing a model. After a model is learned, you can load (Retrieve operator) the model and apply it to a test data set (see 2. Getting Started: Retrieve and Apply Model). The process is NOT concerned with evaluation of the model. This process will not immediately run in RapidMiner because you have to adjust the repository path in the Retrieve operator. Tags: Rapidminer, model, learn, learning, training, train, store, first step

Created: 2011-01-17 | Last updated: 2011-01-17

Workflow Change Class Distribution of Your Training... (1)

This example process shows how to change the class distribution of your training data set (in this case the training data is whatever comes out of the "myData reader"). The given training set has a distribution of 10 "Iris-setosa" examples, 40 "Iris-versicolor" examples, and 50 "Iris-virginica" examples. The aim is to get a data set with a different class distribution for the label, let's say 10 "Iris-setosa", 20 "Iris-versicolor", and 20 "Iris-virginica". Beware that this may change some propert...

Created: 2011-01-21 | Last updated: 2011-01-21
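A minimal Python sketch of the same resampling idea, using hypothetical label-only examples and the target counts from the description (sampling without replacement, which only works when enough examples of each class are available):

```python
import random

random.seed(0)
# Hypothetical training data matching the 10/40/50 distribution described above.
data = (["Iris-setosa"] * 10 + ["Iris-versicolor"] * 40 + ["Iris-virginica"] * 50)

target = {"Iris-setosa": 10, "Iris-versicolor": 20, "Iris-virginica": 20}

resampled = []
for label, count in target.items():
    examples = [ex for ex in data if ex == label]
    resampled.extend(random.sample(examples, count))  # without replacement

print({lbl: resampled.count(lbl) for lbl in target})
# {'Iris-setosa': 10, 'Iris-versicolor': 20, 'Iris-virginica': 20}
```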

Workflow Random recommender (1)

This process performs random item recommendation: for a given item ID, it randomly recommends a desired number of items from the example set of items. The purpose of this workflow is to produce a random-recommendation baseline for comparison with different recommendation solutions on different retrieval measures. The inputs to the process are context-defined macros: %{id} defines the item ID for which we would like to obtain recommendations, and %{recommender_no} defines the required number of ...

Created: 2011-03-15 | Last updated: 2011-03-15
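A random baseline is a few lines in any language; a Python sketch, where query_id and n stand in for the %{id} and %{recommender_no} macros:

```python
import random

def random_recommender(item_ids, query_id, n):
    # Recommend n items uniformly at random, excluding the query item itself.
    candidates = [i for i in item_ids if i != query_id]
    return random.sample(candidates, n)

print(random_recommender(["a", "b", "c", "d", "e"], query_id="a", n=3))
```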

Workflow Collaborative filtering recommender (1)

This process executes a collaborative filtering recommender based on a user-to-item score matrix. This recommender predicts a user's scores on items they have not scored, based on similarity with other users. The inputs to the process are context-defined macros: %{id} defines the item ID for which we would like to obtain recommendations, %{recommender_no} defines the required number of recommendations, and %{number_of_neighbors} defines the number of the most similar users taken into a...

Created: 2011-03-15 | Last updated: 2012-03-06
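The core of user-based collaborative filtering can be sketched with NumPy. This is an illustrative approximation of the idea, not the RapidMiner process: zeros encode "not scored" and neighbor scores are simply averaged.

```python
import numpy as np

def predict_scores(R, user, k):
    # Cosine similarity between the given user and all users.
    sims = R @ R[user] / (np.linalg.norm(R, axis=1) * np.linalg.norm(R[user]) + 1e-9)
    sims[user] = -np.inf                 # exclude the user themself
    neighbors = np.argsort(sims)[-k:]    # k most similar users
    preds = R[neighbors].mean(axis=0)    # average the neighbors' scores
    preds[R[user] > 0] = 0               # predict only unscored items
    return preds

R = np.array([[5, 3, 0, 1],              # hypothetical user-to-item scores
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)
print(predict_scores(R, user=0, k=2))    # predicted scores for user 0
```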

Workflow Content based recommender (1)

This process is a special case of the item-to-item similarity matrix based recommender, where the item-to-item similarity is calculated as cosine similarity over TF-IDF word vectors obtained from textual analysis of all the available textual data. The inputs to the process are context-defined macros: %{id} defines the item ID for which we would like to obtain recommendations, and %{recommender_no} defines the required number of recommendations. The process internally uses an example set of...

Created: 2011-03-15 | Last updated: 2011-03-15
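A compact stand-in for the TF-IDF/cosine part, using scikit-learn (the item texts and IDs are hypothetical; query_id and n play the roles of %{id} and %{recommender_no}):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {1: "machine learning lecture", 2: "deep learning tutorial",
         3: "cooking pasta at home",    4: "statistical learning course"}

ids = list(items)
tfidf = TfidfVectorizer().fit_transform(items[i] for i in ids)
sims = cosine_similarity(tfidf)          # item-to-item cosine similarities

def recommend(query_id, n):
    row = sims[ids.index(query_id)].copy()
    row[ids.index(query_id)] = -1.0      # never recommend the query item
    ranked = sorted(zip(ids, row), key=lambda p: p[1], reverse=True)
    return [i for i, _ in ranked[:n]]

print(recommend(query_id=1, n=2))
```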

Workflow Item to item similarity matrix -based reco... (1)

This process executes a recommendation based on an item-to-item similarity matrix. The inputs to the process are context-defined macros: %{id} defines the item ID for which we would like to obtain recommendations, and %{recommender_no} defines the required number of recommendations. The process internally uses an item-to-item similarity matrix written in pairwise form (id1, id2, similarity). The process essentially filters out appearances of the required ID in both of the columns of the pairwis...

Created: 2011-03-15 | Last updated: 2011-03-15
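The filtering step described above is straightforward; a Python sketch over a hypothetical pairwise table:

```python
# Hypothetical similarity matrix in pairwise form (id1, id2, similarity).
pairs = [("a", "b", 0.9), ("a", "c", 0.4), ("b", "c", 0.7), ("b", "d", 0.2)]

def recommend(query_id, n):
    # Keep rows where the query ID appears in either column, read off the
    # other ID, and return the n most similar items.
    scored = [(i2 if i1 == query_id else i1, s)
              for i1, i2, s in pairs if query_id in (i1, i2)]
    scored.sort(key=lambda p: p[1], reverse=True)
    return [item for item, _ in scored[:n]]

print(recommend("b", 2))  # ['a', 'c']
```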

Workflow Content based recommender system template (1)

As an input, this workflow takes two distinct example sets: a complete set of items with IDs and appropriate textual attributes (item example set), and a set of IDs of items our user has interacted with (user example set). Also, a macro %{recommendation_no} is defined in the process context as the required number of output recommendations. The first steps of the workflow preprocess those example sets: select only the textual attributes of the item example set, and set ID roles on both of th...

Created: 2011-05-05 | Last updated: 2011-05-09

Credits: Matko Bošnjak, Ninoaf

Attributions: Blob Datasets for the pack: RCOMM2011 recommender systems workflow templates

Workflow Item-based collaborative filtering recomme... (1)

The workflow for item-based collaborative filtering receives a user-item matrix as its input, and the same context-defined macros as the user-based recommender template, namely %{id}, %{recommendation_no}, and %{number_of_neighbors}. Although this process is in theory very similar to the user-based technique, it differs in several processing steps, since we are dealing with an item-user matrix, the transposed user-item example set. The first step of the workflow, after declaring zero values miss...

Created: 2011-05-05 | Last updated: 2011-05-09

Credits: Matko Bošnjak, Ninoaf

Attributions: Blob Datasets for the pack: RCOMM2011 recommender systems workflow templates

Workflow User-based collaborative filtering recomme... (1)

The workflow for user-based collaborative filtering takes only one example set as an input: a user-item matrix, where the attributes denote item IDs and the rows denote users. If a user i has rated an item j with a score s, the matrix will have the value s written in the i-th row and j-th column. In the context of the process we define the ID of the user %{id}, the desired number of recommendations %{recommendation_no}, and the number of neighbors used in ca...

Created: 2011-05-05 | Last updated: 2011-05-09

Credits: Matko Bošnjak, Ninoaf

Attributions: Blob Datasets for the pack: RCOMM2011 recommender systems workflow templates


Workflow SVD user-based collaborative filtering rec... (1)

This workflow takes a user-item matrix A as an input. It then calculates the reduced SVD decomposition A_k by taking only the k greatest singular values and the corresponding singular vectors. The workflow calculates recommendations and predictions for a particular user %{id} from matrix A. The particular row %{id} is taken from the original matrix A and replaced with the %{id} row of the A_k matrix. Predictions are made for user %{id} based on the other users in A_k. Note: This workflow uses the R-script operator with R library ...

Created: 2011-05-09 | Last updated: 2011-05-09

Credits: Ninoaf, Matko Bošnjak

Attributions: Workflow User-based collaborative filtering recommender system template | Blob Datasets for the pack: RCOMM2011 recommender systems workflow templates
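The description mentions an R script; the same rank-k reconstruction is a few lines with NumPy. A hedged sketch over a hypothetical matrix, not the workflow's data:

```python
import numpy as np

def rank_k_approximation(A, k):
    # Reduced SVD: keep the k greatest singular values and the
    # corresponding singular vectors, so that A_k = U_k S_k V_k^T.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.array([[5, 3, 0, 1],    # hypothetical user-item matrix
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)
A_k = rank_k_approximation(A, k=2)
print(A_k[0])  # smoothed row for one user, including unscored items
```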


Workflow LSI content based recommender system template (1)

This workflow performs LSI text-mining content-based recommendation. We use SVD to capture latent semantics between items and words and to obtain a low-dimensional representation of items. Latent Semantic Indexing (LSI) takes the k greatest singular values and the corresponding left and right singular vectors to obtain the matrix A_k = U_k * S_k * V_k^T. Items are represented as word vectors in the original space, where each row in matrix A represents the word vector of a particular item. Matrix U_k, on the other hand ...

Created: 2011-05-06 | Last updated: 2011-05-09

Credits: Ninoaf, Matko Bošnjak

Attributions: Workflow Content based recommender system template | Blob Datasets for the pack: RCOMM2011 recommender systems workflow templates

Workflow Mining Semantic Web data using FastMap - E... (1)

This workflow describes how to learn from the Semantic Web's data. The input to the workflow is a feature vector developed from an RDF resource. The loaded example set is then divided into training and test parts. These sub-example sets are used by the FastMap operators (which encapsulate the FastMap data transformation technique); these process one feature at a time and transform the data into a different space. This transformed data is more meaningful and helps the learner to improve classifica...

Created: 2011-06-25 | Last updated: 2011-06-25

Workflow Mining Semantic Web data using Corresponde... (1)

This workflow describes how to learn from the Semantic Web's data using the data transformation algorithm 'Correspondence Analysis'. The input to the workflow is a feature vector developed from an RDF resource. The loaded example set is divided into training and test parts. These sub-example sets are used by the Correspondence Analysis operators (which encapsulate the Correspondence Analysis data transformation technique); these process one feature at a time and transform the data into a different...

Created: 2011-06-25 | Last updated: 2011-06-25

Workflow Mining Semantic Web data using Corresponde... (1)

This workflow explains how an example set can be extracted from an RDF resource using the provided SPARQL query. This example set is then divided into training and test parts. These sub-example sets are used by the Correspondence Analysis operators (which encapsulate the Correspondence Analysis data transformation technique); these process one feature at a time and transform the data into a different space. This transformed data is more meaningful and helps the learner to improve clas...

Created: 2011-06-25 | Last updated: 2011-06-25

Workflow Tag Clustering (TaCl) (1)

This is a sample process for tag clustering. See http://www-ai.cs.uni-dortmund.de/SOFTWARE/TaCl/index.html

Created: 2011-11-17 | Last updated: 2011-11-17


Workflow Semantic clustering (with AHC) of SPARQL q... (1)

The workflow uses the RapidMiner extension RMonto (http://semantic.cs.put.poznan.pl/RMonto/) to perform clustering of SPARQL query results based on a chosen semantic similarity measure. The measure used in this particular workflow is a kernel that exploits the membership of clustered individuals in OWL classes from a background ontology (the "Common classes" kernel from [1]). Since the semantics of the background ontology is used in this way, we use the name "semantic clustering". ...

Created: 2012-01-29 | Last updated: 2012-01-29


Workflow Semantic clustering (with k-medoids) of SP... (1)

The workflow uses the RapidMiner extension RMonto (http://semantic.cs.put.poznan.pl/RMonto/) to perform clustering of SPARQL query results based on a chosen semantic similarity measure. Since the semantics of the background ontology is used in this way, we use the name "semantic clustering". The SPARQL query is entered as a parameter of the "SPARQL selector" operator. The clustering operator (k-medoids) allows you to specify which of the query variables are to be used as clustering criteria. If more ...

Created: 2012-01-29


Workflow Semantic clustering (with alpha-clustering... (1)

The workflow uses the RapidMiner extension RMonto (http://semantic.cs.put.poznan.pl/RMonto/) to perform clustering of SPARQL query results based on a chosen semantic similarity measure. The measure used in this particular workflow is a kernel that exploits the membership of clustered individuals in OWL classes from a background ontology (the "Epistemic" kernel from [1]). Since the semantics of the background ontology is used in this way, we use the name "semantic clustering". This ...

Created: 2012-01-29 | Last updated: 2012-01-30


Workflow Loading OWL files (RDF version of videolec... (1)

The workflow uses the RapidMiner extension RMonto (http://semantic.cs.put.poznan.pl/RMonto/). The "Build knowledge base" operator is responsible for collecting data from OWL files, SPARQL endpoints, or RDF repositories and providing it to the subsequent operators in a workflow. In this workflow it is parametrized so that it builds a Sesame/OWLIM repository from the files specified in the "Load file" operators. Paths to OWL files are specified as parameter va...

Created: 2012-01-29 | Last updated: 2012-01-29

Workflow Operator testing workflow (1)

This workflow is used for operator testing. It joins dataset metafeatures with the execution times and performance measures of the selected recommendation operator. In the Extract train and Extract test Execute Process operators, the user should open the Metafeature extraction workflow. In the Loop operator, the train/test data are used to evaluate the performance of the selected operator. The result is remembered and joined with the time and metafeature information. This workflow can be used both for Item Recommend...

Created: 2012-01-29

Credits: Matej Mihelčić, Matko Bošnjak

Workflow Metafeature extraction (1)

This is the metafeature extraction workflow used in the experimentation workflow for recommender extension operators. It extracts metadata from the train/test datasets (user/item counts, rating count, sparsity, etc.). This workflow is called from the operator testing workflow using the Execute Process operator.

Created: 2012-01-29 | Last updated: 2012-01-30

Credits: Matko Bošnjak

Workflow Model update workflow (1)

This is a model update workflow called from the data iteration workflow on every given query set. In the Loop operator, the model and the current training set are retrieved from the repository. A model update is performed on the given query set, creating a new model. The model and the updated training set are saved in the repository.

Created: 2012-01-29 | Last updated: 2012-01-29

Credits: Matej Mihelčić, Matko Bošnjak

Workflow Data iteration workflow (1)

This is a data iteration workflow used to iterate through query update sets.

Created: 2012-01-29

Credits: Matej Mihelčić, Matko Bošnjak

Workflow Iterate through datasets (1)

This is a dataset iteration workflow. It is part of the experimentation workflow for the recommender extension. The Loop Files operator iterates through datasets from a specified directory using the Read AML operator. Only datasets matching a proper regular expression are considered. Train and test data filenames must correspond, e.g. (train1.aml, test1.aml). In each iteration, Loop Files calls the specified operator testing workflow with the Execute subprocess operator. Information about training and t...

Created: 2012-01-29

Credits: Matej Mihelčić, Matko Bošnjak
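The filename pairing convention (train1.aml with test1.aml) can be mimicked in a few lines of Python; the directory name and patterns below are hypothetical:

```python
import re
from pathlib import Path

def indexed(paths, pattern):
    # Map the numeric suffix of each matching filename to its path.
    out = {}
    for p in paths:
        m = re.fullmatch(pattern, p.name)
        if m:
            out[m.group(1)] = p
    return out

data_dir = Path("datasets")  # hypothetical dataset directory
trains = indexed(data_dir.glob("*.aml"), r"train(\d+)\.aml")
tests = indexed(data_dir.glob("*.aml"), r"test(\d+)\.aml")

for key in sorted(trains.keys() & tests.keys()):
    print(f"evaluating on {trains[key].name} / {tests[key].name}")
```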

Workflow Model testing workflow (1)

This workflow measures the performance of three models:
- a model learned on the train data and upgraded using online model updates,
- a model learned on the train data plus all query update sets, and
- a model learned on the train data only.

Created: 2012-01-29

Credits: Matej Mihelčić

Workflow Model saving workflow (1)

This workflow trains and saves a model for a selected item recommendation operator.

Created: 2012-01-29 | Last updated: 2012-01-30

Credits: Matej Mihelčić

Workflow Recommender workflow (1)

This is the main online update experimentation workflow. It consists of three Execute Process operators. The first operator executes the model training workflow. The second operator executes the online updates workflow for multiple query update sets. The last operator executes the performance testing and comparison workflow. The final performance results are saved in an Excel file.

Created: 2012-01-29

Credits: Matej Mihelčić

Workflow Data iteration workflow (RP) (1)

This is a data iteration workflow used to iterate through query update sets.

Created: 2012-01-29

Credits: Matej Mihelčić, Matko Bošnjak

Workflow Model update workflow (RP) (1)

This is a model update workflow called from the data iteration workflow on every given query set. In the Loop operator, the model and the current training set are retrieved from the repository. A model update is performed on the given query set, creating a new model. The model and the updated training set are saved in the repository.

Created: 2012-01-29 | Last updated: 2012-01-30

Credits: Matej Mihelčić

Workflow recommender workflow (RP) (1)

This is the main online update experimentation workflow. It consists of three Execute Process operators. The first operator executes the model training workflow. The second operator executes the online updates workflow for multiple query update sets. The last operator executes the performance testing and comparison workflow. The final performance results are saved in an Excel file.

Created: 2012-01-29

Credits: Matej Mihelčić

Workflow Model testing workflow (RP) (1)

This workflow measures the performance of three models:
- a model learned on the train data and upgraded using online model updates,
- a model learned on the train data plus all query update sets, and
- a model learned on the train data only.

Created: 2012-01-29 | Last updated: 2012-01-30

Credits: Matej Mihelčić

Workflow Model saving workflow (RP) (1)

This workflow trains and saves a model for a selected rating prediction operator.

Created: 2012-01-29 | Last updated: 2012-01-30

Credits: Matej Mihelčić


Workflow Transforming user/item description dataset... (1)

This workflow transforms a user/item description attribute set into the format required by the attribute-based k-NN operators of the Recommender extension. See http://zel.irb.hr/wiki/lib/exe/fetch.php?media=del:projects:elico:recsys_manual_v1.1.pdf to learn about the dataset formats required by the Recommender extension.

Created: 2012-01-30


Workflow Semantic meta-mining workflow that perform... (1)

This workflow performs a cross-validation on a data set composed of metadata of baseline RapidMiner workflows, expressed in RDF with the DMOP ontology's terminology for representing processes. It includes discovery of a set of semantic features (patterns) by the Fr-ONT-Qu algorithm ('workflow patterns'). Through a propositionalisation approach, those features may be used in an arbitrary (propositional) RapidMiner classification operator.

Created: 2012-03-05 | Last updated: 2012-03-05


Workflow Meta-mining workflow that performs crossva... (1)

This workflow performs a cross-validation on a data set composed of baseline RapidMiner workflows, each described by the dataset characteristics and the learning algorithm used in the given workflow.

Created: 2012-03-05 | Last updated: 2012-03-05

Workflow Analyzing Data from a Linked Open Data SPA... (1)

This process reads a list of countries, their GDP, and their energy consumption from the Eurostat Linked Open Data SPARQL endpoint (http://wifo5-03.informatik.uni-mannheim.de/eurostat/) and analyzes whether there is a correlation between GDP and energy consumption.

Created: 2013-09-11 | Last updated: 2013-09-11
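A rough Python equivalent using SPARQLWrapper and NumPy. The query below is illustrative only: the predicates are placeholders, the actual Eurostat vocabulary and endpoint path differ, and the endpoint may no longer be online.

```python
import numpy as np
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical predicates; the real Eurostat dataset uses its own vocabulary.
endpoint = SPARQLWrapper("http://wifo5-03.informatik.uni-mannheim.de/eurostat/sparql")
endpoint.setQuery("""
    SELECT ?country ?gdp ?energy WHERE {
        ?country <http://example.org/gdp> ?gdp ;
                 <http://example.org/energyConsumption> ?energy .
    }
""")
endpoint.setReturnFormat(JSON)
rows = endpoint.query().convert()["results"]["bindings"]

gdp = np.array([float(r["gdp"]["value"]) for r in rows])
energy = np.array([float(r["energy"]["value"]) for r in rows])
print("Pearson correlation:", np.corrcoef(gdp, energy)[0, 1])
```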


Linked Data

Non-Information Resource URI: http://www.myexperiment.org/tags/1764

