Description

Notes:

  • 3 classes of waves
  • 40 attributes, all of which include noise
  • The latter 19 attributes are all noise attributes with mean 0 and variance 1
  • See the book for details (pages 49-55, 169)
  • waveform-+noise.data.Z contains 5000 instances (a loading sketch follows these notes)
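
After decompressing the .Z archive, the data file can be read as plain comma-separated values. The snippet below is a minimal loading sketch, assuming each of the 5000 rows holds the 40 numeric attributes followed by a final integer class label (0, 1, or 2); the file path is a placeholder.

    import numpy as np

    # Minimal sketch: load waveform-+noise.data (decompressed from the .Z archive),
    # assuming comma-separated rows of 40 float attributes followed by a class label.
    data = np.loadtxt("waveform-+noise.data", delimiter=",")

    X = data[:, :40]             # 40 noisy attributes
    y = data[:, 40].astype(int)  # class label: 0, 1, or 2

    print(X.shape)               # expected (5000, 40)
    print(np.bincount(y))        # roughly balanced counts over the 3 classes

    # The latter 19 attributes (columns 21-39) are pure noise with mean 0 and variance 1;
    # feature-selection experiments often compare models with and without them.
    X_informative = X[:, :21]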

Related Papers

  • Thomas T. Osugi. Exploration-Based Active Machine Learning. M.S. thesis, Graduate College, University of Nebraska. [link]
  • Iñaki Inza and Pedro Larrañaga and Ramón Etxeberria and Basilio Sierra. Feature Subset Selection by Bayesian networks based optimization. Dept. of Computer Science and Artificial Intelligence, University of the Basque Country. [link]
  • Pierre Geurts. Extremely Randomized Trees. Technical report, June 2003, University of Liège, Department of Electrical Engineering and Computer Science, Institut Montefiore. [link]
  • Mohammed Waleed Kadous. Expanding the Scope of Concept Learning Using Metafeatures. School of Computer Science and Engineering, University of New South Wales. [link]
  • Khaled A. Alsabti and Sanjay Ranka and Vineet Singh. CLOUDS: A Decision Tree Classifier for Large Datasets. KDD. 1998. [link]
  • Juan J. Rodríguez and Carlos J. Alonso. Applying Boosting to Similarity Literals for Time Series Classification. Department of Informatics, University of Valladolid, Spain. 2000. [link]
  • Nir Friedman and Moisés Goldszmidt. Discretizing Continuous Attributes While Learning Bayesian Networks. ICML. 1996. [link]
  • Juan J. Rodríguez and Carlos J. Alonso and Henrik Boström. Learning First Order Logic Time Series Classifiers: Rules and Boosting. Grupo de Sistemas Inteligentes, Departamento de Informática, Universidad de Valladolid, Spain. [link]
  • Kai Ming Ting and Boon Toh Low. Model Combination in the Multiple-Data-Batches Scenario. ECML. 1997. [link]
  • Thomas G. Dietterich. An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization. Machine Learning, 40. 2000. [link]
  • Amund Tveit. Empirical Comparison of Accuracy and Performance for the MIPSVM classifier with Existing Classifiers. Division of Intelligent Systems Department of Computer and Information Science, Norwegian University of Science and Technology. [link]
  • Zhi-Hua Zhou and Xu-Ying Liu. Training Cost-Sensitive Neural Networks with Methods Addressing the Class Imbalance Problem. [link]
  • Zoran Obradovic and Slobodan Vucetic. Challenges in Scientific Data Mining: Heterogeneous, Biased, and Large Samples. Center for Information Science and Technology Temple University. [link]
  • Kai Ming Ting and Ian H. Witten. Issues in Stacked Generalization. J. Artif. Intell. Res. (JAIR), 10. 1999. [link]
  • Vikas Sindhwani and P. Bhattacharya and Subrata Rakshit. Information Theoretic Feature Crediting in Multiclass Support Vector Machines. [link]
  • Dietrich Wettschereck and David W. Aha. Weighting Features. ICCBR. 1995. [link]
  • Juan J. Rodríguez Diez and Carlos Alonso González and Henrik Boström. Learning First Order Logic Time Series Classifiers: Rules and Boosting. PKDD. 2000. [link]
  • S. Sathiya Keerthi and Kaibo Duan and Shirish Krishnaj Shevade and Aun Neow Poo. A Fast Dual Algorithm for Kernel Logistic Regression. ICML. 2002. [link]
  • Kai Ming Ting and Boon Toh Low. Theory Combination: an alternative to Data Combination. University of Waikato. [link]
  • Kai Ming Ting and Ian H. Witten. Stacked Generalization: when does it work. Department of Computer Science University of Waikato. [link]
  • Bing Liu and Minqing Hu and Wynne Hsu. Multi-level Organization and Summarization of the Discovered Rules. KDD. 2000. [link]
  • Eibe Frank and Mark Hall and Bernhard Pfahringer. Locally Weighted Naive Bayes. UAI. 2003. [link]
  • James Bailey and Thomas Manoukian and Kotagiri Ramamohanarao. Fast Algorithms for Mining Emerging Patterns. PKDD. 2002. [link]
  • Giorgio Valentini and Thomas G. Dietterich. Low Bias Bagged Support Vector Machines. ICML. 2003. [link]
  • Tapio Elomaa and Juho Rousu. Finding Optimal Multi-Splits for Numerical Attributes in Decision Tree Learning. ESPRIT Working Group in Neural and Computational Learning. 1996. [link]
  • Giorgio Valentini. An Experimental Bias-Variance Analysis of SVM Ensembles Based on Resampling Techniques. [link]
  • Juan J. Rodríguez and Carlos J. Alonso and Henrik Boström. Boosting Interval Based Literals. 2000. [link]
  • Ron Kohavi. Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. KDD. 1996. [link]
  • Zhi-Hua Zhou and W-D Wei and Gang Li and Honghua Dai. On the Size of Training Set and the Benefit from Ensemble. PAKDD. 2004. [link]
  • Giorgio Valentini. Random Aggregated and Bagged Ensembles of SVMs: An Empirical Bias-Variance Analysis. Multiple Classifier Systems. 2004. [link]
  • Joao Gama and Ricardo Rocha and Pedro Medas. Accurate decision trees for mining high-speed data streams. KDD. 2003. [link]
  • Juan J. Rodríguez Diez and Carlos J. Alonso. Learning Classification RBF Networks by Boosting. Lenguajes y Sistemas Informáticos. [link]
  • Carlos J. Alonso González and Juan J. Rodríguez Diez. Time Series Classification by Boosting Interval Based Literals. Grupo de Sistemas Inteligentes, Departamento de Informática, Universidad de Valladolid. [link]
  • Matthias Scherf and W. Brauer. Feature Selection by Means of a Feature Weighting Approach. GSF - National Research Center for Environment and Health. [link]
  • Giorgio Valentini. Ensemble Methods Based on Bias-Variance Analysis. Theses Series DISI-TH-2003. Dipartimento di Informatica e Scienze dell'Informazione. 2003. [link]

Related datasets