Benchmarking protein classification algorithms via supervised cross-validation

A. Kertész-Farkas, S. Dhir, P. Sonego, M. Pacurar, S. Netoteia, H. Nijveen, A. Kuzniar, J.A.M. Leunissen, A. Kocsor, S. Pongor

Research output: Contribution to journal › Article › Academic › peer-review

13 Citations (Scopus)

Abstract

Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups that differ vastly in the number of members, average protein size, within-group similarity, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates of how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and training sets according to the known subtypes within a database, has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced-size model datasets suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection, which currently includes protein sequence (both protein domains and entire proteins), protein structure and reading-frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms (BLAST, Smith-Waterman, Needleman-Wunsch) as well as the 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic, estimates of classifier performance than do random cross-validation schemes.
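The supervised cross-validation principle described above — holding out entire known subtypes rather than random individual sequences — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the tuple layout and function name are hypothetical, and the subtype labels stand in for a hierarchical level such as a SCOP family within a superfamily.

```python
from collections import defaultdict

def leave_one_subtype_out(samples):
    """Yield (train, test) splits in which one known subtype is held out in turn.

    `samples` is a list of (item, class_label, subtype_label) tuples.
    Because a whole subtype is withheld, the test set measures generalization
    to distantly related members of a class, not memorization of close homologs.
    """
    groups = defaultdict(list)
    for item, cls, sub in samples:
        groups[(cls, sub)].append((item, cls))
    for held_out in groups:
        test = groups[held_out]
        train = [pair for key, members in groups.items()
                 if key != held_out for pair in members]
        yield train, test

# Toy data: class A has two subtypes, class B has one.
data = [("p1", "A", "a1"), ("p2", "A", "a1"),
        ("p3", "A", "a2"), ("p4", "B", "b1")]
for train, test in leave_one_subtype_out(data):
    # No item ever appears in both sets.
    assert not set(train) & set(test)
```

Contrast this with random k-fold, where close homologs from the same subtype can land on both sides of the split and inflate the performance estimate.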
Original language: English
Pages (from-to): 1215-1223
Journal: Journal of Biochemical and Biophysical Methods
Volume: 70
Issue number: 6
DOIs
Publication status: Published - 2008

Keywords

  • sequence classification
  • homology detection
  • database
  • family
  • information
  • search

