This HTML5 document contains 29 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content can be extracted by any HTML5 Microdata processor.
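To illustrate what such a processor does, here is a minimal sketch of Microdata property extraction using only the Python standard library. The HTML fragment below is hypothetical, written in the style such pages embed their statements; it is not this page's actual markup, and a full Microdata processor would additionally track itemscope nesting, itemtype, and itemref.

```python
from html.parser import HTMLParser

# Minimal Microdata property extractor (illustrative only; a complete
# processor must also handle itemscope nesting, itemtype, src, etc.).
class MicrodataProps(HTMLParser):
    def __init__(self):
        super().__init__()
        self.props = []          # collected (itemprop, value) pairs
        self._pending = None     # itemprop whose value is element text

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "itemprop" in a:
            if "content" in a:       # value carried in @content
                self.props.append((a["itemprop"], a["content"]))
            elif "href" in a:        # value carried in @href (links)
                self.props.append((a["itemprop"], a["href"]))
            else:                    # value is the element's text content
                self._pending = a["itemprop"]

    def handle_data(self, data):
        if self._pending is not None and data.strip():
            self.props.append((self._pending, data.strip()))
            self._pending = None

# Hypothetical fragment in the style this page embeds its statements:
doc = """
<div itemscope itemtype="http://purl.org/ontology/bibo/Article">
  <span itemprop="http://purl.org/dc/terms/date" content="2019-08-23"></span>
  <span itemprop="http://purl.org/dc/terms/title">Automated machine learning for studying the trade-off between predictive accuracy and interpretability</span>
</div>
"""

p = MicrodataProps()
p.feed(doc)
for prop, value in p.props:
    print(prop, "=", value)
```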

Namespace Prefixes

Prefix     IRI
n11        doi:10.1007/
dcterms    http://purl.org/dc/terms/
n2         https://kar.kent.ac.uk/id/eprint/
wdrs       http://www.w3.org/2007/05/powder-s#
dc         http://purl.org/dc/elements/1.1/
n13        http://purl.org/ontology/bibo/status/
n14        https://kar.kent.ac.uk/id/eprint/77014#
rdfs       http://www.w3.org/2000/01/rdf-schema#
n16        https://kar.kent.ac.uk/id/subject/
n9         https://demo.openlinksw.com/about/id/entity/https/raw.githubusercontent.com/annajordanous/CO644Files/main/
n7         http://eprints.org/ontology/
n21        https://kar.kent.ac.uk/77014/
n15        https://kar.kent.ac.uk/id/event/
bibo       http://purl.org/ontology/bibo/
n17        https://kar.kent.ac.uk/id/publication/
n18        https://kar.kent.ac.uk/id/org/
rdf        http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl        http://www.w3.org/2002/07/owl#
n8         https://kar.kent.ac.uk/id/document/
n12        https://kar.kent.ac.uk/id/
xsdh       http://www.w3.org/2001/XMLSchema#
n4         https://demo.openlinksw.com/about/id/entity/https/www.cs.kent.ac.uk/people/staff/akj22/materials/CO644/
n19        https://kar.kent.ac.uk/id/person/
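Prefixed names in the statements below (e.g. dcterms:title) abbreviate full IRIs via this table. A minimal stdlib sketch of that expansion, using a representative subset of the prefixes above:

```python
# Prefix-to-IRI mappings taken from the table above (subset).
PREFIXES = {
    "dcterms": "http://purl.org/dc/terms/",
    "bibo":    "http://purl.org/ontology/bibo/",
    "n2":      "https://kar.kent.ac.uk/id/eprint/",
    "n8":      "https://kar.kent.ac.uk/id/document/",
    "rdf":     "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
}

def expand(curie: str) -> str:
    """Expand a prefixed name like 'dcterms:title' to a full IRI."""
    prefix, _, local = curie.partition(":")
    if prefix in PREFIXES:
        return PREFIXES[prefix] + local
    return curie  # unknown prefix, or already a full IRI

print(expand("dcterms:title"))   # http://purl.org/dc/terms/title
print(expand("n2:77014"))        # https://kar.kent.ac.uk/id/eprint/77014
```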

Statements

Subject Item
n2:77014
rdf:type
    bibo:Article, n7:ConferenceItemEPrint, bibo:AcademicArticle, n7:EPrint
rdfs:seeAlso
    n21:
owl:sameAs
    n11:978-3-030-29726-8_4
n7:hasAccepted
    n8:3189296
n7:hasDocument
    n8:3189357, n8:3189358, n8:3189359, n8:3189356, n8:3189296, n8:3189301
dc:hasVersion
    n8:3189296
dcterms:title
    Automated machine learning for studying the trade-off between predictive accuracy and interpretability
wdrs:describedby
    n4:export_kar_RDFN3.n3, n9:export_kar_RDFN3.n3
dcterms:date
    2019-08-23
dcterms:creator
    n19:ext-a.a.freitas@kent.ac.uk
bibo:status
    n13:peerReviewed, n13:published
dcterms:publisher
    n18:ext-1c5ddec173ca8cdfba8b274309638579
bibo:abstract
    Automated Machine Learning (Auto-ML) methods search for the best classification algorithm and its best hyper-parameter settings for each input dataset. Auto-ML methods normally maximize only predictive accuracy, ignoring the classification model’s interpretability – an important criterion in many applications. Hence, we propose a novel approach, based on Auto-ML, to investigate the trade-off between the predictive accuracy and the interpretability of classification-model representations. The experiments used the Auto-WEKA tool to investigate this trade-off. We distinguish between white box (interpretable) model representations and two other types of model representations: black box (non-interpretable) and grey box (partly interpretable). We consider as white box the models based on the following 6 interpretable knowledge representations: decision trees, If-Then classification rules, decision tables, Bayesian network classifiers, nearest neighbours and logistic regression. The experiments used 16 datasets and two runtime limits per Auto-WEKA run: 5 h and 20 h. Overall, the best white box model was more accurate than the best non-white box model in 4 of the 16 datasets in the 5-hour runs, and in 7 of the 16 datasets in the 20-hour runs. However, the predictive accuracy differences between the best white box and best non-white box models were often very small. If we accept a predictive accuracy loss of 1% in order to benefit from the interpretability of a white box model representation, we would prefer the best white box model in 8 of the 16 datasets in the 5-hour runs, and in 10 of the 16 datasets in the 20-hour runs.
dcterms:isPartOf
    n12:repository, n17:ext-03029743
dcterms:subject
    n16:Q335
bibo:authorList
    n14:authors
bibo:presentedAt
    n15:ext-14ff6d331beb9a371bffdbdb47fba33f
bibo:volume
    11713
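Once extracted, such statements are ordinarily handled as subject–predicate–object triples. A minimal stdlib sketch using a subset of the statements above (prefixed names kept unexpanded for brevity):

```python
# A few of the statements above, as (subject, predicate, object) triples.
TRIPLES = [
    ("n2:77014", "rdf:type", "bibo:Article"),
    ("n2:77014", "rdf:type", "bibo:AcademicArticle"),
    ("n2:77014", "dcterms:date", "2019-08-23"),
    ("n2:77014", "n7:hasAccepted", "n8:3189296"),
    ("n2:77014", "bibo:volume", "11713"),
]

def objects(subject, predicate):
    """Return all objects asserted for a subject/predicate pair."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

print(objects("n2:77014", "rdf:type"))
# ['bibo:Article', 'bibo:AcademicArticle']
```

In practice a dedicated RDF library would replace this list-scan, but the data model is exactly this: a set of triples queried by pattern.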