# YAML 1.2
---
abstract: "Deep networks and decision forests (such as random forests and gradient boosted trees) are the leading machine learning methods for structured and tabular data, respectively. Many papers have empirically compared large numbers of classifiers on one or two different domains (e.g., on 100 different tabular data settings). However, a careful conceptual and empirical comparison of these two strategies using the most contemporary best practices has yet to be performed. Conceptually, we illustrate that both can be profitably viewed as “partition and vote” schemes. Specifically, the representation space that they both learn is a partitioning of feature space into a union of convex polytopes. For inference, each decides on the basis of votes from the activated nodes. This formulation allows for a unified basic understanding of the relationship between these methods. Empirically, we compare these two strategies on hundreds of tabular data settings, as well as several vision and auditory settings. Our focus is on datasets with at most 10,000 samples, which represent a large fraction of scientific and biomedical datasets. In general, we found forests to excel at tabular and structured data (vision and audition) with small sample sizes, whereas deep nets performed better on structured data with larger sample sizes. This suggests that further gains in both scenarios may be realized via further combining aspects of forests and networks. We will continue revising this technical report in the coming months with updated results."
authors:
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Xu
    given-names: Haoyin
    orcid: "https://orcid.org/0000-0001-8235-4950"
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Kinfu
    given-names: Kaleab
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: LeVine
    given-names: Will
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Panda
    given-names: Sambit
    orcid: "https://orcid.org/0000-0001-8455-4243"
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Dey
    given-names: Jayanta
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Ainsworth
    given-names: Michael
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Peng
    given-names: "Yu-Chung"
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Kusmanov
    given-names: Madi
  -
    affiliation: "Harvard University, Cambridge, MA"
    family-names: Engert
    given-names: Florian
  -
    affiliation: "Microsoft Research, Redmond, WA"
    family-names: White
    given-names: Christopher
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Vogelstein
    given-names: Joshua
    orcid: "https://orcid.org/0000-0003-2487-6237"
  -
    affiliation: "Johns Hopkins University, Baltimore, MD"
    family-names: Priebe
    given-names: Carey
cff-version: "1.2.0"
identifiers:
  -
    type: url
    value: "https://arxiv.org/pdf/2108.13637.pdf"
date-released: 2021-11-02
keywords:
  - Python
  - classification
  - "decision trees"
  - "random forests"
  - "deep networks"
license: MIT
message: "If you use the benchmark code of DF/DN, please cite it using these metadata."
repository-code: "https://github.com/neurodata/df-dn-paper"
title: "When are Deep Networks really better than Decision Forests at small sample sizes, and how?"
...
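
# The "message" field above asks users of the DF/DN benchmark code to cite the
# repository via these metadata. A minimal usage sketch, assuming the cffconvert
# tool (https://github.com/citation-file-format/cffconvert) is installed, e.g.
# via `pip install cffconvert`: running the command below from the repository
# root renders this file as a BibTeX entry.
#
#     cffconvert -f bibtex
#
# GitHub also reads a valid CITATION.cff directly to populate the repository's
# "Cite this repository" button, so no conversion step is required there.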