Questions tagged [vc-dimension]
The VC dimension (short for Vapnik–Chervonenkis dimension) is a measure of the capacity (complexity, expressive power, richness, or flexibility) of a statistical classification algorithm, defined as the cardinality of the largest set of points that the algorithm can shatter.
74 questions
1
vote
0
answers
26
views
VC dimension of translating unions of intervals
I have the following hypothesis class $\mathcal{H}_m$ parameterized by $m$ (the number of intervals):
$$\mathcal{H}_m = \left\{\bigcup_{i=1}^{m} [b + i(i-1), b + i^2] : b \in \mathbb{R}\right\}$$
...
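One structural observation that may help whoever picks this up (a side note, not an answer): the $i$-th interval in the union and the gap that follows it both have length $i$:

```latex
\begin{align*}
(b + i^2) - \bigl(b + i(i-1)\bigr) &= i && \text{(length of the $i$-th interval)},\\
\bigl(b + (i+1)i\bigr) - (b + i^2) &= i && \text{(gap between intervals $i$ and $i+1$)}.
\end{align*}
```

So every $h \in \mathcal{H}_m$ is one fixed pattern of $m$ intervals, translated by the single real parameter $b$.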
5
votes
1
answer
165
views
Can conditioning eliminate VC dimension dependence in empirical process bounds?
I'm analyzing the function class:
$$
\mathcal{F} = \left\{ (x, z, y) \mapsto \mathbb{1}\{ y \leq z\alpha + x^\top\beta \} : \alpha \in \mathbb{R}, \beta \in \mathbb{R}^d \right\}.
$$
Let $\mathbb{G}_n(...
1
vote
1
answer
126
views
Two questions about VC theory (on the generalization error bound)
In Andrew Ng's machine learning notes (https://cs229.stanford.edu/main_notes.pdf), he introduces the following bound for the difference between the generalization error and the training error (see the ...
0
votes
1
answer
477
views
2D rectangle VC dimension: shattering 5 points
I have seen in many places that the VC dimension of H, a hypothesis class consisting of axis-aligned rectangles, is 4.
Yet when constructing this 2D constellation of points I can't ...
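Not a full answer, but a small self-contained check of the standard argument may help (the point configuration and helper names below are my own, not from the question): since any axis-aligned rectangle containing a finite set S also contains S's tight bounding box, it suffices to test tight bounding boxes.

```python
from itertools import combinations

def bbox_realizes(points, subset):
    """An axis-aligned rectangle labeling exactly `subset` positive exists
    iff the tight bounding box of `subset` excludes every other point."""
    if not subset:
        return True  # a tiny rectangle away from all points labels everything negative
    xs = [p[0] for p in subset]
    ys = [p[1] for p in subset]
    lo_x, hi_x, lo_y, hi_y = min(xs), max(xs), min(ys), max(ys)
    return all(not (lo_x <= x <= hi_x and lo_y <= y <= hi_y)
               for (x, y) in points if (x, y) not in subset)

def shattered(points):
    """True iff axis-aligned rectangles realize all 2^n labelings of `points`."""
    return all(bbox_realizes(points, set(s))
               for k in range(len(points) + 1)
               for s in combinations(points, k))

# The classic "diamond" configuration: each point is extreme in one coordinate.
diamond = [(0, 1), (0, -1), (1, 0), (-1, 0)]
print(shattered(diamond))   # True: rectangles shatter these 4 points, so VC >= 4

# Adding a 5th point fails: (0, 0) lies in the bounding box of the other four,
# so the labeling "outer four positive, center negative" cannot be realized.
five = diamond + [(0, 0)]
print(shattered(five))      # False
```

The same pigeonhole reasoning shows no 5-point set works: a bounding box has only four sides, so some fifth point always sits inside the box spanned by the others.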
2
votes
0
answers
77
views
Confirming my interpretation on a comment by Vladimir Vapnik
In the preface to the first edition of his book The Nature of Statistical Learning Theory, Vapnik makes the following comment:
Between 1960 and 1980 a revolution in statistics occurred: Fisher's ...
0
votes
0
answers
58
views
Vapnik–Chervonenkis dimension of a hypergraph, given a bound on the number of hyperedges
I'm studying the Vapnik–Chervonenkis dimension now, and there is one question that I am not able to solve.
Let $\textrm{(X , R)}$ be a range space so that any hypergraph $\textrm{(V, F)}$ in it ...
1
vote
0
answers
61
views
Is infinite VC dimension equivalent to being a universal approximator?
If $m$ is the VC-dim then it means there is no configuration of $m+1$ datapoints we can shatter. But there could be configurations of $m$ datapoints we cannot shatter. Hence my confusion and my ...
3
votes
1
answer
1k
views
What is the VC dimension for logistic regression?
I know that the VC dimension for a perceptron is 3, but what is it for a logistic regression model?
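A short observation that may clarify the comparison (assuming, as is standard, that the logistic model classifies by thresholding the predicted probability at $1/2$): the model predicts class 1 exactly when the linear score is nonnegative,

```latex
\sigma(w^\top x + b) = \frac{1}{1 + e^{-(w^\top x + b)}} \;\ge\; \tfrac{1}{2}
\quad\Longleftrightarrow\quad w^\top x + b \;\ge\; 0,
```

so the induced classifiers are the same halfspaces as the perceptron's, with VC dimension $d+1$ for inputs in $\mathbb{R}^d$ (giving 3 when $d = 2$, matching the perceptron figure quoted in the question).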
3
votes
0
answers
116
views
Hypothesis class with n elements that shatters a set C of n/2 points
I started learning Advanced Machine Learning and came across a problem I'm stuck on. I would be grateful if you could help me with some ideas or solutions:
What is the maximum value of the natural even ...
1
vote
0
answers
116
views
The VC dimension of a mixture of normal laws
In his Statistical Learning Theory (1998), Vapnik presents the following mixture of two normal laws (p.236), in which the parameters $a$ and $\sigma$ are unknown:
$$p(z;a;\sigma)=\frac{1}{2}\mathcal{N}...
2
votes
0
answers
146
views
Is the VC dimension of a MLP regressor a valid upper bound on how many points it can exactly fit?
I want to calculate an upper bound on how many training points an MLP regressor can fit with ~0 error. I don't care about the test error, I want to overfit as much as possible the (few) training ...
0
votes
1
answer
106
views
Characterizations of uniformly learnable function classes in the distribution-specific setting
Let $X$ be some input domain (a measurable space). Then let $D$ be some class of probability distributions on $X\times\{0,1\}$. We will call such distributions learning tasks. We say that $D$ is ...
1
vote
0
answers
114
views
Sample complexity and VC dimension
I'm studying VC dimension and sample complexity, and I'd like to check my understanding via the following example.
Let $X = \mathbb{R}$ and $\mathcal{H} = \{ h_{\theta}(x)=\text{...
4
votes
1
answer
286
views
Distributional Assumptions in Machine Learning
In more classical statistical methods like linear regression, we can quantify how well our model generalizes under certain strong assumptions. For example, we know that $\hat Y = X \hat \beta \sim \...
1
vote
0
answers
149
views
VC dimensions and PAC-learning of a specific class of classifiers
I'm learning VC dimensions and PAC-learnability right now and I need some help. I'm answering a practice exercise question as I prep for an exam. So suppose we have some domain $\mathcal{X} = \...