
Questions tagged [svm]

Support Vector Machine refers to "a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis."

5 votes · 1 answer · 127 views

A logistic regression will generate a set of hyperplanes like so: (for more details about the picture see: Probability threshold in ROC curve analyses) For computing the AUC of a classification the ...

asked by Sextus Empiricus
1 vote · 0 answers · 37 views

I wondered whether it is possible to extract the 'support vectors' when fitting a support vector machine (SVM) classifier? For example, suppose I ran a 2-class SVM, can the two support vectors each side of ...

asked by EB3112
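
In scikit-learn (used here as one common SVM implementation; the question does not name a library), the fitted estimator exposes the support vectors directly. A minimal sketch on made-up data:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-class problem: two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)),
               rng.normal(2, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

sv = clf.support_vectors_   # coordinates of the support vectors
idx = clf.support_          # row indices of those points in X
per_class = clf.n_support_  # support-vector count for each class
```

Each class contributes at least one support vector, and `X[idx]` reproduces `support_vectors_` row for row.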
0 votes · 0 answers · 41 views

I am listening to a lecture on soft margin SVM https://youtu.be/XUj5JbQihlU?si=b66SblRnw9mmczVU&t=2969 The lecturer says that the blue dot represents a violation of the margin. I don't really ...

asked by Your neighbor Todorovich
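
For reference (the standard soft-margin formulation, not anything specific to that lecture), a point $x_i$ with label $y_i$ violates the margin exactly when its slack variable $\xi_i$ is positive:

```latex
y_i(\langle w, x_i\rangle + b) \ge 1 - \xi_i, \qquad \xi_i \ge 0,
```

with $0 < \xi_i \le 1$ meaning $x_i$ lies inside the margin but is still correctly classified, and $\xi_i > 1$ meaning it is misclassified.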
1 vote · 0 answers · 42 views

It is widely known that if you were to calculate the maximizer of the dual SVM program (denoted $\alpha^*$), then the primal minimizer of the hard-margin SVM program, \begin{aligned}&{\underset {...

asked by Your neighbor Todorovich
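
The recovery the truncated question refers to is the textbook primal–dual identity: given the dual maximizer $\alpha^*$, the hard-margin primal minimizer is

```latex
w^* = \sum_{i=1}^{n} \alpha_i^* y_i x_i,
\qquad
b^* = y_j - \langle w^*, x_j\rangle \quad \text{for any } j \text{ with } \alpha_j^* > 0.
```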
2 votes · 2 answers · 180 views

I trained an SVM multiple-regression model and want to know how much each feature contributes to the prediction variance (quantified by the RMSE). I got the Shapley values for each feature on data from ...

asked by n6r5
0 votes · 0 answers · 36 views

I have trained a multi-regression model using a non-linear SVM and got quite good metrics, with no big differences between test (20% of the data) and train (80% of the data) metrics. The following are the test/train ...

asked by n6r5
2 votes · 0 answers · 55 views

I am trying to figure out how to infer C in a support vector machine. C is the upper bound on the magnitude of the Lagrange multipliers. These multipliers are not independent. They are probably mutually ...

asked by Coo
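
The coupling the question alludes to is the standard soft-margin dual constraint set, in which $C$ boxes each multiplier individually while one equality ties them all together:

```latex
0 \le \alpha_i \le C \quad (i = 1, \dots, n), \qquad \sum_{i=1}^{n} \alpha_i y_i = 0.
```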
2 votes · 1 answer · 108 views

Memorability scores for a set of words can be downloaded from here. I am interested in seeing how well semantic embeddings can predict the relative memorability of words, as measured by Spearman's rho....

asked by AvadaMouse
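
A minimal sketch of scoring predictions against observed memorability with Spearman's rho using SciPy (the question names only the metric, not the code; the arrays here are made-up illustrations):

```python
from scipy.stats import spearmanr

# Hypothetical memorability scores and model predictions for five words.
observed = [0.91, 0.42, 0.77, 0.30, 0.55]
predicted = [0.80, 0.35, 0.90, 0.25, 0.50]

# Rank correlation in [-1, 1]; only the ordering of the values matters.
rho, p_value = spearmanr(observed, predicted)
```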
0 votes · 0 answers · 51 views

What would a visualization of the data underlying SVR, such as in svm(X,y,type="nu-regression",kernel="linear",nu=0.5), look like in 3 dimensions (...

asked by Sam
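
In Python, scikit-learn's `NuSVR` plays the role of R's `svm(..., type="nu-regression")`; with two features, the fitted linear model is a plane in the 3-D (x1, x2, y) space the question asks about. A sketch under that assumption, on synthetic data:

```python
import numpy as np
from sklearn.svm import NuSVR

# Synthetic data: y is a noisy plane over two features.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 200)

reg = NuSVR(kernel="linear", nu=0.5, C=10.0).fit(X, y)

w = reg.coef_[0]        # plane slopes, one per feature
b = reg.intercept_[0]   # plane offset
```

Plotting `X[:, 0]`, `X[:, 1]`, and `y` as a 3-D scatter with the plane `w[0]*x1 + w[1]*x2 + b` gives the visualization asked about.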
0 votes · 0 answers · 108 views

I'm trying to build an SVM model using the ksvm function from the kernlab package in R. My dataset is about breast cancer, and I'm trying to predict the diagnosis variable, which is a factor. All the ...
1 vote · 1 answer · 81 views

I have two classes of data to train a linear support vector machine. To be specific, I used Principal Component Analysis to project the data to 2 dimensions and trained the support vector machine. I ...
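
A minimal sketch of the described workflow (project to two principal components, then train a linear SVM), using scikit-learn and synthetic data since the original dataset is not given:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic 2-class data standing in for the unspecified original dataset.
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# PCA down to 2 dimensions, then a linear SVM on the projected points.
model = make_pipeline(PCA(n_components=2), SVC(kernel="linear")).fit(X, y)
accuracy = model.score(X, y)
```

The pipeline applies the same PCA projection at predict time, so new points are classified in the same 2-D space the SVM was trained in.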
1 vote · 0 answers · 80 views

I'm encountering a multiclass classification problem where I'm trying to predict 4 categories using SVM. I'm trying to fine-tune its hyperparameters using Bayesian Optimization to speed up the ...

asked by Duy Ngo
1 vote · 1 answer · 216 views

I recently conducted a Principal Component Analysis (PCA) on a dataset with a four-category target variable. While the PCA score plot revealed excellent separation for one group, the remaining three ...

asked by Mamad Fasih
3 votes · 1 answer · 268 views

Why in most explanations of the kernel trick is Mercer's Theorem used as justification? Can we not justify it as well with Moore-Aronszajn, which does not place the assumption of compactness on $X$ ...

asked by T. Tim
2 votes · 1 answer · 133 views

In standard SVM formulations, we typically look for a vector $w \in \mathbb{R}^D$ that defines a hyperplane in $\mathbb{R}^D$. The decision function is then of the form: $$ f(x) = \operatorname{sign}(\...

asked by T. Tim
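
The truncated equation is the standard decision function $f(x) = \operatorname{sign}(\langle w, x\rangle + b)$; a sketch in scikit-learn (one concrete implementation, not necessarily the questioner's) comparing the hand-computed sign against the library's prediction:

```python
import numpy as np
from sklearn.svm import SVC

# Four linearly separable points with labels in {-1, +1}.
X = np.array([[-2.0, -1.0], [-1.5, -0.5], [1.0, 1.5], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

w = clf.coef_[0]        # the normal vector w defining the hyperplane
b = clf.intercept_[0]   # the offset b

x_new = np.array([1.5, 1.0])
f_manual = np.sign(w @ x_new + b)              # sign(<w, x> + b) by hand
f_library = clf.predict(x_new.reshape(1, -1))[0]  # library prediction
```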
