Feature Selection
title: Feature Selection
description: Feature selection is a process where you automatically select those features in your data that contribute most to the prediction variable
keywords: [feature selection]
author: Juma Shafara
date: "2024-03"
date-modified: "2024-07-25"

What is Feature Selection
Feature selection is a process where you automatically select those features in your data that contribute most to the prediction variable or output in which you are interested.
Having irrelevant features in your data can decrease the accuracy of many models, especially linear algorithms like linear and logistic regression.
Three benefits of performing feature selection before modeling your data are:
- Reduces Overfitting: Less redundant data means less opportunity to make decisions based on noise.
- Improves Accuracy: Less misleading data means modeling accuracy improves.
- Reduces Training Time: Less data means that algorithms train faster.
You can learn more about feature selection with scikit-learn in the article Feature selection.
Univariate Feature Selection Techniques
Statistical tests can be used to select those features that have the strongest relationship with the output variable.
The scikit-learn library provides the SelectKBest class that can be used with a suite of different statistical tests to select a specific number of features.
Many different statistical tests can be used with this selection method. For example, the ANOVA F-value method is appropriate for numerical inputs and a categorical output. It is available via the f_classif() function. We will use this method to select the best features in the examples below.
Let's first separate our data into features (X) and the outcome (y), as below.
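The exact loading step isn't shown here, so the snippet below is only a minimal sketch: it assumes a pandas DataFrame read from a CSV file, with a categorical column named outcome as the target and numeric columns such as age and income among the features (the file name and column names are hypothetical).

```python
import pandas as pd

# hypothetical file and column names -- replace with your own dataset
data = pd.read_csv('demo_dataset.csv')

# features: everything except the outcome column
X = data.drop('outcome', axis=1)

# outcome: the (categorical) column we want to predict
y = data['outcome']

# keep only the numeric columns for the first example
X_numeric = X.select_dtypes(include='number')
```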
Numeric or Continuous Features with Categorical Outcome
Beginning with the numeric columns, let's find which of them contribute most to the outcome variable.
```python
from sklearn.feature_selection import SelectKBest, f_classif

# create a test object from SelectKBest using the ANOVA F-value score function
test = SelectKBest(score_func=f_classif, k=2)

# fit the test object to the numeric features and the categorical outcome
fit = test.fit(X_numeric, y)

# get the F-scores for each feature
scores = fit.scores_

# reduce the data to the selected features and get their indices
features = fit.transform(X_numeric)
selected_indices = fit.get_support(indices=True)

# print the scores and the selected feature indices
print('Feature Scores: ', scores)
print('Selected Features Indices: ', selected_indices)
```
This shows us that the 2 best features for differentiating between the groups in our outcome are at indices [0, 1], i.e. age and income.
Numeric Features with Numeric Outcome
Let's select the input features X and the numeric output (outcome) y.
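The exact columns depend on the dataset; as a rough sketch, assuming we predict a numeric column such as income (a hypothetical choice) from the remaining numeric columns:

```python
# sketch only -- using 'income' as the numeric outcome is an assumption
numeric_data = data.select_dtypes(include='number')

X = numeric_data.drop('income', axis=1)   # numeric input features
y = numeric_data['income']                # numeric outcome
```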
We will still use the SelectKBest class but with our score_func as f_regression instead.
```python
from sklearn.feature_selection import SelectKBest, f_regression

# create a test object from SelectKBest using the regression F-value score function
test = SelectKBest(score_func=f_regression, k=1)

# fit the test to the data
fit = test.fit(X, y)

# get the F-scores for each feature
test_scores = fit.scores_

# reduce the data to the selected feature and get its index
features = fit.transform(X)
selected_indices = fit.get_support(indices=True)

print('Feature Scores: ', test_scores)
print('Selected Features Indices: ', selected_indices)
```
Here, we can see that age is selected because it returns the higher F-statistic of the two features.
Both Input and Outcome Categorical
Let's begin by selecting only the categorical features to make our X set, and set y as a categorical outcome.
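As a sketch, and assuming a few hypothetical categorical column names, we can encode the categories as non-negative integers so that chi2 can score them:

```python
from sklearn.preprocessing import OrdinalEncoder

# hypothetical categorical column names -- replace with your own
categorical_columns = ['gender', 'marital_status', 'address']

# chi2 requires non-negative numeric values, so encode the categories as integers
encoder = OrdinalEncoder()
X_categorical = encoder.fit_transform(data[categorical_columns])

# categorical outcome
y = data['outcome']
```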
Now we shall again use SelectKBest but with the score_func as chi2.
Note: When using chi2 as the score function for feature selection, the features are scored with the Chi-Square statistic, which requires non-negative input values (for example counts or integer-encoded categories).
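A minimal sketch of the chi-square selection, following the same pattern as before (keeping 2 features is an arbitrary choice):

```python
from sklearn.feature_selection import SelectKBest, chi2

# create a test object from SelectKBest using the chi-square score function
test = SelectKBest(score_func=chi2, k=2)

# fit the test object to the encoded categorical features and outcome
fit = test.fit(X_categorical, y)

# get the chi-square scores and the selected feature indices
scores = fit.scores_
selected_indices = fit.get_support(indices=True)

print('Feature Scores: ', scores)
print('Selected Features Indices: ', selected_indices)
```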
Again, we can see that the features with the higher chi-square scores have been selected.
- f_classif is most applicable where the input features are continuous and the outcome is categorical.
- f_regression is most applicable where the input features are continuous and the outcome is continuous.
- chi2 is best when both the input and the outcome are categorical.
Recursive Feature Elimination
Recursive Feature Elimination (RFE) works by recursively removing attributes and building a model on those attributes that remain.
It uses the model accuracy to identify which attributes (and combination of attributes) contribute the most to predicting the target attribute.
You can learn more about the RFE class in the scikit-learn documentation.
Logistic Regression
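A minimal sketch of RFE wrapped around a LogisticRegression estimator, assuming X_numeric holds the numeric input features and y the categorical outcome prepared earlier (keeping 1 feature is an arbitrary choice):

```python
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# wrap RFE around a logistic regression model and keep 1 feature (arbitrary choice)
model = LogisticRegression(max_iter=1000)
rfe = RFE(estimator=model, n_features_to_select=1)

# fit RFE on the numeric features and categorical outcome
fit = rfe.fit(X_numeric, y)

print('Number of Features: ', fit.n_features_)
print('Selected Features: ', fit.support_)
print('Feature Ranking: ', fit.ranking_)
```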
From the operation above, we can see how the features are ranked for the LogisticRegression model: a rank of 1 means most important, and larger numbers mean less important.
Feature Importance
Bagged decision trees like Random Forest and Extra Trees can be used to estimate the importance of features.
In the example below, we construct an ExtraTreesClassifier for the Pima Indians onset of diabetes dataset. You can learn more about the ExtraTreesClassifier class in the scikit-learn API.
Extra Trees Classifier
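A minimal sketch, assuming the Pima Indians diabetes data has already been loaded into a feature matrix X and target y (the loading step is not shown here):

```python
from sklearn.ensemble import ExtraTreesClassifier

# fit an extra trees classifier to the features and target
model = ExtraTreesClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# larger values indicate more important features
print(model.feature_importances_)
```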
Random Forest Classifier
You can learn more about the RandomForestClassifier class in the scikit-learn documentation.
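The same pattern works with a random forest; the sketch below mirrors the extra trees example:

```python
from sklearn.ensemble import RandomForestClassifier

# fit a random forest classifier to the same features and target
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# larger values indicate more important features
print(model.feature_importances_)
```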