  1. Linear Regression:

    • Pros:
      • Simple and easy to understand.
      • Fast training and prediction.
      • Provides coefficients that indicate the direction and magnitude of each feature's effect.
    • Cons:
      • Assumes a linear relationship between features and the target variable.
      • Sensitive to outliers.
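
A minimal scikit-learn sketch of ordinary least squares; the synthetic dataset, variable names, and settings below are illustrative assumptions, not part of the list above:

```python
# Ordinary least squares on a small synthetic dataset.
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # fitted coefficients show each feature's linear effect
print(model.predict(X[:5]))           # predictions for the first five rows
```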
  2. Ridge Regression (L2 Regularization):

    • Pros:
      • Handles multicollinearity by adding an L2 penalty that shrinks coefficients.
      • Reduces the impact of irrelevant features.
    • Cons:
      • Requires tuning of the regularization parameter.
      • Does not perform feature selection.
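
One common way to handle the tuning burden is cross-validated selection of the regularization strength, sketched here with RidgeCV on assumed synthetic data:

```python
# Ridge regression with the regularization strength chosen by cross-validation.
from sklearn.linear_model import RidgeCV
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=20, noise=5.0, random_state=0)
model = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)  # tries several alphas, keeps the best
print(model.alpha_)  # selected regularization parameter
print(model.coef_)   # shrunk, but generally non-zero, coefficients
```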
  3. Lasso Regression (L1 Regularization):

    • Pros:
      • Performs feature selection by setting some coefficients to zero.
      • Handles multicollinearity.
    • Cons:
      • May struggle with many highly correlated features, since it tends to arbitrarily keep one of a correlated group and drop the rest.
      • Requires tuning of the regularization parameter.
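
A short sketch showing the feature-selection effect; the alpha value and data are assumptions for illustration:

```python
# Lasso drives some coefficients exactly to zero, performing implicit feature selection.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=20, n_informative=5, noise=5.0, random_state=0)
model = Lasso(alpha=1.0).fit(X, y)
print(np.sum(model.coef_ != 0), "of", X.shape[1], "features kept")
```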
  4. Elastic Net Regression:

    • Pros:
      • Combines L1 and L2 regularization.
      • Balances the advantages of Ridge and Lasso.
    • Cons:
      • Requires tuning of two regularization parameters.
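
A sketch of the two-parameter setup, with alpha controlling overall penalty strength and l1_ratio controlling the L1/L2 mix; the values are illustrative:

```python
# Elastic Net mixes L1 and L2 penalties; both alpha and l1_ratio need tuning.
from sklearn.linear_model import ElasticNet
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=20, noise=5.0, random_state=0)
model = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)
print(model.coef_)  # some coefficients shrunk, some set to zero
```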
  5. Decision Tree Regression:

    • Pros:
      • Captures nonlinear relationships.
      • No need for feature scaling.
      • Handles both numerical and categorical data.
    • Cons:
      • Prone to overfitting, especially on small datasets.
      • Instability: small changes in data can lead to different tree structures.
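
A sketch fitting a nonlinear (sine) relationship with a depth-limited tree; capping the depth is one common guard against the overfitting noted above. Data and depth are assumptions:

```python
# A depth-limited regression tree fits a nonlinear relationship without feature scaling.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 6, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)
model = DecisionTreeRegressor(max_depth=4).fit(X, y)  # limiting depth curbs overfitting
print(model.predict([[1.5], [4.0]]))
```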
  6. Random Forest Regression:

    • Pros:
      • Reduces overfitting compared to a single decision tree.
      • Handles a large number of features and complex relationships.
      • Provides feature importance scores.
    • Cons:
      • May be computationally expensive.
      • Less interpretable than a single decision tree.
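
A sketch showing the ensemble and its feature importance scores; the dataset and settings are assumed for illustration:

```python
# A random forest averages many trees and exposes per-feature importance scores.
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=10, n_informative=4, noise=5.0, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.feature_importances_)  # relative importance of each feature
```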
  7. Gradient Boosting Regression (e.g., XGBoost, LightGBM):

    • Pros:
      • High predictive accuracy.
      • Handles complex relationships and interactions.
      • Can be made robust to outliers by choosing a robust loss function (e.g., Huber or absolute error).
    • Cons:
      • More complex; requires careful tuning of several hyperparameters (learning rate, tree depth, number of estimators).
      • Can overfit if the number of boosting rounds or tree depth is too large.
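
A sketch using scikit-learn's GradientBoostingRegressor; XGBoost's XGBRegressor and LightGBM's LGBMRegressor expose a similar fit/predict interface with additional options. Hyperparameter values here are illustrative:

```python
# Gradient boosting evaluated on a held-out split to watch for overfitting.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3).fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # held-out R^2; tune n_estimators, learning_rate, max_depth
```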
  8. Support Vector Regression:

    • Pros:
      • Effective in high-dimensional spaces.
      • Robust to outliers.
      • Versatile due to different kernel functions.
    • Cons:
      • May require careful tuning of hyperparameters.
      • Memory-intensive for large datasets.
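
A sketch of RBF-kernel SVR; because SVR is scale-sensitive, the pipeline standardizes features first. The values of C, epsilon, and the kernel choice are assumptions that usually need tuning:

```python
# RBF-kernel support vector regression with standardized inputs.
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1)).fit(X, y)
print(model.predict(X[:3]))
```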
  9. K-Nearest Neighbors Regression:

    • Pros:
      • No assumption about the underlying data distribution.
      • Simple and easy to understand.
    • Cons:
      • Sensitive to outliers.
      • Computationally expensive during prediction for large datasets.
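
A sketch of k-nearest-neighbors regression; fit() essentially just stores the data, so the cost shows up at prediction time. The value of k and the data are illustrative:

```python
# k-nearest-neighbors regression: predictions average the k closest training targets.
from sklearn.neighbors import KNeighborsRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=4, noise=5.0, random_state=0)
model = KNeighborsRegressor(n_neighbors=5).fit(X, y)  # "training" only stores the data
print(model.predict(X[:3]))  # each prediction searches the stored points
```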
  10. Huber Regression:

    • Pros:
      • Robust to outliers.
      • Balances the advantages of squared-error and absolute-error losses: quadratic for small residuals, linear for large ones.
    • Cons:
      • Requires tuning of the epsilon hyperparameter that sets where the loss switches from quadratic to linear.
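
A sketch with a few injected outliers to show the robustness; the epsilon value is the hyperparameter mentioned above, and the data are assumed for illustration:

```python
# HuberRegressor stays close to OLS but down-weights large residuals (outliers).
from sklearn.linear_model import HuberRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=3, noise=5.0, random_state=0)
y[:5] += 500.0  # inject a few gross outliers
model = HuberRegressor(epsilon=1.35).fit(X, y)  # epsilon sets the squared-to-absolute switch point
print(model.coef_)
```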