Logistic Regression: Reducing Variance with a Regularization Term



# Use regularization to reduce variance and improve the model's
# ability to generalize
from sklearn.linear_model import LogisticRegressionCV
from sklearn import datasets
from sklearn.preprocessing import StandardScaler

# Load the iris data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Standardize the features
scaler = StandardScaler()
features_standardized = scaler.fit_transform(features)

# LogisticRegressionCV cross-validates over a grid of C values; the
# penalty term (L1 or L2) punishes complex models to reduce variance
logistic_regression = LogisticRegressionCV(
    penalty='l2', Cs=10, random_state=0, n_jobs=-1)

# Train the model
model = logistic_regression.fit(features_standardized, target)

Discussion

Regularization is a method of penalizing complex models to reduce their variance. Specifically, a penalty term is added to the loss function we are trying to minimize, typically the L1 or L2 penalty.

The L1 penalty is

$\alpha \sum_{j=1}^{p} |\hat{\beta}_j|$

where $\hat{\beta}_j$ is the parameter for the $j$th of the $p$ features being learned and $\alpha$ is a hyperparameter denoting the regularization strength. The L2 penalty is

$\alpha \sum_{j=1}^{p} \hat{\beta}_j^2$

Higher values of $\alpha$ increase the penalty for larger parameter values (i.e., more complex models). scikit-learn follows the common convention of using C, the inverse of the regularization strength, instead of $\alpha$: $C = 1/\alpha$. To reduce variance while using logistic regression, we can treat C as a hyperparameter to be tuned to find the value of C that creates the best model. In scikit-learn, the LogisticRegressionCV class lets us tune C efficiently.
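To make the tuning concrete, here is a minimal sketch of how the fitted model above might be inspected and used. It assumes the model and scaler objects from the solution code, and the new observation values are hypothetical.

import numpy as np

# The C value selected by cross-validation for each class
# (all entries are the same value under the multinomial objective)
print(model.C_)

# Hypothetical measurements for a new flower:
# sepal length, sepal width, petal length, petal width
new_observation = np.array([[5.0, 3.0, 4.0, 1.5]])

# Apply the same standardization used during training, then predict
new_standardized = scaler.transform(new_observation)
print(model.predict(new_standardized))
print(model.predict_proba(new_standardized))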
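The solution uses the L2 penalty; an L1-penalized variant is a small change, with the caveat that the default 'lbfgs' solver supports only L2, so a solver that handles L1 (such as 'saga' or 'liblinear') must be specified. A sketch, reusing the standardized features from above:

# Cross-validated logistic regression with the L1 penalty;
# max_iter is raised to help the 'saga' solver converge
l1_logistic_regression = LogisticRegressionCV(
    penalty='l1', solver='saga', Cs=10, max_iter=5000,
    random_state=0, n_jobs=-1)
l1_model = l1_logistic_regression.fit(features_standardized, target)

# Best C per class under the L1 penalty
print(l1_model.C_)

Because the L1 penalty can shrink some coefficients exactly to zero, it also acts as a form of feature selection.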
