bro's coding
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/I8wod/btqDweBgsVb/H4k4YEbguzkVqKued6qEYK/img.png)
```python
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data[:, [0]]
y = iris.data[:, 2]

plt.scatter(X, y)
plt.axis('equal')

model = LinearRegression()
model.fit(X, y)
w = model.coef_[0]
b = model.intercept_
print(model.score(X, y), model.coef_)  # 0.759955310..
```
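The excerpt stops after printing the score; as a minimal sketch of how the fitted slope and intercept can be used (same iris columns assumed), `predict` is just the line w·x + b:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression

iris = load_iris()
X = iris.data[:, [0]]   # sepal length as the single feature
y = iris.data[:, 2]     # petal length as the target

model = LinearRegression().fit(X, y)
w, b = model.coef_[0], model.intercept_

# model.predict is equivalent to evaluating the fitted line w*x + b
pred = model.predict([[5.0]])[0]
print(pred, w * 5.0 + b)
```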
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/nH1lR/btqDxtLiPKD/xiDeR0abKeYbfjUHq6Scl0/img.png)
```python
# X_train, X_test, y_train, y_test come from an earlier train_test_split
alphas = [10, 1, 0.1, 0.01, 0.001, 0.0001]
train_scores = []
test_scores = []
ws = []
for alpha in alphas:
    lasso = Lasso(alpha=alpha)
    lasso.fit(X_train, y_train)
    ws.append(lasso.coef_)
    s1 = lasso.score(X_train, y_train)
    s2 = lasso.score(X_test, y_test)
    train_scores.append(s1)
    test_scores.append(s2)
display(train_scores, test_scores, ws)
# [0.0, 0.40725895623295394, 0.900745787336254, 0.92796316315..
```
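The excerpt depends on a train/test split defined in the original post. A self-contained version of the same alpha sweep, using `load_diabetes` as a stand-in dataset (an assumption — the original post's data may differ), also counts how many features Lasso keeps at each alpha:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

train_scores, test_scores, n_used = [], [], []
for alpha in [10, 1, 0.1, 0.01, 0.001, 0.0001]:
    lasso = Lasso(alpha=alpha, max_iter=100000)
    lasso.fit(X_train, y_train)
    train_scores.append(lasso.score(X_train, y_train))
    test_scores.append(lasso.score(X_test, y_test))
    n_used.append(int(np.sum(lasso.coef_ != 0)))  # features Lasso kept

print(train_scores, test_scores, n_used)
```

As alpha shrinks, the penalty weakens: the training score rises and more coefficients stay nonzero.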
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/5jVq3/btqDyEZy0Nb/UDP4k4HcLitmF3uiQu50kK/img.jpg)
Ridge and Lasso add a regularization (penalty) term to the error, which yields a simpler, better-generalized model.

```python
import numpy as np
import matplotlib.pyplot as plt

# graph size
fig = plt.figure(figsize=[12, 6])

# 100 evenly spaced points from -10 to 10
rng = np.linspace(-10, 10, 100)

# mse
mse = (0.5 * (rng - 3)) ** 2 + 30

# ridge penalty (alpha = 1)
l2 = rng ** 2
# lasso penalty (alpha = 5)
l1 = 5 * np.abs(rng)

# ridge
ridge = mse + l2
# lasso
lasso = mse + l1
# visual..
```
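The effect of the two penalties can also be seen directly on fitted coefficients. A small sketch (using `load_diabetes` purely as an illustrative dataset, not the post's data): the L2 penalty shrinks coefficients, while the L1 penalty drives some exactly to zero:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, Ridge, Lasso

X, y = load_diabetes(return_X_y=True)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks coefficients
lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty: zeroes some coefficients

print(np.abs(ols.coef_).sum(), np.abs(ridge.coef_).sum(), np.abs(lasso.coef_).sum())
print(int((lasso.coef_ == 0).sum()), "coefficients zeroed by Lasso")
```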
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/wlnm2/btqDtAc5Msm/mnfkCsQ96wL6NDjQd8aOG0/img.png)
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

cancer = load_breast_cancer()
col1 = 0
col2 = 5
X = cancer.data[:, [col1, col2]]
y = cancer.target

X_train, X_test, y_train, y_test = train_test_split(X, y)

X_mean = X_train.mean(axis=0)
X_std = X_train.std(axis=0)
X_train_norm = (X_train - X_mean) / X_std
X_test_norm = (X_test - X_mean) / X_std
```
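The manual z-scoring above (statistics computed on the training set only, then applied to both splits) is what `sklearn.preprocessing.StandardScaler` does; a quick sketch verifying the equivalence:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

cancer = load_breast_cancer()
X = cancer.data[:, [0, 5]]
y = cancer.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# manual z-score, as in the excerpt
X_mean = X_train.mean(axis=0)
X_std = X_train.std(axis=0)
X_train_norm = (X_train - X_mean) / X_std
X_test_norm = (X_test - X_mean) / X_std

# StandardScaler does the same: fit on train only, transform both splits
scaler = StandardScaler().fit(X_train)
print(np.allclose(X_train_norm, scaler.transform(X_train)))
```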
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/k22Kc/btqDvgSJZiN/AXSZCIOAB4JhUEk8kNGvAK/img.png)
```python
import numpy as np
import matplotlib.pyplot as plt

# data set
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

cancer = load_breast_cancer()
col1 = 0
col2 = 5
X = cancer.data[:, [col1, col2]]
y = cancer.target

X_train, X_test, y_train, y_test = train_test_split(X, y)

X_mean = X_train.mean(axis=0)
X_std = X_train.std(axis=0)
X_train_norm = (X_train - X_mean) / X_std
X_test_norm = (X_test - X_mean) / X_std
```
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

cancer = load_breast_cancer()
X = cancer.data
y = cancer.target

X_train, X_test, y_train, y_test = train_test_split(X, y)

model = SVC()
model.fit(X_train, y_train)
model.score(X_test, y_test)  # 0.6083916083916084 ..
```
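The low score in the excerpt reflects SVC's sensitivity to feature scale (and likely an older scikit-learn default of `gamma='auto'`; newer versions default to `gamma='scale'`). A sketch comparing an unscaled SVC against a StandardScaler + SVC pipeline on the same dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

raw = SVC().fit(X_train, y_train)                                 # unscaled features
scaled = make_pipeline(StandardScaler(), SVC()).fit(X_train, y_train)  # z-scored features

raw_score = raw.score(X_test, y_test)
scaled_score = scaled.score(X_test, y_test)
print(raw_score, scaled_score)
```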
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/5rNE3/btqDwdG4HHD/Q9QO4ks04CZhhBM2sbDF90/img.png)
The RBF kernel works by building a "height" over the plane: a function lifts the points, a separating cut is found at some height, and the height is then projected away, leaving the decision boundary. Increasing C makes the boundary curve more detailed; increasing gamma produces more isolated "islands".

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
import mglearn

iris = load_iris()
col1 = 0
col2 = 1
X = iris.data[:, [col1, col2]]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y)

# SVC: good performance, but hard to tune
model1 = SVC()
model1.fit(X_train, y_train)
# mglearn.plo..
```
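The claim about gamma can be checked numerically: larger gamma makes each training point's influence more local, so the training fit tightens. A sketch on the same two iris columns (the fixed `random_state` is an addition here, for reproducibility):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

iris = load_iris()
X = iris.data[:, [0, 1]]   # sepal length / sepal width
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# larger gamma -> more local influence -> wigglier boundary, tighter train fit
train_acc = {}
for gamma in [0.1, 1, 10, 100]:
    model = SVC(kernel='rbf', C=1, gamma=gamma).fit(X_train, y_train)
    train_acc[gamma] = model.score(X_train, y_train)
    print(gamma, train_acc[gamma], model.score(X_test, y_test))
```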
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/ITTrS/btqDtzRXZH0/kb4QKJqijCKj5WUPiTHOq0/img.png)
Related: https://broscoding.tistory.com/145 (머신러닝.make_circles 사용하기 — using make_circles)

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles

X, y = make_circles(factor=0.5, noise=0.1)  # factor = R2/R1, noise = std
X = X * [1, 0.5]
X = X + 1
plt.scatter(X[:, 0], X[:, 1], c=y)
# plt.vlines([1], -0, 2, linestyl..
```
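The scatter shows why a single straight line cannot separate the rings. One sketch of the kernel idea behind SVC: adding the squared radius as an extra feature makes the circles linearly separable. (`LogisticRegression` stands in here as the linear classifier, the unshifted circles are used, and `random_state` is added for reproducibility — all assumptions beyond the original.)

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression

X, y = make_circles(factor=0.5, noise=0.1, random_state=0)

# a linear model on (x, y) alone cannot separate concentric circles...
linear = LogisticRegression().fit(X, y)
linear_score = linear.score(X, y)

# ...but lifting to (x, y, x^2 + y^2) makes the classes linearly separable
X3 = np.hstack([X, (X ** 2).sum(axis=1, keepdims=True)])
lifted = LogisticRegression().fit(X3, y)
lifted_score = lifted.score(X3, y)

print(linear_score, lifted_score)
```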
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/b1gfWd/btqDqj3wUor/N3OCk4wl5dXTNFhwUyGZkK/img.png)
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
```