Category: [AI]/python.sklearn (95)
bro's coding
stochastic gradient descent

```python
# `data` and `mm` (the mean of `data`) are defined earlier in the original post.
# Plot the MSE of every candidate prediction in a window around the mean.
rng = np.arange(mm - 2, mm + 2, 0.1)
plt.plot(rng, ((data - rng.reshape(-1, 1)) ** 2).mean(axis=1))
plt.vlines([mm], 11, 15, linestyles=':')
plt.ylabel('MSE')
plt.xlabel('prediction')
# The error is smallest at the mean,
# and increases as the prediction moves away from the mean.
```
```python
# MAE (mean absolute error)
np.abs(data - data.mean()).mean()

# MSE (mean squared error)
((data - data.mean()) ** 2).mean()

# RMSE (root mean squared error)
np.sqrt(((data - data.mean()) ** 2).mean())
```
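The posts' `data` array is not shown in this listing, so here is a minimal, self-contained sketch with made-up sample values. It computes the three metrics above and sanity-checks the claim that, among constant predictions, the mean minimizes the MSE:

```python
import numpy as np

# Hypothetical sample data; the original post's `data` array is not shown here.
data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mae = np.abs(data - data.mean()).mean()
mse = ((data - data.mean()) ** 2).mean()
rmse = np.sqrt(mse)
print(mae, mse, rmse)  # for this data: 1.5 4.0 2.0

# Sanity check: sweep candidate predictions around the mean and confirm
# the squared error is minimized at (a grid point closest to) the mean.
candidates = np.arange(data.mean() - 2, data.mean() + 2, 0.1)
errors = ((data - candidates.reshape(-1, 1)) ** 2).mean(axis=1)
best = candidates[errors.argmin()]
```

This is the same broadcasting trick as in the plot above: reshaping the candidates to a column makes `data - candidates.reshape(-1, 1)` a (candidates × samples) matrix, so `mean(axis=1)` gives one MSE per candidate.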
Regression analysis: feed in three of the data columns and predict the remaining one.

```python
X = data[:, :3]
y = data[:, 3]

from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X, y)
model.score(X, y)   # 0.9380481344518986
pred_y = model.predict(X)
y, pred_y
```

Output (truncated): `(array([0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1, ..`
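Judging by the predicted values, `data` here appears to be the iris array from the next post. A runnable version of the same idea, using scikit-learn's bundled iris dataset instead of the blog's local array (an assumption on my part):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression

# Assumed stand-in for the post's `data`: the 150x4 iris measurement matrix.
iris = load_iris().data
X = iris[:, :3]   # sepal length, sepal width, petal length
y = iris[:, 3]    # petal width: the column being predicted

model = LinearRegression()
model.fit(X, y)
r2 = model.score(X, y)       # R^2 on the training data, roughly 0.94
pred_y = model.predict(X)
```

Note that `score` is evaluated on the same rows the model was fit on, so it measures fit quality, not generalization.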
```python
import numpy as np
import matplotlib.pyplot as plt

# Load the four iris measurement columns, then append a 0/1/2 class column
iris = np.loadtxt('iris.csv', skiprows=1, delimiter=',', usecols=range(4))
iris = np.c_[iris, [0]*50 + [1]*50 + [2]*50]
iris
X = iris[:, :4]
y = iris[:, 4]
```

Output (truncated):

```
array([[5.1, 3.5, 1.4, 0.2, 0. ],
       [4.9, 3. , 1.4, 0.2, 0. ],
       [4.7, 3.2, 1.3, 0.2, 0. ],
       [4.6, 3.1, 1.5, 0.2, 0. ],
       [5. , 3.6, 1.4, 0.2, 0. ],
       [5.4, 3.9, 1.7, 0.4, 0. ],
       [4.6, 3.4, 1.4, 0.3, 0. ],
       [5. , 3.4..
```
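If you don't have the `iris.csv` file on hand, the same 150x5 array can be built from scikit-learn's bundled copy of the dataset; this is an equivalent sketch, not the post's original code:

```python
import numpy as np
from sklearn.datasets import load_iris

# Same layout as the CSV version above: four measurement columns
# plus an appended 0/1/2 class column (50 rows per class, in order).
features = load_iris().data                       # shape (150, 4)
labels = np.array([0]*50 + [1]*50 + [2]*50, float)
iris = np.c_[features, labels]                    # shape (150, 5)
```

`np.c_` stacks the label vector as a new last column, which is why `iris[:, 4]` can later serve as the target `y`.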
```python
from sklearn.neighbors import KNeighborsClassifier

X = iris[:, :4]
y = iris[:, 4]
knn = KNeighborsClassifier()
knn.fit(X, y)
knn.score(X, y)
knn.predict(X)

from sklearn.linear_model import LinearRegression
linear = LinearRegression()
linear.fit(iris[:, [2]], iris[:, 3])   # petal width as a function of petal length
기울기 = linear.coef_[0]    # slope
절편 = linear.intercept_    # intercept
기울기, 절편

plt.scatter(iris[:, 2], iris[:, 3])
plt.xlabel('PetalLength')
plt.ylabel('PetalWidth')
plt.p..
```
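The preview cuts off at `plt.p..`, which presumably draws the fitted regression line over the scatter plot. A self-contained sketch of the whole snippet, using scikit-learn's bundled iris data and English variable names (both assumptions on my part):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

bunch = load_iris()
X, y = bunch.data, bunch.target

# k-nearest neighbours classifier, fit and scored on the same rows as in the post
knn = KNeighborsClassifier()
knn.fit(X, y)
acc = knn.score(X, y)            # training accuracy; typically well above 0.9 on iris

# Regress petal width (column 3) on petal length (column 2)
linear = LinearRegression()
linear.fit(X[:, [2]], X[:, 3])
slope, intercept = linear.coef_[0], linear.intercept_

# Presumed continuation of the truncated plotting code:
# rng = np.linspace(X[:, 2].min(), X[:, 2].max(), 100)
# plt.plot(rng, slope * rng + intercept)
```

Because petal length and width grow together across the three species, the fitted slope comes out positive.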