bro's coding
1. [svm] data: iris / col: 0, 1 — plot the graph using mglearn (a sketch follows this list). model: SVC / parameters: default. https://broscoding.tistory.com/148
2. [corrcoef], [Linear Regression] data: breast_cancer / col: 0, 3
   - 2-1) Compute the correlation coefficient.
   - 2-2) Draw the linear regression line. model: LinearRegression. https://broscoding.tistory.com/143
   - (Answer to 2-1: 0.9873571700566123, or array([[1., 0.98735717], [0.98735717, 1.]]))
3. [normalization] data: breast cancer train_..
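Exercise 1 is not expanded in the excerpts below, so here is a minimal sketch of one possible solution, assuming default SVC settings and mglearn's discrete_scatter helper; the variable names are mine, not the linked post's.

```python
# Hypothetical sketch for exercise 1: iris columns 0 and 1, default SVC,
# plotted with mglearn's discrete_scatter (one marker/color per class).
import matplotlib.pyplot as plt
import mglearn
from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
X = iris.data[:, [0, 1]]   # col 0, 1
y = iris.target

model = SVC()              # default parameters
model.fit(X, y)

mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
```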
Exercise 2 (correlation and regression on breast_cancer columns 0 and 3):

```python
import numpy as np
import matplotlib.pyplot as plt

# prepare the data
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
col1 = 0
col2 = 3
X = cancer.data[:, col1]
y = cancer.data[:, col2]

# Pearson correlation computed by hand
corr = ((X - X.mean()) * (y - y.mean())).mean() / (X.std() * y.std())  # 0.9873571700566123

np.corrcoef(X.T, y)  # array([[1.        , 0.98735717], [0.98735717, 1.        ]])

from sklearn.linear_model import LinearRegression
X=cancer..
```
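The excerpt cuts off where the regression part begins. A hedged sketch of how exercise 2-2 might continue, reusing `cancer`, `col1`, `col2`, and `y` from the cell above; the reshaping and plotting details are my own assumptions, not the original code.

```python
from sklearn.linear_model import LinearRegression

x1 = cancer.data[:, col1]          # feature on the x-axis
X2 = x1.reshape(-1, 1)             # sklearn expects a 2D feature array
lr = LinearRegression()
lr.fit(X2, y)

plt.scatter(x1, y, s=10)
plt.plot(x1, lr.predict(X2), color='r')   # fitted regression line
plt.xlabel(cancer.feature_names[col1])
plt.ylabel(cancer.feature_names[col2])
plt.show()
```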
(= Pearson's r)

```python
np.corrcoef(cancer.data[:, 0], cancer.data[:, 22])
# array([[1.        , 0.96513651],    # self:self   self:other
#        [0.96513651, 1.        ]])   # other:self  other:other

plt.imshow(np.corrcoef(cancer.data.T))
plt.colorbar()  # bright (yellow) or dark cells mark the meaningful correlations
```
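As a small follow-up (my own sketch, not from the post), the strongest off-diagonal cell in that heatmap can also be located programmatically:

```python
# Find the most strongly correlated pair of distinct features.
import numpy as np
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
corr = np.corrcoef(cancer.data.T)

off_diag = corr - np.eye(corr.shape[0])   # zero out the trivial self-correlations
i, j = np.unravel_index(np.argmax(np.abs(off_diag)), off_diag.shape)
print(cancer.feature_names[i], "<->", cancer.feature_names[j], off_diag[i, j])
```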
```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = LinearSVC(C=1)
model.fit(X_train, y_train)
pred_y = model.predict(X_test)
model.score(X_test, y_test)

# distance from the decision boundary ("guideline"); a positive value means
# the sample falls on that class's side
model.decision_function(X_test)
# array([[-6.58398872e-01, -1.41247905e-01, -2.21407969e+00],
#        [ 1.91618644e+00, -1.16266051e+00, -8.10343365e+00],
#        [ 1.078043..
```
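To make the "positive means it is on my side" reading concrete, here is a short illustration (mine, not the post's code) of how `decision_function` relates to `predict` for this multi-class LinearSVC: the predicted class is simply the column with the largest score.

```python
import numpy as np

scores = model.decision_function(X_test)   # shape: (n_samples, n_classes)
manual_pred = np.argmax(scores, axis=1)    # class with the largest (most positive) score
print((manual_pred == model.predict(X_test)).all())   # expected to print True
```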
Draw a separating line and find the points closest to it; those points are called the support vectors.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# prepare the data
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
col1 = 0
col2 = 1
X = iris.data[:, [col1, col2]]
y = iris.target
y[y == 2] = 1   # merge class 2 into class 1 to make it a binary problem

X_train, X_test, y_train, y_test = train_..
```
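As a hedged sketch (mine, not the post's code), the support vectors themselves can be inspected by fitting a linear-kernel SVC on the same two features; note that `SVC` exposes `support_vectors_`, while `LinearSVC` does not.

```python
from sklearn.svm import SVC

svm = SVC(kernel='linear', C=1)
svm.fit(X, y)                      # X, y as prepared above (two features, two classes)

plt.scatter(X[:, 0], X[:, 1], c=y, s=20)
plt.scatter(svm.support_vectors_[:, 0], svm.support_vectors_[:, 1],
            s=120, facecolors='none', edgecolors='k')   # circles mark the support vectors
plt.show()
```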
```python
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=[10, 8])
ax = Axes3D(fig)

a = np.arange(-4, 12, 0.2)
b = np.arange(-4, 12, 0.2)
xx, yy = np.meshgrid(a, b)

# plot the fitted decision plane: coef_[0,0]*x + coef_[0,1]*y + intercept_
ax.plot_surface(xx, yy, model.coef_[0, 0]*xx + model.coef_[0, 1]*yy + model.intercept_[0],
                shade=True, alpha=0.1, color='b')
ax.plot_wireframe(xx, yy, model.coef_[0, 0]*xx + model.coef_[0, 1]*yy + model.intercept_[0],
                  rstride=2, cstride..
```
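The excerpt is truncated mid-call. A hedged completion (the arguments after `rstride=2` and the scatter are my assumptions, not the original code), continuing the same figure and reusing `model`, `X`, `y` from the earlier cells:

```python
ax.plot_wireframe(xx, yy,
                  model.coef_[0, 0]*xx + model.coef_[0, 1]*yy + model.intercept_[0],
                  rstride=2, cstride=2, alpha=0.3, color='b')
ax.scatter(X[:, 0], X[:, 1], y, c=y)   # training points drawn at z = class label (0 or 1)
ax.set_xlabel(iris.feature_names[col1])
ax.set_ylabel(iris.feature_names[col2])
plt.show()
```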
https://broscoding.tistory.com/114 — Machine learning: loading the iris data

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
iris
dir(iris)
# ['DESCR', 'data', 'feature_names', 'target', 'target_names']
iris.data.shape..
```

https://broscoding.tistory.com/115 — Machine learning: splitting out the test data

```python
from sklearn.model_selection import train_test_split
X_train,X_test,y..
```
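A hedged summary of the two linked steps (not verbatim from those posts): load the iris dataset, then split off a 20% test set.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
print(iris.data.shape, iris.target.shape)   # (150, 4) (150,)

X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2)
print(X_train.shape, X_test.shape)          # 120 training vs. 30 test samples
```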