[AI]/python.sklearn post list (95)
Draw a separating line and find the points closest to that line; those points are called support vectors.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# prepare the data
from sklearn.datasets import load_iris
iris = load_iris()

col1 = 0
col2 = 1
X = iris.data[:, [col1, col2]]
y = iris.target
y[y == 2] = 1   # merge class 2 into class 1 to make the problem binary

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
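The excerpt above stops before any model is fitted. A minimal sketch of the support-vector idea it describes, assuming a linear SVC on the two selected iris features; the kernel, C value, and plotting are illustrative choices, not taken from the original post:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# prepare two features of the iris data as a binary problem
iris = load_iris()
X = iris.data[:, [0, 1]]
y = iris.target.copy()
y[y == 2] = 1

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# fit a linear SVM; the points closest to the separating line
# are stored in model.support_vectors_
model = SVC(kernel='linear', C=1.0)
model.fit(X_train, y_train)

print(model.score(X_test, y_test))
print(model.support_vectors_[:5])   # a few of the support vectors

# plot the training points and circle the support vectors
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, alpha=0.5)
plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1],
            s=100, facecolors='none', edgecolors='k')
plt.show()
```

After fitting, `model.support_vectors_` holds exactly the points closest to the separating line that the excerpt mentions.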
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=[10, 8])
ax = Axes3D(fig)

# grid over the two input features; `model` is the linear model fitted above
a = np.arange(-4, 12, 0.2)
b = np.arange(-4, 12, 0.2)
xx, yy = np.meshgrid(a, b)

# decision surface z = w0*x + w1*y + b, drawn as a translucent plane plus a wireframe
ax.plot_surface(xx, yy, model.coef_[0, 0]*xx + model.coef_[0, 1]*yy + model.intercept_[0],
                shade=True, alpha=0.1, color='b')
ax.plot_wireframe(xx, yy, model.coef_[0, 0]*xx + model.coef_[0, 1]*yy + model.intercept_[0],
                  rstride=2, cstride=2)
https://broscoding.tistory.com/114 — Machine learning: loading the iris data

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
dir(iris)   # ['DESCR', 'data', 'feature_names', 'target', 'target_names']
iris.data.shape

https://broscoding.tistory.com/115 — Machine learning: splitting off the test data

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)
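A minimal sketch combining the two steps those posts cover, loading the iris data and splitting it into train and test sets; the random_state is an illustrative choice, not from the original posts:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()

# split features and labels into a training portion and a held-out test portion
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

print(X_train.shape, X_test.shape)   # (112, 4) (38, 4) with the default 75/25 split
```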
https://broscoding.tistory.com/132 — Machine learning: linear_model.LogisticRegression (3 classes)
https://broscoding.tistory.com/128 — Machine learning: using datasets.make_blobs

from sklearn.datasets import make_blobs
X, y = make_blobs(400, 2, [[0, 0], [5, 5]], [2, 3])
# 400 : number of rows
# 2 : number of features (axes)

C stands for cost/loss/penalty, and the regularization strength is alpha = 1/C. The larger C is, the more finely the boundary divides the training data, so a larger C makes the model more prone to overfitting.

for j in range(-5, 5):
    from sklearn.linear_model import ...
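The excerpt cuts off inside the loop. A minimal sketch of sweeping C over a range of values on the same kind of two-blob data, assuming LogisticRegression as in the linked posts; the powers-of-ten schedule, the train/test split, and the score printout are illustrative, not taken from the original:

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# two blobs as in the linked post: centers (0,0) and (5,5), std 2 and 3
X, y = make_blobs(400, 2, centers=[[0, 0], [5, 5]],
                  cluster_std=[2, 3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# larger C = weaker regularization (alpha = 1/C), so the boundary follows the training data more closely
for j in range(-5, 5):
    model = LogisticRegression(C=10.0**j)
    model.fit(X_train, y_train)
    print(f"C=10^{j}: train {model.score(X_train, y_train):.2f}, "
          f"test {model.score(X_test, y_test):.2f}")
```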
https://broscoding.tistory.com/128 — Machine learning: using datasets.make_blobs

from sklearn.datasets import make_blobs
X, y = make_blobs(400, 2, [[0, 0], [5, 5]], [2, 3])
# 400 : number of rows
# 2 : number of features (axes); all of these are X values
# [[0, 0], [5, 5]] : positions of the cluster centers
# [2, 3] : standard deviation around each center
plt.scatter(X[:, 0], X[:, 1], c=y)

# prepare the data
from sklearn.datasets import make_blobs
X, y = make_blobs(300, 2, [[0, 0], [-10, 10], [10, 10]], [2, 3, 5])

# train the model
from sklearn.linear_model import ...
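The preview stops at the model import. A minimal sketch of training a three-class LogisticRegression on that three-blob dataset; the random_state, the coefficient-shape check, and the plot of predicted classes are illustrative additions:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# prepare the data: three centers, so three classes
X, y = make_blobs(300, 2, centers=[[0, 0], [-10, 10], [10, 10]],
                  cluster_std=[2, 3, 5], random_state=0)

# train the model; LogisticRegression handles the multi-class case automatically
model = LogisticRegression()
model.fit(X, y)

print(model.coef_.shape)   # (3, 2): one weight vector per class
print(model.score(X, y))   # training accuracy

# visualize the blobs colored by predicted class
plt.scatter(X[:, 0], X[:, 1], c=model.predict(X), alpha=0.6)
plt.show()
```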
https://broscoding.tistory.com/129 — Machine learning: linear_model.LogisticRegression (logistic regression)

# predict_proba()
# proba = probability
# shows, for each sample, the probability of class 0 and the probability of class 1
np.round(model.predict_proba(X)[:10], 2)
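A minimal sketch of what that predict_proba call returns, assuming a two-class LogisticRegression fitted on the same kind of blob data; the threshold comparison at the end is an illustrative addition:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(400, 2, centers=[[0, 0], [5, 5]],
                  cluster_std=[2, 3], random_state=0)

model = LogisticRegression()
model.fit(X, y)

# one row per sample, one column per class; each row sums to 1
proba = model.predict_proba(X)
print(np.round(proba[:10], 2))

# predict() simply picks the class with the higher probability
print((proba[:10, 1] > 0.5).astype(int))
print(model.predict(X[:10]))
```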