[AI] post list (189)
bro's coding
```python
from sklearn.datasets import fetch_lfw_people

# Load LFW faces, keeping only people with at least 20 images
people = fetch_lfw_people(min_faces_per_person=20, resize=0.7)

dir(people)
people.target_names
# array(['Alejandro Toledo', 'Alvaro Uribe', 'Amelie Mauresmo',
#        'Andre Agassi', 'Angelina Jolie', 'Ariel Sharon',
#        'Arnold Schwarzenegger', 'Atal Bihari Vajpayee', 'Bill Clinton',
#        'Carlos Menem', 'Colin Powell', 'David Beckham', 'Donald Rumsfeld',
#        'George Robertson', 'George W Bush', 'Gerhard Schroeder', ...
```
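As a minimal sketch (not part of the original excerpt), the loaded faces can be inspected with matplotlib; `people.images` holds the pixel arrays and `people.target` indexes into `target_names`:

```python
import matplotlib.pyplot as plt

# Show the first 5 faces with the corresponding person's name
fig, axes = plt.subplots(1, 5, figsize=(15, 8),
                         subplot_kw={'xticks': (), 'yticks': ()})
for target, image, ax in zip(people.target, people.images, axes.ravel()):
    ax.imshow(image, cmap='gray')
    ax.set_title(people.target_names[target])
plt.show()
```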
For data with many binary (present/absent) features: BernoulliNB. For data with ordinary continuous-valued features: GaussianNB.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# 6 samples, each with 100 binary features
X = np.random.randint(2, size=(6, 100))
# y = np.array([1, 2, 3, 4, 4, 5])
y = np.array([1, 0, 0, 1, 1, 0])

model = BernoulliNB()
model.fit(X, y)

pred_y = model.predict(X)
print(pred_y)  # [1 0 0 1 1 0]
```
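The excerpt only runs BernoulliNB. As a hedged sketch of the GaussianNB case mentioned above (the iris data here is an illustrative choice, not from the original post):

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

iris = load_iris()

model = GaussianNB()
model.fit(iris.data, iris.target)           # continuous-valued features
print(model.score(iris.data, iris.target))  # training accuracy, roughly 0.96
```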
Gradient boosting, like random forest, builds many trees. Unlike random forest, though, it does not build them all at once: it builds one tree, then builds the next tree so as to reduce the previous one's error, and repeats this step by step. Gradient boosting has won many machine learning competitions.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

cancer = load_breast_cancer()
```
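The preview cuts off right after loading the data. A minimal sketch of the usual continuation (split, fit, score), assuming default hyperparameters rather than the post's own settings:

```python
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

print(model.score(X_train, y_train))  # training accuracy
print(model.score(X_test, y_test))    # test accuracy
```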
1. https://broscoding.tistory.com/157?category=855525
2. https://broscoding.tistory.com/158
3. https://broscoding.tistory.com/162
4. https://broscoding.tistory.com/170, https://broscoding.tistory.com/148?category=855525
5. https://broscoding.tistory.com/164
1. [non-linear regression] data: breast cancer (cols 0, 6) / model: LinearRegression
2. [DecisionTreeClassifier] data: breast cancer (cols 0, 1) / model: DecisionTreeClassifier, max_depth: 1~10
3. [RandomForestClassifier] data: breast cancer / model: RandomForestClassifier
4. [PCA] data: breast cancer / model: PCA, model: SVC(gamma=5)
5. [KMeans] data: make_blobs() / default graph, a scatter plot coloring each cluster ..
```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

cancer = load_breast_cancer()
cancer.feature_names.shape  # (30,)

# Standardize each feature before PCA
X_norm = (cancer.data - cancer.data.mean(axis=0)) / cancer.data.std(axis=0)

pca = PCA(2)
pca.fit(X_norm)

# Feature weights of each principal component
pca.components_
# array([[ 0.21890244, 0.10372458, ...

X_pca = pca.transform(X_norm)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=cancer.target, alpha=0.3)
```
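As a small follow-up sketch (not in the original excerpt), checking how much variance the two components keep helps justify the 2D plot:

```python
print(pca.explained_variance_ratio_)        # variance kept by each component
print(pca.explained_variance_ratio_.sum())  # roughly 0.63 for this data
```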
Referenced posts:

- https://broscoding.tistory.com/167 (sklearn.cluster.PCA): PCA (principal component analysis) finds the most important features and re-expresses the data along axes built from them.
- https://broscoding.tistory.com/168 (sklearn.cluster.PCA.visualization)
Continues https://broscoding.tistory.com/167 (sklearn.cluster.PCA):

```python
# The original two iris features, colored by class
plt.scatter(iris.data[:, col1], iris.data[:, col2], c=iris.target, alpha=0.3)

# The same samples after the PCA transform
plt.scatter(x_pca[:, 0], x_pca[:, 1], alpha=0.3)
plt.plot([-3, 3], [0, 0])  # the first principal axis in the transformed space
```
PCA (principal component analysis) finds the most important features and changes the axes to align with them.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

iris = load_iris()

col1 = 0
col2 = 1
pca = PCA()
pca.fit(iris.data[:, [col1, col2]])

# Direction vectors of the principal components
com = pca.components_
# array([[ 0.99693955, -0.07817635],
#        [ 0.07817635,  0.99693955]])

plt.scatter(iris.data[:, col1], iris.data[:, col2], c=iris.target)
# the transformed axes ..
```
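The preview cuts off before the transformed axes are drawn. One possible sketch, assuming the axes are drawn from the fitted `pca.mean_` and `pca.components_` (the scaling by explained variance is an arbitrary choice for visibility):

```python
import numpy as np

plt.scatter(iris.data[:, col1], iris.data[:, col2], c=iris.target)

# Draw each principal axis as a line from the data mean
for length, vector in zip(np.sqrt(pca.explained_variance_), pca.components_):
    end = pca.mean_ + 3 * length * vector
    plt.plot([pca.mean_[0], end[0]], [pca.mean_[1], end[1]], 'k-', linewidth=2)

plt.gca().set_aspect('equal')
plt.show()
```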
From https://broscoding.tistory.com/165 (sklearn.datasets.make_moons):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs, make_circles, make_moons

# Two interleaving half-moon clusters with mild noise
X, y = make_moons(noise=0.07, random_state=1)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='Reds')
```
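The excerpt cuts off at a further `sklearn` import, so the post's actual continuation is unknown. As a purely speculative sketch, a common experiment on this dataset is centroid-based clustering, which shows why moon-shaped clusters are hard for KMeans:

```python
from sklearn.cluster import KMeans

# KMeans partitions by distance to centroids, so it cuts straight across
# the two moons instead of following their shape
km = KMeans(n_clusters=2, random_state=1)
labels = km.fit_predict(X)

plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='Reds')
plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1],
            marker='^', c='black', s=100)
plt.show()
```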