[AI] post list (189)
bro's coding
Sweeping Lasso's alpha and collecting train/test scores and coefficients:

    alphas = [10, 1, 0.1, 0.01, 0.001, 0.0001]
    train_scores = []
    test_scores = []
    ws = []
    for alpha in alphas:
        lasso = Lasso(alpha=alpha)
        lasso.fit(X_train, y_train)
        ws.append(lasso.coef_)
        s1 = lasso.score(X_train, y_train)
        s2 = lasso.score(X_test, y_test)
        train_scores.append(s1)
        test_scores.append(s2)
    display(train_scores, test_scores, ws)

    [0.0, 0.40725895623295394, 0.900745787336254, 0.92796316315..
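The preview above is cut off and the post's actual X_train/y_train are not shown, so the following is a minimal runnable sketch of the same alpha sweep, assuming scikit-learn's diabetes data as a stand-in and a plain print in place of IPython's display.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

# stand-in data: the original post's training set is not visible in the preview
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

alphas = [10, 1, 0.1, 0.01, 0.001, 0.0001]
train_scores, test_scores, ws = [], [], []
for alpha in alphas:
    lasso = Lasso(alpha=alpha, max_iter=10000)   # larger max_iter avoids convergence warnings at small alpha
    lasso.fit(X_train, y_train)
    ws.append(lasso.coef_)
    train_scores.append(lasso.score(X_train, y_train))
    test_scores.append(lasso.score(X_test, y_test))

print(train_scores)
print(test_scores)
print([int(np.sum(w != 0)) for w in ws])         # features kept at each alpha: fewer survive as alpha grows
```

In the preview's own output, the train score climbing from 0.0 toward roughly 0.93 as alpha shrinks shows the model moving from fully penalized (all coefficients zero) toward an ordinary least-squares fit.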
Ridge and Lasso are methods that add a regularization (penalty) term to the error, yielding a simpler, more generalized model.

    import numpy as np
    import matplotlib.pyplot as plt
    # graph size
    fig = plt.figure(figsize=[12, 6])
    # split -10..10 into 100 points
    rng = np.linspace(-10, 10, 100)
    # mse
    mse = (0.5 * (rng - 3)) ** 2 + 30
    # ridge's alpha = 1
    l2 = rng ** 2
    # lasso's alpha = 5
    l1 = 5 * np.abs(rng)
    # ridge
    ridge = mse + l2
    # lasso
    lasso = mse + l1
    # visual..
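The plotting part of the snippet is cut off at "# visual..". A minimal sketch of how the three curves might be drawn; the labels and styling here are assumptions, not the post's code.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.linspace(-10, 10, 100)
mse = (0.5 * (rng - 3)) ** 2 + 30    # toy squared-error curve with its minimum at w = 3
l2 = rng ** 2                        # L2 penalty (Ridge, alpha = 1)
l1 = 5 * np.abs(rng)                 # L1 penalty (Lasso, alpha = 5)

plt.figure(figsize=[12, 6])
plt.plot(rng, mse, label='mse')
plt.plot(rng, mse + l2, label='mse + L2 (ridge)')
plt.plot(rng, mse + l1, label='mse + L1 (lasso)')
plt.xlabel('weight')
plt.ylabel('cost')
plt.legend()
plt.show()
```

The penalized curves reach their minimum closer to w = 0 than the plain MSE curve does, which is the "simpler model" effect described above.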
Normalizing two breast-cancer features with training-set statistics:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_breast_cancer
    cancer = load_breast_cancer()
    col1 = 0
    col2 = 5
    X = cancer.data[:, [col1, col2]]
    y = cancer.target
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    X_mean = X_train.mean(axis=0)
    X_std = X_train.std(axis=0)
    X_train_norm = (X_train - X_mean) / X_std
    X_test_norm = (X_te..
The same two-feature normalization, from another post in the series:

    import numpy as np
    import matplotlib.pyplot as plt
    # data set
    from sklearn.datasets import load_breast_cancer
    cancer = load_breast_cancer()
    col1 = 0
    col2 = 5
    X = cancer.data[:, [col1, col2]]
    y = cancer.target
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    X_mean = X_train.mean(axis=0)
    X_std = X_train.std(axis=0)
    X_train_norm = (X_train - X_mean) / X_std
    X_test..
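Both previews above stop right at the test-set normalization. A sketch of the usual completion, where the test data reuses the training mean and standard deviation; the KNeighborsClassifier at the end is an assumption added only to show the normalized features in use.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

cancer = load_breast_cancer()
X = cancer.data[:, [0, 5]]                # mean radius and mean compactness, as in the preview
y = cancer.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# z-score normalization: statistics come from the training set only
X_mean = X_train.mean(axis=0)
X_std = X_train.std(axis=0)
X_train_norm = (X_train - X_mean) / X_std
X_test_norm = (X_test - X_mean) / X_std   # test data reuses the training mean/std

model = KNeighborsClassifier()            # assumed model, just to exercise the normalized features
model.fit(X_train_norm, y_train)
print(model.score(X_test_norm, y_test))
```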
Fitting an SVC on the raw (unscaled) breast-cancer features:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_breast_cancer
    cancer = load_breast_cancer()
    X = cancer.data
    y = cancer.target
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    from sklearn.svm import SVC
    model = SVC()
    model.fit(X_train, y_train)
    model.score(X_test, y_test)
    # 0.6083916083916084 ..
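The preview stops at the unscaled score of roughly 0.61. A sketch of the usual next step, rescaling the features before refitting; MinMaxScaler is an assumption here, and exact scores vary with the random split and scikit-learn version.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0)

# unscaled features: SVC is sensitive to differences in feature magnitude
model = SVC()
model.fit(X_train, y_train)
print('raw    :', model.score(X_test, y_test))

# rescale every feature to [0, 1] using training-set min/max, then refit
scaler = MinMaxScaler().fit(X_train)
model_scaled = SVC()
model_scaled.fit(scaler.transform(X_train), y_train)
print('scaled :', model_scaled.score(scaler.transform(X_test), y_test))
```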
The idea behind the kernel: build a function that assigns each point a "height", separate the classes in that lifted space, then keep the separating criterion and drop the height again. As C increases the decision curve becomes more detailed; as gamma increases, more isolated "islands" appear (see the sketch after the code below).

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.svm import LinearSVC
    import mglearn

    iris = load_iris()
    col1 = 0
    col2 = 1
    X = iris.data[:, [col1, col2]]
    y = iris.target
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    # SVC: good performance, but harder to tune
    model1 = SVC()
    model1.fit(X_train, y_train)
    mglearn.plo..
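A sketch of how the effect of C and gamma described above can be made visible on the same two iris features; using mglearn.plots.plot_2d_classification for the multi-class boundary is an assumption about the truncated mglearn.plo.. call.

```python
import matplotlib.pyplot as plt
import mglearn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

iris = load_iris()
X = iris.data[:, [0, 1]]             # same two columns as the preview
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# one subplot per (C, gamma) pair
fig, axes = plt.subplots(2, 3, figsize=(15, 8))
for ax_row, gamma in zip(axes, [0.1, 10]):
    for ax, C in zip(ax_row, [0.1, 1, 100]):
        model = SVC(C=C, gamma=gamma).fit(X_train, y_train)
        mglearn.plots.plot_2d_classification(model, X_train, fill=True, alpha=0.3, ax=ax)
        mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, ax=ax)
        ax.set_title(f'C={C}, gamma={gamma}')
plt.show()
```

Larger C lets the boundary follow individual points more closely, while larger gamma shrinks each sample's reach and produces the isolated "islands" mentioned above.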
See https://broscoding.tistory.com/145 (machine learning: using make_circles). Stretching and shifting the circles data, then marking a reference line:

    X, y = make_circles(factor=0.5, noise=0.1)
    X = X * [1, 0.5]
    X = X + 1
    plt.scatter(X[:, 0], X[:, 1], c=y)
    plt.vlines([1], -0, 2, linestyl..
See https://broscoding.tistory.com/145 (machine learning: using make_circles).

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.model_selection import train_tes..
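This preview cuts off at the imports. As a hedged guess at where such a split usually leads on the circles data, the sketch below compares a linear SVC with an RBF-kernel SVC; the model choices are assumptions, not the post's code.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC, SVC

X, y = make_circles(factor=0.5, noise=0.1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# a straight line struggles to separate one circle inside another
linear = LinearSVC(max_iter=10000).fit(X_train, y_train)
print('LinearSVC :', linear.score(X_test, y_test))

# the RBF kernel lifts the points so an enclosing boundary becomes possible
rbf = SVC(kernel='rbf').fit(X_train, y_train)
print('RBF SVC   :', rbf.score(X_test, y_test))
```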
Generating and plotting the circles dataset:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.datasets import make_circles
    X, y = make_circles(factor=0.5, noise=0.1)  # factor = R2/R1, noise = std
    plt.scatter(X[:, 0], X[:, 1], c=y)
    plt.colorbar()
1. [svm] data: iris / col: 0, 1. Draw the graph with mglearn. model: SVC / parameters: default. https://broscoding.tistory.com/148
2. [corrcoef], [Linear Regression] data: breast_cancer / col: 0, 3.
   2-1) Compute the correlation coefficient.
   2-2) Draw the linear regression line. model: LinearRegression. https://broscoding.tistory.com/143
   (Answer to 2-1: 0.9873571700566123, or array([[1., 0.98735717], [0.98735717, 1.]]))
3. [normalization] data: breast cancer train_..
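A sketch of one way exercises 2-1 and 2-2 could be worked, not the posted solution at /143; columns 0 and 3 of the breast cancer data are mean radius and mean area.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LinearRegression

cancer = load_breast_cancer()
x = cancer.data[:, 0]                          # mean radius
y = cancer.data[:, 3]                          # mean area

# 2-1) correlation coefficient between the two columns
print(np.corrcoef(x, y))                       # off-diagonal entries match the answer quoted above

# 2-2) fit a linear regression line and draw it over the scatter
model = LinearRegression().fit(x.reshape(-1, 1), y)
plt.scatter(x, y, s=10)
plt.plot(x, model.predict(x.reshape(-1, 1)), c='r')
plt.xlabel(cancer.feature_names[0])
plt.ylabel(cancer.feature_names[3])
plt.show()
```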