bro's coding
https://broscoding.tistory.com/114 Machine learning: loading the iris data

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    iris = load_iris()
    iris
    dir(iris)  # ['DESCR', 'data', 'feature_names', 'target', 'target_names']
    iris.data.shape..

https://broscoding.tistory.com/115 Machine learning: splitting out test data

    from sklearn.model_selection import train_test_split
    X_train, X_test, y..
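The excerpt above cuts off mid-split; a minimal runnable sketch, assuming scikit-learn's default 25% test split and a fixed `random_state` (my addition, for reproducibility):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# load the 150-sample iris data and hold out a test set
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)  # default test_size=0.25

print(X_train.shape)  # (112, 4)
print(X_test.shape)   # (38, 4)
```

With 150 samples and the default 25% test fraction, 38 rows land in the test set and 112 in the training set.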
https://broscoding.tistory.com/132 Machine learning: linear_model.LogisticRegression (3 classes)
(see https://broscoding.tistory.com/128 Machine learning: using datasets.make_blobs)

    from sklearn.datasets import make_blobs
    X, y = make_blobs(400, 2, [[0,0],[5,5]], [2,3])
    # 400 : number of rows (samples)
    # 2 : number of features (axes)..

As C grows, the model overfits: a larger C partitions the data more finely. 1/C = a, where C weights the (cost, loss) penalty.

    for j in range(-5,5):
        from sklearn.linear_model im..
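A sketch of the C sweep the truncated loop hints at. The blob parameters follow the excerpt; the keyword arguments and `random_state` are my additions (newer scikit-learn makes `centers`/`cluster_std` keyword-only), and the loop mirrors the excerpt's `range(-5, 5)`:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# two blobs as in the excerpt: centers (0,0) and (5,5), stds 2 and 3
X, y = make_blobs(n_samples=400, n_features=2,
                  centers=[[0, 0], [5, 5]], cluster_std=[2, 3],
                  random_state=0)

# smaller C = stronger regularization; larger C fits the training
# data more tightly (and can overfit)
scores = {}
for j in range(-5, 5):
    C = 10.0 ** j
    model = LogisticRegression(C=C, max_iter=1000).fit(X, y)
    scores[C] = model.score(X, y)

print(scores)
```

Printing the training scores across ten orders of magnitude of C makes the regularization trade-off visible on this toy data.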
https://broscoding.tistory.com/128 Machine learning: using datasets.make_blobs

    from sklearn.datasets import make_blobs
    X, y = make_blobs(400, 2, [[0,0],[5,5]], [2,3])
    # 400 : number of rows (samples)
    # 2 : number of features (axes; all are X values)
    # [[0,0],[5,5]] : center locations
    # [2,3] : standard deviation around each center
    plt.scatter(X[:..

    # prepare the data
    from sklearn.datasets import make_blobs
    X, y = make_blobs(300, 2, [[0,0],[-10,10],[10,10]], [2,3,5])
    # train the model
    from sklearn.linear_mo..
https://broscoding.tistory.com/129 Machine learning: linear_model.LogisticRegression (logistic regression)

    import numpy as np
    import matplotlib.pyplot as plt
    # prepare the data
    from sklearn.datasets import make_blobs
    X, y = make_blobs(400, 2, [[0,0],[5,5]], [2,3])

(see https://broscoding.tistory.com/128 Machine learning: datasets..)

    # predict_proba()
    # proba = probability
    # shows the probability of class 0 and of class 1
    np.round(model.predict_proba(X)[:10], 2)..
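A runnable version of the `predict_proba` snippet. The blob parameters follow the excerpt; the keyword arguments and `random_state` are my additions:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=400, n_features=2,
                  centers=[[0, 0], [5, 5]], cluster_std=[2, 3],
                  random_state=0)
model = LogisticRegression().fit(X, y)

# predict_proba returns one column per class; each row sums to 1
proba = model.predict_proba(X)
print(np.round(proba[:10], 2))
```

Each row holds the model's probability of class 0 and class 1 for that sample, which is why the two columns always sum to one.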
    def sigmoid(x):
        return 1/(1+np.exp(-x))

    xxx = np.arange(-15, 15, 0.01)
    yyy = sigmoid(xxx)
    plt.plot(xxx, yyy)
    yyy = sigmoid(xxx*0.5)
    plt.plot(xxx, yyy)
    yyy = sigmoid(xxx*10)
    plt.plot(xxx, yyy)

lim(x→∞) sigmoid(x) = 1; lim(x→−∞) sigmoid(x) = 0; sigmoid(0) = 0.5. Used when computing probability values. The most important function in neural networks; the character of the function matters a great deal. (p. 86)
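The three `plt.plot` calls above compare sigmoid(0.5x), sigmoid(x), and sigmoid(10x); numerically, scaling the input changes only the steepness, never the limits. A small sketch:

```python
import numpy as np

def sigmoid(x):
    # logistic function: -> 0 as x -> -inf, -> 1 as x -> +inf, 0.5 at 0
    return 1 / (1 + np.exp(-x))

print(sigmoid(0.0))  # 0.5 regardless of how the input is scaled
# at the same x, a steeper slope (larger input scale) is closer to 1
print(sigmoid(0.5 * 1), sigmoid(1.0), sigmoid(10 * 1))
```

This is why the probability interpretation survives rescaling: only how fast the curve saturates changes.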
    import numpy as np
    import matplotlib.pyplot as plt
    # prepare the data
    from sklearn.datasets import make_blobs
    X, y = make_blobs(400, 2, [[0,0],[5,5]], [2,3])

(see https://broscoding.tistory.com/128 Machine learning: using datasets.make_blobs)

    from sklearn.datasets import make_blobs
    X, y = make_blobs(400, 2, [[0,0],[5,5]], [2,3])
    # 400 : number of rows (samples)
    # 2 : number of features (axes; all are X values)
    # [[0,0],[5,5]] : center locations
    # [2,3] : standard deviation around each center
    plt.scatter(X[:..
    from sklearn.datasets import make_blobs
    X, y = make_blobs(400, 2, [[0,0],[5,5]], [2,3])
    # 400 : number of rows (samples)
    # 2 : number of features (axes; all are X values)
    # [[0,0],[5,5]] : center locations
    # [2,3] : standard deviation around each center
    plt.scatter(X[:,0], X[:,1], c=y, s=60, alpha=0.3)
    plt.colorbar()
    # predict the score from the wine's attributes
    import numpy as np
    import matplotlib.pyplot as plt
    wine = np.loadtxt('winequality-red.csv', skiprows=1, delimiter=';')
    # X = all attribute values
    X = wine[:, :-1]
    # y = wine grade
    y = wine[:, -1]
    # linear regression
    from sklearn.linear_model import LinearRegression
    model = LinearRegression()
    model.fit(X, y)
    # result
    w = model.coef_
    b = model.intercept_
    print('w=', w)
    print('b=', b)
    '''
    w= [ 2.49905527e-02 -1.08359026..
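The winequality-red.csv file is not included here, so the sketch below substitutes synthetic data with known weights to show what `coef_` and `intercept_` recover; the `true_w`/`true_b` names and all the numbers are my invention:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# synthetic stand-in for the wine table: 200 rows, 3 attributes
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
true_b = 3.0
y = X @ true_w + true_b  # noise-free, so the fit recovers w and b exactly

model = LinearRegression().fit(X, y)
w = model.coef_
b = model.intercept_
print('w=', w)  # ~ [ 2.  -1.   0.5]
print('b=', b)  # ~ 3.0
```

On the real wine data the recovered `w` will not match any "true" weights exactly, since the grade is noisy; this sketch only illustrates what the two attributes of the fitted model mean.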
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LinearRegression  # import missing in the excerpt
    # collect the data
    iris = load_iris()
    # choose the columns
    col1 = 2
    col2 = 3
    # scatter plot of the whole data set
    plt.scatter(iris.data[:,col1], iris.data[:,col2], c=iris.target, alpha=0.7)
    X = iris.data[:, [col1]]  # note: col1 -> [col1] keeps X two-dimensional
    y = iris.data[:, col2]
    # LinearRegression
    model = LinearRegression()
    # train
    model.fit(X, y)
    # predict
    pred_y = mo..
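The excerpt cuts off at the prediction step; a hedged completion under the same column choices (petal length predicting petal width), with the plotting left out so the numeric part stands alone:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression

iris = load_iris()
col1, col2 = 2, 3
X = iris.data[:, [col1]]  # [col1] keeps X two-dimensional
y = iris.data[:, col2]

model = LinearRegression().fit(X, y)
pred_y = model.predict(X)
print(model.score(X, y))  # petal width tracks petal length closely
```

The high R-squared here is why these two columns make a good first regression demo.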
1. For the iris data attributes SL and SW (columns 0 and 1), draw a linear regression line for each flower class. Model: linear regression. https://broscoding.tistory.com/126?category=855525
1-1. For the iris data attributes SL and SW (columns 0 and 1), compute MSE, RMSE, MAE, and model.score(X, y). Model: linear regression. https://broscoding.tistory.com/126?category=855525
2. From the wine data file winequality-red (loaded with numpy loadtxt, not sklearn's loader), use a linear regression model to predict the grade..
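For exercise 1-1, a sketch of the four metrics on the columns the exercise names (0 and 1 for SL and SW); using the helpers from sklearn.metrics is my choice, not something the exercise mandates:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error

iris = load_iris()
X = iris.data[:, [0]]  # SL (sepal length); [0] keeps X two-dimensional
y = iris.data[:, 1]    # SW (sepal width)

model = LinearRegression().fit(X, y)
pred = model.predict(X)

mse = mean_squared_error(y, pred)   # mean squared error
rmse = np.sqrt(mse)                 # root of the MSE
mae = mean_absolute_error(y, pred)  # mean absolute error
r2 = model.score(X, y)              # R^2 on the training data
print(mse, rmse, mae, r2)
```

RMSE is just the square root of MSE, so it lives in the same units as y, which makes it the easiest of the three errors to read.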