[Deep Learning] #2 History of Artificial Intelligence / XOR Problem of Perceptron / Artificial Neural Network (ANN)

IT/Tech | Eng

2020. 8. 23. 17:26

Last time we looked at what artificial intelligence is. Put simply, artificial intelligence can be summed up as a "system for recreating human intelligence."

The history of artificial intelligence began in the 1950s, around the time Alan Turing's work appeared. In the mid-1960s, a chatbot system called ELIZA was developed. ELIZA is often called artificial intelligence, but it was not the kind of AI that acquires knowledge by learning on its own; it was really just a program that branched on predefined patterns and produced a prepared answer for each situation.

XOR problem of Perceptron

= a perceptron cannot handle the exclusive-OR (XOR) logic gate

Early in the history of AI, development that had seemed to be going well ran into a problem and slid into a slump. That problem is known as the XOR problem of the perceptron. If you are just starting out with artificial intelligence, you naturally won't know what a perceptron is yet, and that is fine for now. I'll cover it in a later post; for the moment, think of it as a model that mimics a single tiny neuron in order to implement AI.


Looking at the graph of the left-most XOR gate, the problem is that the + and - results cannot be separated by a single straight line (i.e., linearly). The OR gate and the AND gate can each be split into + and - with one line, but for XOR it is impossible! It looked as if the history of AI, which had been booming, might end right there, but Professor Marvin Minsky suggested that it could be solved by using multiple layers.
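
If you want to see this concretely, below is a minimal sketch in Python/NumPy (my own illustration, not code from the original post) of a classic single-layer perceptron trained with the perceptron learning rule. It reaches perfect accuracy on AND and OR, which are linearly separable, but it can never get all four XOR cases right.

import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    # Classic perceptron learning rule with a hard step activation.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)
            w += update * xi
            b += update
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for name, targets in [("AND", np.array([0, 0, 0, 1])),
                      ("OR",  np.array([0, 1, 1, 1])),
                      ("XOR", np.array([0, 1, 1, 0]))]:
    print(name, "accuracy:", train_perceptron(X, targets))
# Expected: AND and OR reach 1.0, while XOR stays stuck (at best 3 of 4 correct).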

MLP (Multilayer Perceptron)

By using two layers (blue) instead of a single layer, as shown above, it turned out that the XOR results can be separated with two straight lines, as in the far-right graph. At the time, however, this concept could only be modeled theoretically; it was too complex to actually implement. (You don't have to understand the picture above in detail. I will go over it properly later, so non-specialists only need to know that a model of this shape exists.)


In addition, Minsky pointed out a limitation of this MLP model: there was no way to train it. That problem was later solved by the backpropagation (error back-propagation) algorithm.
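
As a rough illustration of that fix, here is a small sketch (my own example; the layer size, learning rate, and iteration count are arbitrary choices, not values from the post) that trains a tiny two-layer network on XOR with backpropagation of a squared-error loss. Unlike the single-layer perceptron above, it can actually learn the XOR mapping.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)               # forward pass: predictions
    d_out = (out - y) * out * (1 - out)      # backward pass: output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # backward pass: hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # should end up close to [0, 1, 1, 0]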

AI stagnation period

1970~1990

That is how the first AI winter came, from 1974 to 1980, followed by a second AI winter from 1987 to 1993. These were the periods when AI development stagnated.

 

In 1997, IBM's chess machine Deep Blue defeated then-world chess champion Garry Kasparov in a showdown, in a game people thought was purely human territory. At the time, though, Deep Blue was able to win by considering essentially all the cases that could come out of a chess game; apply the same approach to Go and the number of cases becomes as large as the number of atoms in the universe, making it impossible to handle that way.


Thus, models like Deep Blue that investigate every possible situation are limited when applied to other areas. The result was shocking to the public, but not enough to change the paradigm academically.


The advent of the Neural Network

Artificial Neural Network: ANN

Over time, a model called the artificial neural network appeared. Using this model, the XOR problem could finally be solved in the way Professor Marvin Minsky described, by using multiple layers. (Let's look at neural networks in more detail in future posts.)
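
If you would rather not write the math by hand, an off-the-shelf library does the same job. The snippet below uses scikit-learn's MLPClassifier (a library and settings I chose for illustration; the original post does not mention it) to learn XOR with one small hidden layer.

from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# One hidden layer of 4 tanh units is enough to separate XOR.
clf = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    solver="lbfgs", random_state=0, max_iter=2000)
clf.fit(X, y)
print(clf.predict(X))   # typically prints [0 1 1 0]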

In 2009, Google started a project to build a self-driving car along these lines, and in 2016 AlphaGo, from Google subsidiary DeepMind, won at Go, a field people believed was truly a human domain, shocking the world.

Since then, it seems there is hardly any area AI cannot reach. It has even been shown that AI can be used in the realm of creative work, which was also thought to be a human domain. So far, we have briefly covered the history of artificial intelligence. Next time, let's talk about machine learning and deep learning in earnest.


#Deeplearning #Basic #Artificialintelligence #Perceptron #XORProblem #Artificialneuralnetwork #ANN
