Artificial Intelligence
Nowadays, technology has advanced to the point of creating robots built on a concept called 'artificial intelligence'. Artificial intelligence is the area of computer science focused on creating machines that can engage in behaviors humans consider intelligent. With AI programming techniques, the dream of smart machines is becoming a reality: researchers are building systems that can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. The military is already applying AI logic to its high-tech systems, and in the near future artificial intelligence may affect our everyday lives. A reading on this subject, 'Cooperating With New Intelligence' by Michael Anissimov, takes up these issues. He argues that today's technology is already sufficient to create artificial general intelligence, and that we should therefore be careful as we pursue this advance, because such machines are not merely robots that repeat the same task: they are capable of self-improvement and of a kind of moral sense. In his view, supercomputers are powerful, and artificial intelligence should be given a goal system that is not harmful to human beings. This argument is valid and acceptable because it is consistent with his claim that artificial general intelligence can in fact be created.
Firstly, he discusses the self-improvement of artificial intelligence. He claims that an artificial intelligence should be capable of self-improvement rather than having a few fixed functions that do the same thing with no improvement. He states that through artificial general intelligence a computer becomes intelligent enough (probably around a human level) to start improving its own source code. This creates a loop in which it becomes faster and faster each time it improves itself, and in a short amount of time it could become more intelligent than anything that came before it. There is then a high chance that it no longer needs people; it may even be "right" in its reasoning that humans are not needed. Therefore we should be careful not to make an "unfriendly" AGI.
While creating such a robot, we should not see it as merely a machine controlled by us; we should see it more like a human being, because it too is capable of self-improvement. We create an artificial intelligence with some initial knowledge, and then it can improve itself. This is exactly why we must be careful while creating them: if they can improve themselves, they could bring about the end of human life, so we should determine their roles first and let them improve themselves within those roles. Michael Anissimov states that "the first AGI we create may be the last AGI we create (or the last piece of technology, for that matter). All other AGIs will build each other or themselves. Simply put, all of humanity's future well-being may be contingent on programmers getting the top-level
...
...