Past and Future of Artificial Intelligence
Artificial intelligence can be briefly defined as “the activity dedicated to giving intelligence to machines”. Natural intelligence, on the other hand, is the quality that enables an entity to function appropriately in its environment and to anticipate what will happen. Many beings (humans, animals) are intelligent; they span a broad spectrum of degrees of intelligence, with some animals at the primitive end and humans at the other. Humans can reason, plan to achieve goals, understand a language and form sentences in that language, perceive and react to sensory input, analyse and prove mathematical theorems, synthesise and summarise information, and create works of art and music.
Functioning in this way requires many different skills, and the ability of human beings to perform one or more of them rests on a continuum of several intelligences. Their central control arises from chemical reactions in the brain inside our skull, which contains about 100 billion nerve cells called neurons. In today’s conditions, however, no Artificial Intelligence model capable of doing many of these things has yet been developed. The limited capabilities that make up Artificial Intelligence arise from a multidisciplinary chain of processes requiring advanced knowledge of mathematics, physics, statistics, linguistics and logic, alongside control engineering, electrical and electronics engineering and software engineering. Artificial Intelligence is directly tied to the technological level reached by computer systems, and a computer system includes both hardware and software.
Machines that embody human abilities have long been imagined: automata that move, devices that reason, and so on. Human-like machines have been depicted in many stories, sculptures, paintings and drawings of the past. In Greek legend, the sculptor Pygmalion carves the beautiful maiden Galatea from ivory, and Aphrodite brings the statue to life. The Greek philosopher Aristotle also dreamed of automation and wrote in his Politics: “Imagine that every tool in our hands performed its own work either at our command or in anticipation of need. If the shuttle of the loom moved by itself and the plectrum of the lyre played by itself, then master craftsmen would need no servants, nor masters slaves…”
Around 1495, Leonardo Da Vinci drew humanoid robot designs in the form of a medieval knight. Leonardo’s knight could sit upright, move his arms and head, and open his jaw.
In the Talmud, artificial beings called “golems” were, like Adam, fashioned mostly from earth. Stories of rabbis using golems as servants tell of the difficulties of keeping them under control.
In 1651, Thomas Hobbes hinted that it would be possible to build “artificial animals” in his work Leviathan, which deals with the social contract and the ideal state. For this reason, historians of science call Hobbes “the ancestor of Artificial Intelligence”.
The mechanical duck designed and built by the French inventor Jacques de Vaucanson is considered the first example of robotic beings. This duck quacked, flapped its wings, waddled, drank water and ate and ground grain. This prototype of Vaucanson is considered a marvellous piece of engineering.
In the first act of Jacques Offenbach’s opera “The Tales of Hoffmann”, the life-sized mechanical doll Olympia sings and dances…
The best known of such stories about the development of intelligent machines is the 1921 play R.U.R. (Rossum’s Universal Robots) by the Czech playwright Karel Čapek. The term “ROBOT” was first used for the artificial human-like workers in this play. The word derives from the Czech “robota”, meaning “forced labour” or “drudgery”, and refers to slave-like workers; “robotnik” means a serf or labourer.
The play premiered in Prague in January 1921. In the play, robots are mass-produced on an island where the Rossum’s Universal Robots factory is located; a chemical substitute for protoplasm is used in their production. Describing the play, Čapek wrote that “the robots remember everything but think of nothing new”. The human managers of the factory treat any unexpected behaviour in a robot as a manufacturing defect, but Helena prefers to interpret it as a sign of a soul. In a newspaper article written in 1935, the author stated that machines may one day replace human beings.
The science fiction writer Isaac Asimov, a biochemistry professor by training, wrote numerous stories about robots. His first collection, I, Robot, consists of nine stories about “positronic” robots. In reaction to science fiction in which robots destroy and ruin everything, Asimov built his famous “Three Laws of Robotics” into the positronic brains of his robots.
- First Law: A robot may not harm a human being or, through inaction, allow a human being to come to harm.
- Second Law: The robot must obey human commands as long as they do not contradict the First Law.
- Third Law: The robot must preserve its own existence as long as it does not contradict the First or Second Law.
In summary, the stories of intelligent machines from the past have been nourished by approaches from engineering disciplines such as mechanics, physics and statistics, as well as from philosophy, psychology, logic and even biology. In the adventure of automating aspects of intelligence, the clues accumulated in these fields have reached the present day with ever-increasing intensity.
The first person to analyse and formalise the process of logical reasoning was the Greek philosopher Aristotle. His syllogisms follow patterns such as:
- All living beings are mortal.
- All human beings are living beings.
- Therefore, all human beings are mortal.
This contribution of Aristotle is directly related to the earliest design ideas of ARTIFICIAL INTELLIGENCE. Aristotelian logic provides two clues towards automation. First, such patterns are built from general symbols that can stand for many different concrete situations. Second, once the general symbols are replaced by symbols specific to a particular problem, one need only “turn the crank” to arrive at an answer. The use of general symbols and such substitution operations is the basis of all modern Artificial Intelligence reasoning programmes.
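To make the idea concrete, here is a minimal sketch (my own illustration, not from the text) of a reasoning pattern written with general symbols and then instantiated by substituting concrete terms:

```python
# Aristotle's idea in miniature: the pattern "All X are Y" and "All Y are Z"
# entails "All X are Z"; concrete terms are substituted for the general symbols.

def syllogism(premise1, premise2):
    """Each premise is a pair (X, Y) meaning 'All X are Y'."""
    x, y1 = premise1
    y2, z = premise2
    if y1 == y2:                      # the middle term must match
        return (x, z)                 # conclusion: 'All X are Z'
    return None

print(syllogism(("humans", "living beings"), ("living beings", "mortal")))
# -> ('humans', 'mortal'), i.e. "All humans are mortal"
```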
Gottfried Wilhelm Leibniz was one of the first to think about mechanising logical reasoning. A German philosopher, mathematician and logician, Leibniz was, among many other achievements, one of the inventors of the calculus (differential and integral calculus) and is famous for his dispute with Isaac Newton over priority on this subject. Leibniz wrote in one of his essays: “It is a pity that excellent men spend hours slaving over calculations when, if machines were used, this work could safely be left to others.”
Leibniz attempted to conceive and design a language in which all human knowledge, even the knowledge of philosophy and metaphysics, could be expressed. He argued that the propositions constituting knowledge could be constructed from a smaller number of basic propositions. His “Lingua Characteristica” was to be a new universal language composed of these basic propositions, forming an alphabet of human thought. The main obstacle to implementing the idea was discovering the components of this basic “alphabet”.
In the early 19th century, the British scientist and politician Charles Stanhope built and experimented with devices that solved simple logic and probability problems. In today’s terms, Stanhope’s device can be regarded as a kind of analogue computer.
In 1854 George Boole, an English mathematics teacher, published a book entitled “An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities”. Boole’s aim (among other things) was to capture in mathematical form some of the laws by which the human mind reasons. He reviewed various logical principles of human reasoning and tried to express them mathematically. In the Boolean algebra that grew out of his work, 0 represents false and 1 represents true, and the two basic operations of logic, “OR” and “AND”, are represented by (+) and (×) respectively.
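As a minimal sketch of this correspondence (my own illustration, not Boole’s notation), the two operations can be written directly in terms of 0s and 1s:

```python
# Boolean algebra in miniature: 0 = false, 1 = true; OR behaves like addition
# (capped at 1) and AND like multiplication.

def OR(a, b):
    return min(a + b, 1)   # true OR true is still true

def AND(a, b):
    return a * b

print(OR(0, 1), AND(0, 1), AND(1, 1))   # -> 1 0 1
```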
The brain is undoubtedly the organ responsible for turning the inner feelings and acquired knowledge of humans and animals into action. Research on how the brain carries out these processes has shown that its main components are the nerve cells, or neurons. For the neurophysiologists and psychologists who study them, a main goal has been to understand how these cells, shaped by the evolutionary processes that produced intelligent life, give rise to intelligent behaviour.
At the beginning of the 20th century it was recognised that the living cells called “nerve cells”, together with their interconnections, are the basis of brain function. Santiago Ramón y Cajal, a Spanish neuroanatomist, was one of the proponents of this view. According to Cajal, the nerve cell is a living cell, and there are about 100 billion of them in the brain. Although they take different forms, they generally consist of a central part containing the nucleus, called the cell body (soma), input branches called dendrites and an output fibre called the axon. At the end of the axon, protrusions called terminal buttons come close to one or more dendrites of other nerve cells. The gap between a terminal button of one nerve cell and a dendrite of another is called a synapse; it is about 20 nanometres wide.
Through electrochemical action, a nerve cell can send a series of pulses (electrical impulses) along its axon; this is commonly known as “firing”. When a pulse reaches a synapse adjoining the dendrite of another nerve cell, it can either trigger electrochemical activity in that cell or inhibit it. Whether this second nerve cell fires in turn depends on how many and what kind of pulses (excitatory or inhibitory) arrive at the synapses of its various dendrites, and on how effectively those synapses transmit. It is estimated that there are more than half a trillion synapses in the human brain. The nerve cell doctrine claims that various brain activities, including perception and thinking, are directly linked to nerve cell activity.
In 1943, the American neurophysiologist Warren McCulloch and the logician Walter Pitts proposed that the nerve cell is essentially a “logic unit”.
In their famous and important paper, they proposed simple models of nerve cells and showed that networks composed of these models could perform all possible computational operations. McCulloch and Pitts treated the “neural cell” as a mathematical abstraction with inputs and outputs (roughly corresponding to dendrites and axons, respectively). Each output can take the value 1 or 0. These neural elements can be connected in a network so that the output of one element is an input of other elements. Some neural elements are excitatory: their output contributes to the firing of the elements to which they are connected. Others are inhibitory: their output contributes to suppressing the firing of the elements to which they are connected. If the sum of the excitatory inputs reaching a neural element, minus the sum of the inhibitory inputs, exceeds a certain “threshold”, the element fires and sends the output 1 to all the elements to which it is connected.
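A minimal sketch of such a threshold unit, assuming the simple subtract-and-compare rule described above (the function name and the threshold value are my own illustration):

```python
# A McCulloch-Pitts-style unit: binary inputs, fixed threshold,
# output 1 when excitation minus inhibition reaches the threshold.

def threshold_unit(excitatory, inhibitory, threshold):
    """excitatory, inhibitory: lists of 0/1 inputs."""
    net = sum(excitatory) - sum(inhibitory)
    return 1 if net >= threshold else 0

# With threshold 2, the unit computes logical AND of two excitatory inputs:
print(threshold_unit([1, 1], [], threshold=2))   # -> 1
print(threshold_unit([1, 0], [], threshold=2))   # -> 0
```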
The Canadian neurophysiologist Donald O. Hebb also contributed to the view that the nerve cells of the brain are the basic units of thought. In his book, Hebb wrote: “When the axon of cell A is near enough to cell B to take part repeatedly or persistently in firing it, some growth process or metabolic change takes place in one or both cells such that the efficiency of cell A, as one of the cells firing cell B, is increased.” This thesis about changes in synaptic strength is known as Hebb’s rule, and it has indeed been observed in later experiments on living animals.
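A minimal sketch of Hebb’s rule in numerical form (the learning rate and the toy activity values are my own illustration): the connection weight between two units grows when they are active together.

```python
# Hebb's rule: increase the weight in proportion to joint activity.

def hebb_update(weight, pre_activity, post_activity, learning_rate=0.1):
    return weight + learning_rate * pre_activity * post_activity

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = hebb_update(w, pre, post)
print(w)   # the weight grew only on the trials where both cells fired
```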
Hebb also proposed that groups of nerve cells that fire together form “cell assemblies”. He thought that such joint firing occurs constantly in the brain and leads to the formation of the cell assembly representing a perception. According to Hebb, “thinking” is the sequential activation of cell assemblies.
Psychology is the study of mental processes and behaviour. The word is derived from the Greek words psyche (breath, soul or spirit) and logos (science). Until the second half of the 19th century, most theories in psychology were based on the insights of philosophers, writers and other astute observers of humanity.
The psychiatrist Sigmund Freud went further and hypothesised that the mind has internal components called the id, the ego and the superego, and explained how these interact and influence behaviour. The behaviourists, on the other hand, rejected the idea of describing internal states of mind such as beliefs, intentions, desires and goals, arguing that psychology should be a science of behaviour, not a science of the mind.
The behaviourist B. F. Skinner’s work gave rise to the idea of reinforcing stimuli, i.e. stimuli that reward recent behaviour and make it more likely that this behaviour will occur again in the future (under similar circumstances). Reinforcement learning has been a favoured strategy of Artificial Intelligence researchers.
The psychologist George A. Miller has concluded that the human capacity for “immediate memory” is about seven “chunks” of information. It does not matter what the chunk represents, whether it is a single digit in a telephone number, a person’s name that has just been spoken, or the title of a song; we can hold only seven (plus or minus two) of these chunks in our immediate memory.
In 1960 Miller and his colleagues wrote a book arguing that behaviour is organised by an internal mechanism called the TOTE (Test-Operate-Test-Exit) unit. The TOTE unit uses its own perceptual abilities to test whether its goal has been achieved. If the goal has been achieved, the unit exits. If not, it performs operations specific to reaching the goal and then tests again. This cycle is repeated until the goal is reached.
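A minimal sketch of a TOTE loop (the toy goal of counting up to a target, and the function names, are my own illustration):

```python
# Test-Operate-Test-Exit: keep operating until the test succeeds, then exit.

def tote(state, goal_reached, operate):
    while not goal_reached(state):   # Test
        state = operate(state)       # Operate, then Test again
    return state                     # Exit

print(tote(0, goal_reached=lambda x: x >= 5, operate=lambda x: x + 1))  # -> 5
```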
Cognitive science and Artificial Intelligence have been closely related since their beginnings. Cognitive science has been the main source of clues for AI researchers, while AI has helped cognitive science with newly invented concepts useful for understanding the workings of the mind.
The fact that living things evolve gives two clues about how to build intelligent artefacts. One is that the processes of evolution, i.e. random variation and selective survival, can be simulated on computers to build the machines we imagine. The other is that the paths evolution followed in creating increasingly intelligent animals can be used as a guide for creating ever more intelligent artefacts.
Artificial Intelligence designs guided by this second clue start by imitating animals with simple behaviours and progress towards more complex ones.
The first significant Artificial Intelligence work inspired by biological evolution was John Holland’s development of “genetic algorithms” in the 1960s. Holland, a professor at the University of Michigan, represented candidate solutions as strings of binary symbols (0s and 1s), which he called “chromosomes” by analogy with the genetic material of biological organisms.
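A minimal sketch of a genetic algorithm over binary chromosomes (the toy fitness function, population size and other parameters below are my own illustration, not Holland’s originals):

```python
# Random variation (mutation, crossover) plus selective survival over
# populations of binary "chromosomes".
import random

def fitness(chromosome):              # toy fitness: count of 1-bits
    return sum(chromosome)

def evolve(pop_size=20, length=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # selective survival
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)          # crossover point
            child = a[:cut] + b[cut:]
            i = random.randrange(length)               # random mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(evolve()))   # typically close to the maximum of 16
```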
Rather than replicate evolution itself, some researchers have chosen to build machines that try to follow evolutionary paths to intelligent life. The British neurophysiologist W. Grey Walter built a number of machines that acted like some of the most primitive forms of life. Walter’s work marked the beginning of the journey of the increasingly sophisticated “machine in action” that later researchers would develop.
The Swiss psychologist Jean Piaget carefully studied the behaviour of young children and suggested that, from infancy to adolescence, they pass through a series of stages in the maturation of their thinking skills. These stages have been seen as guiding steps for the designers of intelligent machines, and developmental clues of this kind have also inspired later learning-based approaches in Artificial Intelligence.
At a symposium in 1960, Major Jack E. Steele of the US Air Force used the term “bionics” to describe the field that applies the lessons of nature to technology. Bionics looks to living systems, which have adapted themselves to the most favourable conditions over millions of years of evolution. One of the most extraordinary achievements of that evolution is the ability of living systems to process information.
Incidentally, mechanical infrastructure (production lines, robotics, construction, machines) is often an essential component of Artificial Intelligence systems. Self-moving machines, even machines that do useful work on their own, have existed for a long time. Among the first examples are weight-driven clocks. Such clocks were first used in the towers of Italian cities in the later Middle Ages, but are known to have been invented in China much earlier.
One of the first automatic machines used in production was Joseph-Marie Jacquard’s loom, built in 1804. Its operation built on the long history of the loom and on the “punched card” design of Vaucanson’s loom of 1775. Jacquard, the famously lazy son of a workshop owner who ran classical looms at the time, was clearly inspired by these developments. The punched cards he used in his design made the automatic weaving of fabric patterns possible by controlling the movements of the shuttles. Just a few years after its invention, about 10,000 Jacquard looms were weaving in France, and about 20,000 weavers of the highly sought-after craft were out of work. The idea of using punched cards or paper was later used by Herman Hollerith to sort the 1890 US census data, and would also be used in automatic (player) pianos.
For these early machines, it was sufficient to provide an external source of energy (a falling weight, a compressed spring, people pedalling). Beyond that, their movements were fully automatic, but they had no feedback mechanisms for sensing their environment. Yet sensing environmental conditions through a feedback loop is an extremely important criterion for placing a machine in the class of intelligent behaviour.
Feedback control is one of the simplest ways of letting sensed information influence the behaviour of a machine. If the fed-back signal acts to reduce or counteract the measured characteristic of the behaviour, the process is called “negative feedback”; if it acts to increase or reinforce that characteristic, it is called “positive feedback”. A classic example is the float regulator designed by the Greek inventor and barber Ctesibius of Alexandria around 270 BC. By controlling the flow of water, this device maintained the water level at a constant depth in the reservoir that fed a water clock. The feedback device was a float valve consisting of a cork attached to the end of a rod. The water level in modern flush toilets is regulated in much the same way. Around 250 BC, Philon of Byzantium used a similar float regulator to keep the oil level constant in an oil lamp. In summary, feedback is a critical factor that plays an important role in engineering.
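A minimal sketch of negative feedback in the spirit of Ctesibius’s float regulator (the numbers, the valve law and the function name are my own toy model, not a description of the historical device):

```python
# Negative feedback: the inflow valve opens in proportion to the error
# between the target depth and the current level, driving the error down.

def simulate(level=0.0, target=10.0, outflow=0.5, steps=30):
    for _ in range(steps):
        error = target - level          # what the float "senses"
        inflow = max(0.0, 0.3 * error)  # valve opening proportional to error
        level += inflow - outflow
    return level

print(round(simulate(), 2))   # settles near the target, with the small offset
                              # typical of purely proportional control
```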
Dolls with moving arms, pre-dating Greek civilisation, have been found in ancient Egyptian tombs. These toys are the first known works in this field. More complex devices are also found in ancient Greece. The best known of these are the water clocks of the Greek scholar Ctesibius (3rd century BC), the water-powered systems of Philon of Byzantium (2nd century BC) and the air-pressure and steam-powered mechanisms of Heron of Alexandria (1st century AD). However, these works are not comparable to the works of Al-Jazari. Al-Jazari’s systems also work with water, steam power and air propulsion, but the sensitivity of his feedback mechanisms is remarkably skilful.
The book written by Al-Jazari in the 1200s, translated as “The Book of Ingenious Mechanical Devices” or more broadly as “the combination of science and technique and the art of imagination”, is very famous in the West. The original title of the book is abbreviated as “Sanat el Hiyel”. Beyond the fact that many of the inventions are toys, the realisation of their energy sources, control mechanisms and feedback systems with water, steam power and air propulsion is far beyond its age and almost miraculous for its time and place. Aesthetic concerns and a sense of humour are at the forefront of the inventions. Al-Jazari dealt with these issues in the East much earlier than Descartes, Jacquard, Pascal, Leibniz, Bacon, Ampère and even Norbert Wiener in the West. Moreover, long before James Watt, he devised systems that balance themselves according to thermodynamic principles.
The most striking use of feedback control for its time was the centrifugal governor developed by James Watt in 1788 to regulate the speed of the steam engine. As the speed of the engine increases, the governor balls swing outwards and a linkage connected to them reduces the steam supply, so the engine slows down; as the speed falls, the balls move inwards, the steam supply increases and the speed rises again. In this way a balanced speed is maintained.
In the first half of the 1940s, Norbert Wiener and some other scientists drew attention to the similarities between the characteristics of the feedback control systems of machines and animals.
Wiener first used the term “CYBERNETICS” in an article in 1943. The word derives from the Greek “kybernetike”, the art of steering; the related Latin word is “gubernator” (helmsman). Cybernetics is the science of communication, balance and automatic regulation. Some Western scholars have accorded Al-Jazari the honour of being a founder of cybernetics.
The British psychiatrist W. Ross Ashby contributed to the field of cybernetics with his research on “ultrastability” and internal balance (homeostasis). For Ashby, ultrastability is the capacity of a system to return to a balanced state under a variety of environmental conditions. The electromechanical device he built, which he called the “Homeostat”, consisted of four units with movable magnets whose positions depended on one another through feedback mechanisms. When the position of any magnet was disturbed, the combined effect of the other magnets and of that magnet itself brought all the magnets back to equilibrium.
The ideas inspired by Ashby’s device have played an important role in AI research.
Dealing with uncertainty is important in automating intelligence. Attempts to quantify uncertainty and the “laws of chance” gave rise to statistics and probability theory. One key result, Bayes’ rule, which relates belief in a hypothesis before and after observing evidence, is named after the English clergyman Thomas Bayes. Bayes’ rule is at the centre of much modern work in Artificial Intelligence.
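A minimal sketch of Bayes’ rule, P(H|E) = P(E|H)·P(H) / P(E), applied to a made-up diagnostic example (all the numbers are illustrative, not from the text):

```python
# Updating a prior belief P(H) after seeing evidence E.

def bayes(p_e_given_h, p_h, p_e_given_not_h):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # total probability
    return p_e_given_h * p_h / p_e

# Prior belief 1%, a test that detects 90% of true cases, 5% false positives:
print(round(bayes(0.9, 0.01, 0.05), 3))   # -> 0.154, the updated belief
```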
Leibniz’s and Boole’s propositions are considered to be the first attempts to lay the foundation for what would later become Artificial Intelligence “software”. But reasoning and all other aspects of intelligent behaviour require a physical engine beyond software. In humans and animals, this driving force is the brain. As we shall see later, the first networks of nerve cell-like units were conceived in physical form. However, many clues from logic, neurophysiology, cognitive science, etc. required more powerful motors to explore the ideas embedded in them. The inventions of McCulloch, Wiener, Walter, Ashby and others led to the idea of a very powerful and versatile digital computer for the mechanisation of intelligence. This machine became the platform that provided the main engine for all these ideas and more. This infrastructure is the most dominant hardware on the road to the automation of intelligence.
In 1642 Blaise Pascal built the first of about fifty calculating machines of his own design. It was an adding machine that could carry a digit from one place to the next. The device could fit on a writing desk and consisted of a box with many gear wheels.
Inspired by Pascal’s machines, Gottfried Leibniz built a mechanical calculating device, the “Stepped Reckoner”, in 1674. This device could add, subtract and multiply (by repeated addition): “To multiply a number by five, it was only necessary to turn the shaft five times.”
A particularly interesting machine, extremely difficult to understand in its day, was designed by the English mathematician Charles Babbage in 1822. The task of this “Difference Engine” was to calculate mathematical tables using the method of finite differences. Between 1834 and 1837 Babbage worked on the design of an “Analytical Engine”, which incorporated most of the ideas necessary for general computation. It held numbers and intermediate results in a “store”, performed arithmetic in a unit called the “mill”, and was programmable. However, while trying to realise its steam-powered, intricate brass wheels and cams, Babbage ran into funding problems and was unable to complete the project.
Ada Lovelace, Lord Byron’s daughter, is considered the “world’s first programmer” because of her alleged role in designing a programme for the Analytical Engine. However, it has also been claimed that there is no concrete evidence that Lovelace actually designed such a programme.
On the other hand, the most detailed information about Babbage’s machine comes from Lovelace’s notes. Lovelace wrote: “The Analytical Engine has no pretence of inventing anything. Its aim is to be able to do everything we know how to command it to do.”
In the early 1940s, electromechanical relays were used in the first computers. Soon vacuum tubes (thermionic valves, as they were called in Britain) took their place, because they enabled faster and more reliable calculation. Today’s computers are built from chips containing billions of tiny transistors arranged on silicon wafers.
THEORY OF COMPUTATION
Alan Turing, a British logician and mathematician, claimed that his imaginary machine, which he called the “logical computing machine” (LCM) and which is now called the “Turing machine”, could not only compute many mathematical functions but might also capture patterns of human thought. Although this claim has been strongly supported by logicians, it has not yet been completely proven.
The Turing machine is a hypothetical computing machine that is quite simple to understand. It consists of only a few components: an unbounded tape divided into cells, a head that reads and writes symbols on the tape, and a finite table of rules that determines what the machine does next.
Turing showed that for any computable function a suitable control unit (a specific machine) can be specified, and that the function can then be computed by that machine. More importantly, he also showed that the instructions of any such special-purpose control unit can be encoded on the tape itself and executed by a single general-purpose control unit: a universal machine for all problems. The encoding of the special-purpose unit can be thought of as the machine’s “programme”. However, it is still not possible today to write a set of rules that specifies what a human being would do in every conceivable circumstance.
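A minimal sketch of a Turing machine simulator (the rule table below, which simply inverts a string of bits, is my own toy example):

```python
# A tape, a head and a rule table: each rule maps (state, symbol) to
# (new state, symbol to write, head movement).

def run_turing_machine(tape, rules, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        symbol = tape[head]
        state, write, move = rules[(state, symbol)]   # look up the rule
        tape[head] = write                            # write a symbol
        head += 1 if move == "R" else -1              # move the head
    return "".join(tape)

rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}
print(run_turing_machine("0110", rules))   # -> "1001"
```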
DIGITAL COMPUTERS
Claude Shannon (1916-2001), an American mathematician and inventor, developed some of the key ideas for designing logic circuits for computers. In order to simplify telephone switching circuits, Shannon showed that Boolean algebra and binary arithmetic could be used to analyse and design them, and conversely that such circuits could implement the operations of Boolean logic.
Konrad Zuse’s Z3 was arguably the world’s first fully functional program-controlled (freely programmable) computer. The Z3 was completed in Berlin in May 1941, presented to an audience of scientists, and destroyed during an Allied air raid in December 1943. The Z3 used 2,400 electromagnetic relays instead of vacuum tubes.
The working principles of EDVAC, the first stored-programme computer design, produced by John von Neumann (1903-1957), a Hungarian-born American mathematician regarded as one of the most brilliant minds of his time, are the forerunner of today’s computers. This design is known as the “von Neumann architecture” (the term “von Neumann bottleneck” later came to describe the single data path between its processor and memory). Its most important feature is that the programme itself is stored in memory, which distinguishes it from machines whose sequences of instructions are wired directly into hardware circuits.
In many early computers, the programme was wired directly into the circuits. Other stored-programme computers were designed and built in the 1940s in Germany, Great Britain and the USA. These were very bulky machines, used in Great Britain and the USA mainly for military purposes. ENIAC, often cited as the first computer, was one of them.
The importance of stored-programme digital computers lies in the fact that they can be used for any purpose.
THINKING COMPUTERS
After the first computers were built, Turing reasoned that if they were truly universal, they should be able to do anything computable. In 1948 he wrote: “The importance of the universal machine is obvious. There is no need for an infinity of different machines for different jobs; one machine will do all the work.”
Among the things Turing thought a computer could do was imitate human intelligence. He believed that the domain of computability encompassed much more than explicitly written sequences of instructions, so much so that he predicted it would be large enough to cover everything the human brain does, no matter how creative or original. He also believed that machines of sufficient complexity would have the capacity to improve their own behaviour.
The first modern paper dealing with the possibility of fully mechanising human-style intelligence was published by Turing in 1950. This paper, “Computing Machinery and Intelligence”, is famous for several reasons. Firstly, it is here that Turing posed the question “Can machines think?”. He also argued that the question of how intelligent machines are could be addressed by his own proposal, the “Turing test”.
The Turing test is described in the context of an “imitation game”. The game is played by three people: a man (A), a woman (B) and an interrogator of either sex (C). The interrogator sits in a room separate from the other two. The interrogator’s aim in the game is, for example, to determine which of the two people is the man and which the woman; he knows them only by the labels X and Y. Answers are given in writing, so that tone of voice cannot help the interrogator. When A is replaced by a machine, the outcome depends on whether the interrogator notices the substitution. To summarise, the test rests on the performance of a computer trying to convince the interrogator in the “other room” that it is a human being. If the imitation is successful, i.e. if the machine manages to mislead the interrogator, such a computer can be said to “think” and is classed as an intelligent machine. The evaluation of the result also answers the question “Can machines think?”. At this point we should note that the consequences of “thinking machines” for human life in the future will be both very important and rather frightening.
The third important feature of Turing’s 1950 paper is his suggestion on how to begin producing programmes with human-level intellectual capabilities. Towards the end of the paper he states: “Instead of trying to create a programme that imitates the adult mind, it is better to create one that imitates the child’s mind. When this programme is properly educated, the adult brain can be obtained.” (The Artificial Intelligence-based chat programmes that have become very popular today, ChatGPT-4, Bing and the like, look like candidates for approaching this result.)
Throughout the 1950s, researchers armed with general-purpose digital computers set out to explore various avenues for mechanising intelligence. Some, firm believers in the symbol system hypothesis, began programming computers to perform some of the intellectual tasks that humans are capable of. At about the same time, other researchers began to explore different approaches, often drawing their inspiration from the work of McCulloch and Pitts on networks of nerve cell-like units and from statistical analyses of decision-making. The distinction between the two families of methods persists today.
FIRST DISCOVERIES
If machines are to be intelligent, they must at least be able to perform thought-related tasks that humans are capable of. The first steps in the search for intelligence have been to identify “some” tasks that require intelligence and to determine how to get machines to do them. Solving puzzles, playing games such as chess and checkers, proving theorems, answering simple questions, and classifying visual images were the main occupations of pioneers in the 1950s and 1960s. Moreover, cognitive psychology research and artificial intelligence research were often intertwined, as some researchers were interested in explaining how the human brain solves problems while trying to make machines solve problems.
MEETINGS
In September 1948, an interdisciplinary conference was held at the California Institute of Technology (Caltech) on how the nervous system regulates behaviour and how the brain can be compared with computers. This meeting was called the Hixon Symposium on Cerebral Mechanisms in Behaviour. Among those presenting papers were Warren McCulloch, John von Neumann and the psychologist Karl Lashley.
Lashley attacked behaviourism for adopting a static view of brain function and argued that psychologists should begin to consider dynamic, hierarchical structures as a way of explaining human planning and language abilities. Lashley’s talk laid the foundations of what would later become cognitive science.
The emergence of Artificial Intelligence as a full-fledged field of research can be traced back to three important meetings, held in 1955, 1956 and 1958. In 1955, the “Session on Learning Machines” was held in Los Angeles in conjunction with the Western Joint Computer Conference. In 1956, the “Summer Research Project on Artificial Intelligence” was convened at Dartmouth College. In 1958, a symposium on the “Mechanisation of Thought Processes” was sponsored by the National Physical Laboratory in the United Kingdom.
In papers presented in Los Angeles in 1955:
A – It was argued that processing speed in computers would have to increase exponentially, that the size of random-access memory would have to grow severalfold, and that input-output types would have to be defined. It was emphasised that, with the techniques described in these papers, there was considerable hope that systems could be built in the relatively near future that would mimic a considerable part of the brain and nervous system. Today these predictions have largely been realised.
Starting from Hebb’s proposition that assemblies of nerve cells can learn and adapt by adjusting the strength of their interconnections, experimenters developed various devices, simulated on computers, in which the strengths of the connections within a network are adjusted. Such networks came to be called neural networks.
B – Gerald P. Dinneen’s paper described computational techniques that can be used in image processing. It analysed the subtleties of using filtering methods to thicken lines, find edges and filter noisy images. The methods pioneered by Selfridge and Dinneen laid the groundwork for much of the later work on giving machines the ability to “see”.
Ferrite-core random-access memory modules were developed by Jay Forrester. In 1953, the Memory Test Computer designed by Ken Olsen, who later founded the Digital Equipment Corporation (DEC), was used for some of the first computer simulations of the functioning of living neural networks.
DARTMOUTH PROJECT:
In the final declaration of this event, which lasted for six weeks in 1956 and was funded by the Rockefeller Foundation, the aim of Artificial Intelligence was described as “to have machines perform some behaviours that can be described as intelligent”.
Although the term “Artificial Intelligence” proposed by McCarthy was adopted at this meeting, there are still some disagreements about this term today.
The 1956 workshop is the official start date of serious work in Artificial Intelligence. Minsky, McCarthy, Newell and Simon are considered the early pioneers of Artificial Intelligence. A plaque dedicated to this event and placed in Baker Library at Dartmouth has come to symbolise the birth of Artificial Intelligence as a scientific discipline.
About two years after this conference, in November 1958, a symposium entitled “Mechanisation of Thought Processes” was held at the National Physical Laboratory in Middlesex, England. The aim of the symposium was to bring together scientists researching artificial thinking, character and image recognition, learning, automatic language translation, biology, automatic programming, industrial planning and the mechanisation of clerical work.
PATTERN RECOGNITION
Many of the participants in the Dartmouth Summer Project were interested in imitating the higher levels of human thought. Their work benefited to some extent from introspection about how people solve problems. However, many of our mental faculties are beyond the reach of introspection. We do not know how we recognise speech sounds, how we read untidy handwriting, how we distinguish a cup from a saucer, or how we recognise faces; we do these things automatically, without thinking about how we are able to do them. Lacking clues from introspection, early researchers interested in mechanising some of our perceptual abilities based their work on intuitive ideas about how to proceed, on networks of simple neural cell models and on statistical techniques. This lasted until later researchers were able to gain additional insights from neurophysiological studies of animal vision.
Pattern recognition is the process of analysing an input image, a snippet of speech, an electronic signal or any other data set and assigning it to one of a number of categories.
Most of the pattern recognition work carried out during this period dealt with two-dimensional material such as printed pages and photographs. Scanning images and converting them into arrays of numbers (later called pixels) was already possible in those days. In 1957, an engineer named Russell Kirsch even built the first drum scanner and used it to scan a photograph of his three-month-old son.
CHARACTER RECOGNITION (OCR):
Early efforts in visual image perception focused on recognising alphanumeric characters in documents. This field became known as “optical character recognition”, and early applications were devices that recognised fixed-font characters in typewritten or printed documents with reasonable success.
At that time, most recognition methods were based on matching a character (after it had been located on the page and converted into an array of 0s and 1s) against prototype versions of that character, called templates, which were stored as arrays in the computer. If an input character matched the “A” template better than the other templates, the input was declared to be an “A”. If the input characters were presented at a non-standard slant, in a font different from the templates, or with defects, the error rate of the recognition process increased.
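A minimal sketch of template matching on tiny 0/1 arrays (the 3×3 “glyphs” below are my own toy stand-ins, not real character templates):

```python
# Score each stored template by counting matching pixels; pick the best match.

TEMPLATES = {
    "A": [1,1,1,
          1,0,1,
          1,1,1],          # a hollow box standing in for 'A'
    "I": [0,1,0,
          0,1,0,
          0,1,0],          # a vertical bar standing in for 'I'
}

def classify(pixels):
    scores = {label: sum(p == t for p, t in zip(pixels, tmpl))
              for label, tmpl in TEMPLATES.items()}
    return max(scores, key=scores.get)

print(classify([0,1,0, 0,1,0, 0,1,1]))   # one noisy pixel, still -> 'I'
```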
One successful early attempt used image processing, feature detection and learnt probability values for handwritten character recognition. Letters were scanned and represented on a 32 x 32 “retina”, an array of 0s and 1s. The “cleaned” images were then analysed for the presence or absence of certain “features”; in total, 28 features were used.
In 1957, the psychologist Frank Rosenblatt began working on neural networks as part of a project called PARA (Perceiving and Recognizing Automaton). He was interested in these networks, which he called perceptrons, as a possible model of human learning, cognition and memory.
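A minimal sketch of the perceptron learning rule (the training data, learning rate and epoch count are my own illustration): weights are nudged whenever the thresholded output disagrees with the target.

```python
# Train a single thresholded unit; here it learns the logical OR function.

def train_perceptron(samples, n_inputs, epochs=20, lr=0.1):
    w = [0.0] * (n_inputs + 1)                 # last weight is the bias
    for _ in range(epochs):
        for x, target in samples:
            x = list(x) + [1.0]                # append the bias input
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_perceptron(data, n_inputs=2)
print([1 if w[0]*a + w[1]*b + w[2] > 0 else 0 for (a, b), _ in data])  # [0,1,1,1]
```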
Another team, led by Prof. Bernard Widrow of Stanford’s Department of Electrical Engineering, worked on neural network systems in the second half of the 1950s and the early 1960s. One of the devices Widrow built was called ADALINE (ADAptive LInear NEuron); it was a single neural element whose weights could be adjusted. Meanwhile Ted Hoff, Jr., who would later invent the first microprocessor at Intel, devised an adjustable weight called the “memistor”, in which the electrical resistance of a graphite rod could be varied. The method Widrow and Hoff developed for the ADALINE element is called the Widrow-Hoff least-mean-squares (LMS) adaptation algorithm.
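A minimal sketch of the Widrow-Hoff LMS rule for a single adaptive linear element (the training samples and step size are my own illustration): the weights are moved a small step against the error on each presentation.

```python
# LMS update: w <- w + mu * (target - output) * input.

def lms_step(w, x, target, mu=0.05):
    y = sum(wi * xi for wi, xi in zip(w, x))      # linear output
    err = target - y
    return [wi + mu * err * xi for wi, xi in zip(w, x)], err

# Learn the mapping target = 2*x1 - 1*x2 from noise-free samples:
samples = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((2, 1), 3)]
w = [0.0, 0.0]
for _ in range(200):
    for x, t in samples:
        w, _ = lms_step(w, x, t)
print([round(wi, 2) for wi in w])   # -> approximately [2.0, -1.0]
```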
Between 1958 and 1967, the US Army supported a project “to investigate and experiment with techniques and equipment specifications for practical applications of graphical data processing in accordance with military requirements”.
The main objectives of the project were the automatic recognition of icons on military maps and the recognition of military vehicles such as tanks in aerial photographs.
After training the neural network part of the system, it was able to achieve a recognition accuracy exceeding 98% on a large sample set. Recognising handwritten letters with this level of accuracy was a significant achievement in the 1960s.
Approaches that use neural networks and statistical techniques to solve such recognition problems are characterised as “non-symbolic”, in contrast to the “symbol-processing” work of those interested in proving theorems, playing games and solving problems. Non-symbolic approaches have mostly found application in pattern recognition, speech processing and computer vision. Workshops and conferences devoted to these topics began to be organised in the 1960s.
Playing perfect chess is an intellectual endeavour, and the project of designing a successful chess machine aimed to get to the core of human intellectual activity. Thinking about chess-playing machines dates back to Babbage: his 1864 book Passages from the Life of a Philosopher contains the first documented discussion of programming a computer to play chess.
In 1946, Turing had the idea of demonstrating the intelligence of computers through the paradigm of a chess game, and in 1948 he began writing a chess programme. In 1952, lacking a computer powerful enough to run the programme, Turing simulated it by hand, taking about half an hour per move. The programme lost a match against one colleague and defeated another…
After these first programmes, work on computer chess continued steadily over the following decades. According to John McCarthy, the Russian AI researcher Aleksander Kronrod said that “chess is the Drosophila (fruit fly) of AI”. Although chess presented AI with many difficult challenges, competent chess programmes did not appear until the mid-1960s. An even more impressive early success, however, had already been achieved with the simpler game of checkers…
Programming computers to learn from experience was an early step towards reducing the amount of detailed, elaborate programming that would otherwise be required. It should be noted that the first programme to include the ability to learn was completed in 1955.
NATURAL LANGUAGE PROCESSING
Languages such as English, Turkish and Arabic are called “natural languages” to distinguish them from the languages used by computers. Behind recognising the patterns of alphanumeric characters lies the problem of understanding the sequences of letters that make up the words, sentences and larger texts of a “natural” language. Natural language processing involves both understanding natural language input and producing natural language output, whether written or spoken. Translating from one language into another involves both understanding and production (generation).
Language can be analysed at different levels. For speech, phonetics and phonology deal with sounds. For both speech and text, morphology analyses how lexical units are constructed from smaller parts. Next comes syntax, which deals with sentence structure and grammar: it is concerned with the rules that determine which chains of words are grammatical in a given language. The semantics level helps to determine the meaning (or meaninglessness) of a sentence by means of logical analysis. Finally, pragmatics deals with meaning in the context of specific situations. Of these levels, syntax was the focus of early formal work.
In 1957, the American linguist Noam Chomsky, in his book Syntactic Structures, proposed sets of grammatical rules. A grammar characterises sequences of words by rules that relate them to symbols belonging to syntactic categories such as noun, adjective and verb, and by further rules for rewriting these chains of syntactic symbols into other symbols.
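A minimal sketch of such a grammar used generatively (the rules and vocabulary below are my own toy example, not Chomsky’s):

```python
# A tiny context-free grammar: rewrite the start symbol S until only
# vocabulary words remain.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "Noun"]],
    "VP": [["Verb", "NP"]],
    "Det":  [["the"], ["a"]],
    "Noun": [["robot"], ["engineer"]],
    "Verb": [["sees"], ["builds"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:          # terminal word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))   # e.g. "the engineer builds a robot"
```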
Some of the first attempts to use computers for more than ordinary numerical calculation involved automatic translation of sentences from one language to another. Vocabulary words could be stored in computer memory and used to find English equivalents for foreign words. By selecting an appropriate equivalent for each foreign word in a sentence, together with some syntactic analysis, it was hoped that foreign sentences could be translated into English.
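A minimal sketch of the early word-for-word idea (the tiny bilingual word list is my own illustration): a stored vocabulary plus a simple lookup.

```python
# Dictionary lookup translation; words not in the lexicon are flagged.

LEXICON = {"makine": "machine", "zeki": "intelligent", "dir": "is"}

def word_for_word(sentence):
    return " ".join(LEXICON.get(word, f"[{word}?]") for word in sentence.split())

print(word_for_word("makine zeki dir"))   # -> "machine intelligent is"
# Without syntactic analysis the word order stays wrong ("the machine is
# intelligent" would need reordering), which is exactly where this
# approach ran into trouble.
```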
INFRASTRUCTURE OF THE 1960S
The technical developments of the 1960s were supported by a number of institutional and social factors. New computer languages made it easier to build AI systems. Researchers from mathematics, cognitive science, linguistics and what would soon be called “computer science” gathered at meetings and in start-ups to grapple with the problem of mechanising intelligent behaviour.
Government agencies and corporations, anticipating significant benefits from these new initiatives, provided the necessary research support. At IBM, however, an internal report prepared around 1960 strongly opposed broad support for AI, and despite the initial activity of its researchers, the company’s interest in Artificial Intelligence cooled. IBM probably wanted to emphasise that computers should assist people in their work rather than replace them.
As the computing systems required for Artificial Intelligence researchers became larger and more expensive, and as laboratories were built, it became imperative to find more financial support than in the days when individual researchers began to work in this field. In the USA, during the second half of the 1950s and the first half of the 1960s, the main funding came from the Advanced Research Projects Agency (ARPA), an agency of the US Department of Defence.
According to some sources, the formation of ARPA was a reaction to the successful launch of the Soviet satellite Sputnik in 1957. ARPA’s mission was to provide substantial research funding for projects important to US defence, particularly in the fields of smart weapons and remotely controlled long-range guided missiles. In the second half of the 1950s, for example, one of its most important projects was the development of nose cones that could absorb and dissipate the heat generated when ballistic missiles re-enter the atmosphere. Support for computing research was provided and overseen by ARPA’s Information Processing Techniques Office (IPTO), established in 1962 under the direction of the psychoacoustician J.C.R. Licklider. In his 1960 paper “Man-Computer Symbiosis”, Licklider argued that humans and computers should cooperate in controlling complex situations and making decisions, without rigid reliance on predetermined programmes.
Licklider funded the creation of Project MAC (short for Machine-Aided Cognition or Man And Computers) at MIT. This laboratory later became the Laboratory for Computer Science and, later still, the Computer Science and Artificial Intelligence Laboratory.
Such ARPA funding helped to establish “centres of excellence” in computer science. In addition to MIT, these centres included Stanford, Carnegie Mellon and SRI. Artificial Intelligence was just one of ARPA’s areas of interest. IPTO strongly supported research that led to graphical user interfaces, computer mice, supercomputers, computer hardware and very large scale integrated circuits (VLSI) and even the Internet. Interestingly, ARPA budgets did not even include AI as a separate line item until 1968.
ARPA was later renamed DARPA (Defence Advanced Research Projects Agency) to emphasise its role in defence-related research. DARPA grants mostly enabled the purchase of computer equipment and personnel expenditures. DARPA played a pioneering role in the development of today’s computer-based infrastructure.
At that time it was predicted that within ten years a digital computer would be the world chess champion, and that computers would compose music, prove mathematical theorems, and embody theories of the mind in the form of programmes.
It is no longer surprising to say that there are machines in the world today that think, learn and create. Moreover, many predict that in the foreseeable future the ability of these computers to do such things will increase rapidly, until the range of problems they can handle extends beyond the range the human mind has had to deal with. But today we are still far from machines that can do the “same” things that the human mind can achieve.
Marvin Minsky, head of the Artificial Intelligence Laboratory at MIT, declared in a press release in 1968 that “in 30 years we will have machines whose intelligence is equal to human intelligence”, but it is obvious that he was a bit premature.
Today, Artificial Intelligence researchers have the tools to represent knowledge by encoding it as logic formulae in networks or other symbolic structures tailored to specific problem domains. Moreover, they have gained considerable experience in heuristic search processes and other techniques for manipulating and utilising that knowledge. For example, neural networks and statistical approach techniques for pattern recognition have laid a solid foundation for the next stage of Artificial Intelligence development.
ARTIFICIAL INTELLIGENCE FROM THE MID-1960S TO THE MID-1970S
Throughout the 1960s and until the mid-1970s, rapid progress was made in Artificial Intelligence research. The laboratories at MIT, Carnegie Mellon, Stanford, SRI and elsewhere were expanded, and new teams were established at other universities and companies. Although the work of those years may seem modest from today’s point of view, it aroused excitement and raised hopes, drew many researchers and engineers into the field, multiplied new and important ideas, and markedly increased the number of doctoral research projects in the area. The most popular of these topics was computer-based vision.
COMPUTER VISION / MACHINE VISION:
Humans acquire most of their knowledge through vision. The part of Artificial Intelligence called “computer vision” (or Machine Vision) is concerned with giving computers the ability to see. Much of this work is based on the processing of two-dimensional images collected by a camera from a three-dimensional world.
The two-dimensional images formed on the retina of the human eye are processed and interpreted by the brain to provide accurate and sufficient information about the three-dimensional world; the use of two eyes (stereo vision) provides depth information. In the same way, in computer vision, “binocular” vision can be obtained by placing two cameras at different points. Mathematical calculations on the computer then analyse information such as image coordinates and contrast.
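A minimal sketch of how two cameras yield depth (the standard stereo relation Z = f·B / disparity; the focal length, baseline and pixel coordinates below are my own illustrative numbers):

```python
# Two cameras a baseline B apart see the same point at horizontal image
# positions xl and xr; the difference (disparity) determines its depth.

def depth_from_disparity(xl, xr, focal_length_px=800.0, baseline_m=0.1):
    disparity = xl - xr                   # in pixels; larger for nearer points
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_length_px * baseline_m / disparity

print(round(depth_from_disparity(420.0, 400.0), 2))   # -> 4.0 metres
```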
There is a constant flow of information between scientists trying to understand how vision works in animals and engineers working on computer vision. Among the guiding pioneers on the biological side were McCulloch and Walter Pitts.
Beginning in 1958, the neurophysiologists David Hubel and Torsten Wiesel conducted a series of experiments demonstrating that certain nerve cells in the mammalian visual cortex respond selectively to images and image fragments of particular shapes. In 1959, in studies on an anaesthetised cat, they revealed the existence of nerve cells specialised to respond to features such as corners, long lines and prominent edges. In later studies they found that similar specialised nerve cells also exist in the brains of monkeys, the mammals closest to humans.
From such information, and guided by the work of Hubel and Wiesel, vision researchers were able to develop methods for extracting lines from computerised images. But straight lines are rarely found in the natural environment in which cats (and humans) evolved, so why should edge-detecting cells exist? Anthony J. Bell and Terrence J. Sejnowski offered a possible answer: they showed mathematically that natural scenes can be analysed as weighted sums of small edge-like components, even when no clearly visible edge lines are present.
Such research has continued to the present day: the hardware supporting the various software algorithms has been perfected, the physical dimensions of machine vision systems have shrunk, and their perceptual capabilities, image capture speed and image quality have increased. Today, machine vision systems continue to develop as Artificial Intelligence-based smart cameras. The developments take the form of deep learning, a branch of machine learning within Artificial Intelligence: the system is shown enough correct and incorrect examples to learn the relationships in the data. These techniques are used frequently, especially in mass production factories.
Some people distinguish between “computer vision” and “machine vision”. Machine vision is often associated with robotics in industry.
FACE RECOGNITION
One of the most popular applications of machine vision today is face recognition. The first steps in face recognition techniques were taken in the first half of the 1960s, in projects reportedly sponsored by the CIA. The use of these technologies is now at its peak, especially in today’s China.
The researchers extracted the coordinates of a set of features from photographs (the centres of the pupils, the inner and outer corners of the eyes, the point where the hairline meets the forehead, and so on) and fed them to computers running face recognition algorithms. From these coordinates, some 20 measurements, including the width of the mouth, the distance between the pupils and nose-to-chin ratios, were subjected to mathematical calculations, and the recognition functions were gradually refined. In 1970, a PhD student at Stanford wrote the first programme to detect facial features in pictures and use them to recognise people.
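A minimal sketch of recognition from such hand-measured facial distances (the names and numbers are my own illustration): each known person is stored as a vector of measurements, and a new face is assigned to the nearest stored vector.

```python
# Nearest-neighbour matching on feature measurements.
import math

KNOWN = {
    "person_a": [62.0, 38.0, 71.0],   # e.g. pupil distance, mouth width, nose-chin
    "person_b": [58.0, 45.0, 66.0],
}

def nearest(measurements):
    return min(KNOWN, key=lambda name: math.dist(KNOWN[name], measurements))

print(nearest([61.0, 39.0, 70.0]))   # -> "person_a"
```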
COMPUTER VISION FOR THREE-DIMENSIONAL SOLID OBJECTS
Lawrence G. Roberts, a PhD student at MIT, was the first person to write a programme that detected objects in black-and-white photographs (with 16 grey levels) and determined their orientation and position in space. His algorithm was crucial to later work on computer graphics, and as chief scientist and later director of ARPA’s information processing techniques office, Roberts was instrumental in the creation of the ARPANET, the precursor of the Internet.
Seymour Papert, a mathematician and psychologist who had recently joined the Artificial Intelligence group at MIT, organised a summer “vision project” in 1966. The goal of this project was to develop a set of programmes that would analyse an image from a kind of scanner and name objects by matching them against known objects and their associated vocabulary. The project, which gave the field the name computer vision, succeeded in initiating research that has continued to the present day.
Another person who worked on the interpretation of images was David Huffman, a professor of electrical engineering at MIT and the inventor of Huffman coding, an efficient method used today in many applications involving the compression and transmission of digital data. In 1967, at MIT and the University of California, a theory for assigning labels to the lines in drawings of three-dimensional objects was worked out.
HAND-EYE RESEARCH
For a system to be intelligent, it must have knowledge of its world and the means to draw conclusions from that knowledge, or at least to act on that knowledge. Humans and machines alike, whether the information is embodied in proteins or silicon, must have the infrastructure to perform tasks or make decisions that can be classified as intelligent.
The driving force behind much early computer vision research was the transfer of coordinate information from a monochrome camera to the servomotors of a robot arm, so that the arm could be steered visually.
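The heart of such hand-eye coordination is a calibration that maps pixel coordinates in the camera image to positions in the arm's workspace. The following is a minimal sketch, assuming a flat working surface and a few hypothetical calibration points, that fits an affine pixel-to-world transform by least squares with NumPy and converts a newly detected pixel position into a target for the servomotors; the numbers are invented for illustration.

# Sketch: map camera pixel coordinates to robot workspace coordinates (flat table assumed).
import numpy as np

# Hypothetical calibration data: pixel positions and the matching arm positions in millimetres.
pixels = np.array([[100, 100], [500, 120], [480, 400], [120, 380]], dtype=float)
world = np.array([[0, 0], [200, 0], [190, 150], [10, 145]], dtype=float)

# Fit an affine transform world = [u, v, 1] @ coeffs by least squares.
design = np.hstack([pixels, np.ones((len(pixels), 1))])
coeffs, *_ = np.linalg.lstsq(design, world, rcond=None)   # 3x2 matrix of transform coefficients

def pixel_to_world(u, v):
    return np.array([u, v, 1.0]) @ coeffs

target_xy = pixel_to_world(300, 250)   # a detection at pixel (300, 250) becomes an arm target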
In 1961, Heinrich Ernst, as part of his PhD project in electrical engineering at MIT, developed a computer-controlled mechanical "hand". His supervisor was Claude Shannon, and the hand, called MH-1, was a mechanical servo manipulator adapted to the TX-0 computer; touch sensors mounted on the hand were used to steer it. The abstract of Ernst's thesis describes part of what the system could do, and the functions he described are still among the most widely applied.
Ernst’s project was the first device to use touch sensors to guide gestures. Based on this invention, he and engineer Joseph F. Enge lberer founded the “Unimation” Company. Shortly afterwards, the first industrial car called “Unimate” for General Motors
Around the same time, other groups built prototypes of their own robots; one such system relied on precise lighting and an elaborate wheeled construction.
In Japan, hand-eye work was also carried out at Hitachi's Central Research Laboratory in Tokyo, where a robot system called HIVIP was developed. It consisted of three subsystems: eye, brain and hand.
Hand-eye systems can be considered "robots", but they cannot move from their fixed position. Starting in the mid-1960s, several teams therefore began working on mobile robots. Researchers at the Johns Hopkins University Applied Physics Laboratory developed a mobile robot controlled by on-board electronic circuits and navigated by sonar, light sensors and feeler arms that touched the walls; it made its way through white-walled corridors and could plug itself into a wall socket to recharge when its battery ran low.
One of the projects at SRI was to build a physical robot whose actions would be controlled by a series of programmes. The prototype had some engineering quirks that caused it to make sudden stops and, more importantly, to shake, so the device was nicknamed "Shakey". Shakey was equipped with a monochrome camera to view its surroundings, a laser range sensor to measure its distance to walls and other objects, and another sensor to detect bumps. It was the first robotic system with planning, reasoning and learning capabilities, using vision, distance and touch sensors to perceive its surroundings and a control system that could supervise planned tasks. Shakey is generally considered to have been somewhat ahead of its time.
Although SRI researchers had big plans for Shakey, DARPA rejected them and the project was terminated in 1972, a decision that caused a stir among SRI researchers. Nevertheless, work on planning, vision, learning and their integration in robotic systems accelerated rapidly, and new ideas in planning and visual perception continued to be explored; many of these ideas fed into later work.
In recent years, the growth of the elderly population in Japan, the increasing need for care and the limited availability of reliable carers have accelerated work on mobile robots, later described as humanoid robots.
CHESS MASTER COMPUTERS
In the first World Computer Chess Championship, organised by the International Federation for Information Processing and held in Stockholm in 1974, the Russian programme Kaissa won all four of its games and became world computer chess champion.
These years, from the second half of the 1960s to the mid-1970s, saw computer chess programmes develop gradually from beginner to intermediate strength. Over the next two decades a great deal of research was devoted to computer chess, and programmes eventually reached master level. The most famous of these is IBM's Deep Blue, the computer that defeated world chess champion Garry Kasparov in May 1997. With a capacity of around 200 million positions per second, it could examine roughly 36 billion possible positions in the three minutes available for a move.
The first known knowledge-based system was developed at Stanford; it was written to analyse soil in connection with the exploration of Mars. This software was the first programme to capture specialist knowledge in digital form, and it was later developed further, becoming the basis for MYCIN, a system that diagnoses infectious diseases of the blood.
Embedding specialised knowledge in AI programmes led to the emergence of many "expert systems". At the same time, it shifted attention towards specific, narrowly defined problems and away from general mechanisms of intelligence.
In 1965, the first of the invitation-only "Machine Intelligence" workshops was organised at the University of Edinburgh. Attended by both American and European researchers, it was the first major meeting devoted solely to Artificial Intelligence.
The first International Joint Conference on Artificial Intelligence (IJCAI) was held in Washington DC in May 1969. Sixteen technical associations from the USA, Europe and Japan supported the conference, approximately 600 people from nine different countries attended, and sixty-three papers were presented by researchers. One of the special interest groups of the Association for Computing Machinery (ACM) is SIGART, the Special Interest Group on ARTificial Intelligence.
In 1972, the Advanced Research Projects Agency (ARPA) was renamed the Defense Advanced Research Projects Agency (DARPA), reflecting a new emphasis on projects that enhance military capabilities.
At this time, Artificial Intelligence researchers in Britain were experiencing a funding crisis. Research in the field there was divided into three categories: advanced automata, computer-based investigations of the central nervous system, and bridges between the two. Most basic Artificial Intelligence research, including robotics and language-processing efforts, was left out. It was only after this period that many AI methods were applied to real problems, and a period of expansion in Artificial Intelligence research began. The study of Artificial Intelligence shifted explicitly into application domains, confronting important problems in the real world, and successful applications encouraged specialisation in sub-disciplines such as natural language processing, expert systems and computer vision.
SPEECH RECOGNITION AND UNDERSTANDING SYSTEMS
Humans generally speak faster than they can write (roughly three words per second when speaking versus about one word per second when writing), and they can speak while moving. They can also keep using their eyes and hands for other tasks while they talk.
Attempts at speech recognition increased after linguists determined that English speech is composed of about forty distinct sounds, and Bell Laboratories engineers later built a system that recognised the spoken digits from "zero" to "nine".
DARPA launched a five-year Speech Understanding Research (SUR) programme.
Research was conducted at Haskins Laboratories, Speech Communication Research Laboratory, Sperry Univac Speech Communication Department, and the University of California, Berkeley.
Within this effort, the DRAGON system introduced powerful new techniques for speech processing, developments of which are used in most speech recognition systems today. DRAGON uses statistical techniques to predict the most probable chains of words from the incoming speech signal.
The DRAGON system makes a simplifying assumption known as the Markov assumption, after the Russian mathematician Andrei Andreyevich Markov. Markov created his model to analyse the statistics of a sequence of 20,000 Russian letters taken from Pushkin's novel Eugene Onegin. Markov models are now widely used in physics and engineering; Google, for example, still relies on the Markov assumption when ranking web pages. The Markov assumption simplifies calculations and delivers high performance.
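The Markov assumption simply says that the probability of the next symbol depends only on the current one (or the few most recent ones), not on the whole history. The Python sketch below, using a short made-up sentence rather than Markov's 20,000 letters from Eugene Onegin, estimates letter-to-letter transition probabilities from counts and asks for the most probable letter to follow a given one; speech recognisers apply the same idea to sequences of sounds and words.

# First-order Markov model over letters: P(next | current) estimated from counts.
from collections import Counter, defaultdict

text = "the markov assumption makes the calculation of probable chains practical"
transitions = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    transitions[current][nxt] += 1           # count how often nxt follows current

def next_letter_probabilities(current):
    counts = transitions[current]
    total = sum(counts.values())
    return {letter: n / total for letter, n in counts.items()}

probs = next_letter_probabilities("t")       # distribution over letters that follow "t"
print(max(probs, key=probs.get), probs)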
In 1976, DARPA started the Image Understanding programme (much of the computer vision research in the USA has been supported by DARPA grants). This became a major effort, comprising the main research programmes working in the field, with teams drawn from both universities and companies. Laboratories participating on their own included those of MIT, Stanford, the University of Rochester, SRI and Honeywell; university/industry collaboration teams included USC-Hughes Research Laboratories, the University of Maryland-Westinghouse, Purdue University-Honeywell, and CMU-Control Data Corporation.
Regular workshops reported on the progress made. The report of a workshop held in 1977 stated the programme's objective: "The Image Understanding Programme is a five-year research effort planned to develop the technology required for the automatic and semi-automatic interpretation and analysis of military photographs and related images." As the programme continued, there was constant tension between DARPA's objectives and those of computer vision researchers: DARPA wanted the programme to produce systems ready for use in the field. By 1979, the programme's objectives had broadened to include computer vision, cartography and mapping for robot-controlled military vehicles. The five-year programme did not end in 1981; it continued under DARPA's wing until about 2001.
As computer vision grew into a sub-speciality of Artificial Intelligence, papers on it began to appear in new journals carrying "Artificial Intelligence" in their titles. The era of challenges to basic AI research was over, and hopes that important applications would be found attracted funding from both government and industry. Excitement, especially about expert systems, peaked in the mid-1980s.
The roughly ten-year period between 1975 and 1985 is widely regarded as the rise of Artificial Intelligence that preceded the winter period. Even though this upswing was followed by a period of austerity, successes were already accumulating. 1980 saw the founding of the American Association for Artificial Intelligence (AAAI), together with its annual conferences, workshops and symposia. ArpaNet, which owed its beginnings to a few research organisations in the second half of the 1960s, evolved over time into the Internet, connecting computers all over the world.
This rise continued with Japan's "Fifth Generation Computer Systems" project, which in turn encouraged DARPA to establish the Strategic Computing Initiative. It also stimulated similar research efforts in Europe (such as the ALVEY Project in the United Kingdom and the European ESPRIT programme), as well as a consortium of American industry set up to push advances in computer hardware. The promises, challenges and achievements of Artificial Intelligence were being discussed everywhere. By the second half of the 1980s, however, this upswing had come to an end, and the "Winter of Artificial Intelligence", as some called it, had begun.
Since the early days of Artificial Intelligence, pessimists have always existed; Alan Turing anticipated their objections in his 1950 article. Criticisms and expressions of disappointment from outside the field helped bring on the AI winter.
One early criticism ran as follows: "the machine is the product of the human mind, and it is clear that we cannot transfer the property of Mind, which is peculiar to us, to anything other than our child. No machine can acquire this property, which belongs only to man."
The British physicist Sir Roger Penrose, famous for his work on quantum physics, the theory of relativity, the structure of the universe and the Penrose tilings, has also written about the limits to which computers are subject. He believed that computers would never be conscious and could never attain the full scope of human intelligence. In his view these limitations apply only to machines based on the currently known laws of physics, whereas the brain somehow succeeds where such machines cannot. To explain this, he argued, a new kind of physics had to be invoked, one that would include what he called "true quantum gravity". Unfortunately, true quantum gravity, whatever it is, has not yet been discovered.
THE ARTIFICIAL INTELLIGENCE WINTER
During the first half of the 1980s, many AI proponents in government and industry raised expectations of what AI could do, and part of the blame for this unwarranted optimism lies with AI researchers who made exaggerated promises. The failure of systems built on unrealistic hopes, together with the growing critical commentary described above, culminated in the so-called "AI winter" of the mid-to-late 1980s. During this period, applied Artificial Intelligence research in the USA was cut back for more than a decade, and the focus of Artificial Intelligence activity shifted from the USA to Canada and Europe.
At the AAAI National Conference in 1984, some of the leading Artificial Intelligence researchers discussed the era and its future under the title "The Dark Age of Artificial Intelligence: Can We Escape It, or Can We Get Through It Safely?"
It was emphasised that there was a deep sense of unease among Artificial Intelligence researchers, caused by the inflated expectations placed on the field, and it was argued that researchers should discipline themselves and educate the public in order to prevent an "Artificial Intelligence winter".
Nevertheless, in the second half of the 1980s some Artificial Intelligence companies closed their doors, some of the larger computer hardware and software companies discontinued their Artificial Intelligence research, and between 1987 and 1989 DARPA drastically reduced its funding. Even so, the winter of Artificial Intelligence lasted only one season. During this time new ideas were discovered, the available data grew and computer performance improved. In a niche area such as computer vision, for example, neighbouring fields such as optics, mathematics, computer graphics, electrical engineering, physics, neuroscience and statistics continued to provide technical support and ideas.
STRONG AND WEAK ARTIFICIAL INTELLIGENCE
The concepts of Strong Artificial Intelligence, or Artificial General Intelligence, and Weak Artificial Intelligence, or Narrow Artificial Intelligence, are useful for distinguishing between two kinds of AI endeavour. Strong Artificial Intelligence is associated with the claim that a suitably programmed computer can replicate the mind and think at least as well as humans, and achieving it is the ultimate goal of many AI researchers. Weak (narrow) AI, by contrast, is associated with the effort to create programmes that assist human mental activities rather than replicate them, and with expressing and testing hypotheses about the mind. Weak (narrow) AI has been successful; the search for strong (general) AI is still ongoing today.
Some people who argue that there are things AI "should not do" are uneasy at the prospect of machines attempting inherently human-centred jobs such as teaching, mentoring and making legal judgements. Others, such as Computer Professionals for Social Responsibility (CPSR), do not want to see AI technology (or any other technology) used in weapons or reconnaissance missions, or in jobs that require human judgement based on experience. In addition, like the machine-breakers of nineteenth-century Britain, there is no shortage of people in the twenty-first century who worry that machines will replace humans and create unemployment and economic impoverishment. Many also worry that Artificial Intelligence and computer technologies will dehumanise people, reduce interpersonal contact and change what it means to be human.
In his book, Joseph Weizenbaum, who wrote the ELIZA programme, emphasises the importance of the cultural environment: "No machine experiences a human-style past; therefore no machine should be allowed to make decisions or give advice that require, among other things, the compassion and wisdom born of such a past." He underlined his point by noting that inexperience in "these spheres of thought and action" also applies to human-machine and machine-human relations.
Weizenbaum also opposed the idea of giving the machine a human-like body and sensory apparatus to provide it with the necessary infrastructure, writing: "The deepest and most grandiose idea driving the study of Artificial Intelligence is a machine created in the human model: a robot that has had a childhood, that has learnt language like a child, that has acquired knowledge by sensing the world through its own organs, and that has finally contemplated the whole domain of human thought."
According to another view, beyond the concern that any technology may be used for anti-social purposes (such as war), the real danger is using machines prematurely: believing that they can accomplish a certain task before they have actually reached competence.
Of all the ideas about the future of the human species, the most dystopian is a particular conception of artificial intelligence: the ambition to build computers of such complexity and ingenuity that they eventually transcend evolution, become more intelligent than humans and replace them. According to many thinkers, such a development would be deeply wrong, even demonic.
Even some researchers involved in the field thought along these lines: an Artificial Intelligence would be a mechanical brain with a purpose of its own, able to examine the past, make accurate predictions about the future and devise near-perfect plans to change that future as it wishes. The fear is that such a brain would create a new techno-cult in an informatised society, and that this growing cult would have corrupting cultural effects, producing painful, destructive and isolating consequences.
To summarise: "New machine hostilities may be just over the horizon."
In addition to its other extraordinary powers, the human brain is capable of enormous storage and computation. Thanks to this, it organises relatively disorganised information into internally ordered structures, and subtle, coherent symbolic and real-world actions can be built on those structures. This is what AI needs, and strives to achieve, in order to rival the brain.
WHAT IS THE ROLE OF ARTIFICIAL INTELLIGENCE (AI) IN INDUSTRIAL APPLICATIONS?
Artificial Intelligence (AI) refers to the ability of machines to mimic human intelligence. It aims to create machines and/or software that can imitate characteristics of human behaviour and intelligence, such as learning (extracting rules from information), reasoning (achieving goals or solving problems by using rules), perception (making sense of data and interacting with it) and language understanding, on the basis of algorithms designed around the information received from peripheral units.
When it comes to industrial applications, Artificial Intelligence can provide alternative solutions for different purposes in a wide range of fields. Some of these applications are outlined below.
- Quality Control: Using image processing and machine learning techniques, Artificial Intelligence can automatically detect defects, deficiencies or quality problems (anomaly detection) on production lines. Faster and more effective quality control prevents production-related defects from reaching the end user and minimises business risks (loss of prestige, brand value, legal processes, etc.); a minimal sketch of such an anomaly detector follows this list.
- Predictive Maintenance: Artificial Intelligence can predict equipment failures (gears, vibration, friction, cabling, etc.) by analysing machine data. In this way, production interruptions caused by maintenance downtime are avoided and overall costs are reduced.
- Supply Chain Optimisation in Manufacturing: Artificial Intelligence can help make operations more efficient by analysing manufacturing and supply-chain data, which are often large and complex. Automating these analyses, independently of human intervention, enables better demand forecasting, inventory management and logistics planning.
- Worker Safety: Artificial Intelligence can be used to detect potential hazards to the occupational safety of employees in advance. For example, an AI system can automatically detect dangerous behaviour or conditions by analysing video surveillance data and can act on its own to prevent potential incidents.
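As a concrete illustration of the quality control case above, the minimal sketch below (Python with scikit-learn) trains an anomaly detector on examples of normal products only and then flags products whose features look unusual. The random vectors stand in for features extracted from product images, and the library choice, feature count and contamination setting are illustrative assumptions rather than a recipe for any particular line.

# Sketch: anomaly detection for quality control with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_products = rng.normal(loc=0.0, scale=1.0, size=(500, 8))   # features of known-good parts

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_products)                 # learn what "normal" looks like

new_batch = rng.normal(loc=0.0, scale=1.0, size=(10, 8))
new_batch[0] += 6.0                           # one deliberately unusual part
print(detector.predict(new_batch))            # +1 = normal, -1 = flagged as an anomaly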
Artificial Intelligence has found many applications in industry. For many businesses, this technology can significantly improve their ability to increase efficiencies, reduce costs and make more accurate and effective decisions.
HOW DO ARTIFICIAL INTELLIGENCE-BASED PRODUCTION TECHNOLOGIES DIFFER FROM TRADITIONAL (RULE-BASED) SYSTEMS?
There are a number of important differences between AI-based systems and traditional “rule-based” systems. Some of these differences can be summarised as follows:
- Learning Capability: Artificial Intelligence-based systems can learn from data, which means they can adapt to new situations more quickly and accurately than humans can. Traditional systems, by contrast, are usually programmed according to fixed rule sets (PLC-based standard automation) and cannot go beyond those rules.
- Adaptability and Flexibility: Artificial Intelligence-based systems can draw conclusions from continuously arriving new data and quickly update themselves accordingly, which is very valuable in rapidly changing environments. Traditional systems generally do not allow such adaptation.
- Complex Decision Making: Artificial Intelligence-based systems can handle complex decision-making processes and take a large number of variables into account together. Traditional systems follow simpler decision structures and have limited ability to manage complex situations.
For example, suppose a production line uses an Artificial Intelligence-based machine vision system with deep learning algorithms to detect defective products. Such a system learns to recognise defects through careful defect identification (labelling) and, over time, develops the ability to recognise and interpret more types of defect. A traditional system used for the same purpose can only detect certain predetermined faults, as the sketch below illustrates.
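The contrast can be made concrete with a small Python sketch on made-up part measurements: the rule-based check rejects only parts that violate a fixed, predetermined threshold, while the learned classifier infers its own decision boundary from labelled good/defective examples and can therefore pick up a defect pattern nobody wrote a rule for. The feature names, thresholds and numbers are invented for the example.

# Rule-based check versus learned classifier on the same made-up part measurements.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rule_based_check(width_mm, scratch_score):
    # Traditional approach: only the faults someone anticipated are caught.
    return "defective" if width_mm > 10.5 or scratch_score > 0.8 else "good"

# Learned approach: labelled examples, not hand-written thresholds, define "defective".
rng = np.random.default_rng(1)
X_good = rng.normal([10.0, 0.2], [0.1, 0.1], size=(200, 2))
X_bad = rng.normal([10.3, 0.6], [0.1, 0.1], size=(200, 2))     # a subtle defect pattern below both thresholds
X = np.vstack([X_good, X_bad])
y = np.array([0] * 200 + [1] * 200)                            # 0 = good, 1 = defective

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

sample = [10.3, 0.6]                       # within the fixed thresholds, yet typical of the labelled defects
print(rule_based_check(*sample))           # "good": the rule has no rule for this pattern
print(model.predict([sample])[0])          # 1: the learned model recognises the pattern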
As a result, AI-based systems are generally more flexible and capable of adapting more quickly and managing more complex situations. However, it should be recognised that they will require a higher initial investment and ongoing engineering services. Therefore, the requirements and use cases of the application should be carefully considered when determining which system is more appropriate.
CRITERIA FOR THE SUCCESS OF AN ARTIFICIAL INTELLIGENCE-BASED INDUSTRIAL CONTROL SYSTEM
In order for Artificial Intelligence-based image processing systems to be used effectively in manufacturing enterprises, several important stages must be passed.
The first stage is correct labelling. This step is an absolute prerequisite for the Artificial Intelligence model to learn and generalise correctly: how accurately the data taken from photographs of the objects can be analysed in the background (in the "black box") is directly proportional to the care taken in labelling. The correct labelling of objects is therefore the most critical factor in the success of the model.
The second stage is model building. The ability of the system to learn from the data and produce correct output depends on how well the algorithm running in the background can establish the connection between good and bad examples. At this stage the system learns by analysing its inputs, much as a small child learns from experience and examples when learning something new: the model produces output by making sense of the relationship between inputs and outputs.
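A minimal sketch of these two stages, in Python with scikit-learn and a purely hypothetical labelled dataset, might look as follows: the labelling stage produces feature vectors paired with good/defective labels, and the model-building stage fits a classifier and checks it on examples held back from training, which is how the quality of both the labels and the learned relationship is judged.

# Stage 1 (labelling) and stage 2 (model building) in miniature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stage 1: labelling. In practice each row would be features extracted from a photograph,
# and each label the judgement of the person doing the labelling.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))                     # hypothetical image features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # hypothetical good (0) / defective (1) labels

# Stage 2: model building. Part of the data is held back to test generalisation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))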
For these processes to be carried out quickly and effectively, it is critical to have a team that is experienced in managing the application and understands the logic of Artificial Intelligence. This team manages and supervises all the processes of data collection, labelling and model training; it should also monitor the performance of the system, identify abnormal situations and quickly debug the model when necessary.
As a result, an AI-based image processing system, managed by an experienced team with accurate labelling and model-building processes, can deliver much more accurate and faster results than traditional image processing methods. These processes take a relatively long time, however, and mistakes are inevitable at the beginning of an installation. It is therefore important to be patient and to make continuous improvements throughout the process; in this way the system approaches the best achievable result step by step, and the results significantly increase productivity through error-free production.
At this point, disagreements can unfortunately arise from high expectations and from operating staff having limited knowledge of how the system works. The most common disagreement is the expectation that the labelling needed for learning will be done entirely by the designer. This is a mistaken assumption. In practice, once the algorithm has been established, the designer should produce a reasonable number of labels and then teach the engineer in charge at the company how to do the labelling (labelling training). The limited coverage of defects at the start is simply due to the small number of defect samples available at that point; when unforeseen defects appear over time, the short and practical route is for the trained person to label them. The more defect types the system is taught, the better its results and the higher its performance; this is the logic of deep learning. Otherwise, the reasonable time limit for delivering the system is exceeded, causing unnecessary loss of time for the designer (integrator), possible additional charges and unwanted disputes.
THE REASON FOR HIGH EXPECTATIONS OF ARTIFICIAL INTELLIGENCE SYSTEMS
Much of the high expectation surrounding Artificial Intelligence systems comes from popular awareness of their potential to mimic, and in some cases even surpass, human intelligence. It is therefore worth examining the flexibility and adaptability of the human brain: there is as yet no technological development equivalent to this capability, even if such a development no longer seems a remote possibility. Keeping this point in mind, at least for the present, may prevent some debates and confusion, and addressing the underlying differences between the brain and the computer may make Artificial Intelligence easier to accept. Accordingly:
- While computers have perhaps hundreds of processing units, the brain has trillions of processors.
- While the computer performs billions of operations per second, the brain performs only thousands.
- The computer may crash, but the brain is fault-tolerant.
- The brain uses analogue signals while the computer uses binary signals.
- A computer is programmed by someone, but the human brain learns on its own. The computer only implements what its programmer says; this includes the "Deep Learning" algorithm.
- While the computer performs sequential operations, the brain performs largely parallel operations.
- The computer is limited to being "logical", whereas the brain can be "intuitive".
Many of these differences appear to be narrowing in the wake of today's dizzying advances in technology and science, and some researchers believe the two sides of the comparison will converge in the near future. The brain's roughly 100 billion neurons, with their axons, dendrites and synaptic connections, and the computer's von Neumann-style sequential operations (reading, processing and writing bits of information) carried out by vast numbers of transistor circuits have now reached the point where they can meaningfully be compared. At a time when combining the human brain with silicon chips to create a new kind of human being is being discussed, there will be little left to say.
What remains valid to this day is the still-unmatched capacity of the human brain. The brain has a unique ability to cope with and adapt to changing situations almost instantaneously; this is known as plasticity. For example, the human eye and brain can perceive and quickly make sense of images under very different lighting and ambient conditions, whereas an Artificial Intelligence system cannot easily achieve this kind of adaptation, especially if the algorithm it runs, the training data for the defects and the hardware infrastructure are not well suited to the new situations it encounters. The human eye, under optimum conditions, has a visual sensitivity equivalent to roughly 576 megapixels over a 120-degree field of view and can perceive about 2 million different colours; no machine vision system with this sensitivity has yet been designed. Furthermore, the human brain can often make complex decisions quickly, taking many different factors into account, while Artificial Intelligence systems often struggle with such decisions, because a model customised for one application or situation will usually not work for another.
Artificial Intelligence systems developed to date are referred to as "Narrow Artificial Intelligence". They consist of algorithms and peripherals trained to perform specific tasks; they can often perform that task better than humans, but they offer little application or flexibility beyond it. They do not have the general-purpose, rapidly adaptive intelligence of the human brain.
In summary, high expectations of AI systems are often based on preconceptions and are unrealistic. As noted, these systems may excel at specific tasks, but they generally cannot compete with the flexibility, general-purpose intelligence and rapid adaptability of the human brain.
WHAT ARE THE CRITERIA FOR AN INDUSTRIAL ARTIFICIAL INTELLIGENCE SYSTEM TO BE SUSTAINABLE?
The sustainability of Artificial Intelligence systems in industrial applications often depends on several factors.
- Data Streams: Artificial Intelligence systems usually need continuous, high-quality data streams. These data directly affect system performance and enable the system to deal with new situations. Ensuring the continuity of data in order to maximise system efficiency represents a cost for businesses; providing such data continuously can be difficult and often requires significant investment.
- Continuous Maintenance and Updates: Artificial Intelligence systems should be regularly updated and periodically maintained so that they adapt to changing situations over time. This is key to maintaining the efficiency and accuracy of the systems, and the continuous maintenance and update effort may require system-specific resources.
- Expertise and Support: Effectively managing and maintaining AI systems requires a certain level of expertise, which means the business must have the technical skills and knowledge required for the specific application. When choosing the company that designs the Artificial Intelligence system, it is very important for business management to be selective about post-installation technical support; the references of engineers specialised in this field are decisive.
- Suitable Application Areas: The sustainability of Artificial Intelligence technologies is directly related to the areas in which they are used. Special situations on the production line or application-specific criteria may not always be suitable for an Artificial Intelligence application, and in such cases traditional systems may be more appropriate. Mutual trust is essential in decisions at this point.
As a result, guaranteeing data flow, continuous maintenance, updates, appropriate expertise and support are indispensable prerequisites for Artificial Intelligence systems to be sustainable in industrial applications. Therefore, these factors should be taken into account when making investments in Artificial Intelligence and resources should be utilised within this framework.
WHAT ARE THE ADVANTAGES AND DISADVANTAGES OF ARTIFICIAL INTELLIGENCE SYSTEMS COMPARED WITH CONVENTIONAL SYSTEMS?
There are a number of important differences between Artificial Intelligence (AI) systems and traditional (rule-based) systems, and each offers advantages for particular situations and applications. In general, AI systems cope better with more complex tasks and variable situations, while traditional (standard automation) systems are better suited to simpler and more predictable situations.
Advantages of Artificial Intelligence Systems:
One of the most important features of Artificial Intelligence systems is that they provide control, action and operational attention that are standardised and guaranteed to remain sustainable under unchanging conditions.
- Adaptation: Artificial Intelligence systems have the ability to learn to cope with variable conditions and situations. This is important for dealing with uncertain and changing environments.
- Learning Capability: AI, especially deep learning and machine learning, uses flexible techniques that can learn from new data inputs and improve the underlying model over time.
- Big Data Analysis: Artificial Intelligence systems can process large and complex data sets more easily and analyse them without errors; as a result of these more advanced analyses, they can make much more accurate predictions based on data.
- Automation: Artificial Intelligence's ability to quickly process and analyse large data sets can support a higher (more advanced) level of automation in the business. This is fully compatible with the basic structure of today's Industry 4.0 concept.
Disadvantages of Artificial Intelligence Systems:
- Cost: Artificial Intelligence systems usually require licensing and are therefore (at least for now) relatively expensive to set up, and they require large amounts of data (Big Data). The increased data collection from the environment creates a need for smart sensors, which is one of the reasons for the high cost.
- Maintenance: Artificial Intelligence systems need constant maintenance and updating, which is another factor that increases costs and complexity.
- Understandability: The decisions of Artificial Intelligence systems are usually made in the background, inside what is defined as a "black box", and it is practically impossible for end users to understand the logic of new decisions and how they are made. The intellectual property (know-how) of the algorithm stored in that black box should be treated as part of the designer's rights; training the decision mechanism and producing output from the decisions taken according to the database are functions of the deep learning infrastructure.
Advantages of Traditional Systems:
- Predictability: Traditional systems are more predictable and understandable, because decisions follow established rules laid down by the system engineer at design time. The system offers a flexible structure within a narrow area, and changes can be made relatively easily when desired.
- Cost: These systems are generally less costly and require less maintenance. PLC-based control and automation remains the indispensable choice, especially for mass-production enterprises.
- Simpler Applications: Traditional systems are mostly suitable for simpler situations and tasks; in particular, they are sufficient where conditions and requirements are clear and unchanging.
Disadvantages of Traditional Systems:
- Flexibility: Traditional systems are less flexible in adapting to changing conditions and situations.
- Learning Capability: These systems are generally not capable of learning from new data or situations, which means "limited adaptability".
- Complex Situations: Traditional rule-based systems often struggle to deal with more complex situations and tasks.
Sedat Sami Ömeroğlu, Electrical and Electronics Engineer, E3TAM
Ali Sami Gözükırmızı, Physics Engineer, Mechatronics Engineer and PhD Candidate, E3TAM