Computational Intelligence
Where traditional artificial intelligence has a computer mimic human intelligence by following explicit logic rules that dictate how it will react, computational intelligence allows the computer to learn from experience and develop its own logic.
Computational Intelligence Techniques
Common uses of computational intelligence include pattern recognition, speech recognition, and handwriting recognition. These tasks vary enormously depending on who is using the software; for instance, a speech system would need to follow different rules to understand different dialects. Because of this variance, a conventional program cannot be written to handle every variation explicitly. Instead, these programs use a set of training data that teaches the computer how to interpret new data. Using visual pattern recognition, the program attempts to match letters or speech to examples within the system, and reports a confidence level that indicates how accurate the match is likely to be. The error can be reduced by increasing the number of training examples the system can compare against.
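The idea of matching new data against stored training examples and reporting a confidence level can be sketched with a simple nearest-neighbour classifier. The feature vectors and labels below are invented toy data, not a real handwriting dataset:

```python
from collections import Counter
import math

def knn_classify(sample, training_data, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training examples, returning (label, confidence)."""
    # Sort training examples by Euclidean distance to the sample.
    by_distance = sorted(
        training_data,
        key=lambda ex: math.dist(ex[0], sample),
    )
    votes = Counter(label for _, label in by_distance[:k])
    label, count = votes.most_common(1)[0]
    return label, count / k  # confidence = fraction of neighbours agreeing

# Toy training set: (feature vector, label) pairs, e.g. crude stroke features.
training = [
    ((1.0, 0.0), "I"), ((0.9, 0.1), "I"), ((0.0, 1.0), "O"),
    ((0.1, 0.9), "O"), ((0.2, 1.1), "O"),
]
print(knn_classify((0.05, 0.95), training))  # → ('O', 1.0)
```

Adding more training examples gives the classifier more candidates to compare against, which is exactly how such systems reduce their error.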
Natural Language Processing
Natural Language Processing is the ability of a computer to understand human language. Language can be either the input or the output: the computer could understand voice commands or communicate using voice. Machine translation, the process by which a computer converts text or speech from one language into another, is difficult for a machine for several reasons:
- Many words have different meanings that could be interpreted differently by a computer
- One word may be translated differently based on the context
- There are complicated rules which govern grammar
- Some languages have idioms which do not have literal translations
- Some words do not have direct translations and could mean completely different things in context
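The context problem above can be illustrated with a toy lookup that picks a translation based on the other words in the sentence. The vocabulary, context cues, and Spanish translations here are invented for illustration; real translation systems use statistical or neural models rather than hand-written tables:

```python
# A minimal sketch of context-sensitive word translation.
TRANSLATIONS = {
    # (English word, context cue) -> Spanish translation
    ("bank", "money"): "banco",   # financial institution
    ("bank", "river"): "orilla",  # edge of a river
}

def translate_word(word, sentence_words):
    """Pick a translation for `word` based on other words in the sentence."""
    for cue_word in sentence_words:
        translation = TRANSLATIONS.get((word, cue_word))
        if translation is not None:
            return translation
    return word  # no translation known; leave the word unchanged

print(translate_word("bank", ["deposit", "money", "account"]))  # → banco
print(translate_word("bank", ["river", "fishing"]))             # → orilla
```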
As these systems improve, some use predictive text to determine what the user will type next, given what has been typed in the past. Other systems try to predict the rest of a word after the first few letters are typed.
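Both ideas can be sketched simply: next-word prediction from past typing can use word-pair (bigram) counts, and word completion can match a typed prefix against a vocabulary. The sample typing history below is made up:

```python
from collections import Counter, defaultdict

def build_bigram_model(history):
    """Count which word tends to follow each word in past typing."""
    model = defaultdict(Counter)
    words = history.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, if any."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

def complete_word(prefix, vocabulary):
    """Suggest full words that start with the typed prefix."""
    return [w for w in vocabulary if w.startswith(prefix)]

history = "see you soon see you later see you soon"
model = build_bigram_model(history)
print(predict_next(model, "see"))   # → you
print(predict_next(model, "you"))   # → soon  ("soon" followed "you" twice)
print(complete_word("la", {"later", "lamp", "soon"}))
```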
In addition to knowing the meaning of each word, the system needs to know each word's class (adjective, noun, verb) and how it is used in a sentence. This also changes from language to language.
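A word-class lookup can be sketched as a simple table; the entries below are invented, and real systems must use context, since many words belong to more than one class:

```python
# A toy part-of-speech lookup table (entries invented for illustration).
WORD_CLASSES = {
    "sky": {"noun"},
    "blue": {"adjective", "noun"},
    "run": {"verb", "noun"},
}

def word_classes(word):
    """Return the possible classes of a word, or an empty set if unknown."""
    return WORD_CLASSES.get(word.lower(), set())

print(word_classes("Blue"))  # prints both possible classes of "blue"
```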
Representing Knowledge
One way of representing knowledge is with nodes connected by links. The links are weighted so that the computer can measure the strength of the association between words and determine which links are most likely. Less likely associations can still be followed without distorting the system's reasoning, because the system recognizes that unusual associations are less probable.
This can be used to recognize the important words in a sentence and find a likely conclusion based on them. For instance, when asked "What color is the sky?" the program would recognize "color" and "sky" as the important words and form the conclusion "blue", the most likely description of the color of the sky.
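The sky-color example can be sketched as a small weighted network: filter out unimportant words, then follow the strongest association. The link weights and stop-word list are invented for illustration:

```python
# A minimal sketch of a weighted association network. Nodes are words;
# link weights (invented here) measure association strength.
LINKS = {
    ("sky", "color"): {"blue": 0.8, "grey": 0.15, "red": 0.05},
    ("grass", "color"): {"green": 0.9, "brown": 0.1},
}

STOP_WORDS = {"what", "is", "the", "of", "a"}

def answer(question):
    """Extract the important words, then follow the strongest link."""
    keywords = [w.strip("?").lower() for w in question.split()
                if w.strip("?").lower() not in STOP_WORDS]
    for pair, associations in LINKS.items():
        if all(word in keywords for word in pair):
            # Choose the association with the highest weight.
            return max(associations, key=associations.get)
    return None

print(answer("What color is the sky?"))  # → blue
```

Note that the less likely answers ("grey", "red") remain in the network; they simply carry lower weights, so they are not chosen.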
Neural Networks
Artificial neural networks, or ANNs, attempt to make computers learn in a way similar to humans by mimicking the neurons of the human brain and their electrical impulses. This is done using many connected nodes, with the importance of each connection indicated by the weight of the link. Each node takes input from several input nodes and uses a transfer function, or activation function, to determine its output; the transfer function pays more attention to inputs with higher weights. This output is then passed to the next node.
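A single node can be sketched as a weighted sum passed through an activation function. The sigmoid used here is one common choice of activation function, and the example weights are arbitrary:

```python
import math

def sigmoid(x):
    """A common activation function, squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias=0.0):
    """One node: weighted sum of the inputs, passed through the activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# The first input has a much higher weight, so it dominates the output.
print(neuron_output([1.0, 1.0], [0.9, 0.1]))  # → ~0.731
```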
The neural network must be trained before it can be used. During this phase, the program calculates an output, which is compared to the expected output; the difference between the two is called the error. A process called backpropagation then adjusts the weights of the links to more accurately reflect the expected output. After training, the network can be given real data and classify it based on what it has already learned.
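The training loop can be sketched with a single neuron: compute the output, measure the error against the expected output, and nudge each weight to reduce it. This is the one-neuron case of backpropagation; the learning rate, epoch count, and OR training task are illustrative choices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=5000, rate=1.0):
    """Train one sigmoid neuron by gradient descent on the error."""
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), expected in samples:
            out = sigmoid(w1 * x1 + w2 * x2 + bias)
            error = expected - out          # difference from expected output
            # Adjust each weight in the direction that shrinks the error.
            delta = error * out * (1.0 - out)
            w1 += rate * delta * x1
            w2 += rate * delta * x2
            bias += rate * delta
    return w1, w2, bias

# Teach the neuron logical OR from four training examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train(data)
predict = lambda x1, x2: round(sigmoid(w1 * x1 + w2 * x2 + b))
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

After training, the neuron classifies the inputs it was trained on correctly, mirroring how a trained network is then given real data.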
Robotics
A robot is a computer-controlled system that performs physical tasks. Robots can be autonomous, meaning they rely on technologies such as artificial and computational intelligence to navigate and perform tasks. Robots are used to perform tasks that would be difficult or impossible for humans alone, such as cleaning up nuclear waste, doing boring or repetitive jobs, or exploring inaccessible environments such as the sea floor.
Carrier robots carry heavy loads for the military. The BEAR robot is designed to rescue soldiers, reducing the need for soldiers to put their lives at risk. Exploration robots increase scientific knowledge by exploring areas where humans cannot go. Search-and-rescue robots are used to find survivors after earthquakes or other natural disasters. Domestic robots assist around the house with cleaning. Robots excel where there is little variation in the actions they perform, which makes them ideal for manufacturing jobs. This causes problems, however, because robots can replace people, reducing the jobs available.
Robots function and learn about the world through sensors that are part of the robot's hardware. This is called computer vision: the robot knows what is around it based on what the sensors tell it. Proximity sensors determine where nearby objects are in relation to the robot; this can be done with an infrared (IR) sensor. Lasers are a more precise way of measuring the distance to nearby objects, and radar locates objects by emitting radio waves. Video cameras can also be attached to a robot, but because a camera produces a 2D image, it is of limited use on its own for judging how far away objects are. Cameras also share the limitations of the human eye, such as the inability to see in the dark.
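How proximity readings might drive a robot's behaviour can be sketched as a simple decision rule. The sensor layout, reading units, and safety threshold below are illustrative assumptions, not real hardware:

```python
# A minimal sketch of obstacle avoidance from proximity readings.
SAFE_DISTANCE_CM = 30  # illustrative threshold

def choose_action(readings):
    """Pick a movement from distance readings (in cm) reported by three
    proximity sensors mounted on the front, left, and right."""
    if readings["front"] > SAFE_DISTANCE_CM:
        return "forward"
    # Blocked ahead: turn toward whichever side has more open space.
    return "turn_left" if readings["left"] > readings["right"] else "turn_right"

print(choose_action({"front": 100, "left": 50, "right": 50}))  # → forward
print(choose_action({"front": 10, "left": 80, "right": 20}))   # → turn_left
```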
Other sensors that could be used include pressure sensors, heat sensors, magnetism sensors, pH sensors, sound sensors, and humidity sensors. Robots also have output devices, or actuators, such as robotic arms and clamps, which are controlled by relay circuits and motors. In environments where humans work alongside robots, lights, sirens, and speakers are used to alert people to the robot's actions.