Artificial intelligence


Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals, including humans. AI research has been defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.

The term "artificial intelligence" had ago been used to describe machines that mimic and display "human" cognitive skills that are associated with the human mind, such(a) as "learning" and "problem-solving". This definition has since been rejected by major AI researchers who now describe AI in terms of rationality and acting rationally, which does non limit how intelligence can be articulated.

AI applications include advanced web search engines, recommendation systems, understanding human speech, self-driving cars, automated decision-making, and competing at the highest level in strategic game systems such as chess and Go. As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. AI research has tried and discarded many different approaches since its founding, including modeling human problem solving, formal logic, large databases of knowledge and imitating animal behavior. In the first decades of the 21st century, highly mathematical-statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.

The various sub-fields of AI research are centered around specific goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it". This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; these issues have previously been explored by myth, fiction and philosophy since antiquity. Science fiction writers and futurologists have since suggested that AI may become an existential risk to humanity if its rational capacities are not overseen.

Goals


The general problem of simulating or creating intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.

Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.

Many of these algorithms proved to be insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger. Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.
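As a rough illustration of this combinatorial explosion, the short sketch below (not from the original article; the branching factor and depths are hypothetical) counts the candidate action sequences a brute-force reasoner would have to examine as the search depth grows.

```python
# Illustrative only: with a fixed number of possible actions per step, the
# number of distinct action sequences grows exponentially with search depth.
branching_factor = 10          # hypothetical: 10 possible actions per step
for depth in (2, 5, 10, 15):
    print(depth, branching_factor ** depth)
# depth 15 already yields 1,000,000,000,000,000 candidate sequences
```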

Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts.

A version of "what exists" is an ontology: the family of objects, relations, concepts, and properties formally described so that software agents can interpret them. The most general ontologies are called upper ontologies, which attempt to give a foundation for all other knowledge and act as mediators between domain ontologies that conduct specific knowledge about a particular knowledge domain field of interest or area of concern. A truly intelligent code would also need access to commonsense knowledge; the vintage of facts that an average person knows. The semantics of an ontology is typically represented in description logic, such as the Web Ontology Language.

AI research has developed tools to represent specific domains, such as objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing); as well as other domains. Among the most difficult problems in AI are the breadth of commonsense knowledge (the number of atomic facts that the average person knows is enormous) and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).

Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining "interesting" and actionable inferences from large databases), and other areas.
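The following is a minimal sketch, not drawn from any particular AI system, of how a hand-built knowledge representation might look in code: hypothetical categories arranged in an "is-a" hierarchy, one fact about an individual, and a simple deduction that category membership propagates up the hierarchy.

```python
# A toy knowledge base: "is-a" links between categories plus one instance fact.
# All names here are hypothetical examples, not a real ontology.
SUBCLASS_OF = {
    "Dog": "Mammal",
    "Mammal": "Animal",
    "Animal": "Thing",       # "Thing" plays the role of an upper-ontology root
}
INSTANCE_OF = {"Fido": "Dog"}

def categories_of(individual):
    """Deduce every category an individual belongs to by following is-a links."""
    found = []
    category = INSTANCE_OF.get(individual)
    while category is not None:
        found.append(category)
        category = SUBCLASS_OF.get(category)
    return found

print(categories_of("Fido"))   # ['Dog', 'Mammal', 'Animal', 'Thing']
```

Real systems express such hierarchies in description logics such as the Web Ontology Language rather than in ad hoc code, but the underlying deduction, membership propagating along "is-a" relations, is the same.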

An intelligent agent that can plan makes a representation of the state of the world, makes predictions about how its actions will change it, and makes choices that maximize the utility (or "value") of the available choices. In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions. However, if the agent is not the only actor, then it requires that the agent reason under uncertainty, and continuously re-assess its environment and adapt (a simple sketch of classical planning is given below).

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.
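The sketch below illustrates the classical planning setting described above: because the agent is assumed to be the only actor, actions have deterministic effects, and a plan can be found by searching over world states. The toy delivery world and its actions are hypothetical, chosen only for illustration.

```python
# A minimal classical-planning sketch: deterministic actions, breadth-first
# search over world states. The "world" here is a hypothetical toy example.
from collections import deque

# State: (robot_location, package_location); goal: package delivered to "B".
ACTIONS = {
    "move_A": lambda s: ("A", s[1]),
    "move_B": lambda s: ("B", s[1]),
    "pick_up": lambda s: (s[0], "robot") if s[0] == s[1] else s,
    "put_down": lambda s: (s[0], s[0]) if s[1] == "robot" else s,
}

def plan(start, goal_test):
    """Breadth-first search over states; returns a list of action names."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for name, effect in ACTIONS.items():
            nxt = effect(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None

print(plan(("A", "A"), lambda s: s[1] == "B"))
# ['pick_up', 'move_B', 'put_down']
```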

Machine learning (ML), a fundamental concept of AI research since the field's inception, is the study of computer algorithms that improve automatically through experience.

Unsupervised learning finds patterns in a stream of input. Supervised learning requires a human to label the input data first, and comes in two main varieties: classification and numerical regression. Classification is used to determine what category something belongs in: the program sees a number of examples of things from several categories and will learn to classify new inputs. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent classifies its responses to form a strategy for operating in its problem space.
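A minimal sketch of supervised learning follows, assuming the third-party NumPy and scikit-learn libraries are available; it fits one classifier and one regression learner to small synthetic datasets, each acting as a "function approximator" in the sense described above.

```python
# Supervised learning sketch: a classifier and a regression learner fit to
# labelled synthetic data (hypothetical labels, chosen only for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# Classification: learn to map 2-D points to one of two categories.
X_cls = rng.normal(size=(200, 2))
y_cls = (X_cls[:, 0] + X_cls[:, 1] > 0).astype(int)    # hypothetical labels
classifier = LogisticRegression().fit(X_cls, y_cls)
print(classifier.predict([[1.0, 1.0], [-1.0, -1.0]]))  # e.g. [1 0]

# Regression: learn a function describing how outputs change with inputs.
X_reg = rng.uniform(-1, 1, size=(200, 1))
y_reg = 3.0 * X_reg[:, 0] + rng.normal(scale=0.1, size=200)
regressor = LinearRegression().fit(X_reg, y_reg)
print(regressor.coef_)   # close to the underlying slope of 3.0
```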

Transfer learning is when the knowledge gained from one problem is applied to a new problem.

Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.

Natural language processing (NLP) allows machines to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of NLP include information retrieval, question answering and machine translation.

Symbolic AI used formal syntax to translate the deep structure of sentences into logic. This failed to produce useful applications, due to the intractability of logic and the breadth of commonsense knowledge. Modern statistical techniques include co-occurrence frequencies (how often one word appears near another), keyword spotting (searching for a particular word to retrieve information), transformer-based deep learning (which finds patterns in text), and others. They have achieved acceptable accuracy at the page or paragraph level and, by 2019, could generate coherent text.
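As a small example of one of the statistical techniques just mentioned, the sketch below counts co-occurrence frequencies, that is, how often one word appears within a fixed window of another; the sample sentence is hypothetical.

```python
# Counting co-occurrence frequencies: how often two words appear "near" each
# other, where "near" means within a fixed window of word positions.
from collections import Counter

text = "the cat sat on the mat and the cat slept"   # hypothetical sample text
tokens = text.split()
window = 2                        # words within 2 positions count as "near"

co_occurrences = Counter()
for i, word in enumerate(tokens):
    for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
        if i != j:
            co_occurrences[(word, tokens[j])] += 1

print(co_occurrences[("the", "cat")])   # how often "cat" appears near "the"
```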

Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition, facial recognition, and object recognition.

Computer vision is the ability to analyze visual input.

AI is heavily used in robotics.

Localization is how a robot knows its location and maps its environment. When given a small, static, and visible environment, this is easy; however, dynamic environments, such as the interior of a patient's breathing body during endoscopy, pose a greater challenge.
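The sketch below is a minimal, self-contained illustration of one common approach to localization, a discrete Bayes (histogram) filter: the robot keeps a belief over map cells, shifts it when it moves, and sharpens it when a sensor reading matches the known map. The corridor map, sensor model, and motion model here are all hypothetical assumptions, not taken from the article.

```python
# 1-D localization with a discrete Bayes filter over a known, static map.
world = ["door", "wall", "door", "wall", "wall"]        # hypothetical map
belief = [1.0 / len(world)] * len(world)                # start fully uncertain

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Bayes update: up-weight cells whose map entry matches the measurement."""
    weighted = [b * (p_hit if cell == measurement else p_miss)
                for b, cell in zip(belief, world)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, step):
    """Shift the belief by the robot's (assumed exact) motion, wrapping around."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

belief = sense(belief, "door")   # the robot sees a door
belief = move(belief, 1)         # the robot moves one cell to the right
belief = sense(belief, "wall")   # the robot now sees a wall
print(max(range(len(belief)), key=belief.__getitem__))  # most likely cell
```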