AI Without Math

Making AI and ML comprehensible

Rational agent

In AI, an agent is something that acts (such as a software “bot” or a robot). A rational agent is an agent that tries to achieve the best outcome it can. Agents are programmed to treat some outcomes as better than others, and the measure an agent uses to score outcomes — to tell better ones from worse ones — is called the agent’s objective function.

The idea that an agent tries to achieve the best outcome is often stated in technical vocabulary (or jargon). For example, some might say that the agent tries to maximize utility or maximize expected utility. (“Utility” is simply a number standing for how much the agent values an outcome; “expected” means weighted by how likely each outcome is.) Thus, the following sentences are equivalent: “an agent attempts to maximize its objective function”; “an agent does what will give it the best chance to achieve its goals”; “an agent does what it expects will maximize its utility”; “an agent tries to achieve the best expected outcome.”
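
As a concrete sketch, the short Python example below imagines an agent deciding whether to carry an umbrella. The scenario, the probabilities, and the utility numbers are all invented for illustration; the point is only that the agent scores each action with its objective function (here, expected utility) and picks the action with the highest score.

```python
# A minimal sketch of an expected-utility-maximizing agent.
# The actions, probabilities, and utility numbers are made up for illustration.

# Each action leads to possible outcomes, listed as (probability, utility) pairs.
# Here the two outcomes are "it rains" (30% chance) and "it stays dry" (70% chance).
actions = {
    "take umbrella":  [(0.3, 6), (0.7, 8)],
    "leave umbrella": [(0.3, 0), (0.7, 10)],
}

def expected_utility(outcomes):
    """The objective function: utility of each outcome, weighted by its probability."""
    return sum(probability * utility for probability, utility in outcomes)

# A rational agent chooses the action whose expected utility is highest.
best_action = max(actions, key=lambda action: expected_utility(actions[action]))
print(best_action, expected_utility(actions[best_action]))
```

Running this prints "take umbrella" with an expected utility of 7.4, compared with 7.0 for leaving it behind: the agent does what it expects will maximize its utility, even though no single outcome is guaranteed.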

Beyond AI

Some philosophers argue that actions, laws, and social rules should be judged by whether they maximize overall utility; this position is called utilitarianism. The discipline of economics, meanwhile, is largely built on the assumption that individual people act to maximize their own utility.