
An AI agent operates in an environment. For a self-driving car, the environment is the road; for a chess-playing agent, it is the chessboard. An environment may also contain other agents, such as other autonomous vehicles on the road in the self-driving example. The nature of the environment varies with the problem the AI agent is intended to solve.

The previous blog post covered the definition of an artificial intelligence agent. In this post, I briefly explain how environments are classified.

1. Fully Observable vs. Partially Observable

Consider the example of Chess, where each player has access to the complete board. Every decision is made with full knowledge of the state of the board and the moves available to the opponent. This is a Fully Observable environment. Contrast this with Poker, where players cannot see the opponent's cards and so cannot fully anticipate the opponent's play. Such an environment is Partially Observable.

The environment of an autonomous vehicle is Partially Observable, since the car cannot perceive everything around it at once: its sensors have limited range, and other vehicles can block parts of its view.
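The distinction can be sketched in a few lines of Python. The functions and the toy state dictionary below are illustrative assumptions, not part of any real framework; the point is only what slice of the true environment state each agent's percept contains.

```python
# Toy illustration of observability: a percept is the part of the
# true environment state that the agent can actually see.

def chess_percept(state):
    # Fully observable: the percept is the entire state.
    return state

def poker_percept(state, player):
    # Partially observable: a player sees only their own hand and
    # the shared community cards, never the opponent's hand.
    return {"own_hand": state["hands"][player],
            "community": state["community"]}

state = {
    "hands": {"alice": ["Ah", "Kh"], "bob": ["2c", "7d"]},
    "community": ["Qs", "Js", "Ts"],
}

print(chess_percept(state) == state)   # True: nothing is hidden
print(poker_percept(state, "alice"))   # Bob's hand is absent
```

Note that the partially observable percept is a strict projection of the full state: an agent in such an environment must act without ever recovering the hidden part.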

2. Deterministic vs. Stochastic

In a Deterministic environment, the next state is completely determined by the current state and the agent's action. In a Stochastic environment, the next state cannot be predicted with certainty. Chess is Deterministic: playing a given move on a given board always produces the same resulting position. The autonomous vehicle, however, operates in a Stochastic environment: its decisions must be based on the probable, not certain, actions of other vehicles.
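One way to see the difference is through the environment's transition function. In this minimal sketch (the numeric states, actions, and probabilities are invented purely for illustration), the deterministic version always returns the same next state, while the stochastic version samples from a distribution over possible next states.

```python
import random

def deterministic_transition(state, action):
    # Deterministic: the next state follows with certainty, as in
    # chess, where a move always yields the same board.
    return state + action

def stochastic_transition(state, action):
    # Stochastic: the next state is drawn from a distribution, as
    # on the road, where other drivers' reactions are uncertain.
    # The 90/10 split here is an arbitrary illustrative choice.
    outcomes = [state + action, state + action + 1]
    return random.choices(outcomes, weights=[0.9, 0.1])[0]

print(deterministic_transition(0, 1))  # always 1
print(stochastic_transition(0, 1))     # 1 or 2, depending on chance
```

Repeating the deterministic call gives the same answer every time; repeating the stochastic call does not, which is exactly why the agent must plan over probabilities rather than a single predicted future.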

3. Competitive vs. Collaborative

In Competitive environments, AI agents face other AI agents. Two AI agents playing chess against each other is an example of a Competitive environment. In Collaborative environments, AI agents work with each other to achieve a specific objective. Self-driving vehicles can be said to operate in a Collaborative environment as they coordinate their actions to avoid collisions.

4. Static vs. Dynamic

In a Static environment, nothing changes while the agent is deliberating; a Dynamic environment can change while the agent is still computing its response. An autonomous car has to deal with a Dynamic environment, whereas in Chess the board remains unchanged (Static) while a player contemplates the next move.

5. Episodic vs. Sequential

In a Sequential environment, current decisions affect future ones, so the agent must take its earlier decisions into account. In an Episodic environment, each decision depends only on the current state of the environment; what happened before is irrelevant.
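A minimal sketch of the contrast, using two standard textbook-style examples (a robot inspecting parts for the episodic case, chess for the sequential case; all names and strings below are made up for illustration):

```python
# Episodic: each decision uses only the current percept. A robot
# classifying parts on a conveyor belt does not care what the
# previous part looked like.
def episodic_agent(percept):
    return "reject" if percept == "defective" else "accept"

# Sequential: each decision may depend on everything seen so far,
# as in chess, where earlier moves shape the current position.
class SequentialAgent:
    def __init__(self):
        self.history = []

    def act(self, percept):
        self.history.append(percept)
        return f"decision after {len(self.history)} percepts"

print(episodic_agent("defective"))  # 'reject', regardless of the past
agent = SequentialAgent()
agent.act("e4")
print(agent.act("e5"))  # 'decision after 2 percepts'
```

The episodic agent is stateless, while the sequential agent has to carry state (here, an explicit history) from one decision to the next.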

You have reached the end :). Let me know what you think of this post in the comments below.

