Core Concepts
Agent
The agent is the decision maker. It makes decisions based on its own current state and on the information provided by the environment, such as rewards.
In reinforcement learning, the agent is also the solution to the problem.
Environment
The environment is the surroundings of the agent. It interacts with the agent by providing information and rewards. An environment can change by itself or the change can be caused by an action from the agent.
An environment has its own state, and changes in the environment can be viewed as state transitions. However, this state may not be fully accessible to the agent. For example, suppose we put a video camera in a meeting room; the environment is then the meeting room. If an agent inside the camera tracks the movement of people in the room, it can only access visual information; the audio in the environment is not accessible to it.
The problem that the reinforcement learning algorithm aims to solve can be formulated in terms of the environment. Conceptually, an environment provides the following two services:
- It provides rewards to the agent.
- It reacts to the agent and "performs" the (environment) state transition. This is the behavior of the environment.
The second service takes the form of \(P(s', r | s, a)\), the probability of transitioning to state \(s'\) with reward \(r\) after taking action \(a\) in state \(s\). This probability is not always directly available to the agent, though.
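In code, this model can be represented roughly as follows. The two interfaces below are reconstructed from how EnvironmentProxy and StateTransitionRecord are used in the value iteration code later in this post; their exact shape is an assumption.

/**
 * One possible outcome (s', r) of taking an action in a state, together with
 * its probability P(s', r | s, a).
 * (Reconstructed from its usage in the value iteration code below.)
 *
 * @param <STATE> The state of the environment.
 */
public interface StateTransitionRecord<STATE> {

    STATE getToState();

    double getReward();

    double getProbability();
}

/**
 * The agent-facing view of the environment model P(s', r | s, a).
 * (Reconstructed from its usage in the value iteration code below.)
 *
 * @param <STATE>  The state of the agent.
 * @param <ACTION> The action of the agent.
 */
public interface EnvironmentProxy<STATE, ACTION> {

    /**
     * Enumerates the possible transitions for taking the given action in the given state.
     */
    Iterable<StateTransitionRecord<STATE>> generateScenarios(STATE state, ACTION action);
}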
[Figure: Interaction between Environment and Agent]
Reward
Rewards are feedback provided by the environment to the agent. Rewards are
- sequential (as opposed to one-shot)
- evaluative (as opposed to supervised)
- sampled (as opposed to exhaustive)
Overall, a reward is a hint about the goodness of the actions taken by the agent.
Discount Rate
Future rewards are less attractive than near-term rewards. Therefore, we apply a discount rate to compute their present value, much like the discount rate used in discounted cash flow calculations in finance.
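Writing the discount rate as \(\gamma \in [0, 1]\), the quantity the agent tries to maximize is the discounted return, i.e. the present value of the stream of future rewards:
\[ G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \]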
State Value Function
The state value function is also called V-function. It's the value of a given agent state.
/**
 * Interface for the state value function (V-function).
 *
 * @param <STATE> The state of the agent.
 */
public interface StateValueFunction<STATE> {

    /**
     * Gets the value of the state.
     *
     * @param s The agent state.
     * @return The value of the state.
     */
    double getValue(STATE s);
}
Mathematically,
\[ V(s) = \mathbb{E}\left[ G_t \mid S_t = s \right] = \mathbb{E}\left[ \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \mid S_t = s \right] \]
where \(G_t\) is the discounted return and \(\gamma\) is the discount rate.
Action Value Function
The action value function is also called Q-function. It's the value of taking action \(a\) given the current state \(s\).
/**
 * Interface for the action value function (Q-function).
 *
 * @param <STATE>  The state of the agent.
 * @param <ACTION> The action of the agent.
 */
public interface ActionValueFunction<STATE, ACTION> {

    /**
     * Gets the value of taking the specific action given the state.
     *
     * @param s The state of the agent.
     * @param a The action to take.
     * @return The value of the action given the state.
     */
    double getValue(STATE s, ACTION a);
}
Mathematically,
\[ Q(s, a) = \mathbb{E}\left[ G_t \mid S_t = s, A_t = a \right] \]
If we have the state value function \(V\) and the environment model, we can calculate the action value function \(Q\) using its definition: \( Q(s, a) = \sum_{s', r} P(s', r | s, a) \left[ r + \gamma V(s') \right] \). If we have the action value function, we can rewrite the state value function as the value of the best action available in the state: \( V(s) = \max_{a} Q(s, a) \), which is exactly what the greedy derivation in the value iteration code below does.
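As an illustration of the first direction, here is a hypothetical helper (not part of the interfaces above) that computes \(Q\) from \(V\) using the EnvironmentProxy sketch from the Environment section:

public final class ValueFunctionConversions {

    private ValueFunctionConversions() {
    }

    /**
     * Hypothetical helper: Q(s, a) = sum over (s', r) of P(s', r | s, a) * (r + gamma * V(s')).
     */
    public static <STATE, ACTION> double qFromV(
            final EnvironmentProxy<STATE, ACTION> envProxy,
            final StateValueFunction<STATE> v,
            final double discountRate,
            final STATE s,
            final ACTION a) {
        double q = 0;
        for (final StateTransitionRecord<STATE> record : envProxy.generateScenarios(s, a)) {
            q += record.getProbability()
                    * (record.getReward() + discountRate * v.getValue(record.getToState()));
        }
        return q;
    }
}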
Policy
A policy prescribes the action to take for a given non-terminal state.
/**
 * Interface for policies.
 *
 * @param <STATE>  The state of the agent.
 * @param <ACTION> The action of the agent.
 */
public interface Policy<STATE, ACTION> {

    /**
     * Selects an action to take.
     *
     * @param s The state of the agent.
     * @return The action to take.
     */
    ACTION selectAction(STATE s);
}
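For example, acting greedily with respect to a Q-function yields a policy. The class below is a hypothetical illustration of the interface, not code from this project; it assumes the list of candidate actions is known up front.

import java.util.List;

/**
 * Hypothetical example: a policy that acts greedily with respect to a given Q-function.
 */
public class GreedyPolicy<STATE, ACTION> implements Policy<STATE, ACTION> {

    private final ActionValueFunction<STATE, ACTION> q;
    private final List<ACTION> actions; // all candidate actions

    public GreedyPolicy(final ActionValueFunction<STATE, ACTION> q, final List<ACTION> actions) {
        this.q = q;
        this.actions = actions;
    }

    @Override
    public ACTION selectAction(final STATE s) {
        // Pick the action with the highest Q(s, a).
        ACTION best = null;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (final ACTION a : this.actions) {
            final double value = this.q.getValue(s, a);
            if (value > bestValue) {
                bestValue = value;
                best = a;
            }
        }
        return best;
    }
}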
Algorithm
In this section, we present two algorithms that compute the state value function and, from it, a policy: policy iteration and value iteration.
Policy Iteration
There are two steps in policy iteration:
- policy evaluation
- policy improvement
Policy evaluation computes the state value function of the current policy. As mentioned earlier, the state value function and the action value function can be derived from each other when the environment model is available, so after the policy evaluation step we effectively have the action value function too. From the action value function, we can build a new, greedy policy: for each state, we select the action with the highest value. This is the policy improvement step, and the two steps are repeated until the policy stops changing.
The policy evaluation step repeatedly applies the Bellman expectation update until the value function stabilizes:
\[ V_{k+1}(s) = \sum_{a} \pi(a | s) \sum_{s', r} P(s', r | s, a) \left[ r + \gamma V_k(s') \right] \]
where \(\pi(a | s)\) is the probability that the current policy selects action \(a\) in state \(s\) (for a deterministic policy, the outer sum has a single term).
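A minimal sketch of the full policy iteration loop is given below. It reuses the helper types of the value iteration code in the next section (IStateValueFunction, StateValueFunction, IPolicy, Policy, EnvironmentProxy); their exact shape, in particular a mutable Policy that supports both setAction and selectAction, is an assumption.

import java.util.List;

public class PolicyIteration<STATE, ACTION> {

    private final List<STATE> states;
    private final List<ACTION> actions;

    public PolicyIteration(final List<STATE> states, final List<ACTION> actions) {
        this.states = states;
        this.actions = actions;
    }

    public IPolicy<STATE, ACTION> run(
            final EnvironmentProxy<STATE, ACTION> envProxy,
            final double discountRate,
            final double convergenceThreshold) {
        // Start from an arbitrary deterministic policy.
        final Policy<STATE, ACTION> policy = new Policy<>();
        this.states.forEach(s -> policy.setAction(s, this.actions.get(0)));
        while (true) {
            // 1. Policy evaluation: compute V for the current policy.
            final IStateValueFunction<STATE> v = evaluate(envProxy, policy, discountRate, convergenceThreshold);
            // 2. Policy improvement: act greedily with respect to Q(s, a).
            boolean stable = true;
            for (final STATE s : this.states) {
                ACTION best = null;
                double bestValue = Double.NEGATIVE_INFINITY;
                for (final ACTION a : this.actions) {
                    final double q = backup(envProxy, v, discountRate, s, a);
                    if (q > bestValue) {
                        bestValue = q;
                        best = a;
                    }
                }
                if (!best.equals(policy.selectAction(s))) {
                    policy.setAction(s, best);
                    stable = false;
                }
            }
            // The policy is optimal once improvement no longer changes it.
            if (stable) {
                return policy;
            }
        }
    }

    // Iterates the Bellman expectation update until V stops changing.
    private IStateValueFunction<STATE> evaluate(
            final EnvironmentProxy<STATE, ACTION> envProxy,
            final IPolicy<STATE, ACTION> policy,
            final double discountRate,
            final double convergenceThreshold) {
        final IStateValueFunction<STATE> v = new StateValueFunction<>();
        this.states.forEach(s -> v.setValue(s, 0));
        double maxChange;
        do {
            maxChange = 0;
            for (final STATE s : this.states) {
                final double updated = backup(envProxy, v, discountRate, s, policy.selectAction(s));
                maxChange = Math.max(maxChange, Math.abs(updated - v.getValue(s)));
                v.setValue(s, updated);
            }
        } while (maxChange >= convergenceThreshold);
        return v;
    }

    // Expected one-step return: sum over (s', r) of P(s', r | s, a) * (r + gamma * V(s')).
    private static <STATE, ACTION> double backup(
            final EnvironmentProxy<STATE, ACTION> envProxy,
            final IStateValueFunction<STATE> v,
            final double discountRate,
            final STATE s,
            final ACTION a) {
        double value = 0;
        for (final StateTransitionRecord<STATE> record : envProxy.generateScenarios(s, a)) {
            value += record.getProbability()
                    * (record.getReward() + discountRate * v.getValue(record.getToState()));
        }
        return value;
    }
}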
Value Iteration
The value iteration method is based on the Bellman optimality update:
\[ V_{k+1}(s) = \max_{a} \sum_{s', r} P(s', r | s, a) \left[ r + \gamma V_k(s') \right] \]
Here is a simple implementation:
import java.util.List;

import org.apache.commons.lang3.tuple.Pair; // assumption: any immutable 2-tuple would do

// IStateValueFunction, IActionValueFunction, IPolicy and their map-backed
// implementations (StateValueFunction, ActionValueFunction, Policy), as well as
// EnvironmentProxy and StateTransitionRecord, are small helper types of the
// project that are not shown in full in this post.
public class ValueIteration<STATE, ACTION> {

    private final List<STATE> states;
    private final List<ACTION> actions;

    public ValueIteration(final List<STATE> states, final List<ACTION> actions) {
        this.states = states;
        this.actions = actions;
    }

    public Pair<IStateValueFunction<STATE>, IPolicy<STATE, ACTION>> run(
            final EnvironmentProxy<STATE, ACTION> envProxy,
            final double discountRate,
            final double convergenceThreshold) {
        // Initialize the state value function to 0 for every state.
        final IStateValueFunction<STATE> stateValueFunction = new StateValueFunction<>();
        this.states.forEach(s -> stateValueFunction.setValue(s, 0));
        final IActionValueFunction<STATE, ACTION> actionValueFunction = new ActionValueFunction<>();
        while (true) {
            actionValueFunction.reset();
            // One sweep of the backup Q(s, a) = sum over (s', r) of P(s', r | s, a) * (r + gamma * V(s')).
            for (final var state : this.states) {
                for (final var action : this.actions) {
                    for (final StateTransitionRecord<STATE> record : envProxy.generateScenarios(state, action)) {
                        final double delta = record.getProbability()
                                * (record.getReward() + discountRate * stateValueFunction.getValue(record.getToState()));
                        actionValueFunction.setValue(state, action,
                                actionValueFunction.getValue(state, action) + delta);
                    }
                }
            }
            // Stop once V(s) = max_a Q(s, a) no longer changes by more than the threshold.
            if (computeDifference(stateValueFunction, actionValueFunction) < convergenceThreshold) {
                break;
            }
            stateValueFunction.copyFrom(deriveStateValueFunction(actionValueFunction));
        }
        return Pair.of(stateValueFunction, derivePolicy(actionValueFunction));
    }

    // V(s) = max over a of Q(s, a).
    private static <STATE, ACTION> IStateValueFunction<STATE> deriveStateValueFunction(
            final IActionValueFunction<STATE, ACTION> q) {
        final IStateValueFunction<STATE> v = new StateValueFunction<>();
        for (final var state : q.getStates()) {
            double maxValue = Double.NEGATIVE_INFINITY;
            for (final var action : q.getActions()) {
                maxValue = Math.max(maxValue, q.getValue(state, action));
            }
            v.setValue(state, maxValue);
        }
        return v;
    }

    // The greedy policy: pi(s) = argmax over a of Q(s, a).
    private static <STATE, ACTION> IPolicy<STATE, ACTION> derivePolicy(
            final IActionValueFunction<STATE, ACTION> q) {
        final Policy<STATE, ACTION> policy = new Policy<>();
        for (final var state : q.getStates()) {
            double maxValue = Double.NEGATIVE_INFINITY;
            ACTION optimalAction = null;
            for (final var action : q.getActions()) {
                final double value = q.getValue(state, action);
                if (value > maxValue) {
                    maxValue = value;
                    optimalAction = action;
                }
            }
            policy.setAction(state, optimalAction);
        }
        return policy;
    }

    // Largest absolute difference between the current V and the V derived from Q.
    private static <STATE, ACTION> double computeDifference(
            final IStateValueFunction<STATE> v,
            final IActionValueFunction<STATE, ACTION> q) {
        final var w = deriveStateValueFunction(q);
        double maxDifference = 0;
        for (final var state : v.getStates()) {
            maxDifference = Math.max(maxDifference, Math.abs(v.getValue(state) - w.getValue(state)));
        }
        return maxDifference;
    }
}
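Usage could look roughly like this, assuming Pair exposes getLeft/getRight as in Apache Commons Lang, and that MyState, MyAction, states, actions, and envProxy describe a concrete MDP of your own:

// Illustrative fragment only; the surrounding types are placeholders.
final ValueIteration<MyState, MyAction> valueIteration = new ValueIteration<>(states, actions);
final Pair<IStateValueFunction<MyState>, IPolicy<MyState, MyAction>> result =
        valueIteration.run(envProxy, 0.9, 1e-6);
final IStateValueFunction<MyState> optimalValues = result.getLeft();
final IPolicy<MyState, MyAction> optimalPolicy = result.getRight();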