In previous posts, we explored Part I of the seminal book Reinforcement Learning by Sutton and Barto [1] (*). In that part, we delved into the three fundamental methods underlying nearly every modern Reinforcement Learning (RL) algorithm: Dynamic Programming (DP), Monte Carlo methods (MC), and Temporal Difference Learning (TD). We not only discussed algorithms from each field in depth but also implemented each one in Python.
Part I of the book focuses on tabular solution methods: approaches suited to problems small enough to be represented in table form. For instance, with Q-learning, we can compute and store a complete Q-table containing every possible state-action pair. In contrast, Part II of Sutton's book tackles approximate solution methods. When state and action spaces become too large, or even infinite, we must generalize. Consider the challenge of playing Atari games: the state space is far too vast to model exhaustively. Instead, deep neural networks are used to compress the state into a latent vector, which then serves as the basis for an approximated value function [2].
While we'll venture into Part II in upcoming posts, I'm excited to announce a new series: we will benchmark all the algorithms introduced in Part I against each other. This post serves both as a summary and as an introduction to our benchmarking framework. We will evaluate each algorithm based on how quickly and effectively it can solve increasingly larger Gridworld environments. In future posts, we plan to extend our experiments to more challenging scenarios, such as two-player games, where the differences between these methods will be even more apparent.
To summarize, in this post we will:
- Introduce the benchmark task and discuss our comparison criteria.
- Provide a chapter-by-chapter summary of the methods introduced in Sutton's book, along with initial benchmarking results.
- Identify the best-performing methods from each group and deploy them in a larger-scale benchmarking experiment.
Introducing the Benchmark Task and Experiment Planning
On this publish we’ll work with the Gymnasium [3] atmosphere “Gridworld”. It’s primarily a maze-finding job, during which the agent has to get from the top-left nook of the maze to the bottom-right tile (the current) — with out falling into any of the icy lakes:
The state area is a quantity between 0 and N — 1, the place N is the maximal variety of tiles (16 in our case). There are 4 actions the agent can execute in each step: going left, proper, up or down. Reaching the purpose yields reward 1, falling into the lake ends the episode with none reward.
The great factor about this atmosphere is that one can generate random worlds of arbitrary measurement. Thus, what we’ll do with all strategies, is plot the variety of steps / updates wanted to unravel the atmosphere versus the atmosphere measurement. In actual fact, Sutton does this in some elements of the e-book, too, so we are able to consult with that.
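To make the setup concrete, here is a minimal sketch of how such an environment can be created and queried. It assumes a FrozenLake-style Gymnasium environment with randomly generated maps; the exact environment wrapper used in the benchmark may differ:

```python
import gymnasium as gym
from gymnasium.envs.toy_text.frozen_lake import generate_random_map

# Generate a random 4x4 map; the goal is always in the bottom-right corner.
env = gym.make("FrozenLake-v1", desc=generate_random_map(size=4), is_slippery=False)

state, _ = env.reset()
print(env.observation_space.n)  # 16 states for a 4x4 grid
print(env.action_space.n)       # 4 actions: left, down, right, up

# Take one random step: the reward is 1 only when the goal tile is reached.
next_state, reward, terminated, truncated, _ = env.step(env.action_space.sample())
```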
Preliminaries
I’d like to begin with some normal notes — hoping these will information you thru my thought course of.
It isn’t straightforward to check algorithms in a “truthful” manner. When implementing the algorithms I primarily regarded for correctness, but in addition simplicity — that’s, I needed readers to simply be capable to join the Python code with pseudocode from Sutton’s e-book. For “actual” use instances, one would absolutely optimize the code extra, and likewise use a big array of tips frequent in RL, comparable to utilizing decaying exploration, optimistic initialization, studying charge tuning, and extra. Additional, one would take nice care to tune the hyperparameters of the deployed algorithm.
Applied RL Tricks
Due to the large number of algorithms under investigation, I cannot do that here. Instead, I identified two important mechanisms and measured their effectiveness on an algorithm known to work quite well: Q-learning. These are (see the code sketch after this list):
- Intermediate rewards: instead of only rewarding the agent for reaching the goal, we reward it for progress along the maze. We quantify this by the (normalized) difference in x and y coordinates between the current and previous state. This exploits the fact that the goal in every Gridworld environment is always at the bottom right, and thus higher x / y coordinates are usually better (although one might still get "stuck" if an icy lake lies between the agent and the goal). Since this difference is normalized by the number of states, its contribution is small, so that it does not overshadow the reward for reaching the goal.
- Decaying exploration: throughout this post series, the exploration-exploitation dilemma came up frequently. It describes the trade-off between exploiting states / actions already known to be good, and exploring less-visited states, potentially finding even better solutions at the risk of wasting time in less promising regions. One common technique addressing this is to decay exploration: starting with a high amount of exploration early on, and slowly decaying it to lower levels. We do this by linearly decaying ε from 1 (100% random exploration) to 0.05 over 1000 steps.
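The following is a minimal sketch of how these two tricks can be implemented. The helper names (`shaped_reward`, `epsilon_at`) and the exact coordinate handling are illustrative and not necessarily identical to the benchmark code:

```python
def shaped_reward(reward: float, state: int, prev_state: int,
                  n_states: int, grid_size: int) -> float:
    """Add a small intermediate reward for progress towards the bottom-right corner."""
    x, y = state % grid_size, state // grid_size
    px, py = prev_state % grid_size, prev_state // grid_size
    # Normalized progress in x / y direction; small compared to the goal reward of 1.
    progress = ((x - px) + (y - py)) / n_states
    return reward + progress


def epsilon_at(step: int, eps_start: float = 1.0,
               eps_end: float = 0.05, decay_steps: int = 1000) -> float:
    """Linearly decay epsilon from eps_start to eps_end over decay_steps steps."""
    fraction = min(step / decay_steps, 1.0)
    return eps_start + fraction * (eps_end - eps_start)
```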
Let’s take a look at how Q-learning performs with these strategies:

As we can see, in the baseline setup the number of steps needed quickly grows and reaches the maximal number of allowed steps (100,000), meaning the algorithm did not solve the environment within the allotted number of steps. Decaying ε alone did not contribute much either. However, adding intermediate rewards proved to be extremely effective, and the combination of both intermediate rewards and decaying ε performed best.
Thus, for most methods to come we start with the "naïve" environment, the baseline implementation. Later we show results for the "improved" environment, consisting of intermediate rewards and decaying exploration.
Comparison Criteria
As seen in the previous section, I chose the number of steps needed until the learned policy solves the Gridworld environment as the default way of comparing methods. This seems a bit fairer than just measuring elapsed time, since time depends on the concrete implementation (similar to the idea behind Big O notation), and, as mentioned above, I did not optimize for speed. However, it is important to note that steps can also be misleading: one step in DP methods, for example, contains a loop over all states, while one step in MC and TD methods is the generation of a single episode (actually, for TD methods one would usually count one step as one value update, i.e. an episode generation consists of multiple steps; however, I made this more comparable to MC methods on purpose). For this reason we also show elapsed time in some cases.
Experiment Structure
To reduce variance, for each Gridworld size we run each method three times, and then save the lowest number of steps needed.
The code required to run all of the following benchmarking can be found on GitHub.
Recap and Benchmarking of All Algorithms
With the fundamentals out of the way, let's properly get started. In this section we will recap all of the algorithms introduced in Part I of Sutton's book. Further, we will benchmark them against each other on the previously introduced Gridworld task.
Dynamic Programming
We start with Chapter 4 of Sutton's book, which describes methods from DP. These can be applied to a wide variety of problems, always building on the principle of iteratively constructing larger solutions from smaller subproblems. In the context of RL, DP methods maintain a Q-table which is filled out incrementally. For this, we require a model of the environment, and then, using this model, update the expected value of states or state-action pairs depending on the possible successor states. Sutton introduces two methods we picked up in our corresponding post: policy and value iteration.
Let's start with policy iteration. This consists of two interleaved steps, namely policy evaluation and policy improvement. Policy evaluation uses DP to, as the name says, evaluate the current policy: we incrementally update the state estimates using the model and the policy. Next comes policy improvement, which employs a fundamental concept of RL: according to the policy improvement theorem, any policy gets better when we change the action chosen in a single state to a better one. Following this, we construct the new policy from the Q-table in greedy fashion. This is repeated until the policy has converged.
The corresponding pseudocode looks as follows:

Let’s come to worth iteration. That is similar to coverage iteration, however with a easy, but essential modification: in each loop, just one step of coverage analysis is run. It may be proven that this nonetheless converges to the optimum coverage, and total does so quicker than coverage iteration:

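For illustration, here is a minimal value iteration sketch in Python. It assumes the environment model is available as transition tuples `P[s][a] = [(prob, next_state, reward, done), ...]`, as in the Gymnasium toy-text environments; the benchmark implementation may differ in details:

```python
import numpy as np

def value_iteration(P, n_states, n_actions, gamma=0.99, theta=1e-8):
    """Sweep over all states until the value function changes by less than theta."""
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            q_values = [
                sum(prob * (reward + gamma * V[ns] * (not done))
                    for prob, ns, reward, done in P[s][a])
                for a in range(n_actions)
            ]
            best = max(q_values)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            break
    # Derive the greedy policy from the converged value function.
    policy = np.array([
        np.argmax([
            sum(prob * (reward + gamma * V[ns] * (not done))
                for prob, ns, reward, done in P[s][a])
            for a in range(n_actions)
        ])
        for s in range(n_states)
    ])
    return V, policy
```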
For extra particulars, right here’s my corresponding publish about DP strategies.
Results
Now it’s time to see what these first two algorithms are actually product of. We run the experiment sketched above, and get the next plot:

Both methods are able to solve all generated Gridworld sizes in the minimal number of steps, 100. Surprising? Well, this actually reveals both a strength and a weakness of DP methods, and simultaneously of our methodology: DP methods are "thorough", in the sense that they require a complete model of the world and then iterate over all states, yielding a solution after just a few passes over all states. However, this means that all states must be estimated until convergence, even though some of them might be much less interesting, and this scales quite badly with the environment size. In fact, one measured step here contains a full sweep over all states, which indicates that for these methods time is a better measure.
For this, we get the following graph:

Now we can see an increase in the compute needed with respect to the number of states. And we can also see that, as claimed, value iteration converges much faster and scales much better. Note that the x-axis labels denote n, with the Gridworld having a size of n x n.
Monte Carlo Methods
Next in our post series on RL we covered MC methods. These can learn from experience alone, i.e. one can run them in any kind of environment without having a model of it, which is a stunning realization and very useful: often we don't have such a model; in other cases, it would be too complex and impractical to use. Consider the game of Blackjack: while we can certainly model all possible outcomes and the corresponding probabilities, it is a very tedious task, and learning to play simply by playing is a very tempting idea. Due to not using a model, MC methods are unbiased, but on the downside their estimates have high variance.
One challenge when implementing these methods is making sure that all state-action pairs are continuously visited, and thus updated. Since we don't have a model, we cannot simply iterate over all possible combinations (compare e.g. DP methods), but (in a way) randomly explore the environment. If, due to this, we missed some states entirely, we would lose theoretical convergence guarantees, which would also translate into practice.
One way of satisfying this is the exploring starts assumption (ES): we start each episode in a random state and also choose the first action randomly. Apart from that, MC methods can be implemented rather simply: we just play out full episodes, and set the expected value of state-action pairs to the average obtained return.
MC with ES looks as follows:

To remove the assumption of ES, we can resort to two classes of algorithms: on- and off-policy methods. Let's start with the on-policy one.
This is actually not too different from the ES algorithm; we simply use an ε-greedy policy for generating episodes. That is, we remove the assumption of ES and use a "soft" instead of a "hard" policy for generating episodes: the policy used in every iteration is not fully greedy, but ε-greedy, which guarantees that in the limit we see all possible state-action pairs:

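As a rough illustration, here is a compact sketch of on-policy MC control with an ε-greedy policy. It assumes a Gymnasium-style `env` and uses the every-visit variant for brevity (the book uses first-visit MC), so treat it as a simplified sketch rather than the exact benchmark implementation:

```python
import numpy as np
from collections import defaultdict

def on_policy_mc(env, n_episodes=10_000, gamma=0.99, epsilon=0.1):
    """Every-visit on-policy MC control with an epsilon-greedy policy (simplified)."""
    n_actions = env.action_space.n
    Q = defaultdict(lambda: np.zeros(n_actions))
    counts = defaultdict(lambda: np.zeros(n_actions))

    for _ in range(n_episodes):
        # Generate one full episode with the current epsilon-greedy policy.
        episode, done = [], False
        state, _ = env.reset()
        while not done:
            if np.random.rand() < epsilon:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            episode.append((state, action, reward))
            state, done = next_state, terminated or truncated

        # Walk backwards through the episode and average the observed returns.
        G = 0.0
        for state, action, reward in reversed(episode):
            G = gamma * G + reward
            counts[state][action] += 1
            Q[state][action] += (G - Q[state][action]) / counts[state][action]
    return Q
```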
Off-policy methods follow the idea of splitting exploration and learning into two policies. We maintain a policy π, which we want to optimize, and a behavior policy, b.
However, we can't simply use b everywhere in our algorithm. When generating an episode and computing returns, we obtain:

I.e., the resulting value is the expected value under b, not under π.
This is where importance sampling comes in. We can fix this expectation with the right ratio:

This ratio is defined by:

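For reference (the original figure is not reproduced here), this is the standard importance sampling ratio as defined in Sutton and Barto:

```latex
\rho_{t:T-1} = \prod_{k=t}^{T-1} \frac{\pi(A_k \mid S_k)}{b(A_k \mid S_k)}
```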
In our case, we obtain the following formula:

(Note that this uses weighted importance sampling, instead of "naïve" importance sampling.)
We could of course compute these ratios naively in every step. However, Sutton introduces a clever scheme for updating these values (denoted by W) incrementally, which is much more efficient. In fact, in my original post I showed the naive version too, as I believe it helps with understanding. However, since here we mainly care about benchmarking, and the "naïve" and the "incremental" versions are identical up to performance, we only list the slightly more complex incremental version here.
In pseudocode the corresponding algorithm looks as follows:

Note that, as opposed to our initial post introducing these methods, where the behavior policy was simply random, here we pick a better one, namely a policy that is ε-greedy with respect to the current Q-table.
For more details, here's my corresponding post on MC methods.
Results
With that, let’s evaluate these three algorithms on small Gridworld environments. Notice that one step right here denotes one full episode generated:

We observe that off-policy MC already times out at a Gridworld size of 5×5, and, although MC with ES and on-policy MC perform better, they also start to struggle with larger sizes.
This might be somewhat surprising, and disappointing for MC fans. Don't worry, we will manage to boost this; still, it reveals a weakness of these algorithms: in "large" environments with sparse rewards, MC methods basically have to hope to stumble upon the goal by chance, the probability of which decreases exponentially with the size of the environment.
Thus, let's try to make the task easier for the model, and use the previously introduced tricks empirically found to help the performance of TD learning: adding intermediate rewards, and ε-decay, i.e. our "improved" setup.
In fact, with this all methods fare considerably better:

However, now MC ES is causing problems. Thus, let's put it aside and continue without it: ES was, after all, a theoretical concept on the way to developing MC methods, and clunky to use / implement (some might remember how I implemented having the environment start in random states …):

Here, at least, we get close to the results of DP. Note that I capped the maximal number of steps at 100,000, so whenever this number shows up in the graph it means that the algorithm could not solve the environment within the given step limit. On-policy MC actually seems to perform really well, with the number of steps needed barely increasing, but off-policy MC seems to perform worse.
Discussion
To me, MC methods perform surprisingly well, given that they essentially stumble around the environment randomly in the beginning, hoping to find the goal through exploration alone. Of course, this is not fully true: their performance (speaking of on-policy MC) becomes really good only after enabling intermediate rewards, which guide the model towards the goal. In this setup MC methods seem to perform really well, one potential reason being that they are unbiased, and less sensitive to hyperparameter tuning and the like.
Temporal-Difference Learning
Let's move on to TD methods. These can be seen as combining the strengths of both approaches introduced previously: similar to MC, they don't need a model of the environment, but they still build upon previous estimates, i.e. they bootstrap, as in DP.
Let's quickly recap the DP and MC approaches:
DP methods turn the Bellman equation into an update rule, and compute the value of a state based on the estimated values of its successor states:

MC methods, on the other hand, play out full episodes and then update their value estimates based on the observed return:

TD methods combine these two ideas. They play out full episodes, but after every step they update the value estimates using the observed reward and the previous estimate:

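Since the original figures are not reproduced here, these are the standard (state-value) forms of the three update rules, as found in the book:

```latex
\text{DP:}\quad V(s) \leftarrow \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\,\bigl[r + \gamma V(s')\bigr]
\text{MC:}\quad V(S_t) \leftarrow V(S_t) + \alpha \bigl[G_t - V(S_t)\bigr]
\text{TD(0):}\quad V(S_t) \leftarrow V(S_t) + \alpha \bigl[R_{t+1} + \gamma V(S_{t+1}) - V(S_t)\bigr]
```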
Some of the most fundamental RL algorithms stem from this field, and we will discuss them in the following.
Let's begin with Sarsa. First, we modify the update rule introduced above to work with state-action pairs:

With this, Sarsa is actually introduced rather quickly: we play episodes, and update values following our current policy. The name comes from the tuples used in the updates:

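For completeness (the original figures are omitted here), the Sarsa update and the quintuple that gives the algorithm its name are, in standard notation:

```latex
Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \bigl[R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)\bigr]
(S_t,\; A_t,\; R_{t+1},\; S_{t+1},\; A_{t+1})
```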
In pseudocode this looks as follows:

Next up we have Q-learning. This is very similar to Sarsa, with one key difference: it is an off-policy algorithm. Instead of simply following the executed transition during the update, we take the maximal Q-value of the successor state:

You’ll be able to image this as making a conduct coverage b, which is the same as π, besides being grasping within the transitions underneath query.
The pseudocode appears to be like like this:

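As a point of reference, a minimal tabular Q-learning loop could look as follows. This is a simplified sketch assuming a Gymnasium-style `env`; the benchmark version additionally applies the reward shaping and ε-decay described earlier:

```python
import numpy as np

def q_learning(env, n_episodes=5_000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(n_episodes):
        state, _ = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection (the behavior policy).
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            # Off-policy update: bootstrap with the maximal Q-value of the successor state.
            target = reward + gamma * np.max(Q[next_state]) * (not terminated)
            Q[state, action] += alpha * (target - Q[state, action])
            state, done = next_state, terminated or truncated
    return Q
```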
Another algorithm is Expected Sarsa, which (you guessed it) is an extension of Sarsa. Instead of following the single transition executed by the policy, we account for all possible actions in the successor state, and weigh them by how likely they are under the current policy:

The last algorithm in this chapter is an extension of Q-learning. Q-learning suffers from a problem known as maximization bias: since it uses a maximum over estimated values, the resulting estimate can have a positive bias. We can address this by using two Q-tables: for each update we use one for selecting a value-maximizing action, and the other for computing the update target. Which one is used where is determined by a coin flip. The algorithm is called Double Q-learning:

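A minimal sketch of the double update step (one coin flip per transition; the variable names are illustrative and the tables are assumed to be NumPy arrays):

```python
import numpy as np

def double_q_update(Q1, Q2, state, action, reward, next_state,
                    terminated, alpha=0.1, gamma=0.99):
    """One Double Q-learning update: one table selects the argmax action, the other evaluates it."""
    if np.random.rand() < 0.5:
        Q1, Q2 = Q2, Q1  # swap the roles of the two tables with probability 0.5
    best_action = int(np.argmax(Q1[next_state]))
    target = reward + gamma * Q2[next_state, best_action] * (not terminated)
    Q1[state, action] += alpha * (target - Q1[state, action])
```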
Results
Let's have a look at the results, starting with the naïve environment:

We can see that both Q-learning methods start to run into problems at Gridworld sizes of 11 x 11.
Thus, let's apply our known tricks, yielding the "improved" setup:

All methods can now find solutions considerably quicker; only Expected Sarsa falls behind. This may well be the case: it is significantly less used than Q-learning or Sarsa, and perhaps more of a theoretical concept.
Thus, let's continue without this method and see how large the world sizes we can solve become:

Q-learning can now also solve grid sizes of 25 x 25 without problems, but Sarsa and Double Q-learning start to degrade.
More details can be found in my introductory post about TD methods.
Discussion
In the improved setup, TD methods generally perform well. We only eliminated Expected Sarsa early, which after all is not such a common algorithm.
"Simple" Sarsa and Double Q-learning struggle for larger environment sizes, while Q-learning performs well overall. The latter is somewhat surprising, since Double Q-learning should address some of the shortcomings of standard Q-learning, namely the maximization bias. Potentially, this effect is already mitigated by running each experiment several times and keeping the best run. Another hypothesis could be that Double Q-learning takes longer to converge, since the number of parameters has also doubled, which would suggest that the strength of Double Q-learning shows better for more complex problems given more time.
As mentioned, Q-learning performs better than Sarsa. This mirrors what we see in research and in the literature, namely that Q-learning is significantly more popular. This can probably be explained by it being off-policy, which usually yields more powerful solution methods. Sarsa, on the other hand, performs better for stochastic or "dangerous" tasks: since in Sarsa the actually chosen action is taken into account in the value update, it better understands the effects of its actions, which is helpful for stochastic environments and / or environments where one can, e.g., fall off a cliff. Despite the latter being the case here, the environment is probably not complex or large enough for this effect to come into play.
TD-n
TD-n methods in a way marry classical TD learning and MC methods. As Sutton so nicely puts it, they "free us from the tyranny of the timestep" [1]. In MC methods, we are forced to wait for a full episode before making any updates. In TD methods, we update estimates in every step, but are also forced to look only one step into the future.
Thus, it makes sense to introduce n-step returns:

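The corresponding figure is not reproduced here; in standard notation, the n-step return is:

```latex
G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^{n} V_{t+n-1}(S_{t+n})
```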
With that, we can simply introduce Sarsa-n:

We play episodes following the current policy, and then update the value estimates with the n-step return.
In my corresponding post, we also introduce an off-policy version of this. However, in order not to blow this post up too much, and due to the negative experience with off-policy MC methods, we focus on the "classics": Sarsa-n, and n-step tree backup, which we introduce next.
n-step tree backup is an extension of the previously seen Expected Sarsa. When computing the n-step return, the corresponding transition tree looks as follows:

I.e., there’s a single path down the tree comparable to the precise motion taken. Simply as in Anticipated Sarsa, we now need to weigh actions in line with their chance decided by the coverage. However since now we have now a tree of depth > 1, the cumulative worth of later ranges is weighted by the chance of the motion taken to achieve these ranges:

The pseudocode looks as follows:

Right here’s my corresponding publish on n-step TD strategies.
Outcomes
As common, we begin with the “naïve” setting, and acquire the next outcomes:

Sarsa-n already starts to struggle with smaller grid world sizes. Let's see if the improved setup changes this:

Now Sarsa-n indeed performs much better, but n-step tree backup does not.
Discussion
I found this finding unexpected and somewhat hard to explain. I'd love to hear your thoughts on this; in the meantime, I was chatting with my chat agent of choice, and came up with the following hypothesis: intermediate rewards potentially confuse the tree algorithm, since it has to learn a matching return distribution over all possible actions. Further, the more ε decays, the more the expected distribution might differ from the behavior policy.
Model-Based Reinforcement Learning / Planning
In the previous chapter we discussed the topic of "planning"; in the RL context, this mainly refers to model-based methods. That is: we have (or build) a model of the environment, and use this model to explore further "virtually", and in particular use these explorations for more and better updates / learning of the value function. The following image displays the integration of planning into learning very well:

In the top-right corner we see the "classical" RL training loop (also dubbed "direct" RL): starting with some value function / policy, we act in the (real) environment and use this experience to update our value function (or policy, in the case of policy-gradient methods). When incorporating planning, we additionally learn a model of the world from this experience, and then use this model to generate further (virtual) experience and update our value or policy function from that as well.
This is in fact exactly the Dyna-Q algorithm, which looks as follows in pseudocode:

Steps (a) to (d) are our classical Q-learning, while the rest of the algorithm adds the novel planning functionality, namely the learning of the world model.
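To make the planning loop concrete, here is a minimal Dyna-Q sketch. It stores a deterministic model as a dictionary and uses illustrative variable names; it is a simplified version of what the benchmark uses:

```python
import random
import numpy as np

def dyna_q(env, n_episodes=2_000, n_planning=20, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    model = {}  # (state, action) -> (reward, next_state); assumes deterministic dynamics
    for _ in range(n_episodes):
        state, _ = env.reset()
        done = False
        while not done:
            action = env.action_space.sample() if np.random.rand() < epsilon else int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            # Direct RL: the standard Q-learning update from real experience.
            target = reward + gamma * np.max(Q[next_state]) * (not terminated)
            Q[state, action] += alpha * (target - Q[state, action])
            # Model learning: remember the observed transition.
            model[(state, action)] = (reward, next_state)
            # Planning: replay randomly sampled remembered transitions to update Q "virtually".
            for _ in range(n_planning):
                (s, a), (r, ns) = random.choice(list(model.items()))
                Q[s, a] += alpha * (r + gamma * np.max(Q[ns]) - Q[s, a])
            state, done = next_state, terminated or truncated
    return Q
```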
Another related algorithm is Prioritized Sweeping, which changes how we sample states for the "planning loop": we explore and act in the real environment while learning the model, and save state-action pairs with large expected value changes to a priority queue. Only with this queue do we then start the "planning loop", i.e. something similar to steps (e) and (f) above:

More details can be found in my previous post on model-based RL methods.
Results
Let's start with the naïve setting:

Dyna-Q performs quite well, while Prioritized Sweeping struggles early on.
In the improved setting we see a similar picture:

Discussion
Prioritized Sweeping already performed poorly in the corresponding introductory post. I suspect there is either some issue in the implementation, or, more likely, this is simply a "tuning" matter, i.e. a poorly chosen sampling distribution.
Dyna-Q yields solid results.
Benchmarking the Best Algorithms
We have now seen the performance of all algorithms from Part I of Sutton's book by benchmarking them per chapter on Gridworlds of up to size 25 x 25. Already here we saw better- and worse-performing algorithms, and in particular already discarded a few candidates not suited for larger environments.
Now we want to benchmark the remaining ones, the best from each chapter, against each other on Gridworlds of up to size 50 x 50.

These algorithms are:
- value iteration
- on-policy MC
- Q-learning
- Sarsa-n
- Dyna-Q
Results
Right here’s how they carry out on Gridworld, this time with a maximal step restrict of 200.000:

Let’s additionally plot the corresponding time wanted (notice that I plot unsuccessful runs — runs reaching the maximal variety of steps with out producing a possible coverage — at 500s):

We can observe several interesting facts from these figures:
- The number of steps and the time needed are highly correlated.
- Value iteration performs exceptionally well, solving even Gridworlds of size 50 x 50 with ease, and doing so orders of magnitude faster than the next-best algorithm.
- The ranking for the remaining algorithms is (better to worse): On-policy MC, Dyna-Q, Q-learning, Sarsa-n.
In the next section we discuss these findings in more detail.
Discussion
1. Steps vs. Time
We started this post with a discussion of which metric / measurement to use, and, in particular, whether to use the number of steps or the time needed to solve the problem. Looking back, we can say that this discussion was not so relevant after all, since, somewhat surprisingly, these two numbers are highly correlated, despite the fact that, as described initially, one "step" can differ depending on the algorithm.
2. Value Iteration Dominates
Value Iteration performed remarkably well, solving even large Gridworlds (up to 50×50) with ease and outpacing all other algorithms by a wide margin. This might be surprising, considering that DP methods are often regarded as theoretical tools, rarely used in practice. Real-world applications tend to favor methods like Q-learning [2], PPO [4], or MCTS [5].
So why does such a "textbook" method dominate here? Because this environment is tailor-made for it:
- The model is fully known.
- The dynamics are simple and deterministic.
- The state space is relatively small.
These are exactly the conditions under which DP thrives. In contrast, model-free methods like Q-learning are designed for settings where such information is not available. Their strength lies in generality and scalability, not in exploiting small, well-defined problems. Q-learning incurs high variance and requires many episodes to converge, disadvantages that are magnified in small-scale environments. In short, there is a clear trade-off between efficiency and generality. We'll revisit this point in a future post when we introduce function approximation, where Q-learning has more room to shine.
3. A Ranking Emerges
Beyond Value Iteration, we observed the following performance ranking: On-policy MC > Dyna-Q > Q-learning > Sarsa-n
On-policy Monte Carlo emerged as the best-performing model-free algorithm. This fits with our earlier reasoning: MC methods are simple, unbiased, and well-suited to problems with deterministic goals, especially when episodes are relatively short. While not scalable to large or continuous problems, MC methods seem to be quite effective in small to medium-sized tasks like Gridworld.
Dyna-Q comes next. This result reinforces our expectations: Dyna-Q blends model-based planning with model-free learning. Although the model is learned (not given, as in Value Iteration), it is still simple and deterministic here, making the learned model useful. This boosts performance considerably over purely model-free approaches.
Q-learning, while still powerful, underperforms in this context for the reasons discussed above: it is a general-purpose algorithm that is not able to fully leverage the structure of simple environments.
Sarsa-n landed in last place. A likely explanation is the added bias introduced through bootstrapping in its multi-step updates. Unlike Monte Carlo methods, which estimate returns from full trajectories (unbiased), Sarsa-n uses bootstrapped estimates of future rewards. In small environments, this bias can outweigh the benefits of reduced variance.
Finally, let's compare our results with the ones reported by Sutton:

Note that Sutton lists the total number of states on the x-axis, while we list n, with the total number of states being n x n. For 376 states, Sutton reports ~100k steps before the optimal solution is found, while we report 75k for 400 states (20 x 20), considering Dyna-Q. The numbers are highly comparable and provide a reassuring validation of our setup and implementation.
Conclusion
This post served both as a recap of our series on Part I of Sutton and Barto's Reinforcement Learning [1] and as an extension beyond the book's scope, by benchmarking all introduced algorithms on increasingly larger Gridworld environments.
We began by outlining our benchmarking setup, then revisited the core chapters of Part I: Dynamic Programming, Monte Carlo methods, Temporal-Difference learning, and Model-Based RL / Planning. In each section, we introduced key algorithms such as Q-learning, provided full Python implementations, and evaluated their performance on Gridworlds of up to size 25×25. The goal of this initial round was to identify the top performers from each algorithmic family. Based on our experiments, the standouts were:
Value Iteration, On-policy MC, Q-learning, Sarsa-n, and Dyna-Q. Python code to reproduce these results, and in particular implementations of all discussed methods, is available on GitHub.
Next, we stress-tested these high performers on larger environments (up to 50×50) and observed the following ranking:
Value Iteration > On-policy MC > Dyna-Q > Q-learning > Sarsa-n
While this result may be surprising, given the widespread use of Q-learning and the comparatively rare application of Value Iteration and MC methods, it makes sense in context. Simple, fully known, deterministic environments are ideal for Value Iteration and MC methods. In contrast, Q-learning is designed for more complex, unknown, and high-variance environments where function approximation becomes necessary. As we discussed, there is a trade-off between efficiency in structured tasks and generality in complex ones.
That brings us to what's next. In upcoming posts, we'll push the boundaries further:
- First, by benchmarking these methods in more challenging environments such as two-player games, where direct competition will expose their differences more starkly.
- Then, we'll dive into Part II of Sutton's book, where function approximation is introduced. This unlocks the ability to scale reinforcement learning to environments far beyond what tabular methods can handle.
If you've made it this far: thanks for reading! I hope you enjoyed this deep dive, and I'd love to have you back for the next installment in the series.
Other Posts in this Series
References
[1] http://incompleteideas.net/book/RLbook2020.pdf
[2] https://arxiv.org/abs/1312.5602
[3] https://gymnasium.farama.org/index.html
[4] https://arxiv.org/abs/1707.06347
[5] https://arxiv.org/abs/1911.08265
(*) Images from [1] used with permission from the authors.