video games growing up was definitely Minecraft. To this day, I still remember meeting up with a few friends after school and figuring out what new, weird redstone contraption we would build next. That's why, when Oasis, an automatically generated, open AI world model, was released in October 2024, I was flabbergasted! Building reactive world models finally seemed within reach using current technologies, and soon enough, we would have fully AI-generated environments.
World models [3], introduced back in 2018 by David Ha et al., are machine learning models capable of both simulating and interacting with a fully virtual environment. Their main limitation has been computational inefficiency, which made real-time interaction with the model a significant challenge.
In this blog post, we'll introduce the first open-source Minecraft world model, MineWorld [1], developed by Microsoft. It is capable of fast real-time interaction and high controllability while using fewer resources than its closed-source counterpart, Oasis [2]. The authors' contribution lies in three main points:
- MineWorld: a real-time, interactive world model with high controllability, released as open source.
- A parallel decoding algorithm that speeds up the generation process, increasing the number of frames generated per second.
- A novel evaluation metric designed to measure a world model's controllability.
Paper link: https://arxiv.org/abs/2504.08388
Code: https://github.com/microsoft/mineworld
Released: 11th of April 2025
MineWorld, Simplified
To explain MineWorld and its approach clearly, we'll divide this section into three subsections:
- Problem Formulation: where we define the problem and establish some ground rules for both training and inference.
- Model Architecture: an overview of the models used for generating tokens and output images.
- Parallel Decoding: a look into how the authors tripled the number of frames generated per second using a novel diagonal decoding algorithm [8].
Problem Formulation
There are two types of input to the world model: video game footage and the player actions taken during gameplay. Each requires a different kind of tokenization to be applied correctly.
Given a clip of Minecraft video x containing n states/frames, image tokenization can be formulated as follows:
$$x=(x_{1},\dots,x_{n})$$
$$t=(t_{1},\dots,t_{c},t_{c+1},\dots,t_{2c},t_{2c+1},\dots,t_{N})$$
Each frame x(i) contains c patches, and each patch can be represented by a token t(j). This means a single frame x(i) can be further described as the set of quantized tokens {t(1), t(2), …, t(c)}, where each t(j) ∈ t is a distinct patch capturing its own set of pixels.
Since every frame contains c tokens, the total number of tokens over one video clip is N = n·c.
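As a quick sanity check on this indexing, here is a toy sketch (the values of n and c are made up purely for illustration; the paper's actual patch count per frame is much larger):

```python
# Toy illustration of the indexing above: n frames, each tokenized
# into c patch tokens, giving N = n * c tokens in the flat sequence.
n, c = 4, 6                     # made-up values, for clarity only
N = n * c

t = list(range(1, N + 1))       # flat token sequence t_1 ... t_N

def frame_tokens(i):
    """Return the c tokens belonging to frame i (0-indexed)."""
    return t[i * c : (i + 1) * c]

print(frame_tokens(0))          # t_1 ... t_c        -> [1, 2, 3, 4, 5, 6]
print(frame_tokens(1))          # t_{c+1} ... t_{2c} -> [7, 8, 9, 10, 11, 12]
```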

In addition to tokenizing the video input, player actions must also be tokenized. These tokens need to capture inputs such as changes in camera perspective, keyboard presses, and mouse movements. This is achieved using 11 distinct tokens that represent the full range of input features:
- 7 tokens for seven exclusive action groups. Related actions are grouped into the same class (the grouping of actions is shown in Table 1).
- 2 tokens to encode camera angles, following [5].
- 2 tokens marking the beginning and end of the action sequence: [aBOS] and [aEOS].
Thus, a flat sequence capturing all game states and actions can be represented as follows:
$$t=(t_{i\cdot c+1},\dots,t_{(i+1)\cdot c},[\text{aBOS}],t_{1}^{a_{i}},\dots,t_{9}^{a_{i}},[\text{aEOS}])$$
For each frame i, we list the quantized IDs of its patches (from t(i·c+1) to t((i+1)·c), as shown in the equation above), followed by the action-beginning token [aBOS], the nine action tokens for that frame, and the action-end token [aEOS].
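To make the interleaving concrete, here is a minimal sketch with placeholder token values (the real model uses quantized VQ-VAE IDs, and each frame carries far more than two patch tokens):

```python
# Sketch: interleaving each frame's patch tokens with its action tokens,
# following t = (t_{i*c+1}, ..., t_{(i+1)*c}, [aBOS], a_1, ..., a_9, [aEOS]).
ABOS, AEOS = "[aBOS]", "[aEOS]"

def interleave(frames, actions):
    """frames: per-frame patch-token lists; actions: per-frame 9-token lists."""
    seq = []
    for patch_tokens, action_tokens in zip(frames, actions):
        seq.extend(patch_tokens)
        seq.append(ABOS)
        seq.extend(action_tokens)   # 7 action-group tokens + 2 camera tokens
        seq.append(AEOS)
    return seq

frames = [["p1", "p2"], ["p3", "p4"]]   # 2 toy frames with c = 2
actions = [["a"] * 9, ["b"] * 9]
print(interleave(frames, actions)[:4])  # ['p1', 'p2', '[aBOS]', 'a']
```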
Model Architecture
Two main models were used in this work: a Vector Quantized Variational Autoencoder (VQ-VAE) [6] and a Transformer decoder based on the LLaMA architecture [7].
Although traditional Variational Autoencoders (VAEs) were once the go-to architecture for image generation (especially before the broad adoption of diffusion models), they had some limitations. VAEs struggled with data that is more discrete in nature (such as words or tokens) or that demands high realism and certainty. VQ-VAEs, on the other hand, address these shortcomings by moving from a continuous latent space to a discrete one, making the representation more structured and improving the model's suitability for downstream tasks.
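The core quantization step can be sketched in a few lines: each continuous encoder output is snapped to its nearest codebook vector, and only the integer index is kept as the token. This is a bare-bones illustration with a made-up codebook, not the paper's implementation:

```python
# Bare-bones vector quantization: snap each continuous latent vector to
# the index of its nearest codebook entry (squared Euclidean distance).
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # toy codebook

def quantize(z):
    dists = [sum((a - b) ** 2 for a, b in zip(z, e)) for e in codebook]
    return dists.index(min(dists))

latents = [(0.1, -0.2), (0.9, 1.1), (0.2, 0.8)]   # made-up encoder outputs
ids = [quantize(z) for z in latents]
print(ids)  # [0, 3, 2] -- only these integer IDs are kept as tokens
```

These discrete IDs are exactly what makes the latent space suitable for an autoregressive Transformer downstream.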
In this paper, the VQ-VAE was used as the visual tokenizer, converting each image frame x into its quantized ID representation t. Images of size 224×384 were used as input, with each image divided into 16×16 patches, yielding a 14×24 grid. This results in a sequence of 336 discrete tokens representing the visual information in a single frame.
On the other hand, a LLaMA Transformer decoder was employed to predict each token conditioned on all previous tokens:
$$f_{\theta}(t)=\prod_{i=1}^{N} p\left( t_{i} \mid t_{<i} \right)$$
The Transformer processes not only vision tokens but also action tokens. This allows it to model the relationship between the two modalities, letting it serve both as a world model (as intended in the paper) and as a policy model capable of predicting actions based on previous tokens.
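Conceptually, sampling from such a decoder is the standard autoregressive loop; here is a schematic version in which `next_token_distribution` is a stand-in stub, not MineWorld's actual API:

```python
import random

random.seed(0)

# Schematic autoregressive decoding: each token is sampled conditioned on
# all previous tokens, mirroring f_theta(t) = prod_i p(t_i | t_<i).
VOCAB = 8  # toy vocabulary size

def next_token_distribution(prefix):
    """Stand-in for the Transformer: returns a uniform distribution here."""
    return [1.0 / VOCAB] * VOCAB

def generate(prompt, num_new):
    tokens = list(prompt)
    for _ in range(num_new):
        probs = next_token_distribution(tokens)  # condition on t_<i
        tokens.append(random.choices(range(VOCAB), weights=probs)[0])
    return tokens

out = generate([1, 2, 3], num_new=5)
print(len(out))  # 8
```

Note that each new token requires a full forward pass over the prefix, which is exactly why naive sequential decoding is too slow for real-time play.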
Parallel Decoding

The authors had a clear requirement for considering a game "playable" under normal settings: it must generate enough frames per second for the player to comfortably perform an average number of actions per minute (APM). Based on their analysis, an average player performs 150 APM, i.e., 2.5 actions per second, so the environment would need to run at a minimum of 2-3 frames per second.
To meet this requirement, the authors had to move away from conventional raster-scan generation (producing tokens one at a time, left to right, top to bottom) and instead utilize diagonal decoding.
Diagonal decoding works by generating multiple image patches in parallel during a single step. For example, if patch x(i,j) is processed at step t, both patches x(i+1,j) and x(i,j+1) are processed at step t+1. This strategy leverages the spatial and temporal dependencies between consecutive patches and frames, enabling faster generation. The effect can be seen in more detail in Figure 2.
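The diagonal schedule is easy to visualize: on a toy grid, patch (i, j) becomes ready at step i + j, so every anti-diagonal can be decoded in parallel. This sketch only shows the ordering, not the authors' implementation:

```python
# Sketch of the diagonal decoding order on a small patch grid.
# Patch (i, j) is ready at step i + j, so all patches on the same
# anti-diagonal can be generated in parallel.
rows, cols = 3, 4

schedule = {}
for i in range(rows):
    for j in range(cols):
        schedule.setdefault(i + j, []).append((i, j))

for step in sorted(schedule):
    print(step, schedule[step])
# step 0: [(0, 0)]
# step 1: [(0, 1), (1, 0)]  -- decoded simultaneously
# ...
# Raster scan needs rows*cols = 12 sequential steps; the diagonal
# schedule needs only rows + cols - 1 = 6, a gap that widens on
# larger grids such as the 14x24 patch grid used per frame.
```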
However, switching from sequential to parallel generation introduces some performance degradation. This is due to a mismatch between training and inference (parallel generation only happens at inference) and to the sequential nature of LLaMA's causal attention mask. The authors mitigate this issue by fine-tuning with a modified attention mask better suited to their parallel decoding strategy.
Key Findings & Analysis
For evaluation, MineWorld used the VPT dataset [5], which consists of recorded gameplay clips paired with their corresponding actions. VPT contains 10M video clips, each comprising 16 frames. As previously mentioned, each frame (224×384 pixels) is split into 336 patches, with each patch represented by a separate token t(i). Together with the 11 action tokens, this results in up to 347 tokens per frame, summing to 55B tokens for the entire dataset.
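The token budget is easy to verify with back-of-the-envelope arithmetic, assuming the 16×16 patch size implied by the 336-tokens-per-frame count:

```python
# Back-of-the-envelope check of the dataset's token budget, assuming
# 16x16 patches on a 224x384 frame (a 14x24 grid).
patches_per_frame = (224 // 16) * (384 // 16)         # 336 visual tokens
action_tokens = 11
tokens_per_frame = patches_per_frame + action_tokens  # 347

frames_per_clip = 16
num_clips = 10_000_000

total = tokens_per_frame * frames_per_clip * num_clips
print(patches_per_frame, tokens_per_frame)  # 336 347
print(f"{total / 1e9:.2f}B tokens")         # 55.52B, quoted as 55B
```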
Quantitative Results
MineWorld primarily compared its results to Oasis using two categories of metrics: visual quality and controllability.
To measure controllability accurately, the authors introduced a novel approach: training an Inverse Dynamics Model (IDM) [5] tasked with predicting the action occurring between two consecutive frames. In addition to achieving 90.6% accuracy, the model was further validated by showing 20 game clips with the IDM's predicted actions to 5 expert players. After the players scored each action from 1 to 5, the Pearson correlation coefficient between the IDM's predictions and the human judgments came out to 0.56, indicating a meaningful positive correlation.
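The human-agreement check boils down to a standard Pearson correlation between the players' ratings and the IDM's outputs. The computation itself is textbook; the scores below are fabricated purely for illustration (the paper's reported value on its own data is 0.56):

```python
import math

# Standard Pearson correlation coefficient (scores below are made up).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

human_scores = [5, 4, 4, 3, 2, 5, 3, 1]   # hypothetical 1-5 ratings
idm_scores   = [5, 5, 3, 3, 2, 4, 4, 2]
print(round(pearson(human_scores, idm_scores), 2))
```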
With the Inverse Dynamics Model providing reliable outputs, it can be used to compute metrics such as accuracy, F1 score, or L1 loss by treating the input action as the ground truth and the IDM's prediction as the action produced by the world model. Because of the differences between action types, this evaluation is further divided into two categories:
- Discrete Action Classification: precision, recall, and F1 scores for the 7 action classes described in Table 1.
- Camera Movement: by dividing rotation around the X and Y axes into 11 discrete bins, an L1 score can be calculated against the IDM predictions.
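Both metric families reduce to simple comparisons between the input action and the IDM's prediction. A compact sketch with fabricated labels (only the metric logic follows the paper's description):

```python
# Sketch of the two metric families: per-class F1 for discrete actions,
# and binned-camera L1, treating input actions as ground truth and the
# IDM's predictions as the world model's output.
def f1_for_class(truth, pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(truth, pred))
    fp = sum(t != cls and p == cls for t, p in zip(truth, pred))
    fn = sum(t == cls and p != cls for t, p in zip(truth, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def camera_l1(truth_bins, pred_bins):
    """Mean absolute difference between camera-rotation bins (0..10)."""
    return sum(abs(t - p) for t, p in zip(truth_bins, pred_bins)) / len(truth_bins)

truth = ["move", "jump", "move", "attack"]   # fabricated action labels
pred  = ["move", "move", "move", "attack"]
print(f1_for_class(truth, pred, "move"))     # precision 2/3, recall 1 -> 0.8
print(camera_l1([5, 7, 3], [5, 9, 2]))       # (0 + 2 + 1) / 3 = 1.0
```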

Analyzing the results in Table 2, we observe that MineWorld, despite having only 300M parameters, outperforms Oasis on all reported metrics, both for controllability and for visual quality. The most interesting metric is frames per second, where MineWorld delivers more than twice as many frames, enabling a smoother interactive experience that can handle 354 APM, far exceeding the 150 APM hard limit.
While scaling MineWorld to 700M or 1.2B parameters improves image quality, it comes at the cost of a slowdown, with the FPS dropping to 3.01. This reduction in speed can negatively affect the user experience, though it still supports a playable 180 APM.
Qualitative Results

Further qualitative analysis was conducted to evaluate MineWorld's ability to generate high-quality details, follow action instructions, and understand and regenerate contextual information. An initial game state was provided, together with a predefined list of actions for the model to execute.
Looking at Figure 3, we can draw three conclusions:
- Top Panel: given an image of a player inside a house and instructions to move towards the door and open it, the model successfully generated the desired sequence of actions.
- Middle Panel: in a wood-chopping scenario, the model demonstrated the ability to generate fine-grained visual details, correctly rendering the wood-destruction animation.
- Bottom Panel: a case of high fidelity and context awareness. After panning the camera left and then right, the house moves out of sight and then fully back into view with the same details.
These three cases show the strength of MineWorld not only in generating high-quality gameplay content but also in following the desired actions and consistently regenerating contextual information, a capability Oasis struggles with.

In a second set of results, the authors focused on evaluating the controllability of the model by providing the exact same input scene alongside three different sets of actions. The model successfully generated three distinct output sequences, each leading to a completely different final state.
Conclusion
In this blog post, we explored MineWorld, the first open-source world model for Minecraft. We discussed the authors' approach of tokenizing each frame/state into multiple tokens and combining them with 11 additional tokens representing both discrete actions and camera movement. We also highlighted their innovative use of an Inverse Dynamics Model to compute controllability metrics, along with their novel parallel decoding algorithm that triples inference speed, reaching an average of 3 frames per second.
In the future, it would be valuable to extend testing beyond the 16-frame window. Longer runs would properly test MineWorld's ability to regenerate specific objects, a challenge that, in my opinion, will remain a major obstacle to the wide adoption of such models.
Thanks for reading!
Interested in trying a Minecraft world model in your browser? Try Oasis [2] here.
References
[1] J. Guo, Y. Ye, T. He, H. Wu, Y. Jiang, T. Pearce and J. Bian, MineWorld: A Real-Time and Open-Source Interactive World Model on Minecraft (2025), arXiv preprint arXiv:2504.08388v1
[2] R. Wachen and D. Leitersdorf, Oasis (2024), https://oasis-ai.org/
[3] D. Ha and J. Schmidhuber, World Models (2018), arXiv preprint arXiv:1803.10122
[4] J. Guo, Y. Ye, T. He, H. Wu, Y. Jiang, T. Pearce and J. Bian, MineWorld (2025), GitHub repository: https://github.com/microsoft/mineworld
[5] B. Baker, I. Akkaya, P. Zhokhov, J. Huizinga, J. Tang, A. Ecoffet, B. Houghton, R. Sampedro and J. Clune, Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos (2022), arXiv preprint arXiv:2206.11795
[6] A. van den Oord, O. Vinyals and K. Kavukcuoglu, Neural Discrete Representation Learning (2017), arXiv preprint arXiv:1711.00937
[7] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Joulin, E. Grave and G. Lample, LLaMA: Open and Efficient Foundation Language Models (2023), arXiv preprint arXiv:2302.13971
[8] Y. Ye, J. Guo, H. Wu, T. He, T. Pearce, T. Rashid, K. Hofmann and J. Bian, Fast Autoregressive Video Generation with Diagonal Decoding (2025), arXiv preprint arXiv:2503.14070