
GDMC

The Generative Design in Minecraft (GDMC) competition is an AI settlement generation challenge in Minecraft. As a team, we used different procedural content generation algorithms to create a settlement on an unknown Minecraft map. Our aim was to produce an algorithm that adapts to the provided map and creates a settlement satisfying a range of functional requirements.

Install / Use

/learn @williamcwi/GDMC
README

Generative Design in Minecraft (GDMC)

The objective of the Generative Design in Minecraft Challenge (GDMC) is to create a convincing AI-generated settlement in the video game Minecraft. The result is evaluated via the metrics of adaptability to the terrain, functionality as a settlement in the game itself, reflection of narrative ideas, and pure aesthetics. This technical report explains the approaches taken throughout each stage of the development, describes the difficulties encountered during the process, and finally reflects on what has been achieved, the future direction of the project, and some potential improvements that could be made.

Introduction

GDMC Competition

The GDMC competition tasks participants with generating a convincing settlement in Minecraft (2011), a procedurally-generated open-world sandbox game. Minecraft’s primary feature is the ability of the player to interact with the game’s world by breaking and placing the blocks that it is composed of, thus allowing them to construct, modify or destroy various structures.

Auto-generated villages already exist in Minecraft, but they have a number of limitations and bugs. For example, a house may generate at a high elevation, making it difficult to reach. The game attempts to generate a path connecting each building in the village, but these paths do not account for the height of the terrain, so they can be interrupted, leaving a building inaccessible. Moreover, the number of building types is rather limited, meaning there is little variation across different settlements (Salge et al, 2018).

Our solution takes the form of a filter script in MCEdit Unified, a third-party, community-driven Minecraft map editor. This allows us to build our algorithm on libraries written for the software, which can modify the terrain of the Minecraft world. In the competition itself, the filter script will be run on three different maps, and the resulting settlements will be evaluated against the four criteria of adaptability, functionality, narrative, and aesthetics. The GDMC judges rate the performance on each criterion with a score from 0 to 10 (GDMC, 2021). A score of 0 indicates no consideration has been given to the criterion, a score of 5 is “comparable to a naive human”, and a score of 10 demonstrates “superhuman performance” beyond even a group of dedicated experts (Salge et al, 2018).
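The filter mechanism described above can be sketched as follows. This is a minimal illustration assuming MCEdit Unified's standard filter entry point `perform(level, box, options)`; the option name and the placeholder fill behaviour are ours for demonstration, not the actual competition script.

```python
# Minimal sketch of an MCEdit Unified filter script (illustrative only).
# MCEdit calls perform() with the loaded level, the user's selection box,
# and the values of the declared inputs.

displayName = "Settlement Generator (sketch)"

inputs = (
    ("Path block ID", 4),  # hypothetical option; 4 = cobblestone
)

def perform(level, box, options):
    """Fill the top layer of the selection with the chosen block,
    as a stand-in for real settlement generation."""
    block_id = options["Path block ID"]
    y = box.maxy - 1
    for x in range(box.minx, box.maxx):
        for z in range(box.minz, box.maxz):
            level.setBlockAt(x, y, z, block_id)
```

In a real filter, `level` exposes block access such as `blockAt` and `setBlockAt`, which is what allows the generator to both read the terrain and modify it.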

The key concept of this project is procedural generation, wherein the settlement takes shape as the algorithm adds further assets to it progressively and algorithmically. Due to computational randomness, the generated content varies, even under the same selection and settings. This maintains the uniqueness of each settlement but allows consistent rules to be applied for each new generation. For the purposes of this project, we make use of a combination of hand-made structures stored in schematic files which describe the positions of blocks, and algorithm-generated structures like paths and adjustments to the terrain. This allows us to control the viability and quality of the settlement while still ensuring it is generated with a degree of randomness.
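A tiny sketch of this schematic-plus-randomness combination: each plot receives a hand-made variant chosen pseudo-randomly, so every run differs while the same placement rules apply. The schematic names and weights here are illustrative placeholders, not our real asset list.

```python
import random

HOUSE_SCHEMATICS = ["house_small.schematic", "house_tall.schematic", "farm.schematic"]
WEIGHTS = [0.5, 0.3, 0.2]  # assumed bias toward small houses

def choose_structures(plots, seed=None):
    """Return one schematic name per plot; passing a seed reproduces
    a specific settlement layout exactly."""
    rng = random.Random(seed)
    return {plot: rng.choices(HOUSE_SCHEMATICS, weights=WEIGHTS, k=1)[0]
            for plot in plots}
```

Seeding the generator is what lets the same rules produce either a repeatable settlement (for debugging) or a unique one on each run.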

Past Submissions

In order to gain a better understanding of the competition, we researched entries from previous years (Salge et al, 2018) (GDMC, 2018). From this we were able to see the kinds of terrain used to evaluate their algorithms, such as islands and mountain ranges, as well as discover the standard we were competing against. We specifically investigated the highest- and lowest-scoring entries for each individual criterion. This allowed us to find interesting concepts in their algorithms that we might want to implement or build upon, as well as understand why certain submissions underdelivered or failed (corpus/Researches/Past Submission).

We found that well-performing entries tended to adopt a variety of algorithms to solve issues programmatically, such as a minimum spanning tree to calculate the smallest set of paths needed and the A* algorithm to find the shortest path between two points. This ensured consistency between settlements and maintained a standard procedure for generation. As expected, entries with lower scores showed little to no consideration of the areas of evaluation. For example, they failed to adapt to the terrain, circumventing problems entirely rather than devising solutions. From this, we learned that all of the evaluation criteria matter to the settlement, and that the use of algorithms both enhances the quality of the result and reduces the workload.
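The minimum-spanning-tree idea above can be illustrated with a short sketch using Prim's algorithm: connect every building with the smallest total path length. The coordinates below are hypothetical examples; a real generator would feed in the plot centres it has placed.

```python
import math

def mst_edges(points):
    """Return (i, j) index pairs forming a minimum spanning tree over
    the given (x, z) points, using straight-line distance (Prim's)."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))  # build a path between these two
        in_tree.add(best[2])
    return edges
```

Each returned edge would then be handed to a pathfinder (such as A*) to route an actual walkable path over the terrain between the two buildings.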

Objectives

Our primary objectives were to maximise adaptability, functionality, narrative and aesthetics. These four aspects were specified in the competition’s evaluation criteria and acted as a cornerstone and guideline for us during development, informing the decisions we made. In terms of adaptability, our algorithm needed to adjust to varied landscapes like uneven terrain, islands, mountain ranges, etc., ensuring that structures are placed on even ground. For functionality, we looked into creating a settlement that would make sense in the real world while considering Minecraft problems such as defence from mobs and food supply. We also wanted our settlement to evoke an intriguing narrative that would make users want to visit and learn more about it simply by looking at the area. Finally, our settlement needed to have a consistent theme and be aesthetically pleasing to look at. Combined, these four objectives should result in a well-executed and successful settlement.
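As an illustration of the "even ground" part of the adaptability objective, one simple check is whether surface heights inside a structure's footprint stay within a small tolerance. The dict-based heightmap below is a stand-in for the level data a real filter would read.

```python
def is_buildable(heightmap, x0, z0, width, depth, tolerance=1):
    """Return True if all surface heights inside the footprint differ
    by no more than `tolerance` blocks."""
    heights = [heightmap[(x, z)]
               for x in range(x0, x0 + width)
               for z in range(z0, z0 + depth)]
    return max(heights) - min(heights) <= tolerance
```

A placement pass can slide this footprint over candidate plots and keep only the locations that pass, flattening or skipping the rest.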

The competition also outlined hardware and runtime requirements and limitations, so we also needed to ensure our algorithm would be capable of executing successfully in line with these requirements while still fulfilling the objectives.

Development

Project Management

Agile project management methodology was utilized for the project, meaning that the entire development process was divided into multiple life cycles, each containing stages like analysis, development, integration and deployment. Agile is responsive to changes and has a strong focus on interaction between team members, enabling us to alter or add requirements and tasks as needed throughout development. We opted for weekly sprints, as most tasks could be broken down into one week's worth of story points and finished within a week. Incorporating techniques like periodic stand-up meetings, short sprints and pair programming, we were able to interact frequently, ensuring compatibility between different aspects of the algorithm and resulting in a faster development process. Partway through the project we experienced a major change to the requirements, when the host introduced additional evaluation criteria for the 2021 entries. Due to the short sprint length and strong communication provided by Agile, we were able to quickly review and evaluate these changes and decide how to accommodate them.

To monitor and log our progress, we also adopted Agile artifacts like the product backlog and sprint backlog, using the project management software Jira to record our contributions and manage tasks (corpus/Artifacts/600_Jira.csv). This allowed us to track progress across members and project areas, helping us estimate the development time and resources needed for future sprints. Moreover, it gave us the opportunity to reflect on each sprint and adjust our plans for the next one based on past performance and any roadblocks encountered.

To help us visualise our thought process, we found it helpful to create diagrams. This enabled each of us to give feedback to other team members, and also helped individuals talk through their own thinking more easily (corpus/Proposal). While someone was explaining their logic via an illustration or diagram, other members were able to ask questions and suggest alternative solutions to make the proposed approach more efficient. As a result, even though we were individually allocated specific tasks, others were still able to have input.

At the halfway point of our project we critically evaluated the progress we had made, reflecting on the positives and negatives (corpus/Artifacts/600_CtcEva.pdf). From this we identified that the biggest obstacle to our progress was having too many options. After discussing this as a group, we concluded that instead of being overwhelmed and slowed down by constant research, we would narrow down our options and task individuals with investigating specific topics within a given time frame. For example, while looking at how to generate structures we considered schematics, JSON, chunk slices, and several other methods. We eventually narrowed this down to just schematics and JSON, but on reflection we could have saved time by limiting the research spent on these options earlier.

In order to check the quality of our algorithm, we created an online form to collect user feedback during our project showcase (corpus/Artifacts/600_Feedback.pdf). We invited users to view our settlement and give it a rating from 1 to 5 in the four criteria given by the competition, and comment on their reasoning behind their score. This form allowed us to establish which areas our settlement succeeded in and which it was still lacking in. As a result, we re-prioritized the tasks in our product backlog based on what aspects they were related to, and how frequently those aspects were mentioned in the feedback. For example, at the time of the poster fair certain features were incomplete and not shown in the demonstration. This enabled us to identify what key features visitors spotted that we could add before submission.

Methodology

To successfully carry out such a large-scale project, we divided the task into several epic stories.
