
Ideas for 1st person Minecraft-like

Posted: Mon Aug 05, 2024 6:34 pm
by kliepatsch
The video about 8-bit blocks made me think ...

I like the idea, but you know, the isometric view only really works as long as you are on the surface.
As soon as you go underground, you have absolutely no idea what is happening anymore.

So I started thinking about whether it is possible to create a 1st person Minecraft-like experience on the Commander X16.

I think it's possible.
The main compromises are that you will only be able to
  • move in whole block increments (like a pawn in chess) and
  • look in a limited number of discrete directions (e.g. at 90 and 45 degree angles).


Here are my two core ideas:
  1. Pre-calculate 3d->2d projections and store them in a number of look-up tables for use during rendering.
  2. Render block surfaces from closest to farthest. Only render a surface when one or more of its corners (vertices) aren't covered yet.

1. Pre-calculated Projections

This idea is best explained with a simple example.
Imagine you are standing on a perfectly flat world in Minecraft, and you are looking directly north.
You can see a bunch of squares on the ground in front of you, which are the upper surfaces of the blocks you are walking on. Let's focus on the block which is 5 units in front of you, at the surface.
Of this block, you can see its top surface with its four corners. These corners have some coordinates in the bottom half of the visible image.
Let's say, for example:

far-left (140, 170)
far-right (180, 170)
near-left (132, 190)
near-right (188, 190)


Now imagine you move forward by exactly one block. Now the coordinates of that block, which is now 4 units in front of you, might read

far-left (132, 190)
far-right (188, 190)

near-left (96, 220)
near-right (224, 220)

What happened? Basically, the block's former "near" vertices have become its new "far" vertices.
Conversely, the new "near" vertices are the same ones as the old "near" vertices of the block directly in front of it.
The same principle applies to movement in any direction. Vertices cover up perfectly whenever you move exactly one block or turn the camera by exactly 90 degrees.

In other words, it is possible to navigate the entire blocky world without having to re-project 3d vertices to 2d screen coordinates -- ever.
We just need to map world data (blocks) to pre-calculated 2d coordinates.

With a single lookup table like this, we could cover the four viewing angles (north, south, east, west), and also directly up and directly down.
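A tiny Python sketch of why this works (the focal length, eye height, and rounding here are illustrative assumptions, not values from the example above): because the projection depends only on a vertex's position relative to the camera, vertices are shared between depth steps, and stepping forward just re-indexes the table.

```python
SCREEN_W, SCREEN_H = 320, 240
CX, CY = SCREEN_W // 2, SCREEN_H // 2
FOCAL = 160        # pixels; gives a 90 degree horizontal FOV at 320 wide
EYE_HEIGHT = 1.5   # camera height above the ground plane, in blocks

def project(dx, dy, dz):
    """Project a camera-relative vertex (right, forward, up) onto the screen."""
    x = CX + round(FOCAL * dx / dy)
    y = CY + round(FOCAL * (EYE_HEIGHT - dz) / dy)
    return (x, y)

def top_face(d):
    """Screen corners of the top face of the unit block d units ahead."""
    return {
        "far_left":   project(-0.5, d + 1, 0),
        "far_right":  project(0.5, d + 1, 0),
        "near_left":  project(-0.5, d, 0),
        "near_right": project(0.5, d, 0),
    }

# Moving one block forward turns the block 5 ahead into the block 4 ahead;
# its new "far" vertices are exactly its old "near" vertices:
assert top_face(4)["far_left"] == top_face(5)["near_left"]
assert top_face(4)["far_right"] == top_face(5)["near_right"]
```

In an actual implementation the LUT would hold the precomputed `project` results; no projection math would run at render time.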

Of course, such a lookup table is memory-intensive. We need to store the 2d coordinates of ALL block vertices that could theoretically appear on the screen. This scales with the render distance cubed.
Let's do an example for a render distance of 16 blocks, and a horizontal field of view of 90° (45° left, 45° right):
The "ground plane" at the far end of the "visible pyramid" is 32 by 24 blocks (smaller field of view in the vertical direction).
That means there are (32+1)*(24+1) = 825 vertices in that plane. The +1 is because a row of N blocks has N+1 vertices.

The number of vertices needed roughly equals the volume of the "visible pyramid", which is

V = 1/3 * h * Ag

where the pyramid's height h is the render distance 16, and Ag is the area of the ground plane, which we've estimated at 825.
This yields 4400 vertices.

For each vertex, we would have to store 3 bytes (assuming 320x240 resolution, which means 2 bytes for the horizontal position and 1 byte for the vertical), which amounts to 13200 bytes.

Now there are symmetries.
Mirroring vertically and horizontally allows for reduction to one quarter of the size (3300 bytes). You could theoretically exploit the symmetry along one of the diagonals, too, further reducing LUT size. But that might not be worth the extra CPU spent on "unfolding" the symmetry.
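For reference, the same arithmetic in a few lines of Python (nothing new, just the numbers above):

```python
# Reproduces the estimate above (render distance 16, 90 degree horizontal FOV).
render_distance = 16
far_plane_vertices = (32 + 1) * (24 + 1)                    # vertices in the far plane
total_vertices = render_distance * far_plane_vertices // 3  # V = 1/3 * h * Ag
bytes_per_vertex = 3                                        # 2 bytes x, 1 byte y
lut_bytes = total_vertices * bytes_per_vertex               # full table
mirrored_bytes = lut_bytes // 4                             # after H+V mirroring
```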

The bottom line is, one or two sets of pre-calculated vertices fit in a RAM bank.
Another one of these LUTs could be used for a diagonal viewing angle of 45°.



2. Rendering Procedure

In 3d rendering, you typically want to do as little work as possible, ideally only spend rendering effort on things that are actually visible on-screen, and not covered up by other surfaces which are closer to the camera. The same, of course, is true here.

We initialize the whole screen with zero.
After that, we start drawing the surfaces, starting with the closest blocks first. Close surfaces have a good chance of quickly covering large portions of the screen already.

Before we draw a surface, we check whether all of its four corners are already covered.
This can easily be done with two lookups per corner:
  • look up the screen coordinates in the table of projected vertices
  • then check in screen memory whether the pixel has already been set (i.e. is non-zero)
Now I claim that if all four corners of a block's side are already covered, it's safe to assume that it is covered completely.
Granted, it is likely possible to construct a situation where this assumption breaks down, but I am pretty sure it is good enough to be practical, and visual glitches because of it should be very rare.
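A sketch of that test in Python (the function name and data layout are hypothetical, not a committed design):

```python
def surface_fully_covered(corners, vertex_lut, screen):
    """Assume a face is hidden iff all four of its projected corners land
    on pixels that have already been written (non-zero)."""
    for corner in corners:
        x, y = vertex_lut[corner]   # lookup 1: projected screen position
        if screen[y][x] == 0:       # lookup 2: has this pixel been set?
            return False            # an exposed corner -> draw the face
    return True
```

On the X16, the second lookup would be a read from VRAM through a data port rather than a Python list access.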

Note that we don't have to do four lookups per surface.
  1. each vertex is part of up to twelve potential surfaces, so some lookups can be re-used
  2. many surfaces can be excluded for other reasons (facing away from the camera, or not being an interface between air and solid)
As soon as at least one of the corners isn't covered yet, we render the surface. This can be done with the aid of the polygon filler helper of VERA FX. The important part is that we don't draw over any non-zero pixels. This can be done efficiently using Data0 and Data1, both set to an increment of 1. First, we read the screen content through Data0. Then, depending on whether the content is zero or not, we either write to Data1, or read from it just to advance the Data1 pointer and keep it in sync with Data0.

There are a few more nuances to this, and I am sure, they are worth several headaches. I have thought about some more details already, but these are the core concepts.



Back-of-the-Envelope CPU budget calculation

Let's target 5 frames per second for now. This isn't very much, but considering you can only move in whole block increments anyway, it won't be that bad.
At 8 MHz that means we have 1600k CPU cycles for each frame.
Now let's say we want to fill 320 by 240 pixels, which is just under 80k pixels; that leaves us a little over 20 cycles per pixel.
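The budget, spelled out as a quick sanity check:

```python
cycles_per_frame = 8_000_000 // 5      # 8 MHz at 5 fps -> 1600k cycles
pixels_per_frame = 320 * 240           # just under 80k pixels
cycles_per_pixel = cycles_per_frame / pixels_per_frame   # a bit over 20
```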
The inner rendering loop I outlined above might look something like this:

Code:

    ; render line of pixels
    ldx color      ; X holds the color to draw
    ldy num_pixels
    bra @2         ; jump into the middle
@1:
    stx DATA1      ; pixel was zero: write pixel data (the color)
    dey
    beq @end
@2:
    lda DATA0      ; read screen content, advancing DATA0
    beq @1         ; if pixel is zero, go write pixel data
    lda DATA1      ; pixel was non-zero: read and discard, advancing the DATA1 pointer
    dey
    bne @2
@end:
The odd layout, with the entry point in the middle, avoids wasting cycles on unnecessary (unconditional) jump instructions. This loop needs 15 cycles per pixel, which would leave some headroom for other things. And other things there are plenty of! So I would conclude that 320x240 might be a bit too high a resolution to reach a usable frame rate. Any reduction in the number of pixels gets us closer to it.




What I haven't addressed so far is the data structure of the world and how to efficiently read from it at the correct locations during rendering (how to quickly traverse it in different directions and between chunks). I think 16x16 chunks as in the original Minecraft are too large; 8x8 may be more suitable.
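To make the chunk idea concrete, here is a hypothetical sketch (a Python stand-in for what would really be banked RAM and pointer math on the X16): the world as a dictionary of 8x8 column chunks, so crossing a chunk border just means switching to a neighbouring chunk.

```python
CHUNK = 8      # 8x8 horizontal chunk footprint, as suggested above

def get_block(world, x, y, z):
    """Fetch the block at world position (x, z horizontal; y is height)."""
    chunk = world[(x // CHUNK, z // CHUNK)]   # e.g. one RAM bank per chunk
    return chunk[y][z % CHUNK][x % CHUNK]     # intra-chunk offset

# Usage: one empty 16-blocks-tall chunk at the origin, with one block placed.
world = {(0, 0): [[[0] * CHUNK for _ in range(CHUNK)] for _ in range(16)]}
world[(0, 0)][3][2][5] = 42
```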

Just wanted to share. I currently don't feel like trying this out myself, so if anyone else is interested in having a go, I am totally fine with that. I might in the future, I might not, who knows ;)

Re: Ideas for 1st person Minecraft-like

Posted: Mon Aug 05, 2024 7:44 pm
by mgkaiser
Start REAL easy. You can only move one block at a time and you can only turn 90 degrees. Render all of the textures as 16x16 squares. Use the affine helper to scale the textures. No example is given, but by changing the scale factor on each line, you can also use affine "scale" to "3d rotate" the textures. Affine does most of the work. You won't get a really high frame rate, but you should get a playable "turn based" 3d first person game out of the deal.
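In software terms, the per-line rescale trick amounts to something like this (purely illustrative Python, not VERA code; in practice the affine helper would do the sampling):

```python
def draw_wall(tex16, w_near, w_far, height):
    """Sample a 16x16 texture into a trapezoid: each output line gets its
    own horizontal scale factor, narrowing from the near to the far edge."""
    out = []
    for line in range(height):
        t = line / (height - 1)                   # 0 at near edge, 1 at far
        w = round(w_near + (w_far - w_near) * t)  # this line's width
        v = (line * 16) // height                 # source texture row
        out.append([tex16[v][(u * 16) // w] for u in range(w)])
    return out
```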

Re: Ideas for 1st person Minecraft-like

Posted: Tue Aug 06, 2024 4:29 pm
by SunSailor
Maybe consider starting with a small Terraria clone first. Voxel rendering is much more costly on the CPU side than one might think, and you will need to start RAM banking very early on an 8-bit system like the X16.

Re: Ideas for 1st person Minecraft-like

Posted: Tue Aug 06, 2024 6:10 pm
by funkheld
You have 128 sprites and 256 characters to help you. You can change the sprites and characters directly in the program flow for movements etc.

You have two screens, one for graphics and one for text. The screen for text is 128 bytes wide ... wow,
what a nice thing.

The x16 is ideal for this.

Greetings

Re: Ideas for 1st person Minecraft-like

Posted: Tue Aug 06, 2024 8:01 pm
by DragWx
The pre-calculated 3D rendering technique you mentioned is very similar to the technique many programmers used to create 3D mazes in Megazeux, and there are many examples of early dungeon crawlers that did this to create a 3D view of the dungeon.

If you or anyone else is absolutely determined to try creating a similar first person 3D renderer on the X16, it would be helpful to start with something very simple, like a 3D maze you can walk around in; don't even worry about gameplay elements at that point, and certainly don't get bogged down trying to turn it into Minecraft yet. :P

Once the absolute basics of "I can see where I am and I can walk around" are achieved, that's when gameplay can be added, but even then, start simple, like doors that can open and close, keys that unlock doors, that kind of thing.

Re: Ideas for 1st person Minecraft-like

Posted: Wed Aug 07, 2024 1:09 am
by cosmicr
Unfortunately the polygon filler helper doesn't work the way you describe. It uses one data port for tracking the line, and the second for the column. So you can't just set the increment to 1 on each and go ahead. It also writes 4 bytes at a time, so the increment needs to be 4 (or 1) on one port and the screen width on the other (160 or 320 for 4 or 8 bit).

However, it can draw transparent pixels, so not all is lost. The other thing is I'm not sure you could do textured polygons with it fast enough.

That said, I reckon the x16 could be fast enough to render a couple hundred polygons a frame at maybe 20fps.

Re: Ideas for 1st person Minecraft-like

Posted: Wed Aug 07, 2024 8:30 pm
by mgkaiser
cosmicr wrote: Wed Aug 07, 2024 1:09 am Unfortunately the polygon filler helper doesn't work the way you describe. It uses one data port for tracking the line, and the second for the column. So you can't just set the increment to 1 on each and go ahead. It also writes 4 bytes at a time, so the increment needs to be 4 (or 1) on one port and the screen width on the other (160 or 320 for 4 or 8 bit).

However, it can draw transparent pixels, so not all is lost. The other thing is I'm not sure you could do textured polygons with it fast enough.

That said, I reckon the x16 could be fast enough to render a couple hundred polygons a frame at maybe 20fps.
I wasn't suggesting the polygon filler. I was suggesting using the affine helper to resize textures and "rotate" them. In this context "rotate" really means sizing the "far" edge of it smaller than the "near" edge. So basically scaling while changing the scale factor every line.

Re: Ideas for 1st person Minecraft-like

Posted: Wed Aug 07, 2024 11:06 pm
by Ed Minchau
I've been working on a 3d first person shooter as well (on the back burner while I rewrite the META/L editor). My advice is to do as much precalculation as possible, and as few calculations at runtime as possible. For instance, I have 8 banks set aside for a single lookup table to convert a 16 bit dot product into wall height at that distance, to avoid doing division at runtime.
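A sketch of that kind of table (the projection constant is my assumption, not Ed's; note that 65536 one-byte entries is exactly the 64 KB of those 8 banks):

```python
PROJ = 60_000    # assumed projection constant: height ~ PROJ / distance
MAX_H = 240      # clamp to screen height

# 65536 one-byte entries = 64 KB, i.e. 8 RAM banks of 8 KB each.
height_lut = bytes(min(MAX_H, PROJ // d) if d else MAX_H
                   for d in range(1 << 16))

def wall_height(dot16):
    return height_lut[dot16]   # a single table read instead of a division
```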

Re: Ideas for 1st person Minecraft-like

Posted: Fri Aug 09, 2024 2:07 am
by cosmicr
mgkaiser wrote: Wed Aug 07, 2024 8:30 pm I wasn't suggesting the polygon filler. I was suggesting using the affine helper to resize textures and "rotate" them. In this context "rotate" really means sizing the "far" edge of it smaller than the "near" edge. So basically scaling while changing the scale factor every line.
Sorry, my response was to the op kliepatsch. Apologies for the confusion!

Re: Ideas for 1st person Minecraft-like

Posted: Fri Aug 09, 2024 4:56 pm
by kliepatsch
mgkaiser wrote: Mon Aug 05, 2024 7:44 pm Start REAL easy. You can only move one block at a time and you can only turn 90 degrees.
I like that approach. I think this allows for some significant simplifications, mostly because the front facing sides of each "layer" of blocks form a perfect square lattice (layer being all the blocks at the same distance along the depth axis from the camera). I plan to elaborate on the advantages of that later.

Since I work with projections a lot by profession: this doesn't go without saying. We only get a square lattice in the pinhole projection, which we are all used to. I have, in fact, thought about whether a cylindrical projection would make sense, too. The one advantage of a cylindrical projection would be the direct relationship between the x coordinate and the yaw angle. You could rotate the camera seamlessly by simply shifting the image left and right, much like a panorama image. It's as simple as adding/subtracting a constant offset to every x coordinate!
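To illustrate the difference (screen width and FOV chosen by me for the example): in a cylindrical mapping the screen x is linear in yaw, while the pinhole mapping goes through a tangent.

```python
import math

SCREEN_W = 320
FOV = math.pi / 2          # 90 degrees horizontal

def x_pinhole(yaw):
    """Standard pinhole mapping; yaw measured from the view axis."""
    return SCREEN_W / 2 + (SCREEN_W / 2) * math.tan(yaw) / math.tan(FOV / 2)

def x_cylindrical(yaw):
    """Cylindrical mapping: x is linear in yaw."""
    return SCREEN_W / 2 + SCREEN_W * yaw / FOV

# A camera turn shifts every cylindrical column by the same constant offset:
shift = x_cylindrical(0.1) - x_cylindrical(0.0)
```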
SunSailor wrote: Tue Aug 06, 2024 4:29 pm Voxel rendering is much more costly on the CPU side than one may think and you will need to start RAM banking very early on an 8 bit system like the X16.
Yes, world data belongs in the RAM banks, no question (from my side, anyway ;)). And yes, actual voxels are very CPU intensive. My gut feeling is that traversing the world data and deciding what to render would take a comparable amount of CPU to the actual rendering.

As for the advice to start simple (from most of you) ... thanks! I think there's no way around it. Although I, for one, am here for the challenge of voxels, so that would be a central part in case I try something in the future.
But if someone wants to take ideas from this thread and make something different, awesome! :)
cosmicr wrote: Wed Aug 07, 2024 1:09 am Unfortunately the polygon filler helper doesn't work the way you describe. It uses one data port for tracking the line, and the second for the column.
Good hint! That's a bummer.
What we need is the start pixel and the number of pixels to draw on each line. If nothing better comes to mind, one would have to determine the start and end point of each line with Bresenham's algorithm. I haven't found a good way to use lookup tables here yet. Not sure if the line draw helper of VERA FX could help with that? (Note that we have to know the position before we draw anything.)
Drawing "transparent pixels" won't help much. It's for when we know beforehand which pixels will be drawn (think of a sprite), but in my proposed algo that would depend on the content of the buffer we are writing to. So we would still need a Vera Data port to read from.
If there is an efficient way to utilize transparent pixels, I am happily proven wrong, though :D
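For what it's worth, the per-line endpoints could come from a simple DDA over each polygon edge (a plain Python stand-in, reusing the example coordinates from the first post; not a LUT and not a VERA feature):

```python
def edge_x_per_scanline(x0, y0, x1, y1):
    """x position where the edge (x0,y0)-(x1,y1) crosses scanlines y0..y1-1."""
    slope = (x1 - x0) / (y1 - y0)          # assumes y1 > y0
    return [round(x0 + slope * (y - y0)) for y in range(y0, y1)]

# Left and right edges of the trapezoid from the first post give the start
# pixel and span length on every scanline:
left = edge_x_per_scanline(140, 170, 132, 190)
right = edge_x_per_scanline(180, 170, 188, 190)
spans = [(l, r - l + 1) for l, r in zip(left, right)]
```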