
## How do the PSX's 3D graphics work?

Started by UltimateUrinater, May 08, 2016, 01:49:33 PM

#### UltimateUrinater

After reading a few documents on the GPU and GTE of the PlayStation 1, I wasn't fully able to wrap my mind around how the PSX's 3D graphics work; I could only get a vague understanding. Maybe it's because of my inexperience with 3D consoles of this nature (or maybe my little brain just can't comprehend it). Either way, I was hoping that someone could show me a routine to display a 3D object and also explain how the GTE's instructions and the GPU work together to display 3D images (if they do at all).

#### Gemini

It doesn't work too differently from most consoles from the early days of 3D. The GTE takes a bunch of coordinates as input (usually one or three vertices, which can be combined to make a projected quad) and spits out screen coordinates which you can map onto primitives. The main difference compared to modern systems is that you need to communicate more or less directly with the hardware.

#### UltimateUrinater

Oh, OK, that makes sense. I now understand the purpose of the RTPT/RTPS and MVMVA GTE instructions. But what about lighting/shading and all that? How would one use those instructions to, say, display a cube that looks darker the further away it is and also has a light shining on it?

#### Gemini

You set two matrices for lights: one used for positioning the lights, the other for their colors. In both cases you populate the 3 rows of a matrix, where each row corresponds to a light source (it's a column for colors, but let's pretend it's always rows). Position matrices are usually calculated by taking the light position, normalizing it, and assigning the result to the corresponding row. Color matrices simply contain the colors for each channel shifted left by 8 bits. When both are assigned, you render a polygon as usual, then add dc(c)t/s commands to retrieve RGB values that can be mapped again onto a primitive of choice. As usual, you can decide to use triangle or single-point light calculations, which can be combined for a number of effects, say hard lights in case you're using POLY_FT3/4 primitives.

#### UltimateUrinater


Appreciate it.

May 11, 2016, 10:02:50 PM - (Auto Merged - Double Posts are not allowed before 7 days.)

Just to clarify.
```
DCPL     8        Depth Cue Color Light
Fields:  none
Opcode:  cop2 $0680029
In:   RGB               Primary color          R,G,B,CODE   [0,8,0]
      IR0               Interpolation value                 [1,3,12]
      [IR1,IR2,IR3]     Local color vector                  [1,3,12]
      CODE              Code value from RGB    CODE         [0,8,0]
      FC                Far color                           [1,27,4]
Out:  RGBn              RGB fifo               Rn,Gn,Bn,CDn [0,8,0]
      [IR1,IR2,IR3]     Color vector                        [1,11,4]
      [MAC1,MAC2,MAC3]  Color vector                        [1,27,4]
```
I don't really understand what the inputs of this instruction mean. What does the primary color represent in this case, as well as the local color vector and the code value from RGB? I'd ask the same question for a lot of the other instructions as well.

May 13, 2016, 01:27:09 AM - (Auto Merged - Double Posts are not allowed before 7 days.)

Or, an even better question:
```
NCDS     19       Normal color depth cue single vector
Fields:  none
Opcode:  cop2 $0e80413
In:   V0                Normal vector                       [1,3,12]
      BK                Background color       RBK,GBK,BBK  [1,19,12]
      RGB               Primary color          R,G,B,CODE   [0,8,0]
      LLM               Light matrix                        [1,3,12]
      LCM               Color matrix                        [1,3,12]
      IR0               Interpolation value                 [1,3,12]
Out:  RGBn              RGB fifo               Rn,Gn,Bn,CDn [0,8,0]
      [IR1,IR2,IR3]     Color vector                        [1,11,4]
      [MAC1,MAC2,MAC3]  Color vector                        [1,27,4]
```
What do the light matrix and the color matrix represent in the inputs of this instruction?

#### UltimateUrinater

Quote from: Gemini on May 08, 2016, 07:21:10 PM
You set two matrices for lights: one used for positioning of the lights, the other for colors.
By colors, do you mean the color of the light source?

Quote from: Gemini on May 08, 2016, 07:21:10 PM
In both cases you populate the 3 rows of a matrix, where each row corresponds to a light source
I'm assuming these matrices are both 3x3? And that the columns of a row store the XYZ values of the light source, and that each row stores a different light source? So does that mean that one matrix can store 3 different light sources?

Quote from: Gemini on May 08, 2016, 07:21:10 PM
Position matrices are usually calculated by taking the light position, then you normalize it, and assign the result to the corresponding row. Color matrices simply contain the colors for each channel shifted left by 8 bits.
So will our position matrix look something like this?
|X0(normalized),Y0(normalized),Z0(normalized)|
|X1(normalized),Y1(normalized),Z1(normalized)|
|X2(normalized),Y2(normalized),Z2(normalized)|

And will the color matrix look like this?
|R0,G0,B0|
|R1,G1,B1|
|R2,G2,B2|

In other words, what is the exact layout/organization of the elements within the light matrix and light color matrix registers?

#### Gemini

Quote from: UltimateUrinater on May 09, 2016, 07:25:23 PM
Or, an even better question:
```
NCDS     19       Normal color depth cue single vector
Fields:  none
Opcode:  cop2 $0e80413
In:   V0                Normal vector                       [1,3,12]
      BK                Background color       RBK,GBK,BBK  [1,19,12]
      RGB               Primary color          R,G,B,CODE   [0,8,0]
      LLM               Light matrix                        [1,3,12]
      LCM               Color matrix                        [1,3,12]
      IR0               Interpolation value                 [1,3,12]
Out:  RGBn              RGB fifo               Rn,Gn,Bn,CDn [0,8,0]
      [IR1,IR2,IR3]     Color vector                        [1,11,4]
      [MAC1,MAC2,MAC3]  Color vector                        [1,27,4]
```
What do the light matrix and the color matrix represent in the inputs of this instruction?
The input parameters are what you need to load into the GTE before executing those commands, while the output corresponds to which registers you need to read to gather the result. So in this case you have:
- V0, a normal vector for applying light direction.
- BK, the "background" color, aka ambient color, a simple color vector of 32 bits per channel (shift an 8-bit color 4 bits left to obtain the actual value; the GTE registers for this are 13-15, one per channel);
- RGB, again a color vector using 8 bits per channel, with CODE being the primitive identifier. This is a color modifier: 128,128,128 means no color math is applied, while 0,0,0 would make the resulting primitive always black (i.e. the polygon's diffuse attribute, GTE register 6; results might change depending on blending effects in use). You can fill CODE with, say, the POLY_GT3 code value, and whatever you grab from the GTE can be written as a 32-bit value to the rgb variables of a POLY_GT3 without the need to define the code attribute itself, since it's permanent for the following operations and will be assigned to all color calculations.
- LLM and LCM are the light (position) and color matrices, set via the SetLightMatrix (GTE registers 8-12) and SetColorMatrix (GTE registers 16-20) commands. These will be used for all color calculation instructions.
- IR0 should be the intensity of the colors applied to your primitives. I don't think it's even needed, since you pretty much always want a standard value there (i.e. ONE = 4096). None of my code ever touches this register; leaving it alone gives unscaled results.

As for the output values, that's where your result will be stored in the GTE. In this case, you get a number of color vectors (GTE registers 20-22), which also include the primitive code you should have previously set in GTE register 6. The values inside square brackets are the GTE registers involved in the calculation, so if you had anything stored there, those values have been changed and cannot be used anymore.

Quote from: UltimateUrinater on May 25, 2016, 12:14:03 AM
By colors, do you mean the color of the light source?
I'm assuming these matrices are both 3x3? And that the columns of a row store the XYZ values of the light source, and that each row stores a different light source? So does that mean that one matrix can store 3 different light sources?
So will our position matrix look something like this?
|X0(normalized),Y0(normalized),Z0(normalized)|
|X1(normalized),Y1(normalized),Z1(normalized)|
|X2(normalized),Y2(normalized),Z2(normalized)|

And will the color matrix look like this?
|R0,G0,B0|
|R1,G1,B1|
|R2,G2,B2|
This is how you would populate both matrices, taken from the official function used to set a flat light (why it's called flat kind of eludes me):
```c
void Set_flat_light(const int index, VECTOR *lpos, const CVECTOR *col, const int Mag)
{
    int len;

    len = SquareRoot0(lpos->vx*lpos->vx + lpos->vy*lpos->vy + lpos->vz*lpos->vz);
    if (len == 0)
        return;

    /* set position row */
    M_ll.m[index][0] = ((-lpos->vx)<<12)/len*Mag;
    M_ll.m[index][1] = ((-lpos->vy)<<12)/len*Mag;
    M_ll.m[index][2] = ((-lpos->vz)<<12)/len*Mag;

    /* set color column */
    M_lc.m[0][index] = (col->r*ONE)/0xFF;
    M_lc.m[1][index] = (col->g*ONE)/0xFF;
    M_lc.m[2][index] = (col->b*ONE)/0xFF;
}
```
M_ll is the local light matrix, while M_lc is the local color matrix. So yes, both are to be treated as 3x3 matrices (i.e. 3 light sources), with the position vector to be ignored since it cannot be loaded into the GTE registers. In other words, you got the light matrix layout right, but the color matrix is more like this:
|R0,R1,R2|
|G0,G1,G2|
|B0,B1,B2|

#### UltimateUrinater

Hmmm. So if V0 represents the direction of the light, LLM represents the location of the light, and LCM represents the color of the light, then what represents the direction/location of the normal of the plane that the light is hitting? Or do I have the wrong idea as to what the normal instruction does? The way I thought of this instruction was that it would take the normal of a flat shape and its location, then take a light source with its color, location, and direction. From there, it would calculate the monochrome color that would appear on the surface of the flat shape due to the direction it's facing, its location in space (which probably doesn't matter much, assuming the intensity of the light doesn't degrade the farther away it is), the color of the light, the color of the shape, etc. Thanks so much for the help thus far, really appreciate it.

#### Gemini

Quote from: UltimateUrinater on May 26, 2016, 03:15:45 PM
Hmmm. So if V0 represents the direction of the light, LLM represents the location of the light, and LCM represents the color of the light, then what represents the direction/location of the normal of the plane that the light is hitting? Or do I have the wrong idea as to what the normal instruction does? The way I thought of this instruction was that it would take the normal of a flat shape and its location.
Normals are normals; there isn't really any way around them. You assign one normal for flat lighting (with NCDS), more if you want to give the model a smooth look (NCDT). NCDS and NCDT are used in combination to build lighting values for quadrilaterals, since you need to calculate 3 points first and then the last point.

#### UltimateUrinater

Quote from: Gemini on May 25, 2016, 06:32:25 AM
- V0, a normal vector for applying light direction.
So is V0 the normal of the plane that the light is hitting?

#### Gemini

More like the direction of each vertex if you have smooth lighting, but that's pretty much it, yes.