We've argued about the palette issue many times already. There are two talented guys willing to develop the software, so we should just hold back and let them do their thing.
Personally if I was making this I'd drop the idea that it has to have low requirements and be able to run on a PS2 and go all out. Hell, make it 3D, cel shaded, with a bajillion particle effects at 1920x1200 resolution :P
Probably for the best you aren't making it then, seeing as there's not a chance in HELL that would run on my computer (or a lot of people's). I know you LOVE to make engines and everything, bu-*shot*
Also I want to make it clear that I'm not against the current way of doing things at all; I think that any HD Sonic 2 is awesome. I'm just making a suggestion and I don't want to start off on the wrong foot.
;P Eh, if it's a PC game, graphics options are pretty much a must anyway. I was talking more about the maximums: when programming a game, I think you should build the best version during development, then scale back as much as you can for lower-end systems. And while developing, don't stick to a strict hard-coded number of anything. The best way to do it is to make everything data driven - read the maximum number of particles from a file, read the resolution from a file, etc. The effort will pay off when you've got a 100% flexible engine that can run at 320x240 or 16000x10000 resolution, have no particles or 40 million, etc. It would really help make the game future proof as well. But anyway, just my 2 cents :P
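Just to make the "read it from a file" part concrete, here's a rough sketch in C of what I mean. Every file name, key name and default here is made up purely for illustration, not anything the team has decided on:

Code (Text):
    #include <stdio.h>
    #include <string.h>

    /* Engine settings with conservative defaults; any value can be overridden from a file. */
    typedef struct {
        int screen_width;
        int screen_height;
        int max_particles;
    } EngineConfig;

    /* Reads simple "key value" pairs from a text file, e.g.:
     *   screen_width 1920
     *   screen_height 1200
     *   max_particles 4096
     */
    static void load_config(const char *path, EngineConfig *cfg)
    {
        /* Defaults used when the file or a key is missing. */
        cfg->screen_width  = 320;
        cfg->screen_height = 240;
        cfg->max_particles = 256;

        FILE *f = fopen(path, "r");
        if (!f)
            return;

        char key[64];
        int value;
        while (fscanf(f, "%63s %d", key, &value) == 2) {
            if      (strcmp(key, "screen_width")  == 0) cfg->screen_width  = value;
            else if (strcmp(key, "screen_height") == 0) cfg->screen_height = value;
            else if (strcmp(key, "max_particles") == 0) cfg->max_particles = value;
        }
        fclose(f);
    }

Nothing in the engine then needs a hard-coded cap; the same build runs tiny or huge depending on what the file says.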
My concern is that the limit shouldn't be forced by the engine, just by the accepted style. The engine should be able to handle more than just what we are going to use it for with this remake. The reason being, at the end of the day someone is going to want to push it further, and it would just be a waste to have to rewrite the engine for, say, an HD Sonic 3.

Also, is the score, etc. staying at the same scale as the other graphics? I ask because when other games have been scaled up, I've always thought the ones that didn't scale the HUD as much as the rest of the graphics enhanced the feeling that the game had scaled up. Just my 2 cents really, but I think it'd be worth looking at. Something like:

Level Art, etc.: 4x - HUD: 2x
Level Art, etc.: 3x - HUD: 2x
Level Art, etc.: 2x - HUD: 1x
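If it helps to picture it, keeping the HUD scale as a separate factor from the level-art scale is trivial on the engine side. A rough sketch (every name here is made up for illustration, not anything from the actual engine):

Code (Text):
    #include <stdio.h>

    /* Hypothetical renderer call - stands in for whatever the real engine uses. */
    static void draw_sprite(const char *name, float x, float y, float scale)
    {
        printf("draw %-12s at (%7.1f, %6.1f) scale %.0fx\n", name, x, y, scale);
    }

    int main(void)
    {
        /* Separate scale factors: level art at 4x, HUD at only 2x. */
        const float level_scale = 4.0f;
        const float hud_scale   = 2.0f;

        float camera_x = 1024.0f, camera_y = 256.0f;  /* camera already in scaled space     */
        float world_x  = 300.0f,  world_y  = 80.0f;   /* a tile position in original 1x units */

        /* Level art: positions scale with the level-art factor. */
        draw_sprite("eh_tile", world_x * level_scale - camera_x,
                               world_y * level_scale - camera_y, level_scale);

        /* HUD: positioned straight in screen space with its own, smaller factor. */
        draw_sprite("score_hud", 16.0f, 8.0f, hud_scale);
        return 0;
    }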
Just a quick question: will the engine be able to handle transparent pixels? Because if it will, that will make everything blend a whole hell of a lot better. As far as I can see, if we have transparent pixel support the sprites will be able to blend like this: And if we don't, they'll blend like this: Transparent pixels would give the vector artists the freedom to not have to worry about eventually editing their objects pixel by pixel in a raster program to make sure they don't look aliased when mixed. Having transparent pixels might make it so the palette limitations we've set can't be met, but with them we'll end up with an overall nicer image.
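For reference, the usual way transparent pixels blend is plain alpha compositing. A minimal sketch assuming 8-bit channels - this is just the textbook formula, not anything from the actual engine:

Code (Text):
    #include <stdint.h>

    /* Standard "over" compositing of a source pixel onto a destination pixel.
     * alpha runs from 0 (fully transparent) to 255 (fully opaque). */
    static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha)
    {
        return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
    }

    /* e.g. a half-transparent white edge pixel (255, alpha 128) over a blue sky
     * channel of 64 gives blend_channel(255, 64, 128), which is about 159 -
     * exactly the kind of in-between value that smooths a sprite border. */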
Oh, wow! That reminded me there is a "DIFFICULT 8 FRAMES ANIMATION" to do ASAP! :D I don't know if it'll be possible to use the "transparent PNGs" feature together with a palette system. I sincerely hope so, but Cooljerk is the one who can answer that.
Hey all,

This post may very well be better placed in the engineering/reverse engineering section of the forums, but since it's directly related to Sonic 2 HD and its engine, I thought I might post here first. I've been experimenting with a different collision detection/physics scheme than the one used in Sonic 2 for the project, and I was looking for some guidance on the Sonic 2 ASM code. I glanced through the forums and didn't find a clear explanation, but it's possible I'm just thick and missed it.

In Sonic 2, the programmers kept track of three main values related to Sonic's speed: his maximum velocity, his acceleration, and his deceleration. The problem is that although I found these numbers fairly easily (many thanks to Xenowhirl for his disassembly & comments), I couldn't figure out how they actually translate into pixels/frame or some other logical unit of measure. The values, for those who are curious, can be found here:

Code (Text):
    move.w  #$600,(Sonic_top_speed).w
    move.w  #$C,(Sonic_acceleration).w
    move.w  #$80,(Sonic_deceleration).w

These are of course the values when Sonic is normal and out of the water. The following values are for the speed shoes:

Code (Text):
    move.w  #$C00,(Sonic_top_speed).w
    move.w  #$18,(Sonic_acceleration).w
    move.w  #$80,(Sonic_deceleration).w

Sonic's top speed & acceleration double and his deceleration stays the same (but you all knew that given the earlier discussion). Each time the movement subroutines are called, Sonic's acceleration is added to his current inertia, until his inertia reaches its maximum value. The question I have is how often these methods are called. Is it once per frame? If so, how does the inertia actually translate to the movement of pixels on the screen?

Thanks in advance for any help/guidance. I'll be the first to admit I'm more of a C guy than an ASM guy...
-LS
Yeah, all movements in the game are calculated once a frame. In the case of the Inertia value, it gets split up into XVelocity and YVelocity according to the angle of the ground (when moving on the ground). The X and Y Velocities then result in actual pixel movement on the screen, although the position may be different after collision checks etc.
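To put some rough numbers on that (this is how the original engine is generally understood to work - treat it as a sketch rather than gospel): the speed values are 8.8 fixed point, so $100 is one pixel per frame. That would make $600 a top speed of 6 px/frame, $C an acceleration of 12/256 (about 0.047) px/frame each frame, and $80 a deceleration of 0.5 px/frame each frame. The ground-angle split is just sine/cosine; in C it would look something like this (floating point used for clarity, whereas the real game uses fixed point and sine/cosine lookup tables, and the sign convention depends on how the angle is measured):

Code (Text):
    #include <math.h>
    #include <stdint.h>

    /* 8.8 fixed point: high byte = whole pixels, low byte = 1/256ths of a pixel. */
    #define TOP_SPEED    0x600   /* 6.0 px/frame           */
    #define ACCELERATION 0x00C   /* ~0.047 px/frame/frame  */

    /* Split the ground speed ("inertia") into X/Y velocity from the ground
     * angle, then accumulate it into the position once per frame. */
    void step(int16_t inertia, double ground_angle_radians,
              int16_t *x_vel, int16_t *y_vel, int32_t *x_pos, int32_t *y_pos)
    {
        *x_vel = (int16_t)(inertia *  cos(ground_angle_radians));
        *y_vel = (int16_t)(inertia * -sin(ground_angle_radians));  /* screen Y grows downward */

        /* Positions keep the same sub-pixel precision; the on-screen pixel
         * position is simply the accumulated value shifted down by 8. */
        *x_pos += *x_vel;
        *y_pos += *y_vel;
    }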
I have a question about the palette programming. Instead of encoding partially transparent colors in the regular palette, can we just code the alpha maps directly into the engine and add them to the colors when rendering? Treating alpha as though it were a fourth component of each palette color seems like a huge waste of the colorspace to me. Since there are only 256 possible alpha values, it wouldn't take any more space to hard-code them than it would to reference them via a palette, but if they are linked to the other color values then it'll be a huge waste of palette slots, because different colors with the same level of transparency would each require their own slot.
Yeah, that is way better than including alpha values in the palette colors. So the bottom line is we shouldn't have to count transparency against the color limitations.
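In other words, each pixel in an asset would carry a palette index plus an independent alpha value, and the two only meet at render time. A rough sketch of what that lookup could look like (names and layout are purely illustrative, not the actual engine):

Code (Text):
    #include <stdint.h>

    #define PALETTE_SIZE 64              /* however many slots the style allows */

    typedef struct { uint8_t r, g, b; } Color;   /* palette entries stay pure RGB */

    /* A sprite stores two parallel planes: one of palette indices, one of alpha. */
    typedef struct {
        int width, height;
        const uint8_t *indices;          /* width*height palette indices */
        const uint8_t *alpha;            /* width*height alpha, 0..255   */
    } Sprite;

    /* Resolve one pixel at render time: color comes from the palette,
     * transparency comes from the separate alpha map. */
    static void resolve_pixel(const Sprite *s, const Color palette[PALETTE_SIZE],
                              int x, int y, Color *out_rgb, uint8_t *out_alpha)
    {
        int i = y * s->width + x;
        *out_rgb   = palette[s->indices[i] % PALETTE_SIZE];
        *out_alpha = s->alpha[i];
    }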
Would have been nice to know this from the start, as it means I now have to redraw a large number of EH tiles and the Balkiry, and then draw a second version for the alpha map... -_- In fact, almost all the art drawn so far needs to be redrawn...
Acaeris? How so? The alpha maps were to be done last anyway, and programs can easily be written to generate alpha maps with edge detection + edge blending. So most of you won't even be doing any alpha maps - most of them will be generated by programs. The only time an alpha map may need to be edited or done by a human would be for the palm tree or the bush.
Edges would need to overlap the area covered by the alpha map so that no background colour/index transparency shows through when the alpha map is applied. At this moment in time all artwork has been created with the alpha built in, so it would look horrible if the alpha were removed and then re-applied. I've tried this very thing when making graphics for N64 HD packs; it doesn't work. The only workaround is to draw to the edge of the tile covering the sprite.
Proof of concept only: I tested this idea using PaintShop Pro. It works by expanding the edges of the sprite (removing the background color around the edges). Better, Acaeris? The only thing the artist needs to do is printscreen the sprite two times: once with a white background and another time with a black background.

-----UPDATE-----

OK, I wrote the code and it works beautifully. Algorithm used (apply to each pixel):

Code (Text):
    // Load rw,gw,bw with the RGB of the pixel from the image with the WHITE background
    float rw = 0.0f, gw = 0.0f, bw = 0.0f;
    // Load rb,gb,bb with the RGB of the pixel from the image with the BLACK background
    float rb = 0.0f, gb = 0.0f, bb = 0.0f;

    // Calculate the alpha map as the negative of (WHITE background image - BLACK background image)
    float ra = (unsigned char)~(unsigned char)(rw - rb);
    float ga = (unsigned char)~(unsigned char)(gw - gb);
    float ba = (unsigned char)~(unsigned char)(bw - bb);

    // Save the alpha map while it's still in 0-255 form
    // ...

    // Convert the alpha map to 0.0-1.0 form
    ra /= 255.0f;
    ga /= 255.0f;
    ba /= 255.0f;

    // Expand the borders using the alpha map
    // (this is done by removing the blending from the image)
    float ro = 255.0f;   // the RGB components of the final output image
    float go = 255.0f;
    float bo = 255.0f;

    // These equations remove the alpha (un-blend against the white background)
    if (ra > 0.0f) ro = (rw - (1.0f - ra) * 255.0f) / ra;
    if (ga > 0.0f) go = (gw - (1.0f - ga) * 255.0f) / ga;
    if (ba > 0.0f) bo = (bw - (1.0f - ba) * 255.0f) / ba;

    // Save the output pixel colors
    // ...

Results in perfect edge-blending, compared to having no blending. The only input the program needs: top image = sprite on white background, bottom image = sprite on black background.
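To sanity-check the math with made-up numbers: a pure red pixel drawn at 50% opacity comes out as roughly (255, 128, 128) over white and (128, 0, 0) over black. White minus black per channel is about (127, 128, 128), and the bitwise negative of that is about (128, 127, 127), i.e. an alpha of roughly 0.5. Plugging that back into the un-blend step recovers approximately (255, 0, 0) - the original red - plus the 0.5 alpha for the alpha map.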
You want to implement a system that somehow "guesses" what the alpha channel looks like? Why not just use the actual alpha channel?
Hivebrain, I'm not an artist nor do I know what the artists' tools can or can't do. I just read what the artists say and provide a solution. And also - "guessing" is the wrong word to use - unless you think solving an algebraic equation is 'guessing'...