Sonic and Sega Retro Message Board: WebSonic (WebGL, source code released) - Sonic and Sega Retro Message Board

WebSonic (WebGL, source code released) Live (Playable) Version and GitHub link inside!

#31 User is offline MarkTheEchidna 

Posted 08 April 2011 - 08:41 AM

  • Posts: 39
  • Joined: 10-September 10
  • Gender:Male
  • Location:Belo Horizonte, Brazil
  • Project:WebSonic
Hey Gen,

Thanks for the code. Now I get what you mean by tangent-space vs world-space. Light in tangent space is used to avoid transforming things in the fragment shader, then.

I think I'll try to get tangents working.

About deferred rendering, I'm not sure if it's possible with WebGL currently: I don't think it supports multiple simultaneous render targets.
This post has been edited by MarkTheEchidna: 08 April 2011 - 08:42 AM

#32 User is offline Gen 

Posted 08 April 2011 - 08:54 AM

  • This is halloween! This is halloween!
  • Posts: 309
  • Joined: 03-August 06
  • Gender:Male
  • Project:The Mobius Engine
QUOTE (MarkTheEchidna @ Apr 8 2011, 06:41 AM)
Hey Gen,

Thanks for the code. Now I get what you mean by tangent-space vs world-space. Light in tangent space is used to avoid transforming things in the fragment shader, then.

I think I'll try to get tangents working.

About deferred rendering, I'm not sure if it's possible with WebGL currently: I don't think it supports multiple simultaneous render targets.



AFAIK, WebGL does support multiple render targets at once. You'd more or less create multiple framebuffer objects, and output render buffers to each of them. Then from there, begin compositing via shaders.
This post has been edited by Gen: 08 April 2011 - 08:57 AM

#33 User is offline MarkTheEchidna 

Posted 08 April 2011 - 09:14 AM

  • Posts: 39
  • Joined: 10-September 10
  • Gender:Male
  • Location:Belo Horizonte, Brazil
  • Project:WebSonic
From http://www.khronos.org/registry/webgl/specs/latest/#5.13

CODE
const GLenum COLOR_ATTACHMENT0              = 0x8CE0;
const GLenum DEPTH_ATTACHMENT               = 0x8D00;
const GLenum STENCIL_ATTACHMENT             = 0x8D20;
const GLenum DEPTH_STENCIL_ATTACHMENT       = 0x821A;


WebGL only supports one renderbuffer color attachment per framebuffer... And I don't think you can have more than one framebuffer at the same time...

This guy is complaining about it to Khronos: https://www.khronos.org/webgl/public-mailin...2/msg00100.html

QUOTE
gl_FragData[I] is accepted only if I is zero, as the specs say that there is only a COLOR0_ATTACHMENT. But can we actually relax these limitations whenever the system is capable?


The other guy then states that this was actually a design decision:

QUOTE
The issue I see is, and this applies to the whole extension registry
thing, is that many webgl applications will fail to run on all
implementations. This is because not all developers will provide
fall-backs, due to lack of knowledge, the sheer complexity of dealing
with an increasingly large number of extensions, laziness, 'thoughts
like "I'm just making this to amuse myself" and similar reasons.

In the early days of WebGL this could be a death knell. I think the
extension registry is premature. We should wait until WebGL establishes
a reputation for 'just working' before we create opportunities for
fragmentation.

This post has been edited by MarkTheEchidna: 08 April 2011 - 09:16 AM

#34 User is offline Gen 

Posted 08 April 2011 - 09:47 AM

  • This is halloween! This is halloween!
  • Posts: 309
  • Joined: 03-August 06
  • Gender:Male
  • Project:The Mobius Engine
Hm. Could split it up into stages, I guess. Something along the lines of a single FBO with the following format:
R: Normal X, G: Normal Y, B: Specular, A: Alpha

Normal Z can be reconstructed relatively easily.

Alternatively, we could possibly daisy chain the rendering like so:

Step 1: Render depth and normals into our render buffer
Step 2: Feed FBO to pixel shader for light computation
Step 3: Store result into the same FBO (RGB = light buffer, A = specular buffer)
Step 4: Multiply albedo and specular against their respective channels, and output to the render buffer

This way, we only ever use one FBO, and hopefully manage to stay within SM2 constraints.
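The four steps can be mirrored with plain numbers. A toy per-pixel sketch (hypothetical values, no actual GL calls, and the lighting math is a placeholder rather than real shading):

```javascript
// Toy per-pixel sketch of the staged light pre-pass above. The same RGBA
// slot is reused: first for normals/depth, then for the light result.

// Stage 1 result: one G-buffer texel (nx, ny, specular, depth) - made-up values.
var gbufferTexel = { r: 0.5, g: 0.5, b: 0.25, a: 0.8 };

// Stages 2-3: a light pass reads the texel and overwrites it with
// RGB = diffuse light, A = specular highlight (toy math only).
function lightPass(texel, lightIntensity) {
    return {
        r: lightIntensity, g: lightIntensity, b: lightIntensity,
        a: texel.b * lightIntensity // toy specular highlight term
    };
}

// Stage 4: multiply albedo and the specular map against their channels
// and add the two contributions.
function composite(lightBuffer, albedo, specularMap) {
    return [
        albedo[0] * lightBuffer.r + specularMap * lightBuffer.a,
        albedo[1] * lightBuffer.g + specularMap * lightBuffer.a,
        albedo[2] * lightBuffer.b + specularMap * lightBuffer.a
    ];
}
```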

#35 User is offline MarkTheEchidna 

Posted 08 April 2011 - 10:10 AM

  • Posts: 39
  • Joined: 10-September 10
  • Gender:Male
  • Location:Belo Horizonte, Brazil
  • Project:WebSonic
We can "pack" the normal into a single value, losing some precision, if we use some sort of parametric spiral. This is how MD3 stores normals, if I recall correctly.
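A plain-JS sketch of that packing idea: quantize the normal's latitude/longitude to a byte each and pack them into one 16-bit value (the exact MD3 encoding may differ; the round trip is lossy, as noted):

```javascript
// Sketch: pack a unit normal into one 16-bit value as two quantized angles
// (latitude from +Z, longitude around Z), MD3-style. Lossy by design.
function packNormal(nx, ny, nz) {
    var lng = Math.atan2(ny, nx);
    var lat = Math.acos(nz);
    var lngByte = Math.round(lng * 255 / (2 * Math.PI)) & 0xff;
    var latByte = Math.round(lat * 255 / (2 * Math.PI)) & 0xff;
    return (latByte << 8) | lngByte;
}

// Inverse: rebuild an approximate unit normal from the packed angles.
function unpackNormal(packed) {
    var lat = ((packed >> 8) & 0xff) * (2 * Math.PI) / 255;
    var lng = (packed & 0xff) * (2 * Math.PI) / 255;
    return [Math.cos(lng) * Math.sin(lat),
            Math.sin(lng) * Math.sin(lat),
            Math.cos(lat)];
}
```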

Perhaps we could even calculate the normals using the camera's transform and partial derivatives of the depth buffer. But that would suck on sharp edges, I guess.

There's also the stencil buffer. Can you write directly to it from the shaders?

#36 User is offline Gen 

Posted 08 April 2011 - 10:51 AM

  • This is halloween! This is halloween!
  • Posts: 309
  • Joined: 03-August 06
  • Gender:Male
  • Project:The Mobius Engine
QUOTE (MarkTheEchidna @ Apr 8 2011, 07:10 AM)
We can "pack" the normal into a single value, losing some precision, if we use some sort of parametric spiral. This is how MD3 stores normals, if I recall correctly.

It could be possible, yeah. I know that *some* engines will store the X and Y screen space normal across 4 channels and dynamically reconstruct Z (which in screen space is just depth). But since we can only handle one FBO and renderbuffer, I think it'd be best if we just stored normal, depth, and specular exponent if we're going to be reusing the same buffer over and over until we have the final result.
more or less:

Red: Normal X
Green: Normal Y
Blue: Specular Exponent
Alpha: Depth

Stored at whatever the highest common bit depth we can manage (primarily because depth information likely needs more than 8 BPP for decent quality).
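Reconstructing Z works because a unit normal satisfies x² + y² + z² = 1, and visible surfaces face the camera, so the sign of Z is known. A quick sketch:

```javascript
// Sketch: rebuild the Z component of a unit normal from its stored X and Y.
// Assumes camera-facing normals (z >= 0 in view space); the clamp guards
// against quantization error pushing x*x + y*y slightly above 1.
function reconstructNormalZ(x, y) {
    return Math.sqrt(Math.max(0, 1 - x * x - y * y));
}
```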

Specular exponent is easy to store in a deferred renderer since it's always a 0 to 1 constant that later gets multiplied by 128 by the deferred shader itself. I'm interested in finding better ways to pack normals while retaining as much normal quality as possible, but keep in mind partial derivatives can eat into ALUs quite a bit.
QUOTE
There's also the stencil buffer. Can you write directly to it from the shaders?

...That's actually a very good question.

Edit: It would actually be interesting to see if we could almost do a surface shader approach, where the light buffer's RGB components are simply multiplied against each mesh's albedo, and the alpha is multiplied against an object's specular map, and added.
This post has been edited by Gen: 08 April 2011 - 11:09 AM

#37 User is offline MarkTheEchidna 

Posted 08 April 2011 - 04:59 PM

  • Posts: 39
  • Joined: 10-September 10
  • Gender:Male
  • Location:Belo Horizonte, Brazil
  • Project:WebSonic
Hmmm... Having things premultiplied is not a bad idea. But still, how would we fit the color*albedo + alpha*specular info along with the normals?

Just had another idea: we could render to a buffer twice as tall as we need, then check the current row when rendering each fragment.

CODE
if (row % 2 == 0) {
   // calculate and render color info + albedo
} else {
   // calculate and render normals + specular
}


It would look like this:

Attached File  Untitled.png (42.13K)

And we would still have a free channel.

Then, on the final rendering, we could read the sampler twice per pixel to get the full data.
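The row bookkeeping on the read side would be trivial; a sketch of the arithmetic (assuming the even/odd layout above):

```javascript
// Sketch: for final output row y, the two source rows in the double-height
// buffer, assuming even rows hold color/albedo and odd rows hold
// normals/specular as described above.
function sourceRows(y) {
    return { colorRow: 2 * y, normalRow: 2 * y + 1 };
}
```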

#38 User is offline Chimpo 

Posted 08 April 2011 - 05:06 PM

  • Posts: 7172
  • Joined: 26-July 06
  • Gender:Not Telling
Works lovely but these are some of the most awkward controls and control placements I've ever dealt with in a video game. I know the FAQ addresses this already but everything else is hard to impress when you can't even control the thing comfortably.

I received this error when I tried to do what DustArma did.

CODE
Error: Cannot call method 'transpose' of null
TypeError: Cannot call method 'transpose' of null
    at http://achene.co/WebSonic/js/WorldEngine.js:300:42
    at Object.normalModelView (http://achene.co/Web...Engine.js:342:5)
    at Object.render (http://achene.co/Web...layer.js:757:41)
    at http://achene.co/WebSonic/js/WorldEngine.js:137:17
    at http://achene.co/WebSonic/js/WorldEngine.js:184:4
    at Array.forEach (native)
    at Object.render (http://achene.co/Web...ngine.js:182:11)
    at http://achene.co/WebSonic/js/main.js:82:22

#39 User is offline Gen 

Posted 08 April 2011 - 05:26 PM

  • This is halloween! This is halloween!
  • Posts: 309
  • Joined: 03-August 06
  • Gender:Male
  • Project:The Mobius Engine
QUOTE (MarkTheEchidna @ Apr 8 2011, 02:59 PM)
Hmmm... Having things premultiplied is not a bad idea. But still, how would we fit the color*albedo + alpha*specular info along with the normals?

Just had another idea: we could render to a buffer twice as tall as we need, then check the current row when rendering each fragment.

CODE
if (row % 2 == 0) {
   // calculate and render color info + albedo
} else {
   // calculate and render normals + specular
}


It would look like this:

Attached File  Untitled.png (42.13K)

And we would still have a free channel.

Then, on the final rendering, we could read the sampler twice per pixel to get the full data.

Well, what we would do to ensure that we only use the minimum we absolutely need is to feed the rendered light buffer into a fragment shader that simply multiplies it against the albedo. An example function would be:
CODE
vec3 lightPrePassFinal(vec4 lightBuffer, vec3 albedo, vec3 specular) {
    albedo *= lightBuffer.xyz;
    albedo += lightBuffer.a * lightBuffer.xyz * specular;
    return albedo;
}


Where lightBuffer would basically be a uniform sampler2D (or even sampler2DRect) sampled into a vec4.

Unity kinda does something nifty like this; basically all the deferred renderer does is output a light buffer with all of the usual shadows and such that can be used for various effects.
This post has been edited by Gen: 08 April 2011 - 05:52 PM

#40 User is offline MarkTheEchidna 

Posted 08 April 2011 - 07:13 PM

  • Posts: 39
  • Joined: 10-September 10
  • Gender:Male
  • Location:Belo Horizonte, Brazil
  • Project:WebSonic
@DustArma: Whoah, that's quite severe. Hadn't seen the video.

@Chimpo: That's most likely the game trying to invert a singular matrix. Sylvester (the math lib I used) returns null on errors like this, hence the error "Cannot call method 'transpose' of null". The singular matrix was generated likely due to numeric instability inside Sylvester itself. I'll add a sanity check to make sure this doesn't happen again.
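The sanity check could be as simple as guarding the inverse before chaining calls on it (a sketch; the identity fallback is an assumption, not necessarily how WebSonic will actually handle it):

```javascript
// Sketch: Sylvester's inverse() returns null for singular matrices, so
// guard the result before calling .transpose() (or anything else) on it.
// Falling back to a supplied identity matrix is an assumed recovery strategy.
function safeInverse(matrix, identityFallback) {
    var inv = matrix.inverse();
    return inv !== null ? inv : identityFallback;
}
```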

@Gen Wait, so specular and albedo are vec3? I thought they were float, and had r,g,b components stored along with them. Makes much more sense now.

#41 User is offline Gen 

Posted 08 April 2011 - 07:21 PM

  • This is halloween! This is halloween!
  • Posts: 309
  • Joined: 03-August 06
  • Gender:Male
  • Project:The Mobius Engine
QUOTE (MarkTheEchidna @ Apr 8 2011, 04:13 PM)
@DustArma: Whoah, that's quite severe. Hadn't seen the video.

@Chimpo: That's most likely the game trying to invert a singular matrix. Sylvester (the math lib I used) returns null on errors like this, hence the error "Cannot call method 'transpose' of null". The singular matrix was generated likely due to numeric instability inside Sylvester itself. I'll add a sanity check to make sure this doesn't happen again.

@Gen Wait, so specular and albedo are vec3? I thought they were float, and had r,g,b components stored along with them. Makes much more sense now.

Our light buffer consists of this:
RGB: Light
A: Specular highlight

Storing our specular highlight in our alpha channel makes sense, since in a traditional forward renderer, the specular highlight would be a float multiplied against a specular map at a later stage anyways.

So we basically just multiply the light (RGB) against the albedo (which we'll assume is vec3), and the specular highlight (A) against our specular map (which can either be float or vec3, depending on if you want to use an extra texture or the alpha of an existing texture), then add the two resulting specular and diffuse components together.

Another nifty trick: since we're storing our specular exponent for the deferred renderer in its own channel and later multiplying by 128 within the fragment shader, you could actually use the alpha of your specular map to influence the specular highlight's exponent, giving you a specular exponent map.
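As a toy illustration of that trick (the 128 scale comes from the convention described above; the floor of 1 is an assumed guard, not part of the scheme):

```javascript
// Sketch: turn the specular map's alpha channel into a Phong exponent,
// using the 0..1 * 128 convention described above. The minimum of 1 is an
// assumed guard against a degenerate zero exponent.
function specularExponent(specMapAlpha) {
    return Math.max(1, specMapAlpha * 128);
}
```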

#42 User is offline MarkTheEchidna 

Posted 08 April 2011 - 09:30 PM

  • Posts: 39
  • Joined: 10-September 10
  • Gender:Male
  • Location:Belo Horizonte, Brazil
  • Project:WebSonic
Ha, that would allow for some kickass galvanized metal and ice effects.

Update: I've implemented layers of objects, so skyboxes are now a possibility. I've added an animated sky shader, simulating cloud formation and movement. Check it out:



Looks better in motion, so give it a try.

#43 User is offline Azu 

Posted 09 April 2011 - 07:26 AM

  • I must be stupid.
  • Posts: 1463
  • Joined: 23-February 08
  • Gender:Male
  • Location:Home
This is awesome. I've been wondering what happened to this when you suddenly disappeared. I wish I could make levels for this, though.


#44 User is offline Gen 

Posted 09 April 2011 - 08:43 AM

  • This is halloween! This is halloween!
  • Posts: 309
  • Joined: 03-August 06
  • Gender:Male
  • Project:The Mobius Engine
I went ahead and forked your repo. I'm gonna see if I can get some fancypants rendering stuff going on.

#45 User is offline Dr. Kylstein 

Posted 14 April 2011 - 02:20 PM

  • Posts: 84
  • Joined: 05-June 08
  • Gender:Not Telling
This still doesn't work for me on Firefox 3 or Chrome 11. Chrome gives this error, but it seems to carry on only to give pages full of warnings and still not work.
QUOTE (Google Chrome)
CODE
Error: Nonexistent or unused uniform in the description of shader shader/sky.jsonshader: u_normalCameraView
    at http://achene.co/WebSonic/js/GraphicsEngine.js:96:11
    at Array.forEach (native)
    at new (http://achene.co/Web...Engine.js:93:23)
    at Object.success (http://achene.co/Web...anager.js:80:20)
    at success (http://achene.co/Web...uery.js:5267:15)
    at XMLHttpRequest. (http://achene.co/Web...query.js:5207:7)

Ubuntu 10.10 x64
GTX280
