
WebSonic (WebGL, source code released)

Discussion in 'Fangaming Discussion' started by MarkTheEchidna, Apr 7, 2011.

  1. Aerosol

    Aerosol

    Not here. Moderator
    11,163
    573
    93
    Not where I want to be.
    Sonic (?): Coming summer of 2055...?
    Still the same error for me, but it may still just be my old ass videocard.
     
  2. Scripten

    Scripten

    Member
    85
    0
    0
    Eh, I've got a fairly new GeForce 440, and it's still not going for me, either. Tried reloading several times, and waited about five minutes each time. Unfortunately, nothing yet.

    Very excited to see how this is turning out though!
     
  3. DustArma

    DustArma

    Member
    1,338
    10
    18
    Santiago, Chile
    Learning Python.
    Still not working for me; it doesn't even show an error or anything.
     
  4. @AerosolSP: :( Unfortunately, in your case there isn't much I can do on the JS side to get it running there...

    If you have the drivers that ship with Windows, downloading the ones from NVIDIA may or may not get it to work. You can also try it in Firefox 4 if you tried it in Chrome, or vice versa.

    EDIT: Gosh, I'm surprised at the failure rate of this... Either I'm doing something really wrong or WebGL sucks badly on Windows.
     
  5. DustArma

    DustArma

    Member
    1,338
    10
    18
    Santiago, Chile
    Learning Python.
    So, I just tried with Chrome: same black screen and no error message. It just stops after loading the sonic.jpg texture.


    *EDIT* Chrome just got this: [error output attached]
    It didn't interrupt the loading process, however; that still stopped after loading sonic.jpg.
     
  6. Yay! Thanks, DustArma. Now I know which shader was the culprit (shader/metal.jsonshader).

    I was doing something really stupid (assigning a value to gl_FragColor right at the end): the GLSL compiler of your driver was optimizing away all my uniforms, since they weren't being used at all. It should work now. Ctrl+F5 and see if it works.
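
    In case anyone's curious, the pattern was along these lines (an illustrative reconstruction, not the actual shader):
    Code (Text):
    precision mediump float;
     
    uniform vec4 u_color; // never actually reaches the output...
     
    void main() {
        vec4 result = u_color;    // ...because this value gets thrown away
        gl_FragColor = vec4(1.0); // the final assignment wins, so the compiler strips u_color
    }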

    I added the logs/errors back to the HUD, just so they're more visible.
     
  7. DustArma

    DustArma

    Member
    1,338
    10
    18
    Santiago, Chile
    Learning Python.
    And it works, great job :eng101: , gonna play around with it now.

    *EDIT* Works much better on Chrome than Firefox; it's all stuttery on the fox. *EDIT*

    *EDIT2* Noticed the slope physics; however, Sonic can run up any slope, even a completely vertical wall, if you hold the button, though he just barely moves. *EDIT2*

    *EDIT3* [embedded video] Noticed that slope quirk. *EDIT3*
     
  8. Aerosol

    Aerosol

    Not here. Moderator
    11,163
    573
    93
    Not where I want to be.
    Sonic (?): Coming summer of 2055...?
    Hurm. It works in Firefox 4. Choppy though.

    Good stuff Mark :D
     
  9. DustArma

    DustArma

    Member
    1,338
    10
    18
    Santiago, Chile
    Learning Python.
    Try it on Chrome: I had 15 FPS and a lot of stuttering on FF, and a smooth 50+ FPS on Chrome.
     
  10. Gen

    Gen

    This is halloween! This is halloween! Member
    309
    0
    0
    The Mobius Engine
    @MarkTheEchidna: Tangent-space normal mapping is mostly a simple operation. The hardest part about it is effectively writing everything to function in tangent space instead of world space. The basic math, in GLSL speak, using the per-vertex light direction (assuming model space coordinates) is:
    Code (Text):
    mat3 rotation = mat3(a_tangent, cross(a_normal, a_tangent), a_normal); // columns: tangent, bitangent, normal
    vec3 rotatedVal = u_lightDirection * rotation; // vector * matrix applies the transpose, rotating the light into tangent space
    GLSL will take care of the matrix multiplication required to rotate your vector into tangent space. For world space to tangent space, the only difference is that instead of an object space normal and tangent, you use world space ones. Please note, however, that it's best to only use world space or object space normals with the accompanying world or object space vectors; otherwise, you'll get incorrect results every time.
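
    For context, a complete vertex shader built around that snippet could look something like this (just a sketch; any attribute/uniform names beyond the ones above are assumptions following the same a_/u_ convention):
    Code (Text):
    attribute vec3 a_position;
    attribute vec3 a_normal;
    attribute vec3 a_tangent;
     
    uniform mat4 u_modelViewProjection;
    uniform vec3 u_lightDirection; // light direction in model space
     
    varying vec3 v_tsLightDirection; // tangent space light direction, for the fragment shader
     
    void main() {
        // columns of the rotation: tangent, bitangent, normal
        mat3 rotation = mat3(a_tangent, cross(a_normal, a_tangent), a_normal);
        // vector * matrix applies the transpose, rotating model space into tangent space
        v_tsLightDirection = u_lightDirection * rotation;
        gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
    }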

    Another solution is to transform your tangent space normal map into world space in your fragment program, and just apply all of your operations from there:

    Fragment program
    Code (Text):
    precision mediump float;
     
    varying vec3 v_worldNormal;
    varying vec3 v_tangent;
    varying vec3 v_bitangent;
    varying vec2 v_texcoord; // assume the usual varyings here, such as texcoords and the like
     
    uniform sampler2D u_normalMap;
     
    void main() {
        // need to unpack our normal map to the -1 to 1 range before we can use it
        vec3 normal = texture2D(u_normalMap, v_texcoord.xy).xyz * 2.0 - 1.0;
        // convert our tangent space normal to world space
        normal = normalize(v_tangent * normal.x + v_bitangent * normal.y + v_worldNormal * normal.z);
        // accumulate lighting from above in world space
        float light = dot(normal, vec3(0.0, 1.0, 0.0));
        gl_FragColor = vec4(vec3(light), 1.0);
    }
    Something to be wary of, however: converting tangent space normals to world space isn't the most performance-savvy operation in the world, and given that most people in the community are likely still stuck with only Shader Model 2 capable hardware, you'll want to avoid converting to world space normals too much.

    EDIT: Come to think of it, you could actually use render targets to effectively composite the scene together à la deferred rendering, while staying well within SM2 constraints (for reference, SM2 supports about 64 fragment instructions vs. SM3's roughly 512).
     
  11. Hey Gen,

    Thanks for the code. Now I get what you mean by tangent-space vs. world-space. Light in tangent space is used to avoid transforming things in the fragment shader, then.

    I think I'll try to get tangents working.

    About deferred rendering, I'm not sure if it's possible with WebGL currently: I don't think it supports multiple simultaneous render targets.
     
  12. Gen

    Gen

    This is halloween! This is halloween! Member
    309
    0
    0
    The Mobius Engine

    AFAIK, WebGL does support multiple render targets at once. You'd more or less create multiple framebuffer objects and output a renderbuffer to each of them, then begin compositing via shaders from there.
     
  13. From http://www.khronos.org/registry/webgl/specs/latest/#5.13

    Code (Text):
    const GLenum COLOR_ATTACHMENT0              = 0x8CE0;
    const GLenum DEPTH_ATTACHMENT               = 0x8D00;
    const GLenum STENCIL_ATTACHMENT             = 0x8D20;
    const GLenum DEPTH_STENCIL_ATTACHMENT       = 0x821A;
    WebGL only supports one renderbuffer color attachment per framebuffer... And I don't think you can render to more than one framebuffer at the same time...

    This guy is complaining about it to Khronos: https://www.khronos.org/webgl/public-mailin...2/msg00100.html

    The other guy then states that this was actually a design decision:

     
  14. Gen

    Gen

    This is halloween! This is halloween! Member
    309
    0
    0
    The Mobius Engine
    Hm. Could split it up into stages, I guess. Something along the lines of a single FBO with the following format:
    R: Normal X, G: Normal Y, B: Specular, A: Alpha

    Normal Z can be reconstructed relatively easily.
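
    Something like this, as a sketch (assuming the XY normal is stored in view space, biased into the 0..1 range, and always facing the camera; u_gbuffer and v_texcoord are placeholder names):
    Code (Text):
    precision mediump float;
     
    uniform sampler2D u_gbuffer; // the FBO's color texture
    varying vec2 v_texcoord;
     
    void main() {
        vec4 texel = texture2D(u_gbuffer, v_texcoord);
        vec2 nxy = texel.rg * 2.0 - 1.0; // un-bias from 0..1 back to -1..1
        // unit length plus camera-facing means z = sqrt(1 - x*x - y*y)
        vec3 normal = vec3(nxy, sqrt(max(0.0, 1.0 - dot(nxy, nxy))));
        float specular = texel.b;
        gl_FragColor = vec4(normal * 0.5 + 0.5, specular); // just visualizing for now
    }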

    Alternatively, we could possibly daisy chain the rendering like so:

    Step 1: Render depth and normals into our render buffer
    Step 2: Feed FBO to pixel shader for light computation
    Step 3: Store result into the same FBO (RGB = light buffer, A = specular buffer)
    Step 4: Multiply albedo and specular against their respective channels, and output to the render buffer

    This way, we only ever use one FBO, and hopefully manage to stay within SM2 constraints.
     
  15. We can "pack" the normal into a single value, losing some precision, if we use some sort of parametric spiral. This is how MD3 stores normals, if I recall correctly.
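
    As a rough sketch of that sort of packing (this is the two-angle variant: zenith and azimuth quantized to 8 bits each and packed into one value, so it assumes a buffer channel with at least 16 bits of usable precision):
    Code (Text):
    const float PI = 3.14159265;
     
    float packNormal(vec3 n) {
        float lat = acos(n.z) / PI;                    // zenith angle, 0..1
        float lng = atan(n.y, n.x) / (2.0 * PI) + 0.5; // azimuth angle, 0..1
        return floor(lat * 255.0) * 256.0 + floor(lng * 255.0);
    }
     
    vec3 unpackNormal(float p) {
        float lat = (floor(p / 256.0) / 255.0) * PI;
        float lng = (mod(p, 256.0) / 255.0 - 0.5) * 2.0 * PI;
        return vec3(sin(lat) * cos(lng), sin(lat) * sin(lng), cos(lat));
    }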

    Perhaps we could even calculate the normals using the camera's transform and partial derivatives of the depth buffer. But that would suck on sharp edges, I guess.
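
    A sketch of the derivative trick (it needs the OES_standard_derivatives extension in WebGL plus a view space position varying, and it yields flat per-face normals, which is exactly why sharp edges suffer):
    Code (Text):
    #extension GL_OES_standard_derivatives : enable
    precision mediump float;
     
    varying vec3 v_viewPos; // view space position of the fragment
     
    void main() {
        // the screen space derivatives of the position span the surface plane
        vec3 normal = normalize(cross(dFdx(v_viewPos), dFdy(v_viewPos)));
        gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0); // visualize
    }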

    There's also the stencil buffer. Can you write directly to it from the shaders?
     
  16. Gen

    Gen

    This is halloween! This is halloween! Member
    309
    0
    0
    The Mobius Engine
    It could be possible, yeah. I know that *some* engines will store the X and Y screen space normals across 4 channels and dynamically reconstruct Z (which in screen space is just depth). But since we can only handle one FBO and renderbuffer, I think it'd be best if we just stored normal, depth, and specular exponent, if we're going to be reusing the same buffer over and over until we have the final result.
    more or less:

    Red: Normal X
    Green: Normal Y
    Blue: Specular Exponent
    Alpha: Depth

    Stored at whatever the highest common bit depth we can manage (primarily because depth information likely needs more than 8 BPP for decent quality)

    Specular exponent is easy to store in a deferred renderer, since it's always a 0 to 1 constant that later gets multiplied by 128 by the deferred shader itself. I'm interested in finding better ways to pack normals while retaining as much quality as possible, but keep in mind partial derivatives can eat into ALUs quite a bit.
    ...That's actually a very good question.

    Edit: It would actually be interesting to see if we could almost do a surface shader approach, where the light buffer's RGB components are simply multiplied against each mesh's albedo, and the alpha is multiplied against an object's specular map and added on top.
     
  17. Hmmm... Having things premultiplied is not a bad idea. But still, how would we fit the color*albedo + alpha*specular info in alongside the normals?

    Just had another idea: we could render to a buffer twice as tall as we need, then check the current row when rendering each fragment.

    Code (Text):
    // row would come from gl_FragCoord, e.g. float row = floor(gl_FragCoord.y);
    if (mod(row, 2.0) == 0.0) {
        // calculate and render color info + albedo
    } else {
        // calculate and render normals + specular
    }
    It would look like this:

    [attached image: Untitled.png, a mock-up of the interleaved rows]

    And we would still have a free channel.

    Then, on the final rendering, we could read the sampler twice per pixel to get the full data.
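
    The final pass could look something like this (a sketch; u_buffer, u_height, and v_texcoord are placeholder names, with u_height being the height of the final image in pixels):
    Code (Text):
    precision mediump float;
     
    uniform sampler2D u_buffer; // the double-height buffer
    uniform float u_height;     // height of the final image in pixels
    varying vec2 v_texcoord;    // texcoords of the final image
     
    void main() {
        float row = floor(v_texcoord.y * u_height);
        float texelY = 1.0 / (u_height * 2.0); // one texel step in the tall buffer
        vec4 colorAlbedo = texture2D(u_buffer, vec2(v_texcoord.x, (row * 2.0 + 0.5) * texelY));
        vec4 normalSpec  = texture2D(u_buffer, vec2(v_texcoord.x, (row * 2.0 + 1.5) * texelY));
        // the actual lighting math would combine the two samples here
        gl_FragColor = vec4(colorAlbedo.rgb, 1.0);
    }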
     
  18. Chimpo

    Chimpo

    I Gotta Be Me Member
    8,645
    1,506
    93
    Los Angeles, 2029
    Don't Forget! Try Your Best!
    Works lovely, but these are some of the most awkward controls and control placements I've ever dealt with in a video game. I know the FAQ addresses this already, but it's hard for everything else to impress when you can't even control the thing comfortably.

    I received this error when I tried to do what DustArma did.

    Code (Text):
    Error: Cannot call method 'transpose' of null
    TypeError: Cannot call method 'transpose' of null
        at http://achene.co/WebSonic/js/WorldEngine.js:300:42
        at Object.normalModelView (http://achene.co/WebSonic/js/WorldEngine.js:342:5)
        at Object.render (http://achene.co/WebSonic/js/Player.js:757:41)
        at http://achene.co/WebSonic/js/WorldEngine.js:137:17
        at http://achene.co/WebSonic/js/WorldEngine.js:184:4
        at Array.forEach (native)
        at Object.render (http://achene.co/WebSonic/js/WorldEngine.js:182:11)
        at http://achene.co/WebSonic/js/main.js:82:22
     
  19. Gen

    Gen

    This is halloween! This is halloween! Member
    309
    0
    0
    The Mobius Engine
    Well, what we would do to make sure we only use the minimum we absolutely need is feed the rendered light buffer into a fragment shader that simply multiplies it against the albedo. An example function would be:
    Code (Text):
    vec3 lightPrePassFinal(vec4 lightBuffer, vec3 albedo, vec3 specular) {
        albedo *= lightBuffer.xyz;
        albedo += lightBuffer.a * lightBuffer.xyz * specular;
        return albedo;
    }
    Where lightBuffer would basically be a sample from a uniform sampler2D (or even sampler2DRect), read into a vec4.

    Unity kinda does something nifty like this; basically all the deferred renderer does is output a light buffer with all of the usual shadows and such that can be used for various effects.
     
  20. @DustArma: Whoah, that's quite severe. Hadn't seen the video.

    @Chimpo: That's most likely the game trying to invert a singular matrix. Sylvester (the math lib I used) returns null on errors like this, hence the error "Cannot call method 'transpose' of null". The singular matrix was likely generated due to numeric instability inside Sylvester itself. I'll add a sanity check to make sure this doesn't happen again.

    @Gen: Wait, so specular and albedo are vec3? I thought they were floats, with the r,g,b components stored alongside them. Makes much more sense now.