Mobius Engine Project General Discussion Formerly known as: ITT: Engine Project Proposal Discussion
#61
Posted 21 February 2012 - 11:48 PM
#62
Posted 21 February 2012 - 11:54 PM
Twilightzoney, on 21 February 2012 - 11:48 PM, said:
Having material access to a second UV set would be doable. We'd just need to devise a means of encoding a dynamic range exceeding 0 to 1 for light maps in Max. This is why most engines have their own light map baking built in.
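One common way to squeeze a greater-than-1 dynamic range into an 8-bit-per-channel lightmap is RGBM encoding. To be clear, this is just my sketch of the general idea, not necessarily what Max would output; the `max_range` value is an assumption:

```python
def rgbm_encode(r, g, b, max_range=6.0):
    """Encode an HDR color into RGBM: RGB rescaled into 0..1, plus a
    shared multiplier stored in the alpha channel."""
    m = max(r, g, b, 1e-6) / max_range    # shared multiplier in 0..1
    m = min(max(m, 1e-6), 1.0)
    # quantize the multiplier to 8 bits, the way a texture would store it
    m = round(m * 255.0) / 255.0
    scale = m * max_range
    return (r / scale, g / scale, b / scale, m)

def rgbm_decode(r, g, b, m, max_range=6.0):
    """Recover the HDR color: multiply RGB back out by alpha * max_range."""
    scale = m * max_range
    return (r * scale, g * scale, b * scale)
```

The nice part is that decoding is a single multiply in the shader, so it's cheap even on weak hardware.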
EDIT: Keep the name ideas coming. I'll add the poll on Friday.
#65
Posted 22 February 2012 - 09:58 AM
Also, I believe someone at DICE had explained the methods they used to achieve real-time GI in Battlefield 3, though to my knowledge the console versions of that game don't use real-time GI lighting.
Edit: Ah, here it is. Wolfire's David Rosen gave a summary on it.
#66
Posted 22 February 2012 - 10:07 AM
Candescence, on 22 February 2012 - 09:58 AM, said:
Also, I believe someone at DICE had explained the methods they used to achieve real-time GI in Battlefield 3, though to my knowledge the console versions of that game don't use real-time GI lighting.
BF3 uses a middleware called Enlighten. Basically, Enlighten precomputes the vast majority of possible GI solutions, which can then be reused to speed up computation of new GI solutions later on (such as when a hole is blown in a wall, or a large object suddenly occludes a light source). The precompute takes hours (up to 18 hours on one scene, in an analysis a colleague of mine did), and character lighting is sampled from light probe data.
It's not really true real-time global illumination. It's more like glorified light mapping that only works well when it's updated in small batches on objects close to the camera.
#67
Posted 22 February 2012 - 05:03 PM
Gen, on 21 February 2012 - 11:54 PM, said:
I haven't searched the thread to see if any of these were suggested before, but my ideas were:
Needlemouse Engine
Ericius Engine -- which is Hedgehog in Latin, according to Google Translate; I liked the double E's going on there.
#68
Posted 22 February 2012 - 05:40 PM
#69
Posted 22 February 2012 - 06:23 PM
Mr Lange, on 22 February 2012 - 05:40 PM, said:
Frankly, I'm more for just deferred direct illumination with shadow maps (they're typically the easiest to implement in a modern pipeline nowadays), with some kind of light mapping process later on. The deferred part I can already do; same goes for shadow mapping. The hardest part of the rendering work will be deciding how the drawpools should be designed, and how the API abstraction should work.
For the most part, light discussion of general ideas would be a good thing at this point, since to properly establish the overall scope of the project we need an idea of what people want out of it. It'd be more beneficial if there were a sub-forum, though, since trying to consolidate every little idea in a single thread would get chaotic pretty fast.
#70
Posted 23 February 2012 - 05:12 PM
Edit:
In regards to prebaked, non-realtime lighting:
I have not seen any open source lightmappers that do GI at all. An open-source, pre-baked solution would be really cool if you ever wanted to code something like that, Gen.
Direct lighting at runtime is completely fine for the scope of a project like this; extra GI effects could be baked in with a closed-source tool like 3ds Max.
I know it's an unrealistic request - A free and open-source solution with advanced lighting technology (similar to Hedgehog Engine). *dreams*
#71
Posted 23 February 2012 - 05:21 PM
James K, on 23 February 2012 - 05:12 PM, said:
Still, in regards to prebaked lighting, I have not seen any open source lightmappers that do GI, which is what would really be cool.
Baked GI isn't *too* hard to do. A lot of it is just the amount of resources required to do it with high quality.
Could have a relatively efficient final gathering solution (meaning it'd take fewer resources to bake indirect lighting), but I'm not sure how well that'd work for incorporating something like Directional Light Mapping.
There's also the possibility of leveraging OpenCL to help speed it up, but really this would be something for a much later release down the road (think a 1.x release).
EDIT: It just hit me that an instant radiosity-like solution could be incorporated here. Instant Radiosity tends to have spotty results in a completely dynamic environment, but can look quite nice when used for static lighting.
#72
Posted 23 February 2012 - 07:06 PM
Gen, on 23 February 2012 - 05:21 PM, said:
It's interesting you mention that; I've been looking at that technique recently, and I really wish I understood the math more, because the concept itself seems simple.
http://graphicsrunne...-optix-and.html
Even though his Cornell box didn't converge to the rendering equation's result, he got good results in the Sponza scene. But most levels for this engine would be outdoorsy and colorful, so a biased GI solution would probably be favorable anyway.
-------
I'm working on my own humble test of instant radiosity. Unfortunately, I'm really bad at reading mathematical notation; do you know, in layman's terms, the algorithm for determining the radius and color of a virtual point light at a given intersection?
What I have right now for a GI attempt is literally guesswork:

So yeah, brainstorming tech ideas!
#73
Posted 23 February 2012 - 07:37 PM
James K, on 23 February 2012 - 07:06 PM, said:
The results tend to be pretty nice, and adding shadows is easy enough. It could even be hardware accelerated (though I wouldn't dare use these in a real-time environment; the number of VPLs required to make it look nice on all kinds of objects can be a bit strenuous, and throwing shadows into the mix would border on insanity).
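To sketch the usual instant radiosity recipe in code: trace a bounce path from the light; at each diffuse hit, spawn a VPL whose color is the path's remaining throughput tinted by the surface albedo, with each path weighted by 1 / num_paths. This is a diffuse-only sketch, and `trace` is an assumed scene-query helper, not a real API:

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def random_direction():
    """Uniform random direction on the unit sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def random_direction_hemisphere(normal):
    """Random direction in the hemisphere around a surface normal."""
    d = random_direction()
    if dot(d, normal) < 0.0:
        d = tuple(-x for x in d)
    return d

def make_vpls(light_pos, light_color, num_paths, max_bounces, trace):
    """Generate virtual point lights by tracing diffuse bounce paths from
    the primary light. `trace(origin, direction)` is an assumed helper
    returning (hit_point, normal, albedo) or None on a miss."""
    vpls = []
    for _ in range(num_paths):
        origin = light_pos
        direction = random_direction()
        # each path carries an equal share of the light's energy
        throughput = tuple(c / num_paths for c in light_color)
        for _ in range(max_bounces):
            hit = trace(origin, direction)
            if hit is None:
                break
            point, normal, albedo = hit
            # VPL color: remaining throughput tinted by the surface
            color = tuple(t * a for t, a in zip(throughput, albedo))
            vpls.append((point, color))
            throughput = color
            origin = point
            direction = random_direction_hemisphere(normal)
    return vpls
```

As for the "radius": typically a VPL doesn't store one at all. Shading just sums each VPL with an inverse-square falloff, clamped near zero distance so surface points right next to a VPL don't blow out.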
EDIT: Also, someone make this guy a full member.
#75
Posted 24 February 2012 - 03:26 PM
Right now, I'm thinking the renderer can be split up into a few systems:
- Draw Pools
- -- Opaque Drawpool
- ---- Renders all opaque objects to a geometry buffer
- ---- Special considerations for static geometry for light mapping (later on down the road)
- -- Alpha Drawpool
- ---- Handles pretty much any kind of alpha-sorted (back to front) geometry
- ---- Would handle lighting using "forward" rendering; could be a viable fallback basis for hardware that can't support deferred
- ---- Same special considerations for static geometry for light mapping as Opaque
- -- Effects Drawpool
- ---- Handles things that may not need to be rendered in the correct order
- ---- Good for particles, and other effects
- ---- Computationally cheaper than proper alpha blending
- Material Manager
- -- Uniform management
- -- Shader management
- ---- Manages what shaders need to be loaded at runtime
- ---- Interprets the material format to assemble shaders that are capable of executing faster at runtime (to mitigate runtime compilation overhead)
- ---- Assigns shader Uniforms as needed to linked shaders
- Post Manager
- -- MRT management
- ---- Would (eventually) assist with geometry buffer size
- ---- Could be used to manage more complex post processes that require multiple sources of data
- -- Effect management
- ---- More or less something to simplify managing what kind of effects should be enabled or disabled and when
- Capabilities Manager
- -- Checks which extensions are available on a system
- -- Enables or disables features as needed depending on hardware support and known driver issues (e.g., mixed FBO color attachment formats don't work on some older drivers, or uniform buffer objects need to be disabled on drivers with incomplete support for them)
- ---- Would act as a fallback system of sorts in cases where a particular piece of hardware doesn't support a particular OpenGL feature
- ---- Could also possibly lower settings on some systems that just can't perform very well with some features
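A skeletal sketch of the drawpool split above, just to illustrate the ordering rules (Python pseudo-structure; the names and API here are mine, not a committed design):

```python
from enum import Enum, auto

class Pool(Enum):
    OPAQUE = auto()   # written to the geometry buffer
    ALPHA = auto()    # forward-lit, sorted back-to-front
    EFFECTS = auto()  # unsorted; cheap particles and the like

class DrawPools:
    def __init__(self):
        self.pools = {p: [] for p in Pool}

    def submit(self, pool, depth, draw_call):
        self.pools[pool].append((depth, draw_call))

    def flush(self):
        ordered = []
        # opaque: roughly front-to-back, which helps early-z rejection
        ordered += sorted(self.pools[Pool.OPAQUE], key=lambda x: x[0])
        # alpha: strictly back-to-front for correct blending
        ordered += sorted(self.pools[Pool.ALPHA], key=lambda x: -x[0])
        # effects: submission order is fine, no sort needed
        ordered += self.pools[Pool.EFFECTS]
        for _, call in ordered:
            call()
        for p in Pool:
            self.pools[p].clear()
```

The real thing would batch by material/shader within each pool too, but the pool split is what decides the sort keys.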
This is just a really rough outline of my idea for a rendering system.
The very beginnings of it will focus on getting a working deferred renderer going in OpenGL. The deferred renderer will follow a minimal geometry buffer design, opting for light pre-pass instead of full-blown deferred shading.
The buffers for all of this will look like this:
+-----------+-------------+------------+----------------+
|    RED    |    GREEN    |    BLUE    |     ALPHA      |
+-----------+-------------+------------+----------------+
| Normal X  | Normal Y    | Normal Z   | Spec. Exponent |  Geometry Buffer - 32-bit render target
| Red Light | Green Light | Blue Light | Specular       |  Light Buffer - 32-bit render target on some hardware, 64-bit on capable hardware
+-----------+-------------+------------+----------------+
The light buffer will either be compressed into a 0..1 range (likely using a log2/exp2 encode) on hardware that can't support mixed FBO formats, or be stored in a 64-bit floating point render target (16 bits per channel). It will then be multiplied against an object's diffuse texture, producing the proper lighting across a surface without needing a larger G-Buffer to store geometry information. This approach will work a bit better on hardware where memory bandwidth is an issue.
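For the compressed path, here's one way to fold an HDR light value into 0..1; the exact curve and the maximum range are assumptions of mine, not a settled choice:

```python
import math

MAX_LIGHT = 16.0  # assumed maximum light intensity before clipping

def encode_light(x):
    """Map a light value in [0, MAX_LIGHT] into [0, 1] with a log2 curve,
    giving more precision to the dark end where banding is most visible."""
    x = min(max(x, 0.0), MAX_LIGHT)
    return math.log2(1.0 + x) / math.log2(1.0 + MAX_LIGHT)

def decode_light(e):
    """Inverse of encode_light: exp2 back out to the original range."""
    return 2.0 ** (e * math.log2(1.0 + MAX_LIGHT)) - 1.0
```

In the shader this would just be a log2/exp2 pair per channel when writing and reading the light buffer, so the cost on the fallback path is small.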
The specular technique will likely be Blinn-Phong, possibly even energy-conserving Blinn-Phong. Gamma correction will be taken into account from the very beginning for diffuse and specular textures.
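The energy-conserving variant just scales the specular lobe by a normalization factor; a common one for Blinn-Phong is (n + 8) / (8π). A minimal sketch, assuming a single light and pre-normalized vectors (a simple power-2.2 gamma curve here stands in for proper sRGB):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def blinn_phong_spec(n, l, v, exponent):
    """Normalized Blinn-Phong specular term. The (exponent + 8) / (8*pi)
    factor keeps the lobe's total energy roughly constant as the
    exponent sharpens the highlight."""
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half vector
    n_dot_h = max(dot(n, h), 0.0)
    norm = (exponent + 8.0) / (8.0 * math.pi)
    return norm * (n_dot_h ** exponent) * max(dot(n, l), 0.0)

def srgb_to_linear(c):
    """Approximate gamma decode for diffuse/specular textures."""
    return c ** 2.2

def linear_to_srgb(c):
    """Approximate gamma encode applied at the end of the frame."""
    return c ** (1.0 / 2.2)
```

The point of doing gamma from the start is that all lighting math happens on linear values; textures get decoded on sample, and the final framebuffer gets re-encoded once.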
You can effectively consider me the Graphics Programming Lead at this point.
