Thursday, August 11, 2011

Quantum Rush: [redux]

Possibly the most important article about computer graphics you’ll ever read.

 

In the prior article “Quantum Rush” I took a look at possible methodologies behind the Euclideon “Unlimited Detail” system and how such a system may be more plausible than it is given credit for. I touched on some very basic ideas as a “first pass” at understanding this type of methodology, and I received quite a lot of feedback in the comments.

 

After a bit of clarification, both from my standpoint and from the very talented commenters on the article, I’ve refined what I believe is a better understanding of how such an “Unlimited Detail” system could exist. Let’s start by breaking this monumental task into parts so that we can better identify the processes at work under the hood.

 

 

[Image: “Lady” point cloud model]

 

 

True Geometry

 

As I previously conjectured, Euclideon is using a specialized method of representing point cloud data for the high detail 3D objects they employ. In much the same way that True Color marks the upper limit of color representation the human eye can discern, beyond which further refinement goes unnoticed, Euclideon likely applies the same train of thought to the 3D models themselves. There must be an upper limit to how much detail can be resolved before resolving any further goes unnoticed by the human eye, regardless of whether the system could resolve unlimited detail beyond it. We’ll refer to this as a “True Geometry” method.
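
To put a rough number on that idea, here’s a back-of-the-envelope sketch in Python (my own illustration, not anything from Euclideon): once a display can only show so many pixels, there is a hard ceiling on how many distinct points ever need to be resolved in a single frame.

# A hedged back-of-the-envelope calculation: one point per screen pixel is
# the most detail the eye could ever distinguish on that display, no matter
# how dense the underlying model is.

def true_geometry_budget(width_px, height_px):
    # The "True Geometry" ceiling for a given display resolution.
    return width_px * height_px

print(true_geometry_budget(1024, 768))   # 786432 points per frame, maximum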

 

 

[Image: bit depth examples]

 

 

Indeed, they are using a type of point cloud representation for the models themselves, often laser scanned into the system at nearly a million polygon equivalents. However, this comparison is not exactly accurate when we’re dealing with point cloud data, much less the Unlimited Detail scenario. While we can say the file was originally the equivalent of 1,000,000 polygons, when it is transformed into the procedural point cloud dataset that equivalence remains the same while the filesize and the computational requirements for rendering the same fidelity drop drastically. However, the key to this filesize reduction coupled with fractional computation for much higher fidelity isn’t a single innovation; it depends on a number of things happening at the same time and working in concert to achieve the effects shown in the videos.
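
For scale, some rough arithmetic (using my own assumed storage layout, not Euclideon’s actual format): even stored naively, a million-point cloud is already modest, and the procedural transformation described below would shrink it further.

# What does a million-point cloud actually weigh, stored naively?
points = 1_000_000
bytes_per_point = 3 * 4 + 3   # xyz as 32-bit floats plus RGB color
print(points * bytes_per_point / 1e6, "MB")   # ~15 MB before any cleverness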

 

 

 

[Image: point cloud city]

 

 

The Searchable Point Cloud Model

 

One of the important things to consider on our list of innovations is Euclideon’s assertion that they are also employing an advanced search algorithm within this unlimited detail system. This is highly important to understanding how such a system is enabled: instead of having to access the data contained within the model files in their entirety, the data is likely broken up into a restricted database format, whereby the cells of the database contain parts of the geometry cloud, possibly along with relative positioning for that particular part of the geometry within the overall model. This relative positioning metadata would then be used to figure out ahead of time the occlusion of potential geometry in relation to its global position within the model, within the world, and with respect to the user’s camera space.
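
As a sketch of how those database cells might look (the field names and layout here are my own guesses, not a documented Euclideon format), each cell needs only enough metadata to be accepted or rejected before its geometry is ever read:

# A hypothetical "cell" of a searchable point cloud file.

from dataclasses import dataclass

@dataclass
class GeometryCell:
    cell_id: int
    bounds_min: tuple    # relative bounding box of this chunk of the cloud
    bounds_max: tuple
    file_offset: int     # where the actual point data lives in the file
    byte_length: int

def visible_cells(cells, frustum_test):
    # Pre-occlusion: decide from metadata alone which cells could possibly
    # appear on screen, before touching any point data at all.
    return [c for c in cells if frustum_test(c.bounds_min, c.bounds_max)]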

 

This methodology would remove the linear barrier that is the cost of rendering, although even this, however impressive, is just a single step among the methods used in concert. We’ll label this “Non-Linear Data”.

 

Procedural Methods

 

Now that we have a beginning to our understanding, we continue on to exactly what those database lines are likely storing. Normally the geometry for something like a COLLADA (DAE) file would look like this:

 

<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
    <asset>
        <contributor>
            <authoring_tool>Softimage|XSI version 7</authoring_tool>
        </contributor>
        <created>2008-08-21T17:18:12Z</created>
        <modified>2008-08-21T17:18:12Z</modified>
        <revision>1.4.1</revision>
        <unit meter="0.1" name="decimetre"></unit>
    </asset>
    <library_cameras>
        <camera id="cameras_0">
            <optics>
                <technique_common>
                    <perspective>
                        <yfov>53.638000</yfov>
                        <znear>0.100000</znear>
                        <zfar>32768.000000</zfar>
                    </perspective>
                </technique_common>
            </optics>
            <extra>
                <technique profile="XSI">
                        <XSI_Camera>
                            <xsi_param sid="std">9 </xsi_param>
                            <xsi_param sid="aspect">1.333333 </xsi_param>
                            <xsi_param sid="fov">0.000000 </xsi_param>
                            <xsi_param sid="fovtype">1 </xsi_param>
                            <xsi_param sid="proj">1 </xsi_param>
                            <xsi_param sid="orthoheight">0.100000 </xsi_param>
                            <xsi_param sid="interestdist">20.099751 </xsi_param>
                            <xsi_param sid="near">0.000000 </xsi_param>
                            <xsi_param sid="far">0.000000 </xsi_param>
                            <xsi_param sid="projplane">FALSE </xsi_param>
                            <xsi_param sid="projplanewidth">0.188976 </xsi_param>
                            <xsi_param sid="projplaneheight">0.141732 </xsi_param>
                            <xsi_param sid="projplaneoffx">0.000000 </xsi_param>
                            <xsi_param sid="projplaneoffy">0.000000</xsi_param>
                        </XSI_Camera>
                        <XSI_CameraFocalLength>
                            <xsi_param sid="projplanedist">4.747280</xsi_param>
                        </XSI_CameraFocalLength>
                </technique>
            </extra>
        </camera>
    </library_cameras>

 

 

The above is taken (with great pains) from a COLLADA mesh of a Rigged Man, weighing in at around 15MB of data. In order to get at the raw XML I attempted to load the file in Notepad, which locked up for a while in the process of loading the entire 15MB of text. Clearly I’m not posting the entire XML here on the blog, because that would be kind of asinine.

 

The point here is that de facto standards like COLLADA are horribly wasteful and bloated as file formats for representing model data. The issue isn’t the geometry or the properties involved; it’s literally how they are represented in the file and how a system has to access the individual data within it.
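
To illustrate just how much of that bloat is purely representational (my own comparison, aimed at text formats generally rather than COLLADA alone), consider a single vertex written as XML-style text versus packed binary:

# One vertex, two representations.
import struct

vertex = (1.234567, -0.891011, 12.131415)

as_text = "<float_array>%f %f %f</float_array>" % vertex   # COLLADA-style text
as_binary = struct.pack("<3f", *vertex)                    # packed little-endian floats

print(len(as_text))    # dozens of bytes, which must also be parsed as text
print(len(as_binary))  # 12 bytes, directly usable by the machine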

 

In our linear way of thinking, we assume every point and detail must be written out explicitly in order to have what we need to reconstruct the model in 3D. This also supposes that we must load the model in its entirety before we can run any calculations for level of detail and occlusion. After all, how are we supposed to know which geometries in our model will be occluded if we can’t load the entire file up front to analyze it?

 

The answer may come from our original thinking in the prior section, coupled with the file format itself and combined with intelligent methods for further optimization.

 

First, being built around a search algorithm, the file format would likely be laid out like a database rather than as bloated XML text of all geometry that must be preloaded into memory. We would then have access to individual lines or sections of the file without loading the whole file into memory up front, effectively streaming the required geometry from the file in real time while actively ignoring any lines that don’t match the search criteria. This is a big step forward on its own, but not a total solution; how the geometry and property data are represented on those lines within the database also matters for the computational requirements.
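
A minimal sketch of that selective streaming (the index layout is my own invention for illustration): given an index mapping cells to byte ranges, only the matching slices of the file ever touch memory.

# Stream only the records a query matched; never read the rest of the file.

def stream_matching(path, index, wanted_cell_ids):
    with open(path, "rb") as f:
        for cell_id in wanted_cell_ids:
            offset, length = index[cell_id]   # index maps cell -> location
            f.seek(offset)                    # jump straight to the record
            yield cell_id, f.read(length)     # only this slice enters memory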

 

Sure, we would no longer be restricted to linear methods or forced to load an entire bloated model file into memory to work with it, but if the geometry lines themselves in the database are still 1:1 and bloated, then the total filesize remains very large. So we’ve solved computation and access, but we’re left with the filesize to solve.

 

Conveniently, this is where procedural methods come in for storing the point cloud data lines within the database file structure, for easy access and indexing by the search algorithm in the engine itself. On their own, procedural methods can produce the type of fidelity we see from Euclideon, but at a terrible computational expense, since equations must be solved to get results. In a linear fashion this would be instantly prohibitive: we’d have to load a 15MB file, which would then need to be systematically “inflated” in its entirety, or at very high resolution, before we could process occlusion or geometry in 3D space.
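
Here is one purely illustrative way a “procedural line” could work: the stored record is just a rule and a seed, a few bytes that can stand in for arbitrarily many points, expanded only as far as the current view demands.

# A tiny rule plus a seed replaces a stored list of points entirely.
import random

def procedural_points(seed, center, radius, count):
    # The stored record is only (seed, center, radius); the points
    # themselves exist just long enough to be rendered.
    rng = random.Random(seed)
    for _ in range(count):
        yield tuple(c + rng.uniform(-radius, radius) for c in center)

# The same record expands to 50 points for a distant view...
far_view = list(procedural_points(seed=42, center=(0.0, 0.0, 0.0), radius=1.0, count=50))
# ...or 50,000 up close, at no extra storage cost.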

 

However, we’re not dealing with a linear methodology, and so we are freed from having to solve the entire geometry of the model prior to occlusion calculations and rendering. Instead, we can selectively stream only the geometries on specific lines within that database and ignore the majority of the file in the process. This runs counter to traditional thinking: we no longer have to load a model up front to work with it, and instead deal with fractional file segments in the rendering pipeline. Representing that geometry procedurally on a per-line basis only reduces the rendering requirements further, because the complex calculations need not be solved as a whole (which would be computationally expensive); instead, only very small snippets of calculation, easily solved in real time, are in use.

 

But, of course, there is even more than this going on under the hood.

 

Screen Space Limiting

 

An addition to this line of thinking: even though we may have solved the filesize bottleneck (procedural point cloud data) and the computational bottleneck (a search algorithm with database-style selective streaming), we are still left with a hefty chunk of data to represent in 3D space. This, too, is reduced even further by applying search criteria before loading the index and returning values. The screen space itself becomes an important factor in yet another drastic improvement to rendering cost: only the pixels within the viewing space of the 3D environment, via the 2D viewport, need be calculated, further reducing the overwhelming complexity of traditional methods.
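
The screen-space idea in miniature (a sketch of the principle, not Euclideon’s renderer): the work per frame is bounded by the pixel count of the viewport, not by the size of the model behind it.

# One search per pixel: "what is the front-most point here?"

def render(width, height, query_point_for_pixel):
    frame = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Cost scales with width * height, regardless of model size.
            frame[y][x] = query_point_for_pixel(x, y)
    return frame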

 

Since procedural geometries likely come with some sort of relative positioning data in Euclideon’s unlimited detail system, the search algorithm can pre-occlude geometry via selective query before it is ever loaded into memory. Add the further refinement of screen-space occlusion, calculated within the confines of the 2D window resolution, and we have yet more ways for a search-based algorithm to ignore individual lines of geometry data when resolving geometry in a 3D world.

 

As Euclideon stated recently, the bulk of their system’s requirements stem simply from a computer’s ability to display a bitmapped image on screen, something arguably any computer dating back to 1994 can do in spades, in software mode alone.

 

Deconstructing Complexity

 

What we’re left with, when all is said and done, is a lot of complexity. Even if the individual lines of the database structure are accessed non-linearly under a highly complex set of search rules, we’re still left with the fact that procedural methods come down to recursive algorithms, which can be quite computationally expensive. However, yet again, our reasoning seems to offer an answer to this problem before it ever becomes one.

 

Procedural point cloud data has the potential to be very large in filesize or computational cost, but in this case that is merely potential, not application. The equations themselves take up a tiny fraction of the filesize, and while resolving them in totality can inflate a file and its computation immensely, we never actually need to resolve the algorithms stored on each line to any great depth in the first place. What difference would it make if those procedural algorithms were solved in five steps or fewer of recursion, versus a linear approach requiring a few hundred or thousand?
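
Putting rough numbers on that question (my own arithmetic, using the figures from the paragraph above):

# Five recursion steps versus a linear solve of a few hundred steps.
steps_shallow = 5      # recursion cut off once a point is finer than a pixel
steps_linear = 300     # a linear solve of the same procedural equation

savings = steps_linear / steps_shallow
print("%.0fx faster, a savings of %d%%" % (savings, savings * 100))   # 60x, 6000%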

 

The answer happens to be a computational savings of a few thousand percent, readily within the realm of software computation alone. We never have to resolve more than the positions of individual points in a cloud; individually these computations are ridiculously low-end, but taken together they make up a powerful system.

 

Since there is a limiting factor based on screen space and pixel resolution, those normally computationally intensive geometry resolutions become downright trivial, and even then they happen only on a line-by-line, non-linear basis with extensive pre-culling by the search methods.

 

Hence, even though the potential computational cost and filesizes are high, the actual output requires neither. This also explains why the only real requirement is the ability to render a simple bitmapped image, within reason.

 

Epilogue

 

I’d like to remind my readers that what is presented in this article constitutes no more than outside conjecture. While I would love to validate these ideas with hands-on access to the Euclideon Unlimited Detail system, at this point I am sadly unable to do so. The best I can offer in the meantime is to analyze what has been said about this system and piece together plausibility through non-linear thinking.

 

Whether or not I am in the ballpark of how this system works remains inconsequential, because the intention of this deconstruction is simply to show a plausible method by which this could be possible. It may capture just one of many details of how this would work; however, the intention is to create a jumping-off point for thought experiments, expanding our notion of what is and is not possible, and maybe giving us all some insight into what we should expect going forward for the entire industry.

 

My intention here is to create a thought experiment for you to continue working on, and hopefully to inspire long-tail innovation and differential analysis. Hopefully you, my readers, can clarify further and add to this in a manner that reveals more than even I can wrap my head around.
