When Infinite Detail Isn’t a Hoax
There’s this company, and I’m sure by now you’ve heard of them, called Euclideon, which has been showing off some insanely impressive videos of a rendering engine they’re working on that supposedly handles “unlimited” detail in real-time using only software methods. I’ve read the responses from major players like @Notch at Mojang (of Minecraft fame), and even he is screaming that it’s a scam.
Turns out that Euclideon has themselves a variation of a sparse voxel engine, and to the critics that makes it an open-and-shut case; I mean, how do you move around 512 petabytes of information for such a large island without requiring tons of storage space and rendering time?
This would be true if it weren’t for some related factors that people seem to have glossed over in their assessment of the situation.
The Euclideon system, aptly named Unlimited Detail, may actually be quite capable of doing exactly what they claim; however, in order to understand how, we first need to suspend our disbelief, start answering some important questions, and even get a little philosophical.
Overwhelming Complexity
Let’s say for a moment that this is some sort of elaborate hoax. After all, the sheer amount of data involved is mind-boggling if the entire 3D space is using what amounts to an atomic voxel level of detail without repeating the data, right? Similar arguments are given by even well-known game programmers like @Notch at Mojang, as quoted below:
Well, it is a scam.
They made a voxel renderer, probably based on sparse voxel octrees. That’s cool and all, but.. To quote the video, the island in the video is one km^2. Let’s assume a modest island height of just eight meters, and we end up with 0.008 km^3. At 64 atoms per cubic millimeter (four per millimeter), that is a total of 512 000 000 000 000 000 atoms. If each voxel is made up of one byte of data, that is a total of 512 petabytes of information, or about 170 000 three-terabyte hard drives full of information. In reality, you will need way more than just one byte of data per voxel to do colors and lighting, and the island is probably way taller than just eight meters, so that estimate is very optimistic.
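For what it’s worth, the arithmetic there checks out, and it’s worth confirming before we argue with the conclusion rather than the math. A quick sanity-check sketch in Python (assuming decimal petabytes and terabytes, as the quoted figures imply):

```python
# Sanity-checking the storage estimate from the quote above.
island_area_m2 = 1_000 * 1_000        # 1 km^2
island_height_m = 8                   # the "modest" island height
volume_mm3 = island_area_m2 * island_height_m * 1_000 ** 3   # m^3 -> mm^3

atoms_per_mm3 = 4 ** 3                # four atoms per millimeter, cubed
total_atoms = volume_mm3 * atoms_per_mm3
total_bytes = total_atoms             # the optimistic one-byte-per-voxel case

print(f"{total_atoms:.2e} atoms")                                  # 5.12e+17
print(f"{total_bytes / 1000**5:.0f} petabytes")                    # 512
print(f"{total_bytes / (3 * 1000**4):,.0f} three-terabyte drives") # ~170,667
```

So yes, stored naively, the island really would be 512 petabytes. The question is whether anything has to be stored naively.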
Assuming is a big word here, and that is exactly what the gaming industry (and Notch) seems to be doing right now concerning this type of technology. 512 petabytes is a mighty lot of data to be moving around and storing, but we’ve left out a really simple explanation for why Unlimited Detail is probably not storing or moving anywhere near that much data for the level of detail they are showing. Regardless of whether a well-known game programmer like Notch at Mojang insists this is impossible, we must understand that “impossible” often means we simply do not understand, or that we are unwilling to try, and those are two very different scenarios.
Below, I’d like to set aside my disbelief and try to wrap my head around how such a thing would be possible, and we’ll assume it is while we’re trying to figure it out, like a good magic trick. As you read on, just keep in mind that it’s all hypothetical, even if plausible. I have no actual idea how they are doing it, but I can make some very educated guesses instead of blindly dismissing it as a scam or hoax.
Non-Linear Data
When does 512 petabytes of data not actually equal 512 petabytes? Answering that requires non-linear thinking, something the gaming industry seems to be short on lately.
It is very likely that Euclideon actually is using a type of voxel rendering system, but nowhere near the traditional kind we immediately think of. In all likelihood, it is closer to a procedural voxel rendering engine than a traditional one. As the videos themselves point out, the screen space is the limiting factor, and it is also the key to all of this.
Consider the Mandelbrot fractal: it goes on infinitely, yet all it really takes to make that happen is a simple equation. In the same manner, converting models into proceduralized point cloud data would be a likely approach to “unlimited” detail, on top of the built-in level of detail that voxel systems already have, coupled with highly aggressive point cloud culling based on algorithmic rules. It wouldn’t be storing a 1:1 representation, but instead a procedural, algorithmic description to be solved against the screen space itself, in much smaller chunks, rather than worrying about the entire 3D world. Essentially, it would stream the point cloud data required for the current screen space, with a very aggressive culling and search-based algorithm discarding (or, more likely, never calling into existence) anything it doesn’t need.
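To make the fractal point concrete, here is the entire “data set” of the Mandelbrot fractal as a sketch; every point of its infinitely detailed boundary is computed on demand from the same few lines, with nothing stored:

```python
def mandelbrot(cx, cy, max_iter=256):
    """Escape-time test: the whole 'infinite' fractal is just this loop."""
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i      # escaped: this point lies outside the set
    return max_iter       # still bounded: inside, at this depth of inquiry

# Zoom in a millionfold and nothing extra is stored or loaded; detail at
# any coordinate you care to ask about is answered on demand.
print(mandelbrot(-0.7436438870371587, 0.13182590420533))
```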
A Mandelbrot fractal uses repeatable data, and voxel methods likewise use copies of data to minimize the memory required for unique instances, but a procedural voxel point cloud would not have that constraint. We’re dealing with, just like a procedural texture, a very small algorithmic file which, when resolved to X number of steps, produces the shapes and details, except in a 3D setting via point cloud data.
What is actually on the screen at any given point is only the viewable data, with everything else ignored in real time. It’s like asking for only 1/10th of a model because only 1/10th of it is visible at the moment, so you only need to load 1/10th of the file itself. Going even further, the data itself is algorithmic, and thus not a bloated 1:1 representation of the detail, but just the procedural, algorithmic representation that can be solved into the 3D object. Again, this is quite possible; I’ve seen procedural demos weighing in at a mere 177 kB yet giving modern games, which weigh in at many gigabytes, a run for their money.
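Here’s a minimal sketch of that on-demand idea; the surface() equation and sample_region() helper below are purely illustrative stand-ins of my own, not anything Euclideon has published:

```python
import math

def surface(x, z):
    """A rolling landscape defined by a few bytes of math, not stored points."""
    return 2.0 * math.sin(0.3 * x) * math.cos(0.3 * z)

def sample_region(x0, x1, z0, z1, step):
    """Resolve only the queried region, at only the requested density."""
    points = []
    x = x0
    while x <= x1:
        z = z0
        while z <= z1:
            points.append((x, surface(x, z), z))
            z += step
        x += step
    return points

# The camera asks for one small visible patch at high density; the rest of
# the landscape is never computed, let alone stored or loaded.
patch = sample_region(10.0, 11.0, 42.0, 43.0, step=0.05)
print(len(patch), "points resolved for this view")
```

The “model” here is two lines of math; the only cost ever paid is for the patch a viewer is actually looking at.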
So what we have is a voxel point cloud system, which is easily concluded; what isn’t so obvious are the algorithmic methods employed to transcend the limitations voxels normally run into.
Another point I’d like to bring up is the statement by Euclideon that the system is working more like a search engine algorithm. Why is this important?
Well, since we’re dealing with a voxel system that uses point cloud data for the items in the 3D space, the search-type algorithm returns (out of the fractional file described above) only the points in the cloud that matter, based on the myriad criteria the engine itself assigns: camera distance and so on.
So now we have point cloud data for a voxel engine that requires a fraction of the file space of the same model stored as a linear file type. Of that, only a fraction ever needs to be loaded for the current screen space. And of that, the internal sorting algorithms ask for only the fraction needed to resolve and display, based on the sorting and screen-space limiters.
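Here’s a toy sketch of what such a search might look like, with a sparse octree stored as a dictionary so that only occupied cells exist at all. This is deliberately simplified guesswork on my part, not Euclideon’s actual data structure:

```python
from typing import Dict, Optional, Tuple

Key = Tuple[int, int, int, int]   # (depth, ix, iy, iz) cell in a unit cube

class SparseOctree:
    def __init__(self) -> None:
        self.cells: Dict[Key, int] = {}   # only occupied cells are stored

    def insert(self, x: float, y: float, z: float, depth: int, color: int) -> None:
        """Store a point at every level down to `depth`; the coarse levels
        give us a built-in level of detail for free."""
        for d in range(depth + 1):
            n = 1 << d                    # cells per axis at this depth
            self.cells.setdefault((d, int(x * n), int(y * n), int(z * n)), color)

    def query(self, x: float, y: float, z: float, max_depth: int) -> Optional[int]:
        """The 'search engine' part: return the finest stored cell at
        (x, y, z), capped at max_depth, the screen-space limiter
        (one pixel's worth of detail)."""
        for d in range(max_depth, -1, -1):
            n = 1 << d
            hit = self.cells.get((d, int(x * n), int(y * n), int(z * n)))
            if hit is not None:
                return hit
        return None

tree = SparseOctree()
tree.insert(0.3, 0.5, 0.7, depth=16, color=0xAA8866)

# A distant pixel justifies only a shallow search; a close-up digs deeper.
print(tree.query(0.3, 0.5, 0.7, max_depth=4))    # coarse hit, cheap
print(tree.query(0.3, 0.5, 0.7, max_depth=16))   # fine hit, still O(depth)
```

The point of the sketch is the shape of the cost: each pixel’s query is a short search bounded by depth, no matter how much data the whole world theoretically contains.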
To See Into Forever
Do we still have a basis for a claim of “Unlimited Detail” in this case? Well, yes and no. What is unlimited detail if not the idea that no matter how closely we inspect something, it never loses resolution? Much in the same manner, as we look closer into reality we see molecular structures, atomic structures, and sub-atomic structures, and even theorize about the underlying energy of quantum structures.
However, I’m beginning to see a parallel between a procedural/algorithmic voxel system and the fundamental, philosophical questions posed by people who study reality itself. In a quantum reality, are the underlying levels of detail actually there when we aren’t viewing them, or is this all just a well-organized simulation where detail and complexity resolve based on the conscious observer? Some interpretations of quantum mechanics suggest the latter: that things don’t resolve unless consciously observed.
This is some seriously deep stuff to think about.
The nature of infinity may be that of infinite detail on demand, while everything else is non-existent until observed. Sort of like an on-demand reality, which might explain what’s outside the observable universe: not a damned thing until we can see that far, in which case something will be there.
Think of it like streaming HD video. While you watch it, it’s high definition and moving, but it clearly has no requirement of loading the entire file before you can watch it; in fact, it only needs to load a fraction of that file and keep that fraction streaming. Now, if the movie were converted to a procedural file, that file might be many orders of magnitude smaller, and only a fraction of it would ever have to be resolved to create the buffered portion in play, because only the part about to be displayed is resolved algorithmically on demand, while the rest of the movie doesn’t exist until called into a specific instance.
We’re not trying to resolve the entire file through the procedural algorithm, only 30 still frames per second, discarding each batch and moving on to the next, and the way the system knows which portion of the algorithmic representation to ask for is the “more like a search engine” approach Euclideon mentions.
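Run the numbers on that and the workload stops sounding supernatural. A back-of-the-envelope sketch (the resolution here is my assumption; only the 30 frames per second comes from the claim above):

```python
# Per-frame work is bounded by the screen, not by the size of the world:
# one point search per pixel, displayed, discarded, repeated.
WIDTH, HEIGHT, FPS = 1280, 720, 30    # assumed resolution, claimed frame rate
print(f"{WIDTH * HEIGHT * FPS:,} point searches per second")   # 27,648,000
```

Tens of millions of short searches per second is serious work, but it’s a fixed budget, and it never grows with the island.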
There is also the “limitation” of animating voxel data, and I’ve seen this argument already used as the reason a dynamic voxel scene is Euclideon’s Achilles’ heel. I hate to burst that bubble, but animation of voxel point cloud data is possible, and so is rigging, as demonstrated in Dennis Bautembach’s thesis, simply titled “Animated Sparse Voxel Octrees”.
Final Thoughts
Whether or not Euclideon is bluffing isn’t the point. Personally, I don’t actually know if they’ve accomplished what they say, but I do know how it would be very possible to do so if somebody tried. What it takes is the ability to ignore traditional thinking and really think dynamically. Procedural methods, highly optimized point cloud searching, and intelligent methods to limit the required data to only the pixel screen space can make such a system at least feasible, without breaking the laws of physics (or a typical computer) in the process.
Unlimited Detail is actually possible if you understand that you don’t need to load all of infinity at once to make it happen, or even acknowledge the need to store infinity in the first place. Algorithms are elegant representations of things: we can represent a pine cone, and much of nature, not as bloated 1:1 geometry in a huge file but as a simple equation requiring a few bytes, or at most kilobytes. When resolved, we can scan through the number set to find the exact part we actually need for the instance in 3D, but we don’t really need to solve the whole equation in infinite detail to get our tree or pine cone, now do we? No, we only need to solve that equation to a reasonable depth before we can declare that any further detail would be pointless and non-observable for the current instance.

This in itself is the basis for the idea of level of detail to begin with; however, actively and aggressively ignoring data, and using fractions of a file that may already be a procedural point cloud, adds more bang for the buck and invalidates quite a lot of arguments which say this sort of thing is impossible.
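Since the pine cone keeps coming up, here’s what “a pine cone as an equation” can actually look like: a sketch using golden-angle phyllotaxis, which is a real pattern in botany, though my constants below are illustrative, not gospel:

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))   # ~137.5 degrees

def pine_cone_points(seed_count):
    """Yield seed positions on demand; the depth of detail is the caller's
    choice, not a property of any stored file."""
    for i in range(seed_count):
        angle = i * GOLDEN_ANGLE
        r = math.sqrt(i)              # the spiral widens with each seed
        y = 0.02 * i                  # stack the spiral into a cone
        yield (r * math.cos(angle), y, r * math.sin(angle))

# A distant pine cone resolves to a handful of points; a close-up asks for
# more. The "file" stayed a few lines of math either way.
print(len(list(pine_cone_points(50))), len(list(pine_cone_points(5000))))
```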
Since we aren’t required to solve the entire equation, only the portions relevant to the exact screen space at that still-frame moment, the CPU/GPU load drops substantially, because it is only ever solving fractional data. So all of this talk about petabytes of data being required is actually nonsense. That is simply the old style of thought for 3D environments, not the new school of thought. The two are wildly at odds with each other, much like classical physics and quantum physics don’t see eye to eye.
That’s a good analogy, actually: currently we’re using methodologies that resemble classical physics, while the next generation will be using what appears, to onlookers from the last generation, to be magic (quantum thinking).
Arthur C. Clarke said it best:
Any sufficiently advanced technology is indistinguishable from magic.
Remember this the next time somebody pulls infinity out of their hat.