Friday, December 16, 2011

3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities | #SecondLife

 

On December 1st, 2011, quite astounding news arrived in my email from Dr. Dionisio and Dr. Gilbert: the Association for Computing Machinery had not only accepted our research paper, 3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities, for publication in an upcoming issue of their journal, but the reviewers had given the paper the highest praise the editors had ever seen. As I read through the critiques from each reviewer, I was humbled by their responses.

 

As Dr. Gilbert succinctly put it:

 

After my recent return from the European Summit of the Immersive Education Initiative (I'm still jet lagged as you can see from the early hour I'm writing this!), Dondi sent me the news that ACM Computing Surveys had accepted our paper with relatively minor revisions.  Even more significantly, the impressions of the editors and individual reviewers of the paper were outstanding with all the reviewers suggesting that the work be considered for a best paper award, the main reviewer viewing the work as a  "seminal" contribution to the field, and the Associate Editor calling the first round reviews the best he's ever seen! 

 

What I've also learned is that ACM Computing Surveys is one of the most cited and respected journals in the field of computer science (I had no idea that Dondi had decided to reach so high in submitting the paper). To get accepted into this journal, especially with such high praise, is a great honor and once the article is published our work will be widely read and could have a real impact on the field.  After we submit the revisions in the next few weeks you can crank up all your social media expertise and begin to distribute the article as "in press." 

 

 

LAVA Home of the Future

 

It’s definitely an honor, and quite humbling, to receive that caliber of review from not one but four independent sources across the industry. My understanding of the process was that we should expect multiple rounds of revisions before even being considered for publication, so having the paper accepted after the first round of peer review is astounding.

 

In the words of the Associate Editor at ACM:

 

These actually are the most positive first-round reviews of a journal article I have seen. If you revise this and include a document that explains how you did, and did not, address comments, I will not need to send this out for further review if it looks satisfactory to me.

 

Let’s put the banner waving aside for a moment, because despite how groundbreaking this all is, it is only a prelude to the bigger picture concerning the Metaverse that I usually get into in articles like this.

 

The Down Low

 

There is a lot of subject matter covered in the paper for ACM Computing Surveys, but the title says it all: Current Status and Future Possibilities.

 

Essentially, what we wrote was a comprehensive survey paper looking into the various areas of virtual environments, or immersive environments as some refer to them, and addressing the fundamental challenges and the technologies that can further their progress. The paper also discusses which areas are deficient and in need of further research, in the hope that some of the many people reading and citing it will focus on those areas.

 

The paper also outlines the literary influences that led up to the current state of affairs, giving a proper history of the subject and framing the discussion that follows. As one reviewer put it, we presented about as comprehensive a treatment of the subject as possible short of writing an entire book on it.

 

As the message from Dr. Gilbert suggests, I will be making the entire paper available as a PDF once it is sent off to ACM for publication. While there are a lot of topics covered in the paper, I’ll only be addressing one of them here on the blog.

 

The Take-Away

 

One of the biggest things that can be taken from the paper concerning the future direction of the Metaverse (in my opinion) is the implicit understanding that centralized networking is our Achilles’ heel. It always has been, even before we really got into the hardcore 3D immersive environments we see today. All the way back to Lessons from Habitat in 1991, bandwidth has been flagged as a scarce resource in systems like this, and centralized networking was pegged as a culprit.

 

Yes, we can alleviate that stress by offloading across a datacenter or by buying more powerful servers. But the limitation is still there, and all we end up doing in the process is kicking the can down the road instead of actually solving the problem at the root.

 

Whatever the future holds for immersive environments, it is likely that decentralized networking will be a major part of it, in stark contrast to today’s massive datacenters and centralized bandwidth.

 

To Boldly Go…

 

There is also a hint about the data itself: the way that data is distributed would likely not be the same as how we accomplish the task today. I’d go so far as to say that for a powerful Metaverse to really evolve, the data has to be agnostic. The best way to understand this is to imagine the replicators in Star Trek and how they work.

 

When the captain walks up to a replicator and says “Tea. Earl Grey. Hot.”, the replicator isn’t looking for a cup of tea in its inventory. Nowhere on the ship is there a room with thousands of cups of hot tea on a shelf waiting to be beamed to the replicator. Essentially, there is a holding tank of sorts on the ship which contains a raw molecular “soup” of materials, and the replicators are simply using a digital recipe to pull components from it and “replicate” that molecular recipe.

 

 

Janeway and the Replicator

A replicator works almost like magic. Just like the plot to every Star Trek series.

 

If we apply this understanding to purely digital files, the same process works. Let’s say that the replicator is a software program on your computer whose sole purpose is to reconstruct digital files based on digital fingerprints (keys) that you give it. Since we’re already talking about a decentralized network as our future, the “holding tank” full of agnostic data to be used for those reconstructions of files should be spread across every user of the network.
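
To make that a bit more concrete, here is a minimal sketch of such a “replicator” in Python. This is purely my own illustration, not anything from the paper: a recipe is just an ordered list of fingerprints (SHA-256 hashes here), and the program rebuilds a file by pulling the matching blocks back out of a shared pool. The BLOCK_POOL dictionary is a stand-in for the network-wide “holding tank”; note that in this naive version the blocks are still literal chunks of the file, so the data isn’t truly agnostic yet. That twist comes next.

```python
import hashlib

# Toy "holding tank": fingerprint -> block of raw bytes. In the scenario
# described above, this pool would be spread across every peer on the
# network rather than held in one place.
BLOCK_POOL: dict[str, bytes] = {}

def store_block(block: bytes) -> str:
    """Put a block into the pool and return its fingerprint (key)."""
    fingerprint = hashlib.sha256(block).hexdigest()
    BLOCK_POOL[fingerprint] = block
    return fingerprint

def make_recipe(data: bytes, block_size: int = 4096) -> list[str]:
    """Split a file into blocks and return the ordered list of fingerprints."""
    return [store_block(data[i:i + block_size])
            for i in range(0, len(data), block_size)]

def replicate(recipe: list[str]) -> bytes:
    """Reconstruct the original file from its recipe of fingerprints."""
    return b"".join(BLOCK_POOL[fp] for fp in recipe)

# "Tea. Earl Grey. Hot." for digital files.
original = b"any file at all, chopped into anonymous-looking blocks"
recipe = make_recipe(original, block_size=8)
assert replicate(recipe) == original
```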

 

Why not? We all have a cache folder sitting on our computers from virtual environments and web browsers. What if that cache folder were simply agnostic data, available to the entire network of users in a P2P manner?

 

First off, it would mean that our caches are meaningless to human interpretation: completely random data, not representing anything in particular until we supply the fingerprint to reconstruct the file we need. Secondly, it would mean that the data becomes multi-use data.

 

That last point is very important. It means that 1GB of multi-use data represents, or contains, any file that is 1GB or less. We can say that the Photoshop CS5 installer is under 1GB, correct? So that multi-use data would contain Photoshop CS5, and anything else that is 1GB or under. The key, so to speak, is that while such a system just about invalidates the premise of current copyright law, making it effectively impossible to prosecute somebody for the data they hold, it brings up something even more interesting.
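
To show how a single pool of random bytes can “contain” many different files, here is another hypothetical sketch, this time using an XOR scheme of the kind used by systems such as the Owner-Free File System (not necessarily what our paper proposes). No file block is ever stored directly: the pool holds nothing but random noise, and a recipe records which random blocks to XOR together with a key block to get the original bytes back. The same random block can then sit in the recipes of completely unrelated files, which is exactly the multi-use property described above.

```python
import secrets

BLOCK_SIZE = 16  # tiny blocks purely for illustration

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# The shared pool of "agnostic" data: pure random noise, meaningless on its own.
pool = [secrets.token_bytes(BLOCK_SIZE) for _ in range(4)]

def encode(block: bytes, randomizers: list[int]) -> bytes:
    """Return the key block: the file block XORed with the chosen random blocks."""
    key = block.ljust(BLOCK_SIZE, b"\0")
    for i in randomizers:
        key = xor(key, pool[i])
    return key

def decode(key: bytes, randomizers: list[int]) -> bytes:
    """XOR is its own inverse, so repeating the operation recovers the block."""
    block = key
    for i in randomizers:
        block = xor(block, pool[i])
    return block

# Two unrelated "files" whose recipes share random block 0 -- multi-use data.
file_a = b"photoshop bytes "
file_b = b"holiday photo   "
key_a = encode(file_a, randomizers=[0, 1])
key_b = encode(file_b, randomizers=[0, 2])

assert decode(key_a, [0, 1]) == file_a
assert decode(key_b, [0, 2]) == file_b
```

In a full system the key blocks would be dropped back into the pool as well, where they look like, and can be reused as, just more random data for other recipes.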

 

11 Herbs and Spices

 

It’s not the data itself that is really copyrightable, but the means by which that data can be arranged into a unique configuration representing a work. What is really copyrightable happens to be the unique set of fingerprints which tells the system how to reconfigure multi-use data into a specific file output that is no longer multi-use; in layman’s terms, it’s the digital recipe.

 

Another interesting side effect of multi-use data is that you’re not storing a 1:1 representation of anything. Since the data is multi-use, you don’t (for the most part) need the storage equivalent of a copy of every single file in the system. At a bare minimum, all you really need is enough agnostic data to cover the largest file on the network.

 

If the largest file on the network is about 50GB (and I’m being very generous here), then on a multi-use data system you would need no more than 50GB of data, total, for everything you ever store into that network. That’s the bare minimum. Yes, at that level it would take longer to reconstruct the data, but it would still work. The more agnostic data you have in storage, the faster that reconstruction goes. Think of it like over-unity…

 

Needless to say, a multi-use data structure in combination with decentralized networking would mean that, for example, the entirety of the Second Life asset servers could be housed in about 100 terabytes or less (with redundancy), and the reliability of that system would mean the odds of anything in your inventory ever disappearing would be so close to nil as to make no odds.

 

And that is how you solve a problem at the root instead of addressing only the symptoms.

 

 

 
