Monday, May 9, 2011

Sign of the Times

 

Recently I was watching the Twitter-sphere when an interesting tidbit came across about a potential acquisition of Skype by Microsoft, expected to be announced tomorrow morning (May 10th, 2011). If I remember correctly, the figure put forward was around 8.5 billion dollars. This alone was interesting enough, rumor or not, and it had me thinking about the 40 Days and 40 Nights post I had written a while back about a possible acquisition of Linden Lab by Microsoft and what that would mean in the grand scheme of things.

 

As I stated before, I was under the impression that Microsoft was putting out feelers as to what they could buy out and brand as their own, and that during that phase some sort of preliminary offer probably crossed the desk of Linden Lab, only to be rejected. Despite that, I still believe the end-goal for Linden Lab is acquisition; it’s just a matter of time before they reach that point.

 

There are a lot of things that need to happen in the virtual worlds arena to move the industry forward as a whole, but the trouble is that most, if not all, of the current offerings are victims of feature lock-in. Sure, an acquisition might have helped a great deal to rebuild or improve Linden Lab’s technology, but the more I think about it, the more I believe the only real way forward is to rebuild the whole thing from the ground up with better intentions.

 

For instance, could there be weather and reflections in Second Life? Well, yes… Windlight has a native weather system built into it, and we know that Linden Lab acquired the company responsible for Windlight. As for reflective surfaces, there was a short period when things like mirrors were enabled in Second Life, then hastily retracted. Even the Viewer 2 interface seems half thought out and hastily put together: implementing web-browser-style familiarity without actually allowing the viewer to act like the web browser new users would be expecting is a disconnect in design understanding.

 

A lot of the issues I see with Linden Lab stem from a second-generation staff problem: the majority of the real visionaries from the original group who started the company no longer remain, so the people driving the company today don’t really understand what it is they have or what it should be doing going forward. I mean, is it a fair assumption that Viewer 2 is a half-hearted attempt to dress a complex system like Second Life in existing paradigms like the web browser? Well, yes, of course. But despite the shortsightedness of it all, there have been a number of genuinely groundbreaking advancements as a result.

 

Also, despite the uproar I caused with the Letter to Viewer 2 Haters, many of the things I said still hold true: whether we really like it or not, we’ll all be using Viewer 2 in the future. When I said that, I also made a habit of telling people that I meant, more specifically, that we’ll be using Viewer 2 in one form or another, and it’s highly likely that it’ll be the TPVs that manage to make a well-rounded, acceptable version of Viewer 2 for the masses.

 

That’s how it’s always been, even with releases like 1.23, where the TPVs turned it into something far better than the original and the masses accepted it. Phoenix and Imprudence are examples, as was their predecessor Emerald (despite the clusterfsk that Emerald became).

 

However, there is a middle ground. Personally I love Viewer 2, not as the official viewer but as a representation of the advancements it brought to the community (despite the shortsighted and half-assed parts). Shared media is a prime example of this, though it still needs work. I use a Viewer 2 compatible TPV most of the time, which is to say that Kirsten’s Viewer is my choice, even though I tend to keep a handful of other viewers installed and updated for different uses.

 

The middle ground, though, is an interesting one to contemplate. Somewhere between the official release and the TPVs, between the corporate method of doing things and the Open Source method, lies a viable third option. Clearly the corporate side of things has managed to botch an awful lot over the past few years, and has been playing fix-up ever since. But I’m not entirely convinced that the Open Source methodology is the magic bullet either.

 

For one, there is a lack of cohesion and unified force in the open source arena, which prohibits commercialization and proper migration. Many places like Avination have a silly policy of banning Magic Boxes from their system because they want to see stores there instead. This, to me, is like banning the Internet because nobody wants to build a store in your town: it addresses the symptoms while ignoring the root reasons such issues exist. Of course, we must also take into account that there seems to be a lack of security protocols to ensure content remains properly protected when crossing grids, and that the simple act of migration itself is as convoluted as it can possibly get.

 

Maybe this is one of the reasons Linden Lab is testing out the direct delivery system on the Marketplace, to eliminate the need for Magic Boxes in-world? If so, it would also make sense to offer the Linden Dollar as a de facto base currency across the Hypergrid, to facilitate the rapid expansion and use of content and purchases outside its own walls.

 

While this is interesting, it still stands to reason that it’s merely an expansion of a system Linden Lab probably understands to be irrevocably busted. But we must also remember that Linden Lab is a business with a profit margin to maintain. Building a new version from the ground up would make the most sense, but it just isn’t in the cards given the time constraints and costs involved. So they will make do with what they have and try to make it better in whatever way they can.

 

That doesn’t mean that Second Life isn’t broken.

 

I can play a game of Minecraft and watch a random thunderstorm pour rain onto the massive terrain, but not under trees or inside houses. That got me wondering why Second Life seems to have an issue implementing something similar, and how independent programmers managed to build such a thing from scratch in a Java game while Linden Lab cannot, or will not, despite having one of the most advanced particle systems in the world at their disposal, with the abilities of weather predefined and built in.
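
Just to illustrate how simple the core of that Minecraft trick really is: rain only lands where nothing solid sits between a spot and the sky. Here’s a toy version of the check, in Python. This is my own sketch with a made-up heightmap; none of it is actual Minecraft or Second Life code:

    # Toy heightmap: the highest solid block in each (x, z) column.
    heightmap = {
        (0, 0): 4,   # open ground
        (1, 0): 12,  # tree canopy overhead
        (2, 0): 9,   # roof overhead
    }

    def rain_reaches(x, z, y):
        # Rain only lands where nothing solid sits above height y.
        return y >= heightmap.get((x, z), 0)

    print(rain_reaches(0, 0, 4))  # True  -- open sky, rain hits the ground
    print(rain_reaches(1, 0, 4))  # False -- the canopy at height 12 blocks it

That’s the whole idea: one lookup per particle column, nothing a modern particle system couldn’t afford.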

 

Of course, it boils down to implementing things like zones, and the ability to have users define such zones in-world, which is one of those things missing from Second Life. It begs the question of whether actual game programmers built Second Life or whether it was hacked together in a weekend. Zones are a staple of game programming, and you usually define at least two (above water and below). Which, again, brings us to what happens when you dive into the vast Linden Oceans…

 

Well, nothing. You just keep on walking around as if you weren’t in water. That’s a lack of zones telling the software the difference between dry land and water and how the avatar should behave in either. These are things that are usually thought of from day one when designing with a game engine, and they’re often built into commercial game engines by default.
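
To make that concrete, here’s a minimal sketch of the kind of zone check I’m talking about. The names and the simple sea-level test are hypothetical; nothing here reflects how Second Life actually works internally:

    from dataclasses import dataclass

    SEA_LEVEL = 20.0  # hypothetical water height for the region

    @dataclass
    class Avatar:
        x: float
        y: float
        z: float
        locomotion: str = "walk"

    def update_zone(avatar):
        # Pick a locomotion mode based on which of the two zones the
        # avatar occupies. A real engine would test arbitrary user-defined
        # zone volumes; this sketch only defines the two I mentioned:
        # above water and below.
        if avatar.z < SEA_LEVEL:
            avatar.locomotion = "swim"  # below water: swim animation, buoyancy
        else:
            avatar.locomotion = "walk"  # dry land: normal walking physics

    diver = Avatar(x=128.0, y=128.0, z=5.0)
    update_zone(diver)
    print(diver.locomotion)  # "swim" -- instead of strolling along the seabed

A handful of lines to tell dry land from water, and yet the avatar strolls along the seabed to this day.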

 

A lot of this can be implemented or fixed, but I still wonder about it all. Not that I expect any of it to be fixed or implemented, because clearly Linden Lab has more important things on their plate to work on, like adding a Facebook button and rearranging user profiles.

 

It’s a sign of the times, clearly.

 

The whole point of exploring XMPP chat was that it was thought to be some sort of magic bullet for the chat and group chat problems in-world. And while XMPP can reasonably handle millions of chat instances, I don’t believe anyone at Linden Lab understood that the number of instances doesn’t scale 1:1 with users but polynomially, roughly with the square of the user count, which takes our supposed tens of thousands of chat instances up to a more realistic virtualized 4.8 billion when dealing with 64,000 simultaneously chatting users. Of course, this is a hypothetical estimate meant to illustrate the difference in bandwidth magnitude, so take these numbers with a grain of salt.
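
If you want to sanity-check that claim, the back-of-the-envelope arithmetic is simple enough, assuming the worst case where every chatting user’s message must be relayed to every other:

    def virtual_streams(users):
        # Worst case: every chatting user's message has to be relayed to
        # every other user in range, so the stream count grows with the
        # square of the user count rather than 1:1.
        return users * (users - 1)

    print("{:,}".format(virtual_streams(64000)))  # 4,095,936,000

Not exactly the 4.8 billion figure, but that’s the point of the grain of salt: the growth is quadratic, and the jump from tens of thousands to billions is the part that matters.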

 

Keep in mind, when you are chatting in-world, you aren’t chatting with just one other person; you’re broadcasting to everyone in range. Your single chat message becomes the concurrency for that range, because the server needs to relay it to everyone around you. Every time somebody types and hits enter in chat, that message may need to relay to a top estimate of maybe 50 other users, and the same goes every time somebody else answers. Of course, we know that the real concurrency of a region is not 50 users, and we’ve all watched a virtual wedding crash a simulator in lag, but even when that number is a respectable 20 concurrent users per region, the chat streams run far higher than that double-digit 20.

 

It gets even worse when we apply this understanding to group chats. Put hundreds or even thousands of people in a group, have a single person say “Hello”, and the relay system compounds until it’s overwhelmed. The server needs to send that simple message to thousands of other people, and vice versa when any of them reply. So let’s say the group has 5,000 members online and you say “Hello”. Just for that one message, 5,000 other people need to be relayed to, and again every time any of them reply. That’s a lot of traffic, and much more in connection throughput.

 

Things like XMPP are supposed to handle millions of chat streams without a hitch, and this is probably true. But when we’re broadcasting en masse like this, the true connection relay such a system is responsible for jumps up a few orders of magnitude, well past anything it was designed to handle.

 

5,000 multiplied by 5,000 is the more appropriate measure of the virtualized connection streams the chat server needs to handle for that one group chat, keeping all members online and connected to each other in the relay. In this case, that number becomes 25 million, give or take a margin for intelligent routing and relay. That alone is enough to boggle the mind, and it explains a bit better why chat lag and failed message delivery happen so often.
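
The same napkin math for the group chat example, again ignoring whatever intelligent routing the real service might do:

    members_online = 5000

    # One "Hello" fans out to every other online member of the group.
    per_message_relays = members_online - 1          # 4,999 deliveries

    # Keeping every member connected to every other member in the relay
    # is effectively a full mesh of virtual streams.
    virtual_connections = members_online * members_online  # 25,000,000

    print(per_message_relays, virtual_connections)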

 

Apply this train of thought to all manner of data transfer across the system, such as assets, scripts, and anything else you may be broadcasting around you. None of it is a 1:1 scenario, and it never has been. We’re talking streams of connections orders of magnitude higher across the entire virtual environment… and really, no centralized server was ever meant to handle that.

 

I’m not really big on total decentralization either, because I believe that approach lacks things like security and protocol. However, quite a bit can be said for decentralizing key aspects of a virtual environment while retaining centralized gateways for protocol and authorization to manage the checks and balances.
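
As a sketch of what I mean, and this is purely hypothetical, not anything Linden Lab or OpenSim actually does: a central gateway could authorize users once and sign a token, and independent region servers could then verify that token locally without phoning home for every request:

    import hashlib
    import hmac

    # One secret shared between the central gateway and trusted region
    # servers -- an assumption purely for this sketch.
    GATEWAY_SECRET = b"shared-with-trusted-region-servers"

    def issue_token(user_id):
        # Central gateway: the single point of authorization.
        sig = hmac.new(GATEWAY_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
        return user_id + ":" + sig

    def verify_token(token):
        # Any region server: checks the gateway's signature locally,
        # without contacting the gateway for every request.
        user_id, sig = token.rsplit(":", 1)
        expected = hmac.new(GATEWAY_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

    token = issue_token("avatar-1234")
    print(verify_token(token))  # True -- decentralized check, centralized trust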

 

Overall it’s a crossroads of sorts.

 

I’ve seen this crossroad a number of times, and each time the same sorts of things usually bring the industry back to it. Lots of centralized servers, static media formats, and cutting corners.

 

The Metaverse isn’t a new idea, and neither is a virtual environment. It’s not revolutionary to walk into a VC meeting and tell the future board of directors that you have a vision for something spectacular called a virtual environment. Really you’re just rehashing the past twenty years or more and acting like it’s all new and innovative.

 

But I digress, as I usually do in these blog posts.

 

Really what I’m on about is that there is a constant impression of half-assed attempts in the industry, and missed opportunities. I can single out Second Life for what it’s worth, but the same has applied for many years to every virtual environment system I’ve ever encountered.

 

I just happen to like Second Life for the time being, but that isn’t because Linden Lab has managed to do anything to sway my judgment. It’s because the community, and what they offer, has continually given me something more than Linden Lab has.

 

When I wanted to see the best of what Second Life could offer in graphical fidelity, I didn’t get an answer from Linden Lab… I got an answer from Kirsten’s Viewer. When I wanted to see the technical achievements possible in Second Life, I got an answer from Emerald and now the Phoenix viewer. When I wanted to see the versatility of Second Life technology, it was the community, OSGrid, and the like that had an answer.

 

When Linden Lab drops the ball, it’s the community that runs with it and makes their system shine.

 

It’s a sign of the times, and I think it’s high time Linden Lab stopped keeping the community at arm’s length. The most critical are often the most passionate and caring. But most of all, Linden Lab needs to get their mojo back.
