Thursday, May 27, 2010

1 more screen of Design Garage

There are lots of amazing screenshots from Nvidia's Design Garage demo for the new Fermi cards. I like this interior shot in particular, because it reminds me of the Cinema2.0/OTOY/Ruby demo:


From http://www.evga.com/forums/tm.aspx?m=289470&mpage=6

Wednesday, May 26, 2010

Intel pronounces Larrabee dead, how will this affect Unreal Engine 4?

You might remember this interview with Epic Games' ever-so-humble president Mike Capps from a few weeks ago, in which he said "if you look at what’s happening in the PC market – Larrabee and all that – it’s really taking off, and I think the jump to next generation’s going to be another really big one". Sadly, in an unexpected turn of events, Intel decided otherwise and killed off the GPU that was partly Tim Sweeney's baby. On numerous occasions (e.g. Siggraph '09) Sweeney has stated that Epic's next-gen game technology, Unreal Engine 4, was built specifically with Larrabee's multi-core architecture in mind.

Hopefully, this devastating revelation from Intel will not hinder Unreal Engine 4's supremacy in the next console generation, because I dare not imagine what console graphics would have looked like if it weren't for UE3's anti-aliasing-free dominance... oh, the humanity!

Star Wars cupcake

Finally made a Star Wars-themed cupcake today. Yes, just one. The above storm trooper was a pain in the A to make. It's made out of marshmallow fondant (my first time making it), and getting everything to stick was a struggle because it was so small and fiddly. Plus I have two kids, a Twitter account and my Mum on the phone to keep up with while making it. I now have a headache and am rolling out little Panadol-looking fondant balls to eat with my chocolate milk.

The frosting isn't supposed to be so green-looking. It's more of a teal blue IRL, but the shocking lighting in my house and my mobile's camera made it look icky. You can't even see the cool foil (space-like) cupcake wrapper it's in. Darn. Shall have to try again one day and take proper photos during the day.

May the Fork be with you.
(Seriously, who needs a fork when it comes to cupcakes?)


Thursday, May 20, 2010

V-Ray GPU news

Yesterday a new video of V-Ray's GPU renderer surfaced on the net: http://www.spot3d.com/vray/images/rt_movies/20100514_VRayRTGPU.wmv

The rendering speed and interactivity look phenomenal, but then again it's being rendered on three GTX 480s, so that's no real surprise. It's also using OpenCL, and Chaos Group is the first to deliver a working commercial GPU renderer that is not CUDA-only (LuxRender's SmallLuxGPU was actually first with OpenCL, but it is open source).

Chaos Group started the whole GPU rendering revolution nine months ago at Siggraph 2009 (mental images probably had a working implementation first with iray, but it was not shown in public until GTC 2009). Not only did they prove that path-traced rendering with high-quality global illumination was possible on GPUs, but also that the GPU was an order of magnitude faster at this kind of rendering than the CPU. Both of these amazing feats seemed utterly unbelievable just ten months ago to everyone but the lucky few at Nvidia, Chaos Group and mental images.

The V-Ray GPU presentation on a simple, affordable PC (a quad-core i7 with a GTX 285) has inspired many other developers to start working on a GPU renderer (in contrast to iray's GTC demonstration on an uber render server consisting of 15 Teslas). If it weren't for Chaos Group, most people probably still wouldn't have the ability to render on the GPU, or even know that it was possible.

Wednesday, May 19, 2010

Cloud gaming a hot topic at HPG2010, and OnLive coming to Belgium!

Woot! OnLive is coming to my small, governmental-crisis-prone country :-D. Yay! http://blog.onlive.com/2010/05/16/onlive-coming-to-belgium/
Belgacom, the largest broadband operator in Belgium, has made an investment in OnLive, and has partnered with us to deliver the OnLive® Game Service to their broadband customers. Belgacom has the exclusive right to bundle the OnLive Game Service in Belgium and Luxembourg with their other broadband services, but gamers in these countries also will have the option of ordering directly from OnLive through any Internet service provider.
Too bad I hate those soul-sucking fuckers at Belgacom and refuse to pay them a cent. OnLive should have partnered with Telenet: much better broadband service (Telenet is on cable, which is on average 3x faster than Belgacom's ADSL network) and much more popular in general than the state-owned monopoly that is Belgacom.

In other news, HPG2010 will feature Turner Whitted (ray tracing pioneer) and Cevat Yerli (Crytek) as keynote speakers, and both will talk about server-side rendering: http://www.highperformancegraphics.org/program.html

Why Diaspora Will Win

There has been considerable news in the technology area concerning a new initiative for social networking named (aptly) diaspora. From the site, diaspora is explained as:

The privacy aware, personally controlled, do-it-all distributed open source social network.
Why would this early start-up have a fighting chance in the 21st century? Simply because these individuals are young enough to understand the nature of accelerating returns, open source tactics and the empowerment of the individual, and, more importantly, because they understand that your information should not be centralized or controlled by a single entity over which you have no say.

A recent conversation between Kat2 Kit and me in Second Life raised the issue that places like Facebook have no regard for personal information or privacy, as their income is based entirely on selling that information to the highest bidder. It is for this reason that initiatives like diaspora will quickly prevail over Facebook and other social networking services as the de facto standard for the personal zeitgeist.

Introducing diaspora. Power to the people.

Whether this revolution includes diaspora or another like it is irrelevant at this point. Diaspora is the tipping point that will usher in the new age.


Tuesday, May 18, 2010

State of Affairs 2010

As I gaze outside my window, I see the many droplets of rain grazing the leaves of the forest and trickling down to the ground. Through the cold and gentle cleansing, I am reflective and find wisdom in what I see.

At first glance, the rain seems like it is merely random and without purpose, but upon deeper consideration, that very randomness results in a greater purpose overall. It is with this observation today that I begin the State of Affairs address for 2010.

Another year has passed, and the Andromeda Underground has continued to push ahead. While the membership has fluctuated with the loss of some and the introduction of others, we remain at around the same membership count as we began with two years ago. Programmers have come and gone in these two years, as the complexity of the task at hand was severely misunderstood. Those who began here as programmers have moved on to other things, while those who were not explicitly programmers have taken on parts of the component structures inherent in the overall system.

Of note, the Crossroads FTP system was started this past year and continues to evolve. With nwasells, and on occasion Epsilion, at the helm of development on this front, progress continues to be made despite real-life concerns and interruptions.

Also in the past year, we have seen the rise of Pixel Labs in Second Life, with members of Andromeda Media Group participating in virtual environment projects that are changing the nature of applications within a virtual environment. Pixel Labs also serves as a virtual environment front for Andromeda Media Group and the Andromeda Underground, in that members who participate here also participate in Pixel Labs and Andromeda Media Group within Second Life (or are encouraged to).

From that, Pixel Labs has evolved into a holding under Andromeda Media Group, and now has financial backers in the form of Directors to oversee the continued progress of the group while removing the potential poison from the well that existed before. On the Pixel Labs island in Second Life there is Pulse Point Marketing, a real-life marketing company with Second Life experience serving large universities and corporate interests. Also on the island is artwork from one of our Directors (a cast member of the treet.tv show 1st Question). Pixel Labs also has a custom-designed building for Pixel Labs members themselves, acting as a showroom and meeting area.

What would we do with a showroom, you may ask? Well, in the past few months, Pixel Labs has created and released a number of in-world products, some of which are highly sought after and have attracted the attention of large corporate interests and smaller businesses alike. For example, the book Tablet (designed and created by nwasells and me) continues to leave people in awe at the usefulness and modular design it encompasses. The koios Presentation system left an audience at the ViO business center completely ecstatic when we used a prototype for our presentation.

But the news isn't entirely good, as the past year has also seen the severe degradation of health for one of our most intelligent individuals, Strapples (Alin). Unfortunately his medical condition continues to worsen, and doctors are unable to truly understand the cause of it. It is disheartening to see his body continue to worsen over such short periods of time, but we continue to send our thoughts and prayers to him - and revel in the courage he has to continue unabated despite his condition. He continues to be an inspiration to all, and if anything, should give us hope to face the biggest challenges in life.

Over the fall of 2009, I contributed to an academic book to be released this July, entitled Virtual Worlds and E-commerce: Technologies and Applications for Building Customer Relationships, in which I had the honor of writing the final chapter, The Future of Virtual Worlds and E-Commerce.

All of this adds up to a very busy year for Andromeda Underground and Andromeda Media Group.

Over the past year, Andromeda Underground has made a number of predictions about the state of virtual environments - many of which are beginning to come true today. From discussions about how best to integrate the web with virtual environments, down to how Andromeda3D should be designed as an interface, those ideas and discussions have since been mimicked outside our domain, across the virtual environment industry. That we were discussing and designing them well before the mainstream industry picked them up shows that we are still well ahead of the entire industry in what we know and can see of the future.

Time and again, over the past few years and even reaching back to 2005, those associated with VR5 Online, Nidus and later Andromeda Media Group have made countless predictions about the state of virtual environments and where they are going. There have also been predictions about the nature of social media - namely that services such as Facebook would eventually fall by the wayside in much the same manner as Myspace.

These are incredibly important observations, simply because we see a trend that the industry has yet to understand. What we see happening today with virtual environments and social media tends to be what we discussed in depth a year or more before those companies took these steps. One may wonder what that actually means for Andromeda3D and related projects, and I would like to make that connection for you today in the State of Affairs 2010 address.

Despite being starved for time and resources, as a group we remain years ahead of the industry overall.

This doesn't mean, however, that our accomplishments are directly visible, or that they translate directly into a finished product called Andromeda3D, any more than we can point to a single drop of rain and know exactly which plant it helps to grow. We do, however, understand that many drops of rain lead to the growth of the forest, even if each drop seemed to fall on something unrelated to the plants we wish to focus on. In the end, it all trickles back and makes the overall forest stronger and more vibrant.

One thing I noticed after our first year was that we had spent a great deal of time focused entirely on Andromeda3D, to the exclusion of the metaphorical forest around it. It should have been no surprise that doing so was the equivalent of growing a flower in the desert. While it is possible, the odds of success are much lower than dropping a seed in a forest and knowing it will sprout with ease even if ignored.

In the second year of Andromeda Media Group and Andromeda Underground, I decided to focus on the forest around our plant, by nurturing the network, respect and visibility of our project and related abilities, thereby creating an environment whereby the plant that is Andromeda3D could indeed flourish even if left alone.

It is by this reasoning that it may seem we are neglecting this project, while in reality we are doing so much more to allow it to truly flourish. It's the ability to see both the forest and the trees, and to know how everything in an ecosystem relates to everything else.

With this, I bring you the state of affairs for 2010, and I look forward to many years to come filled with reward and respect for all.

Will Burns
Project Leader

Watch the rendering equation being solved in real-time!

Video from the "Brigade" real-time path tracer:
http://www.youtube.com/watch?v=b7W4BQevKiM
It looks dreamy (because of the blur), and that fits the atmosphere perfectly, because it fulfills a dream for many (including me):



Watching path tracing in real time is very satisfying imo: it has long been considered the most physically accurate but slowest solution to the rendering equation, and its real-time implementation has remained something of a holy grail for graphics researchers since its conception in the 1980s. Now that this long-sought-after goal has (almost) been reached, I found it particularly pleasing to re-read the following overview of the history and principles of path tracing on Wikipedia:

Path tracing (shamelessly copied from Wikipedia)

Path tracing is a computer graphics rendering technique that attempts to simulate the physical behaviour of light as closely as possible. It is a generalisation of conventional ray tracing, tracing rays from the virtual camera through several bounces on or through objects. The image quality provided by path tracing is usually superior to that of images produced using conventional rendering methods at the cost of much greater computation requirements.

Path tracing is the simplest, most physically-accurate and slowest rendering method. It naturally simulates many effects that have to be specifically added to other methods (ray tracing or scanline rendering), such as soft shadows, depth of field, motion blur, caustics, ambient occlusion, and indirect lighting. Implementation of a renderer including these effects is correspondingly simpler.

Due to its accuracy and unbiased nature, path tracing is used to generate reference images when testing the quality of other rendering algorithms. In order to get high quality images from path tracing, a very large number of rays need to be traced lest the image have lots of visible artefacts in the form of noise.
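A quick aside that isn't part of the Wikipedia text: the noise argument comes straight from Monte Carlo integration. Each pixel value is estimated by averaging N random path samples,

\hat{L}_N = \frac{1}{N} \sum_{k=1}^{N} \frac{f(X_k)}{p(X_k)}

and the standard deviation of that estimate - the visible noise - falls off as \sigma / \sqrt{N}. Halving the noise therefore costs four times as many samples, which is why "a very large number of rays" is no exaggeration.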

History


The rendering equation and its use in computer graphics were presented by James Kajiya in 1986.[1] This presentation contained what was probably the first description of the path tracing algorithm. Lafortune later suggested many refinements, including bidirectional path tracing.[2]

Metropolis light transport, a method of perturbing previously found paths in order to increase performance for difficult scenes, was introduced in 1997 by Eric Veach and Leonidas J. Guibas.

More recently, computers and GPUs have become powerful enough to render images more quickly, causing more widespread interest in path tracing algorithms. Tim Purcell first presented a global illumination algorithm running on a GPU in 2002.[3] In 2009, Vladimir Koylazov from Chaos Group demonstrated the first commercial implementation of a path tracer running on a GPU, and other implementations have followed.[4] This was aided by the maturing of GPGPU programming toolkits such as CUDA and OpenCL.

Description

In the real world, many small amounts of light are emitted from light sources, and travel in straight lines (rays) from object to object, changing colour and intensity, until they are absorbed (possibly by an eye or camera). This process is simulated by path tracing, except that the paths are traced backwards, from the camera to the light. The inefficiency arises from the random nature of the bounces from many surfaces, as it is usually quite unlikely that a path will intersect a light. As a result, most traced paths do not contribute to the final image.

This behaviour is described mathematically by the rendering equation, which is the equation that path tracing algorithms try to solve.
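For reference (my addition, not part of the Wikipedia excerpt), the rendering equation in its usual form is:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

Outgoing radiance at a point equals emitted radiance plus the incoming radiance from every direction over the hemisphere, weighted by the BRDF and the cosine of the angle of incidence. Path tracing estimates that integral by averaging random samples of the integrand.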

Path tracing is not simply ray tracing with infinite recursion depth. In conventional ray tracing, lights are sampled directly when a diffuse surface is hit by a ray. In path tracing, a new ray is randomly generated within the hemisphere of the object and then traced until it hits a light (possibly never). This type of path can hit many diffuse surfaces before interacting with a light.
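Here's roughly what that loop looks like in code. This is my own sketch rather than anything from a real renderer: Scene, Ray, Hit, Vec3, the RNG and the hemisphere sampling helper are all assumed types, and only the control flow described above is shown.

// One path for one pixel sample: bounce through the scene until the path
// escapes, hits a light, or the depth limit is reached.
Vec3 tracePath(const Scene& scene, Ray ray, Rng& rng, int maxDepth) {
    Vec3 radiance(0.0f);      // light gathered along this path
    Vec3 throughput(1.0f);    // accumulated BRDF * cosine / pdf factors

    for (int depth = 0; depth < maxDepth; ++depth) {
        Hit hit;
        if (!scene.intersect(ray, hit))
            break;                                       // escaped: contributes nothing

        radiance += throughput * hit.material.emission;  // path happened to hit a light

        // Pick a new random direction in the hemisphere above the surface.
        float pdf;
        Vec3 wi = sampleHemisphere(hit.normal, rng, &pdf);
        Vec3 brdf = hit.material.evalBrdf(-ray.dir, wi, hit.normal);
        throughput *= brdf * dot(wi, hit.normal) / pdf;

        ray = Ray(hit.position + hit.normal * 1e-4f, wi); // nudge to avoid self-intersection
    }
    return radiance;  // average many of these per pixel to get the final colour
}

Note that lights are only found by accident here, which is exactly the inefficiency the article describes; practical renderers add explicit light sampling (next event estimation) on top of this.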

Bidirectional path tracing

In order to accelerate the convergence of images, bidirectional algorithms trace paths in both directions. In the forward direction, rays are traced from light sources until they are too faint to be seen or strike the camera. In the reverse direction (the usual one), rays are traced from the camera until they strike a light or too many bounces ("depth") have occurred. This approach normally results in an image that converges much more quickly than using only one direction.

Veach and Guibas give a more accurate description[5]:

These methods generate one subpath starting at a light source and another starting at the lens, then they consider all the paths obtained by joining every prefix of one subpath to every suffix of the other. This leads to a family of different importance sampling techniques for paths, which are then combined to minimize variance.
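In code terms, the connection step boils down to something like the following. Again this is my own conceptual sketch: Vertex, the visibility test, the geometric term and the multiple importance sampling weights are all assumed rather than spelled out.

// Join every vertex of a light subpath to every vertex of an eye subpath.
Vec3 connectSubpaths(const Scene& scene,
                     const std::vector<Vertex>& lightPath,
                     const std::vector<Vertex>& eyePath) {
    Vec3 total(0.0f);
    for (const Vertex& l : lightPath) {
        for (const Vertex& e : eyePath) {
            if (!scene.visible(l.position, e.position))
                continue;                     // join segment is blocked: no contribution
            // Throughput carried along each subpath, times the BRDFs at the two
            // join vertices and the geometric term of the connecting segment.
            total += l.throughput * e.throughput
                   * l.evalBrdfToward(e.position)
                   * e.evalBrdfToward(l.position)
                   * geometricTerm(l, e);
        }
    }
    return total;  // a real implementation weights each join with MIS rather than summing naively
}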

Performance

A path tracer continuously samples pixels of an image. The image starts to become recognisable after only a few samples per pixel, perhaps 100. However, for the image to "converge" and reduce noise to acceptable levels usually takes around 5000 samples for most images, and many more for pathological cases. This can take hours or days depending on scene complexity and hardware and software performance. Newer GPU implementations promise 1-10 million samples per second on modern hardware, producing acceptably noise-free images in seconds or minutes. Noise is a particular problem for animations, giving them a normally unwanted "film-grain" quality of random speckling.
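Those numbers are easy to sanity-check. Plugging the article's own figures into a back-of-the-envelope calculation (the 1280x720 frame size and the mid-range 5 million samples per second are my assumptions):

// Back-of-the-envelope convergence time from the figures quoted above.
#include <cstdio>

int main() {
    const double pixels = 1280.0 * 720.0;      // assumed frame size
    const double samplesPerPixel = 5000.0;     // "around 5000 samples"
    const double samplesPerSecond = 5e6;       // middle of the 1-10 million/s range

    const double totalSamples = pixels * samplesPerPixel;
    const double seconds = totalSamples / samplesPerSecond;
    std::printf("%.2e samples -> about %.0f s (%.1f minutes)\n",
                totalSamples, seconds, seconds / 60.0);
    return 0;
}

That works out to roughly fifteen minutes for a clean frame, which is consistent with the "seconds or minutes" claim at the faster end of that range.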

Metropolis light transport obtains more important samples first, by slightly modifying previously-traced successful paths. This can result in a lower-noise image with fewer samples.

Renderer performance is quite difficult to measure fairly. One approach is to measure "samples per second", or the number of paths that can be traced and added to the image each second. This varies considerably between scenes and also depends on the "path depth", or how many times a ray is allowed to bounce before it is abandoned. It also depends heavily on the hardware used. Finally, one renderer may generate many low-quality samples, while another may converge faster using fewer high-quality samples.

Scattering distribution functions

The reflective properties (amount, direction and colour) of surfaces are modelled using BRDFs. The equivalent for transmitted light (light that goes through the object) is the BTDF. A path tracer can take full advantage of complex, carefully modelled or measured distribution functions, which control the appearance ("material", "texture" or "shading" in computer graphics terms) of an object.
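The simplest example of such a distribution function is a Lambertian (perfectly diffuse) BRDF. A sketch of what that looks like, with Vec3, Rng and the cosine-weighted sampling helper assumed as before:

// Perfectly diffuse (Lambertian) BRDF: light is scattered equally in all
// directions above the surface; the 1/pi factor keeps it energy-conserving.
struct LambertianBrdf {
    Vec3 albedo;  // surface colour

    // f_r is constant over the hemisphere for a diffuse surface.
    Vec3 eval(const Vec3& /*wi*/, const Vec3& /*wo*/) const {
        return albedo * (1.0f / 3.14159265f);
    }

    // Cosine-weighted sampling: directions near the normal are chosen more often,
    // matching the cosine term in the rendering equation and reducing noise.
    Vec3 sample(const Vec3& normal, Rng& rng, float* pdf) const {
        Vec3 wi = cosineSampleHemisphere(normal, rng);  // assumed helper
        *pdf = dot(wi, normal) * (1.0f / 3.14159265f);
        return wi;
    }
};

A glossy or measured material would replace eval and sample with something more elaborate, but the path tracing loop itself doesn't need to change at all - that separation is what "taking full advantage" of complex distribution functions means in practice.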

Sunday, May 16, 2010

Latest Research on the Impact of Technology on Student Performance

With the Australian Government giving every Year 9 student a laptop, and some of these students now in Year 11, I am forever being asked how this 1:1 program will affect student performance in various subject areas. Accordingly, I'm about to embark on postgraduate studies to investigate this. As part of the preliminary literature review I have focused particularly on the Becta Harnessing Technology Review 2009, the JTLA 2010 report One to One Computing: A Summary of the Quantitative Results from the Berkshire Wireless Learning Initiative and the OECD 2010 publication Are the New Millennium Learners Making the Grade?. My superiors were very impressed with the evidence, particularly from the Becta report. To this end I have made a Prezi (of course!) summarising the findings:

Friday, May 7, 2010

Bad hair days no more

I just uploaded the above to Very Demotivational posters. Actually this would be more motivational than demotivational, but who cares.

Chewbacca needs a hairdryer like the one I just bought. It's purple and it blows hot air. It's fantastic. I don't know why I've never bought a hairdryer before... I just naturally thought people's hair turned out that way after having a shower. I feel like there are so many beauty secrets I've yet to learn. Like how to put on eyeliner without poking out your eyes. That stuff is annoying and I hate how half the time I look like some skanky panda.

Anyway, purple hairdryer aside... I missed out on May the 4th ("May the Fourth be with you") Star Wars Day. I feel like I've let down my geeky side. I didn't know about it and I didn't know how to celebrate it. I need to get my Storm Trooper helmet ASAP. Hurry up, eBay.

Voxelstein3D progress

Since the Voxelstein3D guys implemented path tracing, the engine produces some really cool and natural-looking images:



The realistic path-traced lighting may not be that obvious from these shots, but if you replaced the path tracing with direct lighting you'd get the usual hard shadows, and there would be no gradual lighting fall-off in indirectly lit areas. It's a little disheartening that the voxels still look quite rough and create a stair-stepping effect. I hope we'll soon have a free polygon-based game engine supporting path tracing, the Brigade engine (by Jacco Bikker and Dietger van Antwerpen, IGAD) and Nvidia's OptiX being two serious candidates. You could theoretically re-use the geometry of an existing game (let's say Half-Life 2), apply some realistic BRDF materials and render a photorealistic scene with real-time (interactive) global illumination.
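To make that direct-vs-path-traced comparison concrete, this is roughly what "direct lighting only" looks like in code (my own sketch, with the scene, light and material types assumed as in the earlier snippets):

// Direct lighting only: one shadow ray per light, nothing else.
// Anything a light can't see stays pitch black -- the hard-shadow look,
// with none of the gradual fall-off that bounced (indirect) light adds.
Vec3 shadeDirectOnly(const Scene& scene, const Hit& hit, const Vec3& viewDir) {
    Vec3 result(0.0f);
    for (const Light& light : scene.lights) {
        Vec3 toLight = light.position - hit.position;
        const float dist = length(toLight);
        toLight = toLight / dist;

        Ray shadowRay(hit.position + hit.normal * 1e-4f, toLight);
        if (scene.isOccluded(shadowRay, dist))
            continue;                                // blocked: hard shadow

        result += hit.material.evalBrdf(toLight, viewDir, hit.normal)
                * light.intensity * dot(hit.normal, toLight) / (dist * dist);
    }
    return result;  // no indirect term: corners and occluded areas go completely dark
}

A path tracer keeps this direct term and adds the random bounces on top, which is where the soft gradients in the Voxelstein3D shots come from.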

Monday, May 3, 2010

A (truly) Unbiased Look at Second Life Viewer 2.0

If you listen carefully, you can hear the echoes of contention and disgust filtering from the masses. Pitchforks raised and torches lit, they march upon the stronghold, chanting against the abomination inside.

"It is an unholy creation!" some yell.

"Surely it will be the downfall of us all!" others scream.

With all of this commotion, you would think I was speaking about a scene from Frankenstein. But alas, this is not the case. This is an unbiased look, for once, at the Second Life Viewer 2.0 from Linden Lab. What could possibly make me capable of giving an accurate and unbiased look at what is supposedly the most hated viewer in Second Life today?

Well, for starters, I'm a real virtual environment veteran. My experience with virtual environments does not start and end with Second Life alone; it spans back to the early 1990s, when VRML reigned supreme and Second Life wasn't even a gleam in Phil Rosedale's eye - back when people like Jaron Lanier had coined the term Virtual Reality itself, and when the closest thing to the Metaverse we knew came from cyberpunk fiction like Snow Crash and Neuromancer.

Many people reading this will begin with an unadulterated, rabid and misguided hatred of Second Life Viewer 2.0, and many may not even make it past the first few paragraphs because they are intent on being biased against a viewer they barely understand. For those readers, I will simply say that you are doing a grave disservice to the entire industry with your petty and unfortunate bias. For those who wish to read on, I will tell you that while Viewer 2.0 isn't a holy grail, it is not the Frankenstein abomination many make it out to be.

We'll begin this journey with a flashback to the beginning of virtual environments, or as far back as the length of this article allows: services like Lucasfilm's Habitat on the QuantumLink Internet service, between 1986 and 1988.



Not exactly the most glamorous virtual environment, Habitat was a beginning: it used now-archaic modems, with very limited graphics and bandwidth, on a Commodore computer to create a world in which many residents could interact. Many parallels exist between Habitat and Second Life, and the social interactions are by and large the same today as they were back then. There was a virtual currency called Tokens with which users could buy items and different looks. There was an inventory. There was text chat between the users, and of course there were many areas to explore and games to play.

In this respect, the virtual environment is not a new idea at all; it reaches back even further to Multi-User Dungeons (MUDs), which were entirely text-based, on early BBS systems. The most popular types of venues back then are still the most popular pastimes in virtual environments today - bars, clubs, games, personal homes, selling things and making things.

In the early 1990s, these environments got a facelift with the introduction of services like Worlds Inc, Blaxxun Contact and ActiveWorlds, where the user experience was now in 3D.

Worlds Inc

Blaxxun Contact


ActiveWorlds

The history of virtual environments is completely lost on many residents of Second Life. Their first introduction to the idea of a 3D online environment came in the form of games such as World of Warcraft. As far as many are concerned, Second Life is revolutionary and has never been done before. This belief is completely incorrect, and only the real veterans of virtual environments actually know the history involved.

Companies such as Intel, IBM and Cisco, and computer retailers, were on the bandwagon in the early days of virtual environments. A store in a virtual environment trying to sell computers is not a new idea, even though it made headlines when Dell decided to open Dell Island in Second Life, or when IBM and Intel established a presence... they have always had a presence in virtual environments, from the beginning (except Habitat, I believe). What we see today in Second Life is just a continuation, for those companies, of the virtual environments they took part in twenty or more years ago.

For example, take the later contenders in the virtual environment industry - Kaneva, There.com, Entropia, and Anarchy Online.

Kaneva


There.com

Entropia

Anarchy Online

We know that There.com recently closed its doors, leaving many people to take on the life of virtual nomads. Then there is Kaneva, a low-resolution contender to more powerful systems like Second Life, but Kaneva isn't exactly taking off or gaining much attention. If we look at Entropia Universe, however, there does seem to be a community using it - despite the overwhelming number of options involved, which brings me to Anarchy Online...

If you look at the screenshots of these systems, dating back to 1988, you'll notice a familiarity across the interfaces. Some chose to take up massive amounts of screen real estate in order to convey their information, while others chose a minimalistic approach - a choice that had no bearing on whether the company succeeded or not.

Today, places like Worlds Inc are struggling to survive, and more up-to-date systems like There.com have completely folded into oblivion. ActiveWorlds has added more complexity to its interface, which, compared to SL Viewer 2.0, is the true Frankenstein abomination - making SL Viewer 2.0 look like a wide-open utopia of space.

ActiveWorlds With All Options

With chat windows at the bottom, a web browser that slides into view or can be pinned open, and tabs for VoIP, contacts, the world list and messaging, this is what a space hog of an interface looks like. I find it amusing to hear people complain about how the side dock in SL Viewer 2.0 takes up a full 25% of the screen when opened, but never mention that when closed it takes up the equivalent of 32 pixels to keep the icons available on the side.

A dock that slides open to reveal additional content and options is not a space hog - not unless it is intentionally left open all the time, which essentially defeats the point of having a side dock to begin with. That is why each of those side dock options can be opened in a new window, and those new windows can be minimized.

Of course there is the standing issue that one cannot hide the entire interface as in previous versions, and I do agree that this is a major oversight that should be dealt with. I would also go so far as to say that Viewer 2 should allow the option to switch between Advanced and Basic modes, where Advanced would revert the viewer to 1.23-style looks and behaviors (taking into account new options and additional functionality) and Basic would be the current style of Viewer 2.0 and the default on installation.

Barring those things, though... it's not a bad viewer by any stretch of the imagination. It's a work in progress. There are, of course, many glitches and usability issues which need to be dealt with in order to bring the viewer up to speed with the needs of both advanced users and new users. This means that the overall look and feel of Viewer 2.0 will not be scrapped and reverted to 1.23 simply because long-time SL residents cry foul. That part of the look will remain, with changes to better accommodate older users, but don't expect any major overhauls. In the best-case scenario, you may get the Basic and Advanced mode option I have described here, but I wouldn't promise that you will see it.


I want each reader of this blog post to take a long look at this screenshot of SL Viewer 2.0, and then scroll up and see the interface designs of virtual environments past and present. Then I want you to look at the following screenshot:


I dare you to tell me it's too cluttered, takes up too much space, and somehow detracts from the virtual environment immersion. This is a message to the supposed 80% of SL users who have dragged this viewer through the mud at every chance they had: You're full of it.

The purpose of Viewer 2.0 is not to cater to the advanced users who have been in the Second Life platform for years. It is not somehow geared to add more complexity and options to the system in order to make the experience more powerful to the long time user. The User Interface is not twenty more menus added to the drop-down nightmare that is Viewer 1.23.

Viewer 2.0 is designed with one goal in mind: cater to the first-time user, the user who has never set foot in a virtual environment like this before. The design of the UI mimics a familiar layout that truly new users would be comfortable with: their web browser. The most commonly cited reason for the high turnover in user accounts during the 1.23 viewer phase was over-complexity. New users would log in for the very first time and find unwieldy menus and buttons to navigate, with options to do even the simplest things buried deep in nested menus - and do you know what those new users did?

They left.

So here is a question: what is the point of being an advanced content creator in Second Life if the audience doesn't expand? If our audience is too afraid to deal with the complexity that we've grown used to and mastered, then we're only creating things for a community of like-minded individuals, with no chance of reaching a wider audience.

In short, the community stagnates.

Of course, it should be apparent that Linden Lab would rather lose 80,000 residents and appeal to 3 billion common broadband Internet users around the world as a trade-off. I'll stick with appealing to the masses over catering to a paltry 80,000 users any day. The numbers are far more favorable that way.

So, all of the half-hearted threats of developers leaving Second Life because Linden Lab won't give in to their demands are nothing more than hot air. All of the developers and long-time users who bash Viewer 2.0 and sabotage it for others with nothing but negativity would be better off actually using it, reporting where it is deficient, suggesting how to make it better (which, I remind you, is not the same as demanding that it be reverted to 1.23) and, overall, taking the time to get used to it.

And why wouldn't you take the time to get used to the new viewer? That same supposed 80% of long-time users would rather keep the overly complex old viewer and tell the rest of the Internet-using world to "get used to it" or "tough luck, this is all you get". Funny how, after all these years, it's the long-time users who are now being told that very same thing.

Of course you don't like it. You don't like suddenly eating crow and being on the receiving end of that elitist argument. But you sure as hell enjoyed being elitist to the new users for the past number of years. Made you feel good to treat "n00bs" like crap, didn't it?

Here, have a Friendship Bracelet!

So, what now? There are options other than all or nothing, you must realize. For one, the viewer is GPL, so there is always the option of grabbing the source code and making the viewer work the way you want. I suppose that's why there is a Third Party Viewer Directory, which does indeed cater to more than the first-time user. Seeing as there are features in Viewer 2.0 that I foresee will never be removed, such as Shared Media, alpha layers for the avatar and other useful things, I can also tell you that it is simply a matter of time before those features are available in the Third Party Viewers.

I hear a lot of talk about how Linden Lab should port these new features to Viewer 1.23, but those people clearly miss the point. Emerald viewer uses the 1.23 interface and I can imagine they, like other Third Party Viewers, will be incorporating the new features of Viewer 2.0 soon. So why should Linden Lab focus on back-porting the new features to 1.23 when they will be dropping support for it when Viewer 2.1 is available? Why should Linden Lab duplicate the efforts of Emerald viewer?

The reality is, Linden Lab will not be back-porting those features to 1.23. It just doesn't make economical sense. Maybe I'm wrong and Linden Lab will, indeed, do that... but my business mind says they won't.

Which brings us to the here and now...

As it stands, until Third Party Viewers incorporate the new features of 2.0, SL Viewer 2.0 is your only access to those new features - to create with them and to experience them. I'm not telling you that you should use Viewer 2.0 if you don't like it, or if it doesn't work for you. I'm asking that you remain neutral and get a clear perspective concerning it.

Yes, viewer 2.0 has bugs and UI issues. I will not deny that claim. No, those issues aren't worth abandoning the viewer altogether, because the benefits introduced outweigh the petty (and sometimes legitimate) gripes against the viewer overall.

As a representative of the 20% who like Second Life 2.0, I'm going to tell you exactly why I like it:

I like Viewer 2.0 because it allows me to develop things that the other 80% of residents can't and won't because of their own stupidity and stubbornness. By all means, continue to boycott Viewer 2.0 and refuse to use it. You're making this way too easy for the other 20% to get a head start.

And for that, I actually thank you.

Sunday, May 2, 2010

Things Bogans like..

I will never understand Frangipani stickers on cars or the phrase "Fully sick Bro".
I don't understand why Commodores are so common or why they're always on show.

I don't get why your Footy team is a religion and why ciggies make you tough as.
I don't like the nicknames Chooka, Bazza, Johno or Robbo or Shaz.

I don't see why there are so many glassings or why men think women will get their "tits out".
You won't see me drinking Jimmy, Jack or Johnny. Not even Cider, Cask wine or Stout.

Not liking these things doesn't make me a snob, a bitch, a cow or a tart.
I just don't really laugh when you scratch your balls in front of me or fart.

I'm not calling you a Bogan because I'm a bogan too. I've been called it many a time.
I just prefer to keep my natural hair colour and I think mullets are a crime.

I'm not scared of you and I'm not holding my bag closer when you walk by.
I just think your 1990 Collingwood premiership tattoo is enough to make me cry.

So all in all I think Bogans are awesome, bloody good and pretty tops
because who else is going to warn you when they see the fricken cops.


(had to post something new and the Tow ball balls were pretty funny)

Saturday, May 1, 2010

Thea Render jumps on the GPU bandwagon

Thea Render is going to incorporate GPU rendering in v1.3, which will probably arrive before the end of the year: http://www.thearender.com/downloads/TheaRenderRoadmap.pdf

A list of released and announced GPU renderers:

1. V-Ray GPU (Chaos Group)
2. iray (mental images)
3. SmallLuxGPU (LuxRender)
4. Octane Render (Refractive Software)
5. Arion Render (Random Control/FryRender)
6. Thea Render
7. SHOT using iray (Bunkspeed)
8. RTT Powerhouse and RTT DeltaGen using iray (Realtime Technology)
UPDATE:
9. Indigo Render also announced plans for GPU acceleration
UPDATE 2 (Sep 29):
10. finalRender (cebas Visual Technology) http://www.cebas.com/?pid=hot_news&nid=378
11. Artisan using OptiX (LightWorks) http://architosh.com/2010/07/sig-lightworks-unveils-power-of-optix/
12. Zeany using OptiX (Works Zebra)

I bet Maxwell and Modo will quickly follow.