Tuesday, July 27, 2010

Tower of ZOMFG

I've almost finished the basic layout of my town, which at the moment looks puny in comparison to the massive forest I've created :) So I decided to make a tower of some sort, preferably a big one, taking inspiration mainly from the huge castle Eetio is building on his map at the moment. Here is a screenshot of it; when I get round to finishing the interior, I'll post a screenshot of that too, but so far it's about 6-7 levels of glass and stone xP



-Zuk

Sunday, July 25, 2010

The Dreaded PNG Draw Error

For years in the virtual world, few things have plagued builders more consistently than the PNG Draw Error. It rears its ugly head whenever we place PNG textures in front of other PNG textures, and it manifests as objects rendering in front of closer objects even though they are clearly farther away and should be occluded, drawn behind what is nearer.

Take, for example, some trees that use PNGs with partial transparency in the alpha map to create the leaves. Under normal circumstances the leaves should sort perfectly fine and that would be the end of it, but for as long as I can remember, the PNG Draw Error has consistently foiled builders. PNGs rendering in front of closer semi-transparent objects completely ruin the ambiance of a well-constructed environment, such as when you look out a window and see the trees rendering in front of the window instead of behind it.

This goes for other uses of PNG in the virtual world as well, whereby partially transparent PNGs will ghost against other PNGs even when both are mostly opaque with only limited transparency (maybe just the edges).

So what's going on here?

I remember this being a long-standing issue in ActiveWorlds for many years (and still to this day), and I was surprised to see the dreaded PNG Draw Error rear its ugly head in SecondLife too. Years ago, when I inquired about this bug, I was told that it's a common one and that every game engine has it. No solution was available, and even commercial games (I was told) had the bug, but their level designers literally designed the levels around it to minimize or hide it from the player.

This essentially meant that multi-million dollar productions were rearranging entire levels to make sure that as little PNG transparency as possible overlapped, because overlapping transparency would trigger this bug.

I'm no fool, but the thought of major game brands like Square Enix being terrified of this bug amuses me, and more so that they would go to great lengths to work around it without actually having a solution for it.

After some testing in Second Life, I finally got fed up and decided to dig deeper into this infamous bug and what could possibly be causing it. In the beginning, I simply thought that the programmers at Linden Lab and Active Worlds were chained to a particular alpha rendering algorithm and could not change it, but during testing I realized that they actually aren't.

Instead, it would seem that it's not a matter of there being no solution for this bug, but that a majority (if not all) of game engine programmers are simply using the default alpha rendering algorithm. That default gives excellent blending of the textures but is apparently horrible at sorting them by depth relative to the camera. While it is fairly quick, there exists another alpha sorting option in the SecondLife viewer - RenderFastAlpha.

RenderFastAlpha (in your Debug settings) is actually the alpha rendering algorithm known as Clip Alpha or Binary Alpha, whereby partial transparency is ignored and each pixel renders as either completely transparent or completely opaque. When enabled in SecondLife, the blending is obviously gone (the textures don't look as nice), but the benefit is that the PNG textures suddenly snap to proper depth sorting like magic.

After seeing this in action, I cross-referenced the behavior of RenderFastAlpha against known alpha rendering algorithms and found that it's a perfect match for Binary Alpha (Clip Alpha). So now I understood exactly which algorithm RenderFastAlpha in the debug options uses; the next task was figuring out which algorithm Second Life uses as its default alpha rendering.

A little bit of digging yielded the answer I was looking for: the default alpha rendering in Second Life is simply the stock alpha blending of the underlying 3D engine. It's the industry-preferred default because it blends the textures nicely, looks good, and has a negligible cost on the rendering pipeline, even though it is notorious for causing depth-related PNG Draw Errors.

Now that I had sorted out the two known alpha rendering algorithms in use, it was time to sit down and figure out how to solve the PNG Draw Order Error.

This part turned out to be the easiest, and I was quite surprised at how fast I found the solution. While most game engines default to plain Alpha, which has draw order issues, and also offer an easy way to enable Clip Alpha (Binary Alpha), there is a third and more powerful alpha rendering algorithm at our disposal.

This is known as Alpha + Sort.

Alpha + Sort essentially does what Alpha normally does with the blending, but then it takes an extra step and runs a sorting algorithm to make sure the items are rendering at the proper depth in relation to the camera (the user). It is supposedly a bit more costly computationally, and as such is normally disabled by default, leaving the programmer to turn it on manually.
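
To make that concrete, here is a minimal Java sketch of the core idea behind Alpha + Sort (my own illustration, not Second Life's actual renderer code): before blending, the transparent surfaces covering a pixel are sorted back to front by distance from the camera, so nearer surfaces always composite over farther ones regardless of the order they were submitted in.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Minimal sketch of the "Alpha + Sort" idea: sort transparent surfaces
// back-to-front by camera distance, then blend nearest-last.
public class AlphaSortSketch {

    // A transparent quad reduced to the data we need: a depth and an RGBA color.
    static class Surface {
        final double distanceToCamera;
        final double r, g, b, a; // straight (non-premultiplied) alpha

        Surface(double distanceToCamera, double r, double g, double b, double a) {
            this.distanceToCamera = distanceToCamera;
            this.r = r; this.g = g; this.b = b; this.a = a;
        }
    }

    // Classic "over" blend: source over destination.
    static double[] blendOver(double[] dst, Surface s) {
        return new double[] {
            s.r * s.a + dst[0] * (1.0 - s.a),
            s.g * s.a + dst[1] * (1.0 - s.a),
            s.b * s.a + dst[2] * (1.0 - s.a)
        };
    }

    public static void main(String[] args) {
        List<Surface> surfaces = new ArrayList<>();
        surfaces.add(new Surface(5.0, 0.2, 0.4, 1.0, 0.5));   // window pane, 5 m away
        surfaces.add(new Surface(100.0, 0.1, 0.8, 0.1, 0.7)); // tree leaves, 100 m away

        // The crucial step: farthest first, nearest last.
        surfaces.sort(Comparator.comparingDouble((Surface s) -> s.distanceToCamera).reversed());

        double[] pixel = {0.5, 0.7, 0.9}; // background (sky)
        for (Surface s : surfaces) {
            pixel = blendOver(pixel, s); // without the sort, submission order decides instead of depth
        }
        System.out.printf("Final pixel: %.3f %.3f %.3f%n", pixel[0], pixel[1], pixel[2]);
    }
}

Run it and the window tints the distant tree rather than the tree stamping itself over the window; remove the sort and the result depends purely on submission order, which is exactly the draw error described above.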

Could the solution to this problem that has plagued virtual environments since the mid 1990's be as simple as enabling a single option in the code?

I would assume so at this point, although it is possible that custom engines (like the one SecondLife uses) simply never had the ability to use Alpha + Sort coded into the rendering pipeline to handle partially transparent textures properly. The same could be said for ActiveWorlds, where the alpha rendering default seems to be just that - the engine's default - which then brings the PNG Draw Order Error bug into the equation.

But let's say that the Alpha + Sort option still isn't a total solution, because I've read quite a bit about how it will still exhibit the bug in certain circumstances. So if Alpha + Sort isn't the holy grail, then what is?

Enter the idea of making a new algorithm called Binary Alpha + Sort Transparency Algorithm for Rendering Defaults. As the name implies, it's a combination of techniques rather than a single approach. It also makes a great acronym :)

In any event, the point of this idea is that Binary Alpha seems to have no problem sorting properly, but the looks could be better. If Binary Alpha can figure out the proper order most of the time, then why not use a combination of Binary Alpha and Alpha to really hammer it down and eliminate the bug?

We know Binary Alpha can sort correctly, so why not use it first and add an extra step: take the sorting information from the Binary Alpha pass (which is more reliable for depth), then do the default Alpha blend to make things look good on screen, ignoring the default Alpha pass's sorting in favor of the Binary Alpha sorting information from the prior pass?

In short, the best of both worlds.
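
Here is a toy, per-pixel sketch of that two-pass idea (again my own illustration, not anything from the Second Life or Active Worlds codebases): pass one treats each fragment as either fully opaque or fully transparent purely to establish depth, and pass two does the normal alpha blend but discards anything hidden behind that depth. It doesn't give perfect order-independent blending for stacks of translucent layers, but it does stop a far surface from being composited over a nearer, mostly opaque one - the tree-in-front-of-the-window case.

import java.util.ArrayList;
import java.util.List;

// Toy illustration of "Binary Alpha for depth, Alpha for looks" (hypothetical sketch).
// Pass 1 alpha-tests fragments and records the nearest mostly-opaque depth.
// Pass 2 blends in submission order but rejects fragments behind that depth.
public class HybridAlphaSketch {

    static class Fragment {
        final double depth;      // distance from camera
        final double r, g, b, a; // straight (non-premultiplied) alpha

        Fragment(double depth, double r, double g, double b, double a) {
            this.depth = depth;
            this.r = r; this.g = g; this.b = b; this.a = a;
        }
    }

    public static void main(String[] args) {
        List<Fragment> frags = new ArrayList<>();
        frags.add(new Fragment(100.0, 0.1, 0.8, 0.1, 1.0)); // distant tree leaf texel
        frags.add(new Fragment(5.0, 0.2, 0.4, 1.0, 0.9));   // nearby window pane

        // Pass 1: Binary Alpha (clip) pass, used only to build a depth value.
        double depthBuffer = Double.POSITIVE_INFINITY;
        for (Fragment f : frags) {
            if (f.a >= 0.5 && f.depth < depthBuffer) {
                depthBuffer = f.depth;
            }
        }

        // Pass 2: normal alpha blend, trusting the clip pass for occlusion.
        double[] pixel = {0.5, 0.7, 0.9}; // sky background
        for (Fragment f : frags) {
            if (f.depth > depthBuffer) {
                continue; // the distant tree never draws over the nearer window
            }
            pixel = new double[] {
                f.r * f.a + pixel[0] * (1.0 - f.a),
                f.g * f.a + pixel[1] * (1.0 - f.a),
                f.b * f.a + pixel[2] * (1.0 - f.a)
            };
        }
        System.out.printf("Blended pixel: %.3f %.3f %.3f%n", pixel[0], pixel[1], pixel[2]);
    }
}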

I'm still a bit biased, though, and believe that Alpha + Sort would take care of the problem.

Still confused about what this whole Alpha Rendering Algorithm stuff is all about? Luckily I've included an educational video for you below:





Thursday, July 22, 2010

Loque's Gaze

Named after one of my favorite beasts in WoW, Loque's Gaze is the first of hopefully a few 'sub-sections' that I'll start to dot around my map. These could be anything from signs stating something important to something like this :)



2 posts in 1 night?! What's wrong with me! ;D

-Zuk

Operation Re-forestation!

Before I started to actually think about replanting trees, I'd had it fixed in the back of my head that they didn't grow, as unfortunately I just didn't have the patience to wait for them. After chatting with Eli about saplings etc., I decided to give them a second go and, well... I've decided to create a dedicated forest area just next to my village! Here are a few development screens of how it's progressed so far:







All pictures are taken of the same location. Obviously the area has become heavily overgrown now, but despite this, it's actually only around 40% complete. I've placed walls on the outside so I know how far to plant trees and don't get too carried away with this.

-Zuk

CUDA 3.1 out

Just read that the CUDA 3.1 toolkit is available (and has been for some time, in fact):

http://www.theregister.co.uk/2010/07/22/nvidia_cuda_parallel_nsight/

As you can see from the release notes, CUDA 3.1 also gives 16-way kernel concurrency, allowing for up to 16 different kernels to run at the same time on Fermi GPUs. Banks said a bunch of needed C++ features were added, such as support for function pointers and recursion to allow for more C++ apps to run on GPUs as well as a unified Visual Profiler that supports CUDA C/C++ as well as OpenCL. The math libraries in the CUDA 3.1 SDK were also goosed, with some having up to 25 per cent performance improvements, according to Banks.


The support for recursion and concurrent kernels should be great for CUDA path tracers running on Fermi, and I'm curious to see the performance gains. Maybe the initial claims that Fermi would have 4x the path tracing performance of GT200-class GPUs could become true after all.

Wednesday, July 21, 2010

android API source JAR

Howto



A few days ago I installed the Android SDK and the ADT plugin for Eclipse. When I started playing with the Android API I was a little disappointed: I was missing the source attachment for the android.jar which contains the Android API. I searched the web for an Android sources JAR and found a guide on how to create one [1].



I've adapted the sources howto and created my own Android sources JAR for Android platform 7, aka Android 2.1. You can download it here:
http://www.bughome.de/android/platforms/android-7/android-src.jar



I'm pretty sure that the android-src.jar is missing some sources, because the source JAR is smaller than the binary JAR. Leave a comment if you find the missing sources.



References



  1. http://androidforums.com/android-developers/1045-source-code-android-jar.html

Tuesday, July 20, 2010

Persona 3.0

So often in the digital era, we discuss how virtual environment technology is changing the world around us: how virtual environments and social media are the true pinnacle.


Less often discussed, however, is how the technology is changing us.


When I first began to write this, I didn't have a clear understanding of how I would begin, or really how I wanted to convey what I was about to say to the world. After a few false starts and some rewording, I decided that it would be best to frame this in the present tense, so that I can give you a better insight into my own understanding of the future.


It isn't that I believe social media or virtual environments aren't worth engaging with. It's simply that I know they will not be the herald of the future that companies wish to make them out to be. Like everything before them, social media and virtual environments are simply a digital evolution toward something greater. They serve today to condition us and prepare us for a future most have yet to contemplate.


This is the dawn of the Age of Personas.


Surely you have a Facebook account, or Twitter. If not, then I can assure you that if you are reading this, then you subscribe to a virtual environment or other social media network. These are examples of Web 2.0 and maybe Web 3.0.


Social media in itself is an interesting concept, and it succinctly frames the evolution of our Persona mentality, giving rise to a human-powered livestream network of details and information, datamined by software and sold to the highest bidder as metrics so that those services may in turn target you for advertising. Surely even Facebook has a method of making money, and your personal information is that very method.


This isn't alarming, and really shouldn't be. I know there are plenty of privacy concerns to be had as to what is being done with the massive storehouse of information you are putting out, but to be honest that doesn't actually matter in the long run.


We're being conditioned.


With an accelerating-returns curve of technology, we are completely helpless in the digital age without the aid of this technology and its algorithms (the same ones that monitor our habits and compute metrics) to sort our lives for us and give us insight we wouldn't otherwise have. In a manner of speaking, you could say that we no longer live in a digital age, or even a virtual age, but instead are beginning to evolve into an augmented age.


This brings to mind technologies in their infancy such as Augmented Reality, whereby we show our webcams some image or symbol and it superimposes digital content we can manipulate on-screen. But I assure you, this is just the beginning of our evolution in the digital age. Augmented Reality is the new buzzword, alongside social media and virtual environments, but it has yet to occur to anyone (not on a wide scale) that these three aspects of our digital lives are merely three components of the real end goal.


Gone are the days of virtual environments (however much we continue to tout them as the way of the future), and social media is a toy in comparison. Entire industries are springing up to utilize virtual environments like SecondLife for business, pleasure, marketing, you name it, while on the flip side we see how rapidly things are expanding in the social media universe of Twitter, Facebook, and others. Augmented reality, however, is currently in its infancy, seen as just a gimmick, much like we believe 3D television is a gimmick (again, however much it is touted or pushed upon us).


But there is a difference.


Social media in and of itself is not the end-all solution, but it is playing a decidedly large part in how we behave and creating the concept of Personas in our digital world. We can be anyone and anything within these digital spaces, social media, virtual environments, and further.


We are this conglomeration of multiple online identities, and we often act in very contrasting manners within each "avatar". We log into SecondLife as an avatar persona, and we tend to have "Alts" or alternate accounts to allow us to further explore more personas. In the social media aspect, we may have a Facebook account, twitter account, blog, and half a dozen other connections for just as many purposes.


We are living as a multitasking culture of split personalities, and with increasing ability to be omnipresent and very close to omniscient. The future is not in virtual environments, nor is it in social media, nor should we make a claim with augmented reality. On the same note, however, we should not discount these technologies either, because they are a crucial part of the future.


But only when they evolve and finally combine.


Some see virtual environments as a stake in the future of the Internet and Web, and if we listen to Linden Lab then the future is how to successfully integrate SecondLife into social networking aspects and even the web itself to create some sort of all-encompassing platform of the future. Others swear by the current social media application, and others still will stake their reputation on the power of Cloud Computing.


All of those people would be very wrong about the future.


The future isn't about any single aspect of our current culture today, it is a combination of all of them in what can only be described as not Augmented Reality or Virtual Reality. No, not even Social Media or Cloud Computing can encompass what the future will be about.


The future... is about something bigger than the sum of its parts. Instead it will be about Hyper Reality. It's when the virtual environment becomes a part of our reality, when augmented reality becomes seamless and without markers, it's when information is available about anything, anyone, and anywhere without asking.


It's when the virtual world merges with the real world.


We're only seeing the beginning of this evolution today. But what if your real life and your virtual life were combined? What if the SecondLife grid were to be overlaid onto reality? What if your virtual inventory were available digitally in an augmented reality space which was laid out via grid blocks of GPS cells in actual reality?


Combine that Hyper Reality space with social networking aspects today, and you can see how our split personalities will be ever popular in the future. When virtual worlds, social networking/media, and augmented reality merge into one seamless space and become part of our actual reality, that is where the future is at.


We'll be omnipresent in a manner of speaking, with always-on internet access through wireless access points covering every city and our ability to "exist" in multiple places at once. We'll be omniscient, in a manner of speaking, through that same connection and our hyper reality. The real question is whether we'll be omnipotent as well.


To answer that, we would have to better define omnipotence. I'd say it would be the power to change all aspects of our reality at will, and to be honest, a Hyper Reality would give us that power (if nothing more than virtually). But then, by that point, the question will be if virtual reality has any meaning any longer. By that point, virtual reality and actual reality will have merged into Hyper Reality and we'll not distinguish between the two, other than possibly denoting that "all virtual" environments such as SecondLife are a different space than "all real" environments, while Hyper Real environments would be the preferred experience of Hybrid Reality.


What we're looking at here is the very real possibility that we'll attain a singularity.


I know this sounds like science fiction, but thirty years ago when companies were talking about video phones (and they fell flat), who would have guessed that they were actually right, but were unable to foresee how it would be implemented on a wide scale? We use video phones all the time now, in the form of Skype or a similar service.


The computer and the Internet have already changed quite a few of our habits. Where once they were just a fad, they are now indispensable to society. Some countries have even made Internet access a basic human right.


Who would have guessed this thirty years ago?


So I tell you today, the virtual environment is not the holy grail, nor is social media, nor is augmented reality. Instead be glad you live in the dawning of the age of Personas, and the evolution of something truly magnificent.


Whether you are an alt in SecondLife, are "married" in the digital world, or have hundreds of "friends" on a social network. Whether you believe all of your twitter followers are truly relevant to your interests. The concept of "location" has definitely been altered. The idea of "self" is irrevocably different. And soon, the idea of what constitutes "reality" will forever be changed.


The catalyst for Hyper Reality exists in today's components.


One part 3D camera (Kinect), one part GPS, one part compass, one part gyroscope. Add in sufficient computation and blanket Wi-Fi, and throw in a connection to the SecondLife grid to handle the back end. Toss in a dash of social media networks and Web access, and you have a recipe for the future of our singularity.


Welcome to Hyper Reality.



Castle Entrance and Small Town overlook.

Firstly, I'd like to show the lock which I've incorporated into my castle's entrance:


As you can see, 2 lights on means the door is securely locked.


Here we see one light off yet the door remains locked.


Again, another light is off while the other is on, yet the door remains locked.



Finally, both switches are pulled and the door can be breached :)



Next, I've taken 2 pictures of what the small town looks like at night and during the day from near enough the same location, mainly to show how the light affects the area.






Obviously this is a work in progress :)

-Zuk

Introduction

Aside from playing WoW in most of my spare time, I also dabble in a spot of Minecraft; thus, this blog was born!

I'll try to showcase some projects I'll work on for you guys to check out and for me to personally look back and enjoy.

-Zuk

Sunday, July 18, 2010

'The Role of Interactive Whiteboards in a 1-to-1 Environment' Presentation from ISTE2010

This June I was honoured to present at ISTE 2010 in Denver. My presentation was entitled 'The Role of Interactive Whiteboards in a 1-to-1 Environment'. A SMART Notebook version of the presentation is available here. As I was sponsored by SMART (which I'm very grateful for and through whom I met some wonderful SMART Exemplary Educators) there is a bias in terms of IWB brand and some product placement which was unavoidable. However, the underlying message I hope has total integrity as it is based on the beliefs and experiences of myself, several colleagues and other great educators who chipped in their penny's worth.

As with any presentation, a full appreciation after the event is not achieved from a simple file unless you have a video, podcast or transcription to go with it. To this end a PDF version in Scribd is below plus a transcription for each slide within this blog post. Some of you may also gain amusement from an interview I gave, courtesy of Dr Ray Heipp and SmartEd Services.



The Role of IWBs in a 1-1 Environment.pdf

Slides:
  1. My contact details also contained within the QR Code (my first attempt at QR)
  2. My 16 (+1) schools in South and South-West Sydney
  3. Video (not available unfortunately) - proof that I actually was a Physics teacher utilising an IWB day-to-day
  4. Video available on this link - IWB being used in a Primary context
  5. Video available on this link - the Australian 'Digital Education Revolution' - every Year 9 student receiving a laptop from the Federal Government
  6. As you can see, these Year 10 students have laptops, Macs or PCs, with SMART Boards in some rooms
  7. The main question behind this presentation
  8. {drag the picture down} = NO
  9. {drag the picture down} = YES
  10. {drag the picture down} = YES
  11. Demonstration of a Year 11 Physics experiment where a volunteer whistles into a microphone; the wave trace on the oscilloscope emulator is captured with the SMART Capture tool; the SMART interactive ruler is calibrated against the oscilloscope grid; the length of e.g. 8 cycles is measured, e.g. 8.8; the length of 1 cycle is calculated, e.g. 1.1; the time period is calculated by multiplying by the time base, e.g. 1.1×5ms = 5.5ms - this experiment could be performed on a laptop but not collaboratively, as is achieved on an IWB.
  12. Another activity, Year 12 Physics, best performed on an IWB (and specifically designed for use on an IWB. I know because I designed it). However, students can take it away after a group experiment to further investigate themselves on their laptop.
  13. I had posed the question of this presentation via the IWB Revolution Ning and received some great responses, including Chris Betcher's straight-to-the-point remark from a chat we had: "Don't apologise for teaching!" i.e. an IWB can and should be used as an effective teaching tool; it is not simply the domain of kids doing cute things on a board. E.g. senior Physics is not intuitive; I have to introduce new material and explain it before students can grasp it and then perform some activity on an IWB, laptop, wherever.
  14. {drag the picture down} = YES, as previously explained, a concept could be introduced by a teacher through an IWB, given to the students to work on on their laptops, then possibly feedback interactively through the IWB
  15. {the first 2 columns are 'infinite clone' so each word could be dragged into the 3rd column}
    IWB + 1:1 = More than the Sum of the Parts
  16. SMART Notebook file (unavailable as yet) - Year 10 Commerce - students receive random occurrences to simulate what life might be like when students leave home e.g. flatmate moves out so they have to pay double rent and hence budget accordingly
  17. Video (unavailable unfortunately) - shows a similar activity with the students each spinning the interactive random wheel in front of each other. Easily done on a laptop, but performing it on the IWB allows the students to understand that some people are lucky in life and some aren't. Students can then go back and work on their individual projects and budgets on their laptops
  18. Predicting the future (slightly gratuitous SMART product placement) - smart tables will soon be upon us (not just in school but in our local coffee shops) and iPads have just arrived. What technologies will we have in the future and how will they work together?
  19. Explained in an earlier blog post
  20. IWBs are not environments in their own right, they are part of a larger learning space that is hopefully flexible. Introducing laptops into the space again demands that the space is flexible
  21. Some examples of contemporary learning spaces
  22. Video (not available unfortunately) - shows the mini-amphitheatre (carpeted) within a double 'classroom' with glass partitions at Parramatta Marist High School during a cross-curricular lesson of RE and Technology!
  23. Research paper I co-wrote on contemporary learning spaces
  24. Explained in an earlier blog post
  25. Thank yous. Note the final paragraph- Special thanks to the following artists for giving written permission for use of their music in the opening video: Sneaky Sound System 'I Love It' and The Grates 'Science is Golden' (get students to seek permission to use music rather than copy it illegally!)
  26. Obligatory SMART slide

Thursday, July 15, 2010

☯ elements Spa @ Pixel Labs


In-World Location: SLURL - http://bit.ly/bdMLAl


As you walk to the door of this small building set along a sprawling Pacific Northwest coastline, you feel a sense of peace and calm wash over you. From the sound of the gentle waters on the shore to the delicate breeze dancing through the leaves of the nearby forest, you get a feeling of tranquility. Gone is the accustomed blasting music drowning out the subtle atmosphere, and nowhere present are the sprawling malls you would normally expect to traverse to get where you intended to go. From the onset, you feel this is definitely a different place in Second Life.

Welcome to ☯ elements Spa and Massage @ Pixel Labs, the first and only spa to cater specifically to Bio-Acoustic Massage Therapy in Second Life. While in our care, our staff will aim to provide a relaxing and immersive environment in which you can allow your mind and body to drift into a more relaxed state, thereby aiding your own body's natural recovery process.



While there is nothing quite like this in Second Life, there are other spas in the virtual environment. ☯ elements Spa caters to a different type of person in the virtual space, and while other spas offer text for emoting the massage process, elements does not even consider this an option. Instead, elements caters solely to voice-enabled patrons, with trained staff to calm and soothe our clients with guided meditation, followed by a lengthy virtual massage process which continues where the guided meditation leaves off - entirely in voice.



The purpose of this narrow aim is to do what we can to trigger a psychosomatic response in our clients for the purpose of relaxation and calm. Gone are the text chat and silent roleplay; in their place stands a real, live voice to soothe you and guide you on your path to less stress. Much as we would listen to a guided meditation, or to the sound of a soothing voice, to center our thoughts and reduce stress, allowing us to let go and focus, ☯ elements Spa uses this wholeheartedly to the benefit of our clients.



The process doesn't end there, though. Great attention to detail is paid to the environment itself, with each aspect finely tuned to create a sense of total immersion: from the gentle stream of delicate smoke rising from the incense holder, to the breeze pushing the linen curtains into a light dance upon the wind. Clients are greeted by our staff before their session, and time is set aside to talk and evaluate trouble areas with the client prior to the massage. While one may wonder what this has to do with the therapy, it's part of the process to help the client begin to focus on what troubles them, and so allows the process to work faster. If you're focusing on where the stress and tension are, it'll be easier to relax them as the massage gets underway.


One part detailed roleplay and one part guided meditation, ☯ elements Spa brings a very real element of real-life relaxation therapy to virtual clients. Our process is detailed with the intention of immersion, playing the role as though the staff and client are truly in the same room together. We visualize ourselves this way, and speak during the process as though we are truly performing the actions. If we are to apply oil, we ask the client which they prefer, and then describe the scent in detail along with its effects so that the client can begin to allow those psychosomatic influences to take hold. In a number of instances, we've been told that this attention to detail has resulted in the client feeling the effects, smelling the scents, and losing themselves in the atmosphere we've created.



Sounds good, doesn't it? When was the last time you were in a virtual space and were able to be truly immersed in the location? At ☯ elements Spa, this is exactly what we strive to do for you, and when your session is completed, it is our hope and intention that you walk away calm and relaxed.


☯ elements Spa is not yet open to the public for business, but in this post we'll give some details as to what you should expect:

Pricing

All sessions are either half an hour or one full hour in length. Half-hour sessions are L$700, while full-hour sessions are L$1250, payable before the session begins. There are a couple of types of session you may sign up for, each with attention to specific areas and needs. While the general process remains the same, the attention to detail is where each is differentiated.

☯ elements Spa does welcome walk-in clients, but keep in mind there may not always be a staff member available. When you arrive, simply touch the board of the staff member you wish to have perform the service (provided they are online) and you will be given the updated notecard with complete information about the services as well as pricing. After you have done this, the staff member will be notified of your interest and will contact you directly.

Keep in mind, ☯ elements Spa is non-sexual in nature and does not condone escorting or sexual services on the premises. Our staff have been instructed to ignore these requests, and not entertain them at all - even to the point of ending the session and banning the offending client without refund.

Availability for our staff members varies, but general indicators of availability are on each board via icons for day or night. There are also star icons to denote experienced staff members, as well as an indicator icon to denote a preference for in-world voice or Skype to ensure voice clarity during your session.

For more information or to schedule an appointment ahead of the opening, please contact Aeonix Aeon or Lavender Siamendes in Second Life for details.

OnLive + Mova vs OTOY + LightStage

I've just read Joystiq's review of OnLive, which is very positive regarding the lag issue: there is none...

As it stands right now, the service is -- perhaps shockingly -- running as intended. OnLive still requires a faster than normal connection (regardless of what the folks from OnLive might tell you), and it requires a wired one at that, but it absolutely, unbelievably works. Notice I haven't mentioned issues with button lag? That's because I never encountered them. Not during a single game (even UE3).

A related recent Joystiq article about OnLive mentions Mova, a sister company of OnLive developing Contour, a motion capture technology using a curved wall of cameras, very reminiscent of OTOY's LightStage (although the LightStage dome is bigger and can capture the actor from 360 degrees at once). The photorealistic CG characters and objects that it produces are the real advantage of cloud gaming (as was being hinted at when the LightStaged Ruby from the latest Ruby demo was presented at the Radeon HD 5800 launch):

What he stressed most, though, was Perlman's other company, Mova, working in tandem with OnLive to create impressive new visual experiences in games. "This face here," Bentley began, as he motioned toward a life-like image that had been projected on a screen before us, "is computer generated -- 100,000 polygons. It's the same thing we used in Benjamin Button to capture Brad Pitt's face. Right here, this is an actress. You can't render this in real time on a standard console. So this is the reason OnLive really exists." Bentley claims that Mova is a big part of the reason that a lot of folks originally got involved with OnLive. "We were mind-boggled," he exclaimed. And mind-boggling can be a tremendous motivator, it would seem -- spurring Bentley to leave a successful startup for a still nascent, unknown company working on the fringes of the game industry.

In fairness, what we saw of Mova was terrifyingly impressive, seemingly crossing the uncanny valley into "Holy crap! Are those human beings or computer games?" territory. Luckily for us, someone, somewhere is working with Mova for games. Though Bentley couldn't say much, when we pushed him on the subject, he laughed and responded, "Uhhhh ... ummm ... there's some people working on it." And though we may not see those games for quite some time, when we do, we'll be seeing the future.

Just like OTOY, I bet that OnLive is developing some voxel ray tracing tech as well, which is a perfect fit for server-side rendering due to its massive memory requirements. Now let's see what OTOY and OnLive, with their respective cloud servers and capturing technologies, will come up with :-)

Wednesday, July 14, 2010

Real-time Energy Redistribution Path Tracing in Brigade!

A lot of posts about Brigade lately, but that's because development is going at breakneck speed and the intermediate updates are very exciting. Jacco Bikker and Dietger van Antwerpen, the coding brains behind the Brigade path tracer, seem unstoppable. The latest contribution to the Brigade path tracer is the implementation of ERPT, or Energy Redistribution Path Tracing. ERPT was presented at Siggraph 2005 and is an unbiased extension of regular path tracing which combines Monte Carlo path tracing with Metropolis Light Transport path mutation to obtain lower-frequency noise and converge faster in general. Caustics benefit greatly, as do scenes which are predominantly lit by indirect lighting. The original ERPT paper can be found at http://rivit.cs.byu.edu/a3dg/publications/erPathTracing.pdf and offers a very in-depth and understandable insight into the technique. A practical implementation of ERPT can be found in the paper "Implementing Energy Redistribution Path Tracing" (http://www.cs.ubc.ca/~batty/projects/ERPT-report.pdf).

The algorithm seems to be superior to (bidirectional) path tracing and MLT in most cases, while retaining its unbiased character. And they made it work on the GPU! You could say that, algorithm-wise, the addition of ERPT makes Brigade currently more advanced than the other GPU renderers (Octane, Arion, LuxRays, OptiX, V-Ray RT, iray, SHOT, Indigo GPU, ...), which rely on "plain" path tracing.

The following video compares path tracing to ERPT in Brigade at a resolution of 1280x720(!) on a GTX 470: http://www.youtube.com/watch?v=d9X_PhFIL1o&feature=channel

This image directly compares path tracing on the left with ERPT on the right (the smeary pixel artefacts in the ERPT image are mostly due to the YouTube video + JPEG screengrab compression, but I presume there are also some noise filters applied, as described in the ERPT paper):
ERPT seems to be a little bit darker than regular path tracing in this image, which seems to be a by-product of the noise filters according to http://pages.cs.wisc.edu/~yu-chi/research/pmc-er/PMCER_files/pmc-er-egsr.pdf.

On a side note, the Sponza scene in the video renders very fast for the given resolution and hardware. Comparing this with the video of Sponza rendering in the first version of SmallLuxGPU on a HD 4870 (which I thought looked amazing at the time), it's clear that GPU rendering has made enormous advancements in just a few months thanks to more powerful GPUs and optimizations of the path tracing code. I can hardly contain my excitement to see what Brigade is going to bring next! Maybe Population Monte Carlo energy redistribution for even faster convergence? ;)

Monday, July 12, 2010

Sparse voxel octree and path tracing: a perfect combination?

I have been wondering for some time whether SVOs and path tracing would be a perfect solution for real-time GI in games. Cyril Crassin has shown in his paper "Beyond triangles: Gigavoxels effects in videogames" that secondary-ray effects such as shadows can be computed very inexpensively by tracing a coarse voxel-resolution mipmap without descending all the way to the finest voxel resolution. This is a magic intrinsic property of SVOs which, to my knowledge, is not possible when tracing triangles (unless there are multiple LOD levels), where every ray has to be traced to the final leaf containing the triangle, which is of course very expensive. A great example of different SVO resolution levels can be seen in the video "Efficient sparse voxel octrees" by Samuli Laine and Tero Karras (on http://code.google.com/p/efficient-sparse-voxel-octrees/ , video link at the right; a demo and source code for CUDA cards are also available).

I think that this LOD "trick" could work with all kinds of secondary ray effects, not just shadows. Particularly ambient occlusion and indirect lighting could be efficiently calculated in this manner, even on glossy materials. One limitation could be high frequency content, because every voxel is an average of the eight smaller voxels that it's constituted of, but in such a case the voxels could be adaptively subdivided to a higher res (same for perfectly specular reflections). For diffuse materials, the cost of computing indirect lighting could be drastically reduced.

Another idea is to compute primary rays and first bounce secondary rays at full resolution, and all subsequent bounces at lower voxel LODs with some edge-adaptive sampling, since the 2nd, 3rd, 4th, ... indirect bounces contribute relatively little to the final pixel color compared to direct + 1st indirect bounce. Not sure if this idea is possible or how to make it work.

Voxelstein3d has already implemented the idea of path tracing SVOs for global illumination (http://raytracey.blogspot.com/2010/03/svo-and-path-tracing-update.html) with some nice results. Once a demo is released, it's going to be interesting to see whether the above holds true and doesn't break down with non-diffuse materials in the scene.

UPDATE: VoxLOD actually has done this for diffuse global illumination and it seems to work nicely:
http://voxelium.wordpress.com/2010/07/25/shadows-and-global-illumination/

Sunday, July 11, 2010

#SecondLife Tip: TextureLoadFullRes

Sometimes you'll notice that textures don't seem to load completely, or that profile pictures and icons don't load at all (or stay blurry). In Second Life, uploaded textures are often automatically scaled to a lower resolution than the texture that was originally created, making your glorious masterpiece of pixel creation seem less crisp than you intended. Of course, there is always the option to Rebake Textures (CTRL ALT R) if the textures in question are on your avatar, but what about all of the other textures in Second Life?

For people who have a decent graphics card (at least 512MB of video memory), there is a debug setting to load those textures at full resolution (in most cases) rather than the default scaled resolution. While this method will make your GPU work a bit harder, quite often you'll see a definite gain in how textures look in Second Life by enabling it. Keep in mind, this does not make lower-resolution textures suddenly higher resolution, but if a texture was uploaded to SecondLife at a higher resolution than the standard 512x512, there is a chance this trick will bring that HD quality out (except for the terrain... which seems to be stuck at 512x512).
  • Go to Debug Settings in your Advanced Menu (enable Advanced by pressing CTRL ALT D)
  • Type: TextureLoadFullRes into the text box
  • Set this value to True

#SecondLife Tip: Less Buggy Transparency

Sometimes, no matter how hard we try to build our creations, we inevitably end up having to use textures with alpha transparency. While the occasional usage of alpha transparent textures isn't a taboo in and of itself, those of you who have been building for quite some time in Second Life will know that Alpha Transparencies don't play well with others.

What this boils down to is invisi-prims glitching on dance floors, flexi hair disappearing in front of a window, and, god forbid you're in a forest... all of the trees have a totally messed-up sorting order at times, making for some really trippy effects as you walk around.

The culprit seems to be that nobody (apparently) has solved the Z-order depth sorting bug for alpha-transparent textures on faces in a 3D scene. So what ends up happening is that the tree which is 100 feet away ends up rendering in front of the window that is 5 feet away from you.

There is, however, a solution to this alpha blending glitch on translucent items in a virtual space, but it's an internal solution which requires coding. Unfortunately, you aren't likely to go through that sort of trouble just to stop tripping balls walking in Second Life forests or watching your hair disappear on the dancefloor.

So there is a simpler solution, but it's still a trade off in quality:
  • Open your Debug Settings in Advanced Menu (CTRL ALT D)
  • In Debug, type in RenderFastAlpha
  • Set this to TRUE


What this does is switch to a different type of alpha rendering known as Clip Alpha, whereas the default for Second Life is actually Alpha + Sort. The trade-off is that the transparency of the textures will gain a hard edge and likely not blend as well as the default (soft blending), so you will have some pixelation and jagged edges on them. The upshot of this method is that the items in question are likely to snap into proper depth calculation immediately and render in proper Z-order.

Is this a perfect solution? No, by no means. But it should at least keep those trees in proper order in the meantime.

The internal method for fixing this would be to apply a Gaussian blur shader to the Clip Alpha result in order to smooth the edges. As for invisible prims (used to hide parts of your body while wearing items), the solution is Viewer 2.0's ability to use alpha-transparent textures on your skin, making parts invisible, versus wrapping alpha-transparent objects around the body parts to hide them. As far as flexi hair goes, however... that's a horse of a different color.

Friday, July 9, 2010

implementing a new eclipse remote control command

Introduction


eclipse remote control is an eclipse plugin which lets you execute commands in eclipse remotely. Right now it's limited to a very small number of commands: currently you can open a file and launch a build command. Commands are launched via the java client application.



Implement a command


To create a new command you have to implement a few classes within eclipse remote control, so first check out the eclipse remote control source from the github repository: git://github.com/marook/eclipse-remote-control.git



Implementing a new command requires the following steps:


  1. Implement a communication class. This class contains the data which is sent from the eclipse remote control client to the eclipse remote control plugin in the eclipse IDE.

  2. Extend the eclipse remote control client. You need to parse the client's command line arguments and create an instance of your communication class.

  3. Implement a command runner. The command runner contains the actual work which is performed when the command is executed within eclipse.




Implement communication class


The communication classes are located in the com.github.marook.eclipse_remote_control.command project. Add a new java class to the com.github.marook.eclipse_remote_control.command.command package. Your new command class must extend the abstract Command class from the same package.



The communication class must set a unique ID. This ID is used to identify commands in the eclipse remote control plugin. The unique ID is passed to the Command constructor.



The communication class contains all the information which is sent from the eclipse remote control client to the eclipse remote control plugin, so it needs to contain fields for all transferred information. You also have to add getter and setter methods for all the fields.



All communication classes implement the Serializable interface. Make sure your command class and the command class's fields meet the Serializable requirements.
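
As an illustration, here is what a communication class for a hypothetical "run configuration" command might look like. The class name, ID string and field are made up for this example (and I'm assuming the unique ID passed to the Command constructor is a string), but it follows the rules above: it extends Command, passes a unique ID to the constructor, is serializable and exposes getters and setters.

package com.github.marook.eclipse_remote_control.command.command;

// Hypothetical example only: the class name, ID string and field are made up
// to illustrate the rules described above.
public class RunConfigurationCommand extends Command {

    private static final long serialVersionUID = 1L;

    private String configurationName;

    public RunConfigurationCommand() {
        // unique ID which identifies this command in the plugin
        super("run_configuration");
    }

    public String getConfigurationName() {
        return configurationName;
    }

    public void setConfigurationName(final String configurationName) {
        this.configurationName = configurationName;
    }
}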



Extend eclipse remote control client


The client is implemented in the com.github.marook.eclipse_remote_control.client project. The client creates command objects from command line arguments and sends them to the eclipse remote control plugin. To create and send your command class you have to add the parse and send code to the com.github.marook.eclipse_remote_control.client.Client class's main method. The following listing is an example of the parse and send code for the open file command:



if("open_file".equals(command)){
if(args.length < 2){
printUsage(System.err);

System.exit(1);

return;
}

final OpenFileCommand cmd = new OpenFileCommand();
cmd.setFileName(args[1]);

fireCommand(cmd);
}
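
Continuing the hypothetical run_configuration example from above, the corresponding parse and send code would look roughly like this (the command name and class are still just illustrations):

if ("run_configuration".equals(command)) {
    if (args.length < 2) {
        printUsage(System.err);
        System.exit(1);
        return;
    }

    // hypothetical command class from the example above
    final RunConfigurationCommand cmd = new RunConfigurationCommand();
    cmd.setConfigurationName(args[1]);

    fireCommand(cmd);
}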



Implement command runner


Here comes the actual work. You have to implement a command runner which executes the command within the eclipse instance. All command runners are implemented within the com.github.marook.eclipse_remote_control.run project. Create a new command runner class in the com.github.marook.eclipse_remote_control.run.runner.impl.simple.atom package. All command runners must implement the ICommandRunner interface from the same project. For your convenience you should use the AbstractAtomCommandRunner superclass for your command.



The command's work is implemented in the command runner's internalExecute(...) method. This method is specified by the ICommandRunner interface.



Finally, you must register the command runner in the SimpleCommandRunner class. Add a putAtomRunner method call to the static block in the SimpleCommandRunner class. Right now this static block contains only two registrations:



static {
    putAtomRunner(new OpenFileCommandRunner());
    putAtomRunner(new ExternalToolsCommandRunner());
}



Basically that's all you need to do for a new command. If you need more information, check out the eclipse remote control source code and read the source of the existing commands. I think that's the best way to get started.

Thursday, July 8, 2010

Tokaspt, an excellent real-time path tracing app



Just stumbled upon this very impressive CUDA-based path tracer: http://code.google.com/p/tokaspt/ for the exe and source. (The app itself has been available since January 2009.)

Although the scenes are quite simple (spheres and planes only), it's extremely fast and it converges to a high-quality image in literally a matter of milliseconds. Navigation is as close to real-time as it gets. There are 4 different scenes to choose from (load a scene with F9) and they can be modified at will: parameters are sphere size, color, emissive properties, 3 material BRDFs (diffuse (matte), specular (mirror) and refractive (glass)) and sphere position. Path trace depth and spppp (samples per pixel per pass) can also be altered on the fly thanks to the very convenient GUI with sliders. When you move around and ghosting artefacts appear, press the "reset acc" button to clear the accumulation buffer and get a clean image. Definitely worth messing around with!

Wednesday, July 7, 2010

New version of Brigade path tracer

Follow the link in this post to download. There are some new features plus a performance increase. Rename cudart.dll to cudart32_31_9.dll to make it work.

The next image demonstrates some of the exceptional strengths of using path tracing:

- indirect lighting with color bleeding: notice that every surface facing down (yellow arrows) picks up a slightly greenish glow due to indirect light bouncing off the floor plane (this picture uses path trace depth 6)
- soft shadows
- indirect shadows
- contact shadows (ambient occlusion)
- superb anti-aliasing
- depth of field
- natural looking light with gradual changes

All of these contribute to the photorealistic look, and it's all interactive (on a high-end CPU+GPU)!

Saturday, July 3, 2010

Friday, July 2, 2010

Brigade path tracer comparison

The following screenshots are taken from the Brigade real-time path tracer demo, available at http://igad.nhtv.nl/~bikker/
Rendered with CPU only at resolution 832x512
Images with 100 and 800 spp were taken without frame averaging (only 1 iteration)
Images with 2, 8, 16 and 32 spp were taken with frame averaging (averaging the samples of several frames)
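
For anyone wondering what frame averaging means in practice, it's just a running average of the per-frame sample buffers. Here's a minimal Java sketch of the idea (my own illustration, not Brigade code):

import java.util.Arrays;

// Minimal illustration of frame averaging (progressive accumulation): each new
// low-spp frame is averaged into an accumulation buffer, so the displayed image
// converges over time as long as the camera doesn't move.
public class FrameAverager {

    private final float[] accum; // running sum of all frames so far (RGB)
    private int frameCount = 0;

    public FrameAverager(int width, int height) {
        accum = new float[width * height * 3];
    }

    // newFrame holds this frame's RGB samples (e.g. 2 spp worth of path tracing).
    // Returns the averaged image to display.
    public float[] addFrame(float[] newFrame) {
        frameCount++;
        float[] display = new float[accum.length];
        for (int i = 0; i < accum.length; i++) {
            accum[i] += newFrame[i];
            display[i] = accum[i] / frameCount;
        }
        return display;
    }

    // Call when the camera moves, otherwise ghosting artefacts appear.
    public void reset() {
        Arrays.fill(accum, 0f);
        frameCount = 0;
    }

    public static void main(String[] args) {
        FrameAverager avg = new FrameAverager(1, 1);
        avg.addFrame(new float[] {1f, 0f, 0f});
        float[] shown = avg.addFrame(new float[] {0f, 0f, 1f});
        System.out.printf("Displayed pixel after 2 frames: %.2f %.2f %.2f%n",
                shown[0], shown[1], shown[2]);
    }
}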

2 spp


8 spp


16 spp


32 spp


100 spp


800 spp



To top it off, one big image comparing 800, 8, 16 and 32 spp. It amazes me that the quality of just 8 samples is already great and with some filtering it could rival the quality of the 800 spp image:

Thursday, July 1, 2010

Gaikai's cloud business model: play games for free in your browser

Interview with Gaikai's Dave Perry on Joystiq:
http://www.joystiq.com/2010/06/30/dave-perry-on-the-innovation-of-gaikai/

Gaikai focuses on delivering game demos, not complete games: you see an advertisement for a game on a website (could be Gamespot, EA.com, Eurogamer) or read a game review, and with just one click you can play a demo of that game in your browser without paying a cent. The game publisher pays for your playing time, at 1 cent per minute per user.
This approach is economically safer, more practical and more retail/publisher/gamer friendly than what OnLive is doing. With the current network infrastructure of the internet and its bandwidth limitations, this is probably the most successful route for cloud gaming. Gaikai has already signed EA (http://games.venturebeat.com/2010/06/17/gaikai-signs-ea-as-digital-distribution-partner/), so I think cloud gaming is gonna get big pretty soon.

UPDATE: another video of OnLive, showing mouse latency in F.E.A.R. 2 behind a router: http://www.youtube.com/watch?v=Edf5xsqST90