The following article and video show that even the most demanding games of today can be played on any web-enabled device:
http://www.techcrunch.com/2009/06/22/exclusive-otoy-goes-mobile-turns-your-cell-phone-into-a-powerful-gaming-rig/
A portable game platform with dedicated game controls, such as a PSP connected via WiFi to a 3G phone, seems more comfortable and mobile than the setup in the video. In my opinion this is just a proof of concept, and I think the real crowd-pleaser will be a photorealistic virtual world with LightStaged characters.
Sunday, June 21, 2009
Angry Vending Machines
The only time I've had an altercation with a vending machine was when my packet of chips got caught in the hoop mechanism and I had to hop on a train quickly. I shook the machine. I glared at the machine. I gave up on the machine.
Vending Machines of the world: 1 Tammy: 0
Friday, June 19, 2009
iCecream
Thursday, June 18, 2009
Green fingers
Two weeks ago I planted a packet of seeds for a Venus Fly Trap. They were tiny little black things smaller than the grains of fine sand they were packed with. I didn't hold out much hope though. When my brother tried to grow one, the whole packet of seeds failed to sprout.
So the pot has been sitting on my kitchen windowsill in a plastic bag, in a tray of water for a fortnight. And I've just had a look at the compost through the bag. Looks like there is some green fuzz and a single green bud! It's tiny (about a millimetre) but it's poking up above the soil. In another two to four weeks it should be ready to come out of the bag. Maybe even to re-pot, who knows.
I'm actually quite excited about it.
There will be pictures once it's out of the bag.
Tuesday, June 16, 2009
2 New OTOY videos
Two new videos showing the server side rendering in action and giving some specifics about the lag:
http://www.techcrunch.com/2009/06/16/videos-otoy-in-action-you-have-to-see-this/
The first video shows a guy playing Left 4 Dead and Crysis on his HD TV, hooked up to his laptop, which is connected to the OTOY server over broadband. He can switch instantly between the two games while playing, which is very impressive. According to the TechCrunch article, EA is partnering with OTOY.
The second video shows GTA4 being played in a web browser while running in the cloud. According to the tester, there is some lag, but it's very playable. Personally, I think GTA4 is not the best game to show off server side rendering because it runs on a terribly unoptimized, buggy and laggy engine. There's no way you can tell if the lag is due to the crappy engine or to the connection. Unfortunately, there's no info about the geographical distance between the player and the cloud. What it does show is that a GTA game with LightStage quality characters and environments could definitely be possible and playable when rendered in the cloud. In fact, I asked Jules this very question yesterday and he confirmed to me that it was indeed possible.
update: Many people cannot believe that OTOY can render so many instances on a single GPU. I checked my notes and, as Jules explained it to me, he can run 10 instances of a high-end game (like Crysis) and up to 100 instances of a low-end game per GPU. The GPU has a lot of "idle", unused resources in between rendering frames for the same instance, and OTOY efficiently uses this idle time to render extra instances. The games shown in the videos (Crysis, Left 4 Dead, GTA IV) are of course traditionally rendered. When using voxel ray tracing, OTOY scales even better.
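To make the idle-time argument a bit more concrete, here is a minimal back-of-envelope sketch (my own illustration, not OTOY's actual scheduler, and the millisecond figures are assumptions) of how many instances fit on one GPU when each instance only occupies it for a fraction of every frame interval:

```python
# Hypothetical illustration of packing game instances onto one GPU by
# filling the idle time between frames. All numbers are made-up examples.

FRAME_BUDGET_MS = 33.3  # one frame interval at ~30 fps

def instances_per_gpu(gpu_ms_per_frame: float) -> int:
    """How many instances fit if each one only needs the GPU for
    gpu_ms_per_frame milliseconds out of every frame interval."""
    return int(FRAME_BUDGET_MS // gpu_ms_per_frame)

print(instances_per_gpu(3.0))  # a demanding title needing ~3 ms of GPU work per frame
print(instances_per_gpu(0.3))  # a lightweight title needing ~0.3 ms per frame
```

The point of the sketch is simply that a title which only keeps the GPU busy for a few milliseconds per frame leaves most of the frame interval free for other instances.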
OTOY can switch between rasterizing and voxel raycasting because it uses a point cloud as input. Depending on the complexity of the scene, one is faster than the other. The scorpion demo, for example (the Bug Snuff movie), was first rendered as voxels, but rasterizing it turned out to be faster. The Ruby demo from last year was completely raytraced (the voxel rendering is not limited to raycasting, but uses shadow rays and reflection rays as well, so it could be considered true raytracing).
A quantum leap of faith
Believe it or not, yesterday I was on the phone with Jules Urbach, the man himself behind OTOY and LightStage (I guess writing a blog does pay off ;-). He had offered me the opportunity to talk a bit about OTOY, LightStage, the Fusion Render Cloud and where things are heading. It was my first interview ever and I was just firing off one question after another. Too bad I was bloody nervous (I haven’t been that nervous since my last oral exam). Due to my nervousness and my limited understanding of graphics programming, I didn’t absorb everything he said, but I think I’ve got the bottom line. He answered a lot of my OTOY-related technical questions (unfortunately OTOY isn’t open source, so obviously he couldn’t answer every single one of them) and offered me some insight into the cloud computing idea. What follows is my own interpretation of the information Jules gave me.
Just a couple of weeks ago, I was still wondering what the technical specifications of the next generation of consoles would be like. But after yesterday… frankly I don’t give a damn anymore. The promise of OTOY and server side rendering is even bigger than I initially thought. In fact it’s huge, and that’s probably an understatement. In one interview, Jules said that it “is comparable to other major evolutions of film: sound, color, cinemascope, 70mm, THX, stereoscopic 3D, IMAX, and the like.” I think it’s even bigger than that, and it has the potential to shake up and “transform” the entire video game industry.
Server side rendering opens up possibilities for game developers that are really hard to wrap your head around. Every game developer has learned to work inside the limitations of the hardware (memory, polygon and texture budgets, limited numbers of lights, dynamic objects, scene size and so on). These budgets only double in size every 12 to 18 months. Now imagine that artists and level designers could make use of unlimited computational resources and no longer have to worry about technical budgets. They can make the scene as big as they want, with extreme detail (procedurally generated at the finest level) and with as much lighting information and as many texture layers as they desire. That’s exactly what server side rendering combined with OTOY’s voxel ray tracing might offer. It requires a shift in the minds of game developers and game publishers that could be considered a quantum leap of faith. The only limitation is their imagination (besides time and money of course), and anything that you see in offline rendered CG could be possible in real-time. Jules is also working on tools to facilitate the creation of 3D environments and to keep development budgets reasonable. One of those tools is a portable LightStage, which is (as far as I understood) a cut-down version of the normal LightStage that can be mounted on a moving car and can capture whole streets and cities and convert them into a 3D point cloud. It’s much better than LIDAR, because it captures lighting and texture information as well. Extremely cool if it works.
Because the server keeps the whole game scene in memory and because of the way the voxel ray tracing works, OTOY and the render cloud can scale very easily to tens of thousands of users. Depending on the resolution, OTOY can run 10 to 100 instances of a game scene on one GPU, and you can interconnect an unlimited number of GPUs.
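As a rough, hedged back-of-envelope (the concurrent-player counts below are my own assumptions; only the 10 to 100 instances per GPU figure comes from Jules), the size of the required GPU fleet is just a division:

```python
# Back-of-envelope fleet sizing, assuming 10-100 game instances per GPU
# as quoted above. The concurrent-player figures are made-up examples.
import math

def gpus_needed(concurrent_players: int, instances_per_gpu: int) -> int:
    """GPUs required to give every concurrent player their own instance."""
    return math.ceil(concurrent_players / instances_per_gpu)

print(gpus_needed(50_000, 10))   # demanding game: 5,000 GPUs
print(gpus_needed(50_000, 100))  # lightweight game: 500 GPUs
```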
The best thing about the server side rendering idea is that everyone wins: IHVs, ISVs, game publishers and, most importantly, the gamers themselves (for a number of reasons I talked about in one of my previous posts).
In conclusion, I guess every PC gamer has dreamt at some point about a monster PC with terabytes of RAM and thousands of GPUs working together, with a million unified shaders combined. Until recently, no one in their right mind would build such a monster, because economically it makes no sense to spend a huge load of cash on the development of a game that would make full use of such enormous horsepower but could only be played by one person at a time. But with the rapid spread of broadband internet access, suddenly a whole lot of people are able to play on that monster PC, and it becomes economically viable to make such an extremely high quality game. I think OTOY will be the first to achieve this goal. Following the increasing trend of office applications being run in the cloud, server side rendering is going to be the next step in the evolution of the video game industry, and it will make “client-side hardware” look like an outdated concept. Jules told me he thinks that in the future the best looking games will be rendered server side and that there’s no way expensive local hardware (on the client side) will be able to compete. I for one can’t wait to see what OTOY brings in the near future.
Monday, June 8, 2009
Alco Frog!
My cartoons often have an educational undertone. You learn something every day. But not here... probably at school or, like... watching docos on TV.
Sunday, June 7, 2009
Community Service Announcement
Saturday, June 6, 2009
Has it been 1 year already?
The Ruby city demo, the very reason why I started this blog, was first shown to the world on June 4, 2008. Check the date on this YouTube video: http://www.youtube.com/watch?v=zq1KTtU8loM
One full year, I cannot believe it. AMD has released every Ruby demo to the public well within a year after the introduction of the hardware. It began with Ruby Double Cross on the Radeon X800, then Ruby Dangerous Curves on the X850, followed by Ruby The Assassin on the X1800, and finally Ruby Whiteout on the HD 2900. So it made perfect sense that the Voxelized Ruby would come out within a few months after the unveiling of the Radeon 4870. Even Dave Baumann said on the Beyond3D forum that the demo would be released for the public to play with.
So what went wrong? Did ATI decide to hold back the demo or was it Jules Urbach? I think the initial plan was to release the demo at some point, but the voxel technology was not finished and took longer to develop than expected. To enjoy the demo at a reasonable framerate, it had to be run on two 4870 cards or on the dual-GPU 4870 X2, so only the very high end of the consumer market would be able to run it. The previous Ruby demos were made by RhinoFX, and this was the first time that OTOY made a Ruby demo. Either way, if AMD is making another Ruby demo (with or without OTOY, but I prefer with), it has to look better than the last one, and they had better release it within a reasonable amount of time.
Something else that crossed my mind: OTOY is now being used to create a virtual world community (Liveplace/Cityspace), and I think OTOY's technology would be a perfect match for PlayStation Home. Virtual world games are much more tolerant of lag than fast-paced shooters or racers, and I think that even a lag of 500 milliseconds would be doable. Imagine you're playing a game on your PS3. Once you're done playing, you automatically end up in Home, rendered in the cloud. Sony has trademarked PS Cloud (http://www.edge-online.com/news/sony-trademarks-ps-cloud), and I wouldn't be surprised if Sony moved the rendering for PS Home from the client to the server side sooner or later.
Friday, June 5, 2009
A possible faster-than-light solution for the latency problem?
Interesting article: http://spectrum.ieee.org/computing/networks/game-developers-see-promise-in-cloud-computing-but-some-are-skeptical/0
I have totally embraced the cloud computing idea. I hope OTOY and OnLive can pull it off and create a paradigm shift from client to cloud rendering. The main problem seems to be lag. Apart from the extra lag introduced by encoding/decoding the video stream at the server/client side respectively, which should not be more than a couple of milliseconds, there is lag due to the time the input/video signal needs to travel the distance between client and server, which can amount to several tens to hundreds of milliseconds. This is because information cannot travel faster than the speed of light (via photons or electromagnetic waves). Quantum physicists have discovered ways to do quantum computing and quantum encryption at 10,000 times the speed of light, but they all agreed that it was not possible to send information faster than lightspeed, because they could not control the contents of quantum-entangled photons. But very recently, Graeme Smith, a researcher at IBM, has proposed a way to "transmit large amounts of quantum information", described in the following paper, published in February 2009:
http://domino.research.ibm.com/comm/research_people.nsf/pages/graemesm.Main.html/$FILE/NonPrivate.pdf
http://tech.slashdot.org/article.pl?sid=08/08/06/0043220
If his theory holds true, IBM or someone else could make a computer peripheral based on quantum technology (sort of like an Apple AirPort) that can communicate large amounts of data instantaneously! Distance between client and cloud would no longer be a problem and transmission lag would be non-existent! It would make playing server side rendered games an almost lag-free experience and the ultimate alternative to costly, power-hungry consoles.
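To put the distance-related part of the lag into perspective, here is a minimal sketch of the round-trip propagation delay under today's physics (my own numbers; it assumes signals travel through fibre at roughly two thirds of the vacuum speed of light and ignores routing and queueing overhead):

```python
# Round-trip propagation delay between player and data center, ignoring
# routing/queueing overhead. Assumes light in fibre travels at ~2/3 c.

SPEED_OF_LIGHT_KM_PER_S = 300_000
FIBRE_FACTOR = 0.67

def round_trip_ms(distance_km: float) -> float:
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_PER_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1000  # convert to milliseconds

print(round(round_trip_ms(100), 1))    # same region: ~1 ms
print(round(round_trip_ms(2_000), 1))  # cross-country: ~20 ms
print(round(round_trip_ms(9_000), 1))  # intercontinental: ~90 ms
```

Which is exactly why the missing information about the geographical distance in the GTA4 video matters: at intercontinental distances the wire alone eats a big chunk of the latency budget.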
Thursday, June 4, 2009
Module 2 - Blogs
There are lots of excellent blogs out there, Chris Betcher's 'Betchablog' is one of my favourites, particularly for eLearning (read about mobile phones in the classroom): http://betch.edublogs.org/
I also think there will be a lot of great blogs to come out of this course.
Blogs are particularly good for journalling with students, e.g. for a TAS project.
Wednesday, June 3, 2009
xtranormal Animation
Ok so I'm not following the order again but I found xtranormal and thought it was great! Here's my first attempt.
Tuesday, June 2, 2009
Network Specifications Finalized
After a tough few weeks of mapping network protocols, we have finally come to a consensus on the underlying backbone of Andromeda3D. A layered protocol working in concert with various technologies, A3D will be able to handle the "crowd" aspect of a virtual environment in a cost-effective manner. As opposed to a purely centralized server approach, we're working with a hybrid approach that will be hard-pressed to collapse under the weight of large-scale simultaneous users.
Our approach employs peer-to-peer redundancy to ensure data stability across the virtual universe, as well as an intelligent P2P network to relay user information to those who are closest in 3D space. Database systems will also be handled using a novel and highly effective distributed database technology, so that no one server in the network is ever overwhelmed with requests.
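As a simple illustration of the "closest in 3D space" relay idea (a sketch only, not our production protocol; the neighbour count k and the example coordinates are arbitrary), peers can be ranked by Euclidean distance from a user's avatar:

```python
# Illustrative sketch only, not the production A3D relay protocol:
# pick the k peers whose avatars are closest to a given user in 3D space.
import math

def nearest_peers(me, peers, k=8):
    """Return the k peer positions closest to `me`; positions are (x, y, z)."""
    return sorted(peers, key=lambda p: math.dist(me, p))[:k]

# Relay this user's position updates only to the 3 nearest neighbours.
others = [(10, 0, 0), (2, 1, 0), (50, 20, 5), (3, 3, 3)]
print(nearest_peers((0, 0, 0), others, k=3))
```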
Combined, these technologies, which comprise the underlying backbone infrastructure of A3D, will give us a system in which the more simultaneous users there are, the stronger and faster the virtual universe becomes overall, while central bandwidth use is reduced over time.
This is in complete opposition to the technologies employed today by Linden Lab, Active Worlds, There and other virtual world platforms, whose approach involves adding more central servers to handle the load.
In order to ensure privacy, we are also applying top-level encryption to files prior to sending them to the network cloud. While the files in question will be represented only by 128kb file blocks, which are not directly tied to a single file on a 1:1 basis, we will be employing data encryption to supplement our security measures and further protect intellectual property from prying eyes.
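As a rough sketch of the block-plus-encryption idea (illustrative only: the cipher, key handling and file name below are placeholders, not our final scheme), a file can be split into fixed-size blocks and each block encrypted locally before anything is uploaded:

```python
# Illustrative only: split a file into fixed-size blocks and encrypt each
# block locally before upload. Uses the third-party `cryptography` package
# (pip install cryptography) as a stand-in cipher, not our final scheme.
from cryptography.fernet import Fernet

BLOCK_SIZE = 128 * 1024  # 128 KB blocks

def encrypt_blocks(path, key):
    cipher = Fernet(key)
    blocks = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            blocks.append(cipher.encrypt(chunk))  # each block encrypted separately
    return blocks

key = Fernet.generate_key()                   # stays on the user's machine
blocks = encrypt_blocks("avatar.mesh", key)   # placeholder file name
print(f"{len(blocks)} encrypted blocks ready for the network cloud")
```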
The level of complexity involved will be a challenge for us overall, but we are confident that by following this network protocol specification roadmap we will be able to take the expectation for a virtual environment from a few hundred simultaneous users at a single virtual location to orders of magnitude more.
Expectations are high for our choices, and we are constantly looking to improve and amend our systems to include better approaches. As it stands, we have the basis for a very solid hybrid decentralized network, and possibly a breakthrough for the industry.
The capacity threshold for a single location (served reliably and with adequate frame rates) is expected to be approximately 5,000-10,000 simultaneous users, though we are being highly conservative with our estimates until we can fully stress test the system and make adjustments.
In theory, the hybrid decentralized system should handle upwards of 500,000 simultaneous users at a single location (and a few billion users simultaneously logged in across the universe). We believe our novel approach will once and for all shatter the glass ceiling that has been an ongoing barrier for virtual world environments using the standard centralized approach, and allow us to finally serve the rest of the world as a new gold standard for virtual world implementation.
Our planning, in terms of release testing, will follow a strict guideline and will include only the ability to build using an Object Path repository for worlds. At a later time we will upgrade the Andromeda system with the ability to offer personal inventory space for premium users. The underlying backbone for the system already supports a personal inventory space, but in order to properly stabilize inventories, we will need to have a central storage capacity per user as an anchor point.
As we progress with implementation, we will be releasing tests for our active beta group to evaluate, and we look forward to milestone 1 of A3D Beta.