Octane Render, the ultra-fast unbiased GPU renderer (made in Belgium, just like me :-)), will soon introduce a new MLT-like (Metropolis light transport) algorithm, which will make rendering certain difficult scenes with small light sources much more efficient: such scenes will converge much faster, with less noise, and fireflies (bright pixels caused by long paths from reflective caustics) will be killed off.
MLT is the base rendering algorithm used by unbiased CPU renderers like LuxRender, Maxwell Render, Fryrender, Indigo Renderer and Kerkythea.
Many thought it was impossible to make Metropolis light transport (or an equivalent) work on current GPUs; this was one of the main criticisms from GPU rendering skeptics such as Luxology (Modo) and Next Limit (Maxwell Render), who believe that GPUs can only do dumb, inefficient path tracing and nothing more. Luckily there's Octane Render to prove them wrong. The fact that it has taken the developer such a long time to make it work shows how tricky it is to develop. To my knowledge, Octane Render is currently also the only GPU renderer that will use a more sophisticated rendering algorithm.
On a side note, ERPT (energy redistribution path tracing) is also possible on the GPU, as described in one of my previous posts. It combines the advantages of Monte Carlo path tracing and Metropolis light transport to achieve faster convergence with less noise, and can produce fantastic results that look indistinguishable from the path-traced reference (see http://raytracey.blogspot.com/2010/09/small-update-on-brigade-real-time-path.html). Timo Aila, a graphics researcher at Nvidia and GPU ray tracing genius, is also working on real-time Metropolis light transport (http://research.nvidia.com/users/timo-aila).
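To give a feel for the core idea behind Metropolis-style sampling (a toy sketch of my own, not Octane's or Brigade's actual code), here is a minimal Metropolis-Hastings loop in Python. Instead of drawing each sample independently, as a plain path tracer does, it mutates the current sample and accepts the mutation with probability given by the ratio of brightness values, so samples automatically concentrate where the contribution is high, e.g. around a small light source:

```python
import random

# Toy stand-in for a light path's "brightness". A real MLT/ERPT kernel
# evaluates the light transport integrand for a complete path; this 1D
# function just has a narrow bright peak, analogous to a small light
# source that independent path samples would rarely find.
def brightness(x):
    return 1.0 if 0.49 < x < 0.51 else 0.01

def metropolis_samples(n, mutation_size=0.05):
    x = random.random()                     # arbitrary starting sample
    fx = brightness(x)
    samples = []
    for _ in range(n):
        # Propose a small mutation of the current sample.
        y = (x + random.uniform(-mutation_size, mutation_size)) % 1.0
        fy = brightness(y)
        # Accept with probability min(1, f(y)/f(x)), else keep x.
        if random.random() < min(1.0, fy / fx):
            x, fx = y, fy
        samples.append(x)
    return samples

samples = metropolis_samples(100000)
hits = sum(1 for s in samples if 0.49 < s < 0.51)
print(f"fraction of samples in the bright region: {hits / len(samples):.2f}")
```

A uniform sampler would land in that 2%-wide bright region about 2% of the time; the Metropolis chain spends roughly two thirds of its samples there, because its stationary distribution is proportional to the brightness. Concentrating work where the contribution is large is exactly what tames the noise and fireflies in scenes with small light sources.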
Octane's MLT-like algorithm has been hinted at by its developer since the software was unveiled in January 2010, and it should be here very soon (within a couple of weeks; this post will be updated when that happens). I'm very curious to see the first results.
Future GPU architectures, like Kepler and Maxwell, should make the implementation of MLT-like algorithms on the GPU much easier, but it's nice to see at least one developer trying to squeeze the maximum out of current GPUs, bending their compute capability until it breaks.
Nintendo has finally announced the price of the 3DS and, as expected, it's $300.
If you don't know what the 3DS is, it's Nintendo's newest handheld, the successor to the DS. As the name implies, the 3DS uses 3D technology, and the best part is that you don't need those special glasses for the 3D effect.
And that's not all! It now has an analog stick (and about time), a home button, and a wider upper screen (that's the one with the 3D). And if you don't want to use 3D, that's fine: there's a slider that deactivates the technology.
Now, if all of that wasn't enough, many companies (like Konami and Capcom) are bringing hit series to the 3DS, including a Legend of Zelda: Ocarina of Time remake, a Metal Gear Solid 3 remake, the new Street Fighter, and many other titles!
What amazes me most in all of this is that their stock price has now dropped, big time!
Let's just hope nothing bad happens to our company!
At GTC 2010, Nvidia announced their future GPUs named Kepler and Maxwell. One of the more interesting quotes: "Between now and Maxwell, we will introduce virtual memory, pre-emption, enhance the ability of the GPU to autonomously process, so that it's non-blocking of the CPU, not waiting for the CPU, relies less on the transfer overheads that we see today. These will take GPU computing to the next level, along with a very large speed up in performance," said Jen-Hsun Huang.
Pre-emption was already revealed in a slide from a presentation by Tony Tomasi at Nvision08 (http://www.pcper.com/article.php?aid=611), depicting a timeline showing pre-emption, full support for function pointers, C++, etc.:
The part about "the ability of the GPU to autonomously process, so that it's non-blocking of the CPU, not waiting for the CPU, relies less on the transfer overheads that we see today" is very interesting and suggests the incorporation of CPU cores on the GPU, as shown in a slide from an Nvidia presentation at SC09 (http://www.nvidia.com/content/GTC/documents/SC09_Dally.pdf):
We all know that Intel and AMD are looking at merging CPU cores and GPUs on the same die. In my mind, the future is hybrid computing, where different kinds of processors work together, each finding the kind of tasks it is best suited for. Currently, multi-core CPUs and many-core GPUs work together, with tasks distributed by software schedulers: data-parallel tasks are assigned to GPUs and task-parallel jobs to CPUs. However, communication between these two kinds of processors is the performance bottleneck. I hope NVIDIA can provide a solution on their desktop GPU product line too.
Bill Dally:
That's exactly right. The future is heterogeneous computing in which we use CPUs (which are optimized for single-thread performance) for the latency sensitive portions of jobs, and GPUs (which are optimized for throughput per unit energy and cost) for the parallel portions of jobs. The GPUs can handle both the data parallel and the task parallel portions of jobs better than CPUs because they are more efficient. The CPUs are only needed for the latency sensitive portions of jobs - the serial portions and critical sections.
Do you believe a time will come when the GPU and CPU are on the same chip or board? It seems the logical next step to avoid the huge PCI-E latency and allow better GPU-CPU interactivity. I know there is ongoing research in this area already, but what is your personal opinion on the possibility and benefits of this?
Bill Dally:
Our Tegra processors already combine CPUs and a GPU on a single chip. For interactivity what's important is not the integration but rather having a shared memory space and low latency synchronization between the two types of cores. I don't see convergence between latency-optimized cores and throughput optimized cores. The techniques used to optimize for latency and throughput are very different and in conflict. We will ultimately have a single chip with many (thousands) of throughput cores and a few latency-optimized cores so we can handle both types of code.
From the above slide, Nvidia expects to have 16 CPU cores on the GPU by 2017. Extrapolating backwards from that, you would get:
- 2017: GPU with 16 CPU cores
- 2015: GPU with 8 CPU cores
- 2013: Maxwell with 4 CPU cores
- 2011: Kepler with 2 CPU cores
My bet is that Kepler will have at least one and probably two (ARM-based) CPU cores, and Maxwell will probably have four CPU cores on the GPU. The inclusion of true CPU cores on the GPU will make today's CPU-GPU bandwidth problem obsolete and will enable smarter ray tracing algorithms like Metropolis light transport and bidirectional path tracing on the GPU. Biased rendering methods such as photon mapping and irradiance caching will be easier to implement. It will also give a tremendous performance boost to the (re)building of acceleration structures and to ray tracing of dynamic geometry, which will no longer depend on the slow PCIe bus. Apart from ray tracing, most other general computation tasks will also benefit greatly. I think this CPU/GPU combo chip will be Nvidia's answer to AMD's oft-delayed Fusion and Intel's Sandy Bridge.
About twenty years ago, Chip Morningstar and F. Randall Farmer described the then-strange idea that virtual environments of the future could no longer be centrally hosted, and that a method of decentralization would need to be employed if we were ever to break the glass ceiling of bandwidth and concurrent users. I am referring, of course, to the classic paper Lessons Learned From Lucasfilm's Habitat. Later, though, something even more remarkable happened in the budding industry.
It was not the evolution of the virtual environment, but by some strange twist of fate, quite the opposite. One could argue that the graphics and complexity of the software have indeed been in an upward evolution since the early 1990s, but there is an underlying issue (or more appropriately multiple issues) which continue to nag and plague systems even to this day.
In the race for virtual environment dominance, the lessons learned from Habitat have been cast aside as a footnote in history. Instead, we see the continued creation of server architectures that are not only decidedly centralized, but more so than their predecessors. This is in stark contrast to the warnings and valuable insights given to us by the early pioneers.
Fast forward to the end of 2010, and we see datacenters running night and day to accommodate not only concurrent users, but now also the pre-processing and streaming of the virtual environment to those users. It is little wonder that the growing complexity of these environments is overwhelming the bandwidth limitations and the back-end. In this manner, the industry has a metaphorical memory leak, which is to say it simply forgot those lessons and acts truly bewildered and perplexed when the scenarios described by those lessons suddenly manifest.
As for Second Life, there also exists a literal memory leak to go along with the metaphorical one. Viewer 2, for all the good it brings, is currently crippled by an unfixed memory leak which leads to the viewer suddenly, and without warning, quitting.
When we think of a program quitting, we usually imagine a dialog saying “Do you really wish to quit this program?” with a handy Yes or No button. In more severe cases, we would at least expect a sudden quit to bring up a crash logger. Here, in most instances, that is not the case.
What Viewer 2 suffers from is rushed and error-prone programming. With the layoff of 30% of Linden Lab's employees, those who are left are possibly ill-equipped to sort through the Viewer 2 source code and make timely corrections. Add the accelerated schedule of agile development to that scenario, and we begin to understand that many of these crippling issues will remain unfixed for the foreseeable future.
As for the viewer's literal memory leak: whenever it manifests, it shuts the entire viewer down without notice. There are no dialogs. There is no crash logger (unless you have enabled Debug OpenGL), and there are no answers. The viewer will simply quit.
I reported something similar in the JIRA a number of months ago concerning minimized chiclet conversations and a possible memory leak which resulted in the same scenario. The longer they remained idle in the bottom bar, the higher the risk that clicking them to restore the IM window would crash the viewer in a force-quit scenario.
Months later, the memory leak seems to have found another outlet in the viewer, except this time it isn't tied to instances of instant messages but simply to using the viewer for too long. Looking into various JIRA posts on related issues, some have tied this memory leak to the viewer ignoring the VRAM limit, causing it to fault and crash.
Whatever the reason, I still use the Viewer 2 codebase, though I do so a bit more sparingly. When I absolutely need reliability, I usually opt for Phoenix. When I need shadows and lighting, I turn to Kirsten's Viewer S20. In that mix of installed viewers I also have the latest Second Life Snowstorm development build, as well as the standard Viewer 2.0.
Will I abandon Viewer 2 as a result of this memory leak? No more than I would abandon Second Life itself for having a metaphorical memory leak when it comes to lessons of the past.
I've just realised (OK, not just realised, but have known for a while and finally decided to accept) that I'm a Geek Girl. Girl Geek, if I'm not allowed to use the term Geek Girl (even though it's just two words put together to describe someone of the female gender who is into geeky things!)
Signs that I know make me a female nerd/geek:
Comics and manga: the art, the storylines, superheroes.. etc. I've loved them since I was young, when my brother and I would buy comics at the market on Sundays. I own Supernatural comics and the Japanese Star Wars manga.
Sci-fi TV and movies, but also any TV show or movie worth obsessing over: the sci-fi love comes from the Star Trek my Dad used to watch constantly, and also from my love of Star Wars. Cliché geek, maybe.. but it's a childhood movie love thing that I can't get rid of.
Collectibles and figurines: Currently just Star Wars and Mickey Mouse collectibles but slowly getting into autographed posters and figurines. Lost lots of my anime collectibles after moving states a few times.
Blogging and internet addiction: I've been blogging since I was 18 and have been coding/making layouts for websites since I was 14. HTML is my only 2nd language. I can speak a little Japanese though. Also an avatar and animation maker back in the day. Coined the phrase: "Muffins are just ugly cupcakes". No joke.
Gaming skill: Old school Nintendo fangirl who likes games from Mario Kart to Final Fantasy, yet I also play first-person shooters (COD: WAW) and various other games. Used to go under the username Zerg Goddess on Starcraft Battle.net. I don't have one genre I prefer.. which messes with my brain.
Nerdling: Took double Math back in year 10, completed all the computer assignments early in the year, took up 3 extra classes and set up experiments to work out how to reduce water pollution for fun.. Major nerd.
OTOY will also make use of CUDA in the future, which is great news!!! Hopefully this will speed up adoption of the technology by a factor of 10 to 50x ;-)
UPDATE: here's the full PR release:
OTOY to Present Enterprise Cloud Platform at NVIDIA GPU Technology Conference
OTOY will unveil its Enterprise Cloud platform at the GPU Technology Conference this week. The platform is designed to enable developers to leverage NVIDIA CUDA, PhysX and Optix technologies through the cloud.
Santa Clara, CA (PRWEB) September 23, 2010
OTOY announced that it will unveil its Enterprise Cloud platform at the GPU Technology Conference this week. The platform is designed to enable developers to leverage NVIDIA CUDA, PhysX and Optix technologies through the cloud. OTOY's proprietary ORBX GPU codec will enable high performance 3D applications to render on a web server and instantly stream to any thin client.
OTOY is participating in the GTC “Emerging Companies Summit,” a two-day event for developers, entrepreneurs, venture capitalists, industry analysts and other professionals.
OTOY Enterprise Cloud platform

The OTOY Enterprise Cloud platform sandboxes an application or virtual machine image without degrading or limiting GPU performance. CUDA-powered applications, such as Adobe's Creative Suite 5, will be able to take full advantage of GPU acceleration while streaming from a virtual OTOY session.
OTOY bringing GPGPU to the browser

In addition to supporting CUDA through its server platform, OTOY's 4k web plug-in adds CUDA- and OpenCL-compliant scripting across all major web browsers, including Internet Explorer, Mozilla Firefox, Google Chrome, Apple Safari and Opera. GPU web applets that cannot run locally are executed and rendered using OTOY server-side rendering. This ensures that GPU web applets can be viewed on any client, including HTML 4 browsers.
Next generation rendering tools coming to developers

OTOY enables server hosted game engines to render LightStage assets and leverage distributed GPU ray-tracing or path-tracing in the cloud. The OTOY Enterprise Cloud platform can host complete game engine SDKs, making game deployment to Facebook or other web portals simple and instantaneous.
OTOY will add native support for CryEngine content in 2011, starting with Avatar Reality's Blue Mars. Blue Mars is the first virtual world built using the Crytek engine. It is currently in beta testing on the OTOY platform.
About OTOY

OTOY is a leading developer of innovative software solutions for GPU and CPU hardware, as well as a provider of convergence technologies for the video game and film industries. OTOY works with a wide range of movie studios, game developers, hardware vendors and technology companies.
OTOY integrated in CryEngine and supporting distributed GPU ray tracing and path tracing in the cloud!! The dream of real-time ray traced or even path traced games is getting closer every day! I do hope that OTOY will deliver on this dream first; they have all the right technology and partners now.
For some reason, my Mac freezes randomly during online activities... I wonder what's wrong. One thing's for sure though: it only happens when I'm online! Is this a driver-related issue? Or a Boot Camp issue? I wonder...
I tried updating the Broadcom wireless card drivers and it made no difference: it still freezes randomly. One thing that worries me is that it's starting to happen more and more... Is this Apple's way of making me switch to Mac OS X?
If you, like me, would rather use Windows on your Mac, then I have a good selection of tools for you!
Let's start with Lubbo's MacBook Pro Fan Control! This tool allows you to control the fans on your MacBook Pro (it's also compatible with the MacBook). Mind you though, these fans are REALLY loud at maximum speed!
Now, we all know that due to Apple's sleek designs, these Macs don't have an HDD LED, so I have a solution for that too! Diskled is an app that sits in your system tray and blinks according to your hard drive activity!
And last but not least, ever wanted to get more horsepower out of your graphics card? There's an app for that too (provided that you, like me, have an Nvidia GeForce 320M), and it's called EVGA Precision. It lets you overclock your GPU very easily with absolutely no hassle! If you want more information about your card, use GPU-Z!
My best stuff is usually drawn late at night. This came out of nowhere. It's what happens at the booger factory (your nose) near the water cooler. Boogers like to gossip.
Talking about boogers.. I think I'm coming down with that flu everyone else has been getting. I'm always late on the biggest trends.
I can't remember when I drew the above.. but it came from me thinking about silly string and why there isn't a serious version. You know, the weird stuff that comes out of a can. I have to buy some and do the Tom Hanks blow-my-nose-but-silly-string-comes-out thing.
Anyway the reason I drew it is because I like how things are just called SILLY string or CRAZY straws. Like these things have a personality or that the words silly and crazy are now adjectives on how we can describe something. Oh she has mad crazy hair. His muscles are silly awesome. OK maybe that last one won't catch on but you get what I mean.
I think I'm more a cross between Silly String, where I am all over the place and dryer lint, I'm random but in a fuzzy organized way that you have to deal with (and throw away/make a jumper out of). I made all that up. I have no idea what I'm like so ignore this late night written post.
I'm going to the Royal Show on Monday and I'm more excited about it than my kids. That's what I am! I'm a big kid :D that describes me to a tea. Tee. t-shirt?
So anyway, I gave Mac OS X a try, and to be honest, I hate it. It seems too bloated to me, and I just don't get along with it. I don't like the way it works: all apps are just an icon, they don't have their own folders, it's all inside them. iTunes... Meh, I had enough of it from my iPod days. I hate it; as soon as I dragged my songs onto it, it started copying them to the hard drive! Without asking first!
Oh well... One thing I do like about Mac OS X is that it looks good, but that's been Apple's philosophy all along, to attract hipster customers. It's really easy to use, and there's always a Mac OS equivalent of lots of the apps you use!
But alas, I like the close button on the right, and a maximize/minimize button too! (What the hell does that "+" do anyway? I never got it! It somehow "maximizes" the window, but doesn't stretch it across the whole monitor.)
What apps did I use? iTunes for music; no matter how much I googled, that seemed to be the right choice (once you go foobar, you don't go back). Adium for MSN etc. (I still prefer Digsby; why? To be honest I don't know, maybe I just wasn't used to Adium yet, even though Digsby uses some things from Adium). uTorrent for torrents (amazingly, it's really well done, but I never got to check out the Tetris easter egg). And Dropbox, which I have to have on every computer I own. I think that's it.
So, to finish all of this..
Hi! I am a mac, and Windows 7 was completely MY choice!
The answer you are quoting from was very much a personal reflection. It was also an answer that highlights the degree of internal illogic that exists within me when I doubt.
If there is an other-than-me reflection in it at all, it would be towards those who profess Christian belief but fail to live like it. To the extent of my failings, I consider myself in that number.
I'm not sure if I would use the phrase "not being sensible" to describe those who doubt the Christian God in a way that is entirely in accord with their worldview and philosophical framework.
I'm not sure that there is one phrase that would demarcate those of a Christian worldview from those of atheistic worldview. For instance, I know of some atheists who have come to faith through the path of logic (i.e. they have applied "sense" within their own framework and reframed their conclusions about God), others along the path of moral conviction, others along the path of grappling with some form of revelation that rendered their previous worldview untenable. "Becoming sensible" certainly would not adequately or consistently describe these transitions.
I know some current atheists / holders of other religions who are quite "sensible" (in the sense of having a coherent internal framework). And there are others who are less sensible, in that they espouse one thing and live like another.
In other words, it's a mixed bag all round, and I don't know if the internal sensibleness of a worldview is a useful tool for demarcating the debate.
An interesting thought. And thinking about it I can see how some people's yearning for the afterlife is a variant of materialism.
But I generally tend to associate thoughts (or expectations) about the things after this life with the human passion not for "more things" but for "more knowledge" or "more understanding." In other words it wells up from the human trait of enquiry - to find pattern in chaos, meaning in mystery, to understand where things are not understood.
We have looked to the minuscule and the astronomic, the visible and the invisible - why would we stop that enquiry when it comes to the shape, purpose and finitude of human life?
In that sense I do not think it is wrong to want "more." While there is value in a sense of being content with "what is" - without the passion to look further, look beyond, a key driver of human activism for good grinds to a halt.
1. Silicone moulds are not better than metal ones. Sorry but they're not. I've never had a problem with small moulds for cupcakes but this one takes the cake. Literally. It took my cake to Fail town. Booked a motel room and made sweet sweet fail love to it.
2. Check my oven temp. It's obviously faulty. I think the cake was in there for an hour. So wrong.
3. Don't double a recipe ever again. My Math skills are failing. Either that or I can't bake anymore (nooooo!)
4. Cooked on the outside and gooey on the inside doesn't mean it's going to taste fantastic.. or be some cool rainbow lava cake invention I've come up with.
5. Don't attempt to mix cream with icing sugar simply because you can't be f**ked actually making decent frosting. Plus it's fattening. How dare you do that to your thighs.
6. Don't apply said frosting/icing on warm cake. Meltingggg melttting!
7. Sprinkles DO NOT make everything better. They make them more sprinkle-y. Pretty but a pretty disaster
8. Don't take a photo of the Fail Cake, post a blog about it and expect people to respect your baking skills anymore. Sniffle.
Seriously, it does taste OK, and the layer of cream and sprinkles is disguising the sheer failness of this cake.. but the kids don't mind. They think it's funny. That's what I was aiming for: a clown cake that's been in a car crash.. *sigh*
Added note: This cake tasted great. Looked like a cake version of playdoh though. Very brightly coloured. Kids loved it obviously.
I remember when I was five telling my mother I didn't believe in God. I'm not sure why. It was probably precociousness. It's the last time I remember doubting the _existence_ of God.
I remember toying with the idea in my teens. What would it be like if God wasn't real and I could live as a non-Christian? As a hormone-ravaged young lad, the initial preoccupation was the rampant amount of premarital sex I could (hypothetically) then have. But even then I realised that even that preoccupation would become meaningless if there were nothing else "under the sun" except what I could experience. And emotionally speaking, I teetered on the edge of having nothing to hold on to, nothing to refer to, nothing to guide, uphold, support, correct, or shape me. To be defined by and limited to... me, my own thoughts, my own experiences, my own stratagems and philosophies. It literally scared me.
The doubts I have now, when I have them, are usually associated with moments of depression - when my emotions have moved away from what is actually True (arguably a good definition for depression). But these doubts would not be about the existence of God, or his goodness - but of his ability to love me, save me, care for me, nurture me, to not turn his back on me or forget me. In other words, in times of depression, I have a tendency to forget the reality and extent of God's grace and embrace the self-centered notion that the love of God revealed in Christ is big enough for everybody except me.
As with all doubts of depression these doubts are irrational and somewhat nonsensical. These doubts are undermined by the truths of the Christian gospel.
So no, I don't doubt God's existence. And when I do doubt, I am not being sensible.
Jacco Bikker has released two new videos of the progress on his real-time path tracer named Brigade, demonstrating some kind of game where a truck has to push gold containers or something. Looks fun:
There is also an update from Dietger van Antwerpen on the GPU path tracer (a subsystem of the Brigade path tracer) running the more advanced ERPT (energy redistribution path tracing) algorithm. He has improved the ERPT code to produce virtually identical results to the path-traced reference and released a high-quality image made with it (ERPT on the left and path tracing on the right):
"After some complains pointing out that in the movie, ERPT is significantly darker then path tracing , I fixed the darkening effect of the ERPT image filter, solving the difference in lighting quality. I made an image ( http://img844.imageshack.us/img844/7... ) using ERPT for the left half, while using path tracing for the right half and waited until the path tracing noise almost vanished. As you can see, the lighting quality between the left and right half is pretty much the same. (The performance and convergence characteristics remain unchanged)"
It would be interesting to know the time for ERPT and for path tracing to achieve these results.
As the videos show, ERPT converges considerably faster than standard path tracing and the noise is significantly reduced. Very cool and very impressive. I wonder if the optimized ERPT code will be used in Brigade for real-time animations and games.
Thanks for the question, which I assume derives from an article on my blog ( http://god-s-will.blogspot.com/2010/09/asking-right-question-in-marriage.html ). Caveat: These are initial thoughts only.
The fundamental question to ask is whether or not we want marriage law to be _passive_ or _active_. The passive sense of law is to reflect society - to enact or provide a legal model that encapsulates societal reality and allows for legally guided (and bound) interactions between members of society according to those reflected norms. The active sense of law is to guide, shape or even control society - to provide rights, assert responsibilities, and enable punitive measures in order to modify behaviour or shape cultural norms.
FLOW OF THOUGHT #1 - We need something in the passive sense, to reflect society. --------------------------------------------------------------------------
The problem is that if we look at society I don't think this "something" is the Marriage Act. In particular, it is not the concept encapsulated in the Marriage Act that is the "solemnisation" of a marriage.
Solemnisation is not just about something being solemn or heartfelt. Legally speaking, we can consider it to be a "formality necessary to validate a deed, act, contract." I guess it's much like the settlement on a house: something happens when the keys are exchanged. It is not wrong, then, to think of a solemnified marriage as an enacted contract, in two senses:
a) A contract between the parties. Entering into marriage implies (as is recognised in law) a whole bunch of rights and responsibilities. These only usually come into play when a marriage ends (e.g. inheritance rights) or breaks down and where some form of reparation for obligations-not-met are required - alimony, custody of children, separation of assets etc.
b) A contract with society. Entering into marriage implies a legal state that is recognised and taken into account when it comes to affairs external to the couple - everything from taxation, social welfare, interaction with the education system, issues relating to privacy, issues relating to next-of-kin, and (topically for NSW at the moment) the adoption of children etc. - all take into account (to a greater or lesser extent) the existence, or not, of a marriage contract.
But solemnisation, legally speaking, is becoming more and more meaningless. For instance, the "common law" or "de facto" marriage is now pretty much taken as an implied contract even though it has never been "solemnified." This is true in both senses of the contract. As a contract between the parties, the implications of a relationship breakdown, financially and in terms of children etc., are now pretty much identical to those of "real" marriages. Similarly, as a contract with society, there is very little distinction made between solemnified and registered marriages and de facto situations.
To a lesser extent, the advent of "civil unions" or the ability in some jurisdictions to register a same-sex relationship, also provides the rights of the contract without the solemnisation of a marriage. This is particularly the case in the sense of the contract between the partners (shared property rights etc.), yet increasingly so in the sense of the contract with society (availability of the privilege to adopt etc.)
As the distinctiveness of solemnised marriage is reduced, so is its value.
Solemnisation alone, therefore, provides very few things, legally, that are not provided for by other means. Perhaps this is simplistic, but the only thing you can get via legally solemnised marriage that you can't get anywhere else is:
a) Convenience. Sign four or five pieces of paper and you have the legal recognition of your relationship in a few easy steps. More importantly: your relationship can be enacted by proclamation (we are now married) rather than by demonstration (we are cohabiting, so consider us married).

b) Cross-recognition. Generally speaking (and less uniquely now that there is provision for cross-recognition of civil unions), a legal marriage in one jurisdiction is recognised in another.

c) Symbolism. You get to refer to your relationship, unquestioningly, as a "marriage" and have the certificate to prove it.
And none of these things are inherent to any deeper concept of "marriage."
Personally, I would, for instance, and for some good theological reasons (for another time), define a marriage relationship as: a faithful, sexual, lifelong relationship between a man and a woman in a covenant freely entered before God, each other and the community. If any of those characteristics were not present a relationship would not easily be classified as a marriage in my thinking.
Legal solemnisation is not needed for any of these characteristics to exist. It is not even needed for a relationship with these characteristics to be legally recognised (although it is a possible way in which that legal recognition can occur).
So why have legal solemnisation at all? Let relationships be formed either by behaviour, or voiced intention, or religious rite, and then have them recognised as legal by registering them. Let the legal reality be a _recognition_ of the relationship rather than the creation of the relationship. Let marriage (defined by man-and-woman) be, legally, simply one form of recognised civil union (defined more broadly as the case may be, including non-sexual relationships).
After all, that is, in practice, what we have now. And if we are looking at representing reality, let us represent it.
Freedom can still be exercised. Ministers of Religion would, just like now, be able to lead people through religious rites - to solemnify spiritually - and exercise their conscience and religious freedom as to who they would do this for and who they wouldn't do it for. Relationships covenanted within those rites would be able to be registered and recognised legally. All is well.
The debate about what gets to be called "marriage" therefore becomes what it actually is - a cultural debate about definitions and nomenclature.
However,
FLOW OF THOUGHT #2 - Do we need something in the active sense, to shape society? --------------------------------------------------------------------------
Starting with my definition of the characteristics of marriage - a faithful, sexual, lifelong relationship between a man and a woman in a covenant freely entered before God, each other and the community. Is it possible to ensure that the legal representation of marriage reflects that definition?
Only partially, but substantially. Solemnisation can, at most, insist on the objective characteristics of a relationship: that it is a covenant freely entered before the civic community, and that it is between a "man and a woman."
The debate is about whether to relax this latter characteristic to "between two people." Some would even like to see the characteristic further liberalised to recognise polyamory - i.e. more than two people.
The fact that the law is resistant to change in this characterisation of marriage is itself a "shaping of society." The law is active. And there is value to that.
The problem is that it is only active in a shallow sense. If the legal affirmation of marriage extends only as far as solemnisation under the Marriage Act, then it does not extend very far, because solemnisation is of lessening practical effect (see the previous flow of thought). It confers fewer and fewer particular rights, and the choice not to seek legal solemnisation of a relationship carries less and less penalty.
Those who are intent on marriage law maintaining a particular objective definition of marriage need to not only argue for the retention of that definition but also consider the extent of its enforceability. There needs to be an increased discussion of how the law can actively assert that definition. The argument needs to be not just about what legal marriage _is_ but about what legal marriage _does_: what unique rights does it bestow? What things are unavailable to those who do not avail themselves of legal marriage? What penalties apply where a marriage covenant is broken?
The question becomes: where do we draw the line as to what the law should do?
Which is where I'll leave it - unanswered for now.
OK, so today I went to Mohawk College to get my process in motion. Yesterday I got an English assessment (which I passed with flying colors). The rest, though, wasn't as easy as I expected... So, allow me to show you my progress today:
- Enter J137 and get my results
- Be told to go to C112
- In C112, be told to go to J107 because I'm an international student
- J107 sends me back to J137 for another assessment
- Secondary assessment wasn't required; go to the library and start my Mocomo account
- Get to the library, open Mocomo, start my timetable: it doesn't work
- Back to J107, get redirected to C066
- C066 tells me I'm not registered in any program, sends me back to J107
Finally, they put me in networking until January, which is when Software Support starts...
A long day indeed. All of this could've been done more easily, without making me walk all day.
J137 could've simply redirected me to J107 and they could've taken care of everything, but oh well...
"a real-time massive model visualization engine called VoxLOD, which can handle data sets consisting of hundreds of millions of triangles.
It is based on ray casting/tracing and employs a voxel-based LOD framework. The original triangles and the voxels are stored in a compressed out-of-core data structure, so it’s possible to explore huge models that cannot be completely loaded into the system memory. Data is fetched asynchronously to hide the I/O latency. Currently, the renderer runs entirely on the CPU.
...
I’ve implemented shadows in VoxLOD, which has thus become a ray tracer. Of course, level-of-detail is applied to the shadow rays too.
While shadows make the rendered image a lot more realistic, the parts in shadow are completely flat, devoid of any details, thanks to the constant ambient light. One possible solution is ambient occlusion, but I wanted to go further: global illumination in real-time.
GI in VoxLOD is very experimental and unoptimized for now. It’s barely interactive: it runs at only 1-2 fps at 640×480 on my dual-core Core i7 notebook. Fortunately, there are lots of optimization opportunities. Now let’s see an example image:
Note that most of the scene is not directly lit, and color bleeding caused by indirect lighting is clearly visible. There are two light sources: a point light (the Sun) and a hemispherical one (the sky). I use Monte Carlo integration to compute the GI with one bounce of indirect lighting. Nothing is precomputed (except the massive model data structure of course).
I trace only two GI rays per pixel, and therefore, the resulting image must be heavily filtered in order to eliminate the extreme noise. While all the ray tracing is done on the CPU, the noise filter runs on the GPU and is implemented in CUDA. Since diffuse indirect lighting is quite low frequency, it is adequate to use low LODs for the GI rays."
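To illustrate what "two GI rays per pixel with one bounce of indirect lighting" means in practice, here is a rough Python sketch of the standard cosine-weighted Monte Carlo estimator (my own illustration under simplified assumptions; the trace() callback is hypothetical and stands in for VoxLOD's LOD-aware ray cast):

```python
import math, random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cosine_weighted_direction(normal):
    # Sample the hemisphere around `normal` with a pdf proportional to
    # cos(theta); for diffuse surfaces the cosine term then cancels out.
    r1, r2 = random.random(), random.random()
    phi, sin_t, cos_t = 2*math.pi*r1, math.sqrt(r2), math.sqrt(1.0 - r2)
    axis = (0.0, 1.0, 0.0) if abs(normal[0]) > 0.1 else (1.0, 0.0, 0.0)
    tangent = normalize(cross(axis, normal))
    bitangent = cross(normal, tangent)
    return tuple(tangent[i]*math.cos(phi)*sin_t +
                 bitangent[i]*math.sin(phi)*sin_t +
                 normal[i]*cos_t for i in range(3))

def indirect_light(hit_point, normal, trace, rays_per_pixel=2):
    # One bounce of diffuse indirect lighting. `trace(origin, direction)`
    # is assumed to return the RGB radiance seen along the ray; VoxLOD
    # would serve these rays from a low LOD, as the post explains.
    total = (0.0, 0.0, 0.0)
    for _ in range(rays_per_pixel):
        d = cosine_weighted_direction(normal)
        total = tuple(t + c for t, c in zip(total, trace(hit_point, d)))
    # With cosine-weighted sampling the estimator is simply the average
    # of the traced radiance values (times the diffuse albedo, omitted).
    return tuple(t / rays_per_pixel for t in total)
```

With only two samples this average is extremely noisy, which is why the result has to be filtered heavily, as described above; and since diffuse indirect lighting is low frequency, serving these rays from coarse LODs barely changes the answer.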
There is a very interesting graph in this paper, which shows that when using LOD, the cost of ray casting remains constant once a certain number of triangles (0.5M) is reached:
Quoting the paper,
"by using LODs, significantly higher frame rates can be achieved with minimal loss of image quality, because ray traversals are less deep, intersections with voxels are implicit, and memory accesses are more coherent. Furthermore, the LOD framework can also reduce the amount of aliasing artifacts, especially in case of highly tesselated models."
"Our LOD metric could be also used for several types of secondary rays, including shadow, ambient occlusion, and planar reflection rays. One drawback of this kind of metric is that it works only with secondary rays expressible as linear transformations. Because of this, refraction and non-planar reflection rays are not supported."
Implemented on the GPU, this tech could be the ideal solution for real-time raytracing in games:
- it makes heavy use of LOD for primary, shadow and GI rays, which greatly reduces their tracing cost (see the sketch after this list)
- LOD is generated automatically by the voxel data structure
- nearby geometry is represented by triangles, so there isn't any voxel blockiness on close inspection
- characters and other dynamic geometry can still be represented as triangles and as such avoid the difficulties with animating voxels
- huge immensely detailed levels are possible thanks to the out-of-core streaming of the voxels and triangles
- it uses Monte Carlo GI, which scales very easily (number of bounces + number of samples per pixel) and can be filtered, while still giving accurate results
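To make the LOD selection in the first point above concrete, here is a heavily simplified Python sketch, my own approximation rather than the paper's actual metric: descend the voxel hierarchy only until a voxel's size falls below the world-space footprint of one pixel at that distance:

```python
import math

def lod_level(distance, fov_y, image_height, root_voxel_size):
    # World-space size that one pixel covers at this distance
    # (for a perspective camera; assumes distance > 0).
    pixel_footprint = 2.0 * distance * math.tan(fov_y / 2.0) / image_height
    # Each LOD level halves the voxel size, so stop subdividing once a
    # voxel is roughly as small as the pixel it projects to.
    return max(0, math.ceil(math.log2(root_voxel_size / pixel_footprint)))

# Example: a 10 m root voxel seen from 50 m away, with a 60-degree
# vertical FOV at 480 pixels of vertical resolution:
print(lod_level(50.0, math.radians(60.0), 480, 10.0))  # ~7 levels
```

Because the subdivision depth depends on the projected pixel footprint rather than on the triangle count, the per-pixel cost stays flat once the model is dense enough, which matches the constant-cost graph mentioned above.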
I am really starting to hate Final Fantasy XIII. Why? I just spent two whole hours grinding my characters to see if I could kill this bloody boss, but guess what? Almost at the end, my party leader gets a Doom curse! A 2000-second countdown, bye bye, game over.
So, two hours of grinding, plus another one TRYING to defeat this mordor. GONE! Just like that!
In a recent blog post from Linden Lab, the team introduced Kim Salzer as their new Vice President of Marketing. Kim Salzer (Kim Linden) comes to SecondLife as a heavy hitter, her former position being at Activision Blizzard. The following is an excerpt from that blog post to better frame this conversation:
Linden Lab recently welcomed a new addition to its executive ranks: Kim Salzer, who became our Vice President of Marketing, at the beginning of August. Kim (known as Kim Linden inworld) joins us from Activision Blizzard, where she was Vice President of Global Brand Management for properties like Guitar Hero and Call of Duty 2, among others. Kim brings a deep background in gaming (including experience at Electronic Arts working on massively multiplayer online games and hit sports franchises), and in online learning, and we’re excited to have her as part of the team.
Q: What are your goals for your new position as VP Marketing at Linden Lab?
A: What I really want to do here is help the Lab figure out what the “X Factor” is going to be for Second Life. When I was working on games, I always tried to choose a single idea to focus on and bring out, an X Factor that helped people get into the game and helped them discover all the rest of the possibilities there. If I can help bring that kind of focus to Second Life, I’ll be happy about how I’m doing my job.
The topic of this post, as it would logically follow, is what SecondLife can offer as a "killer app" X-Factor when it comes to the next generation of marketing. This will likely be an ongoing quest for Kim Salzer, as she spends a few months acclimating to the nuances of SecondLife and the sociology of virtual environments. Unlike gaming, as we would see in World of Warcraft or Guitar Hero, there is a manner of fluid dynamics involved that goes far beyond what traditional approaches call for.
SecondLife is not a game, regardless of how many people wish to make that claim. It is true that the virtual environment has elements of gaming involved and that it can surely be used in the gaming aspect. For proof of this, we need only look at the vast amount of roleplaying communities, and Zyngo game halls. But for all of the gaming aspects, there is an underlying ability to be more than just a video game, and in many aspects, SecondLife transcends "just being a game" into something much more powerful and compelling.
SecondLife is a sandbox virtual environment, and I gladly make the distinction between this and just a video game. To say that SecondLife is merely a video game is the equivalent of saying that the Internet is a repository for pornography simply because it happens to have such pages. It's this sort of thinking which ignores the larger possibility that there are Art Galleries, Music, Video, Educational, and countless other interactive and beneficial aspects which in turn create what we have come to call Social Web.
SecondLife is a virtual environment system, and as such the underlying interface borrows heavily from gaming paradigms. There is the 3D engine, the concept of a virtual currency which is exchangeable to real world currency, there are avatars, and of course there are plenty of gaming elements within the virtual environment. But there are also thriving businesses, content creators, and service providers in this virtual environment as well.
Whether we are creating a fantasy-based sword for a roleplay system, a costume, virtual clothing, or even vehicles, this is a real-world-economy-based system. With this understanding, we begin to see that there is, indeed, a viable X-Factor, as Kim Salzer puts it. At this moment in time, I am not sure she understands what that X-Factor is, but she is on an active search to narrow it down and quantify it in terms that the rest of Linden Lab can act upon.
Unlike World of Warcraft, where the items in the virtual space are created by the developers and released to the players, an open sandbox system like SecondLife is much different as all of the "players" are given the tools needed to create not only the items they use in-world but the environment itself. This puts Linden Lab in an interesting position, as the company responsible for the environment system would tread on slippery slopes with their in-world userbase if they were to suddenly compete with them.
For the new Vice President of Marketing at Linden Lab, this surely will come as an interesting enigma to unravel in her search for the X-Factor.
In order to make her search a little easier, I'm going to take some time to define and outline what the X-Factor is, as well as how best to put it into practice. I understand that writing this may or may not be seen as stepping on toes, but after fifteen years in virtual environments, I have yet to see any company truly act on this. Today, however, Linden Lab and Kim Salzer are sitting on a golden opportunity to bring SecondLife into a new gilded age.
What is this X-Factor?
To put it bluntly, the x-factor is something that SecondLife's existing infrastructure can already accommodate. Linden Lab already has the Marketplace, countless high-end content creators in-world at their disposal, and above all else an overwhelming demand for this x-factor to be implemented.
A cursory look at the marketplace shows a high number of "copy-cat" items mimicking brand name merchandise. Of course, most of those cloned items are renamed ever so slightly in order to try to avoid legal action against their creators. Instead of Armani, which is known for high-end fashion and luxury, we have the SecondLife equivalent named Armidi. Instead of Apple computers we have Pear computers. Instead of Coca-Cola we see Coco-Cola derivatives.
There is a massive demand for true branded content in SecondLife, and this is the beginning stage of our x-factor.
As an in-world content creator, it is nearly, if not entirely, impossible to get permission from IP holders to (legally) introduce a product in SecondLife which is a clone of a real world product.
Major corporations oftentimes have a strict "No Solicitation" policy which puts up a brick wall between content creators and those companies. On the rare chance that a content creator pursues due diligence in asking the IP holders for proper permission, and submits the required forms to the legal or licensing departments, those applications go unnoticed and nearly always disappear into a black hole.
In many cases, it's not that content creators in-world haven't tried to get permission (and I'll give them the benefit of the doubt); it's that it is oftentimes not worth going through the hurdles and the labyrinth of phone calls, tracking down the ever-elusive point of contact, in order to stay on the straight and narrow. A majority of the time (and I mean 99%) you will be stonewalled, ignored, or patronized as they tell you that they appreciate your idea but have a no-solicitation policy.
Which is to say that you may submit your ideas, but not only won't they involve you, they are more than free to go ahead and implement them on their own, with no acknowledgment whatsoever of the person who gave them the idea to begin with. I've personally seen this happen a number of times over the years: a virtual world content creator tries to contact the licensing department of a company, only to be told that there is a no-solicitation policy, and later on (about a month or two) I've seen that same idea submitted internally with somebody else's name on it in the marketing department for approval. This also goes for the scenario whereby the licensing request and documentation are actually filled out in detail by the content creator, submitted, and subsequently rejected, only to have the company do exactly what the content creator submitted for approval later on. At best, this is disingenuous.
In this very hostile environment, why would any content creator in their right mind ever bother to appease the IP holders?
So we have a serious problem. It's not OK to create works that infringe on intellectual property, trademarks or rights. However, it's also not all right for companies to routinely ignore, reject, or patronize independent content creators. There is obviously a ridiculous demand for in-world branded products, but as of right now there is no supply.
There are those of us who submit the licensing and IP forms with full details and disclosure as to the nature of the project and the environment it will be available in, but those forms seem to find their way into a perpetual black hole after they are sent off. Take my own arcade machines: they are near-faithful replicas of the real machines (with some discrepancies), but they are all playable, point to the IP holders' official channels online, offer a history of those games, and most of all, I've submitted the licensing paperwork for all of them (where applicable).
So what happens after that paperwork is officially submitted?
Nothing.
If you attempt to verify one way or the other whether the request has been accepted or denied, you get the runaround, stonewalled, or perpetually delayed. They simply are not interested in dealing with an individual in any capacity and would much rather deal with a major corporation or game publisher instead. In the grand scheme of things, independent in-world content creators are just annoying gnats that those corporations swat away and patronize.
So we have items that are legally submitted through the appropriate channels, with paperwork, full details, the scope of the IP usage, etc., and the legality of their existence remains in limbo indefinitely. In the modern age, I take this to mean that those companies neither accept nor deny the existence of those items until such time as they find it beneficial to tell you to remove them, in order to make their own versions "officially".
In short, if those items are popular enough, those companies will suddenly find a reason to find that application and stamp it "denied", only to come in a month later and do exactly what you already were doing but "officially". Again, I have to ask why any content creator in a virtual environment like SecondLife has any incentive to attempt that due diligence process if it more than likely will lead to futility in the end?
However, this isn't about who is and who is not breaking what laws. As for my video demonstrations, and the reason why I personally create branded content in virtual environments, I maintain that it is purely an academic exercise. The Capri Sun juice pouch in my mouth? It's a personal item and not for sale. The Papa Johns pizza box? Again, it's a personal item and not for sale. The speakers in the back, however, are for sale and ridiculously cheap (75L) and the arcade machines are all for sale at 250L each or 1000L for the entire set.
The latter, (arcade machines), are sold at the exact price that was stated in the licensing paperwork that I filed with Namco and Atari games. In that paperwork, I described the purpose of the creation and sale of those items as an academic exercise concerning my upcoming book chapter about the same topic (Electronic Commerce and Virtual Worlds). It was also detailed in that paperwork that the purpose of those items was not to create an atmosphere of monetary gain, and to that point they would only be sold in-world for the equivalent of $2 or less, and a majority of that would be funneled back into promotion costs for those items. Any further profits from those items would simply act as payment for the time spent to create, script, design and promote those items continuously.
In short, I did all of the work to create what accounts for a viral marketing placement in a virtual environment on their behalf. They reap whatever rewards there are from such practices and placements in the virtual space, and it would cost them absolutely nothing in return. Where they would normally have paid quite a lot of money for something similar if they hired a marketing firm to do the same thing, in this instance they get all of the benefits and none of the costs.
Let's take a step back though. I did mention that magical phrase "viral marketing" in the last paragraph, and if you are a marketing company in real life you probably just read the last few paragraphs and realized that it's a veritable Fort Knox of opportunity.
When writing the book chapter, and for many years before that (back to 1998), I have outlined this exact same scenario time and again: real world brands placed within the virtual environment in a manner that facilitates viral marketing. This is where the Papa Johns pizza at Pixel Labs comes from; it originated with building a Papa Johns pizza building in ActiveWorlds in 1998 and making the online ordering option clickable in-world.
Since 1998, we've introduced a variation of that Papa Johns virtual item/location and online ordering in nearly every project we've built. The latest iteration in Second Life has not only in-world abilities but the built-in ability to allow people to actually order a real pizza via online ordering at Papa Johns. This is an example of the X-Factor in action, and I've known about it since the mid-1990's.
Possible Solutions
We've already taken time to establish what the x-factor is in this post, and we've also covered what the problems are which currently keep it from reaching full potential. Let's break down what we have, followed by what it is that is hindering it, and then we'll lay out the possible solution.
What we have:
Countless high quality content creators in-world
Fully functional and polished marketplace
Increasingly high demand for branded merchandise
What we're missing:
Reliable manner to license such IPs in a way that is beneficial to content creators, Linden Lab and the IP holder.
Supply of legally branded merchandise in-world
Possible Solution:
Linden Lab Marketing Dept. extends the opportunity to said IP holders to participate
A list of IP holders participating in a Verified Vendor program
Content creators in-world can submit proposals to fulfill IP holders' requests for in-world items.
Linden Lab and IP holders choose which content creators are considered the best match for the job.
Chosen content creators receive approval for the creation of those items on behalf of the IP holders. Those content creators are given Verified Vendor status for those items.
Linden Lab may charge the IP holders (companies) for the marketing.
Content creator is allowed to place the items created (after approval) on Marketplace set to a free value.
Each sale of the item (free) is considered a promotional sale, and the IP holder agrees to pay the price, which goes to the content creator, with a percentage to Linden Lab to cover costs.
Users get the branded item for free, content creator gets paid, IP holder gets viral marketing for cheap, Linden Lab receives a fee from IP holder for the marketing services.
Verified Vendor items are given full Marketplace enhancements as part of the marketing package, for the life of the contract with the IP holder. When the IP holder decides to end the involvement, the Verified Vendor may continue selling the items under the normal circumstances that any vendor in-world would be subject to: they are responsible for their own promotion, marketing, etc. They can no longer offer the item for free unless they pay the fee themselves.
It would be pointless for an IP holder to end involvement with a verified vendor project and also expect to pull the item completely off the market. This would lead to copy-cat items again to fill the void they've left. It would be better, instead, to allow the vendor to remain the verified vendor of that item and continue to sell the item for a nominal fee which they are paid from the buyers.
In this manner, we exploit every conceivable aspect of social environments for the benefit of absolutely all involved. The Verified Vendor program would be seen as a competition of sorts, with some of the best content creators competing for the right to create items for those IP holders. Real life brands would enter into the marketplace, filling a demand (and subsequently acting to lower the demand for illegitimate items), and IP holders would have a safe and cost effective way to create an ongoing presence in the virtual environment without the overhead of an entire region.
And so ends the explanation of the SecondLife X-Factor, and how to utilize it to the fullest advantage. I'm curious to see how long it takes for virtual environment companies to figure this out. I'm patient... I've waited about 15 years so far.
Better yet, spread this article, retweet it, and somebody send a copy to Kim Selzer.
Are there any plans to add fixed-function ray tracing hardware to the GPU?
David Luebke: Fixed-function ray tracing hardware: our group has definitely done research in this area to explore the "speed of light", but my sense at this time is that we would rather spend those transistors on improvements that benefit other irregular algorithms as well. Ray-triangle intersection maps well to the GPU already; it's basically a lot of dot products and cross products. Ray traversal through an acceleration structure is an interesting proxy for lots of irregular parallel computing workloads: there is abundant parallelism, but it is branchy and hard to predict. Steps like Fermi's cache and unified memory space are great examples of generic hardware improvements that benefit GPU ray tracing as well as many other workloads (data mining, tree traversal, collision detection, etc.).
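To see why intersection maps so well, here is a minimal ray-triangle test in the Möller-Trumbore style. This is just a sketch in plain C++ with hypothetical Vec3 helpers (a real renderer would use its own vector types), but it shows the inner loop really is little more than dot and cross products:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Möller-Trumbore: returns true and the hit distance t if the ray
// (orig + t * dir) intersects triangle (v0, v1, v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;      // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                   // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                 // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;                        // hit distance along the ray
    return t > EPS;
}

Every operation here is straight-line arithmetic with no unpredictable branching, which is exactly what GPUs are built for; it's the traversal of the acceleration structure around this test that is the hard, irregular part.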
When do you think real-time ray tracing of dynamic geometry will become practical for being used in games?
David Luebke:
Ray tracing in games: I think Jacopo Pantaleoni's "HLBVH" paper at High Performance Graphics this year will be looked back on as a watershed for ray tracing of dynamic content. He can sort 1M utterly dynamic triangles into a quality acceleration structure at real-time rates, and we think there's more headroom for improvement. So to answer your question, with techniques like these and continued advances in GPU ray traversal, I would expect heavy ray tracing of dynamic content to be possible in a generation or two.
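HLBVH itself is considerably more elaborate, but the core trick behind these fast builders is easy to sketch: quantize each triangle's centroid to a grid, interleave the coordinate bits into a Morton code, and sort, so that spatially nearby triangles end up adjacent in the sorted order and the tree can be emitted from it. A minimal C++ sketch of just the Morton-code step (function names are mine, not the paper's; the real builder does all of this in parallel on the GPU):

#include <cstdint>

// Spread the lower 10 bits of v so there are two zero bits between each bit.
static uint32_t expandBits(uint32_t v) {
    v = (v * 0x00010001u) & 0xFF0000FFu;
    v = (v * 0x00000101u) & 0x0F00F00Fu;
    v = (v * 0x00000011u) & 0xC30C30C3u;
    v = (v * 0x00000005u) & 0x49249249u;
    return v;
}

// 30-bit Morton code for a triangle centroid with coordinates
// normalized to [0, 1] within the scene's bounding box.
uint32_t mortonCode(float x, float y, float z) {
    auto quantize = [](float f) {
        if (f < 0.0f) f = 0.0f;
        if (f > 1.0f) f = 1.0f;
        return (uint32_t)(f * 1023.0f);          // 10 bits per axis
    };
    return (expandBits(quantize(x)) << 2) |
           (expandBits(quantize(y)) << 1) |
            expandBits(quantize(z));
}

// Sorting triangles by mortonCode(centroid) groups spatially nearby
// triangles together; an LBVH-style builder then emits its tree
// levels directly from this sorted order.

Because the per-triangle work is independent and the sort is a parallel radix sort, the whole build is a natural fit for the GPU, which is what makes real-time rebuilds of fully dynamic geometry plausible.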
Currently there is huge interest in high-quality ray tracing on the GPU, and the number of GPGPU renderers has exploded during the last year. At the same time, there are critics saying that GPU rendering is still not mature enough to be used in serious work, citing limitations such as insufficient memory, overly simple shaders, and the claim that you can only do brute-force path tracing on the GPU, which is very inefficient compared to the algorithms used in CPU renderers. What is your take on this? Do you think these limitations are going to be solved by future hardware or software improvements, and how soon can we expect them?
David Luebke:
Re offline renderers: I do think that GPU performance advantages are becoming too great for studios to ignore. You can definitely get way past simple path tracing. I know of a whole bunch of studios that are doing very deep dives. Stay tuned!
Do you think rasterization is still going to be used in 10 years?
David Luebke:
Re rasterization: yes, forward rasterization is a very energy-efficient way to solve single-center-of-projection problems (like pinhole cameras and point-light shadow maps), which continue to be important problems and subproblems in rendering. So I think these will stick around for at least another 10 years.
There have been a lot of papers at past graphics conferences about Reyes-style micropolygon rasterization and the feasibility of a hardware implementation. Do you think this is a good idea?
David Luebke:
Re micropolygons: I think all the work on upolys is incredibly interesting. I still have some reservations about whether upolys are REALLY the final answer to rendering in the future. They have many attractive attributes, like the fact that samples are glued to parametric space and thus have good temporal coherence, but they seem kind of... heavyweight to me. There may be simpler approaches.
Am I wrong in thinking that game graphics are limited more by the artists than by the graphics technology, or are game companies just trying to reach a broader market?
David Luebke:
You are not wrong! Game developers are limited by artists, absolutely. But this translates back to graphics: better graphics means simpler, more robust graphics that are easier to control and direct. A good example is cascaded shadow maps, used very widely in games today. These are notoriously finicky, and artists have to keep a lot of constraints in their head when designing levels etc. Looking forward, increased GPU performance and programmability combine to make simpler approaches, like ray tracing, practical. On the flip side, graphics is certainly not solved, and there are many effects we can't do at all in real time today, so you will continue to see games push forward on graphics innovation, new algorithms, and new ways to use the hardware.
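As an aside on why cascades are finicky: the developer must decide where along the view frustum each shadow-map cascade begins and ends, trading shadow resolution near the camera against coverage in the distance. A common compromise, sketched below in C++ under the usual "practical split scheme" (a hypothetical helper, not code from any particular engine), blends uniform and logarithmic splits:

#include <cmath>
#include <vector>

// Compute cascade boundaries between the near and far planes by blending
// uniform and logarithmic split schemes. lambda = 0 gives fully uniform
// splits, lambda = 1 fully logarithmic.
std::vector<float> cascadeSplits(float nearZ, float farZ,
                                 int numCascades, float lambda) {
    std::vector<float> splits(numCascades + 1);
    for (int i = 0; i <= numCascades; ++i) {
        float f   = (float)i / numCascades;
        float lin = nearZ + (farZ - nearZ) * f;            // uniform split
        float log = nearZ * std::pow(farZ / nearZ, f);     // logarithmic split
        splits[i] = lambda * log + (1.0f - lambda) * lin;  // blend of the two
    }
    return splits;
}

Even with a formula like this, artists still have to reason about which objects fall into which cascade and where the resolution boundaries will be visible, which is exactly the kind of mental overhead Luebke is pointing at.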
UPDATE: Some tech websites, such as xbitlabs, have taken the comment from the live chat about rasterization out of context, stating that we will have to wait at least another 10 years before ray tracing will be used in games. Apparently they didn't read David Luebke's answers very well. The way I understood it is that rasterization will be used in conjunction with ray tracing techniques, both integrated in novel rendering algorithms such as image space photon mapping. From the ISPM paper:
Image Space Photon Mapping (ISPM) rasterizes a light-space bounce map of emitted photons surviving initial-bounce Russian roulette sampling on a GPU. It then traces photons conventionally on the CPU. Traditional photon mapping estimates final radiance by gathering photons from a k-d tree. ISPM instead scatters indirect illumination by rasterizing an array of photon volumes. Each volume bounds a filter kernel based on the a priori probability density of each photon path. These two steps exploit the fact that initial path segments from point lights and final ones into a pinhole camera each have a common center of projection.
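For readers unfamiliar with the "Russian roulette sampling" mentioned in the quote: it is a standard unbiased termination trick in which each photon survives a bounce with some probability, and a survivor has its energy divided by that probability so the expected value is unchanged. A minimal C++ sketch (my own illustration, not the ISPM authors' code):

#include <random>

// Russian roulette: probabilistically terminate a photon while keeping
// the estimator unbiased by boosting the survivors' energy.
// Returns false if the photon is absorbed and the path ends.
bool russianRoulette(float& energy, float survivalProb, std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    if (uniform(rng) >= survivalProb)
        return false;              // photon killed, path terminated
    energy /= survivalProb;        // compensate so E[energy] is unchanged
    return true;
}

Applying this at the first bounce, as ISPM does, cheaply prunes the photon population before the more expensive tracing and rasterization stages.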
So ray tracing in games will definitely show up well within 10 years, and according to Luebke you can expect "heavy ray tracing of dynamic content to be possible in a generation or two". Considering that Fermi was a little behind schedule, I would expect Nvidia's next-generation GPU to arrive around March 2011 (on schedule), and the generation after that around September 2012. So only 2 years before real-time ray tracing is feasible in games ;-D.
Do you have an ATI or nVidia graphics card in your computer? Well, in case you didn't know, Steam has made an agreement with both ATI and nVidia, giving players free games depending on their card!
Anyways, here are the links:
- For ATI cards (Half Life 2 : Lost Coast, Half Life 2 : Deathmatch, plus various discounts)
- For nVidia Cards (Half Life 2 : Lost Coast, Half Life 2 : Deathmatch, Portal : First Slice, Peggle Extreme)
Again, these are free, so get them while you can! Because I sure did! :)
Exophase.com is holding a contest for a new design for their Gamercard. Here's their current gamercard:
What are the prizes? The prizes are the following Steam games:
BioShock 2, Grand Theft Auto IV, Left 4 Dead 2, Shatter, Half Life 2, Half Life 2: EP1, Indigo Prophecy, Thief, Team Fortress 2
Each of the 3 finalists gets 3 games, but here's the catch: whoever comes in first place chooses 3 games, after that the second contestant chooses another three, and the last gets the "left-overs".
I think it's fair.
How do I participate? Well, here are the rules: just join their forum and post your entry in this thread! Be sure to include a PSD and the fonts used in your entry.