Everything posted by HolmstN

  1. First off, a link to a nice lil' tutorial; the beginning does a pretty good job of explaining RTT: http://rbwhitaker.wikidot.com/render-to-texture

     Really, this is a weighty question, as it depends entirely on implementation and circumstance. For example, what's being rendered at one time? We know ArmA has a draw distance (or has had in the past), so it's safe to assume we're not rendering outside of that. Chances are, though, there's drawing being done all the way out to that mark. That's a lot, but systems can handle it nowadays (especially with more advanced techniques like LOD, level of detail).

     Another question is, what sort of 'angle' are we drawing at? Is it a full sphere around your character for that entire distance? Usually games will draw what's nearby behind you in case you turn quickly: you don't want your computer to have to suddenly load a gun in your face behind you! The distance may be minimal there, though, as well as to the sides. This would create a sort of three-dimensional teardrop of rendering space. Now, I'm not TOTALLY familiar with this stuff and I'm definitely getting out of my territory here, so anyone with more experience, please correct me.

     With all that to consider, the final point is this: if you're rendering an entirely NEW scene from across the island, it'll be far more costly than simply re-presenting the scene that's already rendered with some post-processing effects. Therefore, all this: Could be feasible. Remember, though, that these are all technically new scenes being rendered, and I'm not sure whether you can actually share objects between rendered scenes. If you wanted a command post showing camera feeds from all across the island, though, that would be a much greater strain on the system.
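To make the cost argument concrete, here's a toy sketch (not real engine code; every name and the cost model are made up for illustration): each render-to-texture surface is effectively an extra full scene pass per frame, so distant feeds roughly multiply the per-frame work rather than adding a small constant.

```python
# Hypothetical cost sketch: each RTT camera re-renders the scene.

def render_scene(camera, draw_distance):
    """Stand-in for a full scene pass; returns a made-up cost
    figure that grows superlinearly with draw distance."""
    return int(draw_distance ** 1.5)

def frame_cost(main_distance, rtt_cameras):
    # The player's main view is always rendered once...
    cost = render_scene("player", main_distance)
    # ...and every render-to-texture surface is a whole extra pass.
    for cam, dist in rtt_cameras:
        cost += render_scene(cam, dist)
    return cost

print(frame_cost(1000, []))                 # main view only
print(frame_cost(1000, [("drone", 1000)]))  # a far feed ~doubles the work
print(frame_cost(1000, [("mirror", 100)]))  # a short-range feed adds little
```

The numbers are meaningless in absolute terms; the point is the shape: a monitor showing a nearby, short-draw-distance camera is cheap next to the main view, while a feed from across the island is close to rendering the game twice.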
  2. In as simple terms as I might dare to put it, the difference between rendering and texturing is this: rendering is a dynamic process your computer undertakes to build the 3D world (the models) and apply 2D textures to that world (amongst other things). The textures themselves are essentially static paint. When you look at a piece of art, it is, of course, stationary. This is the texture. No matter how you twist and turn the MODEL, the paint doesn't move (in relation to said model).

     Render to texture actually changes that. Essentially, the 'hard' part of rendering (think of it as if you were building a model car) is turned into one of these static pictures, refreshed every frame. Suddenly you get a moving picture. This hardly describes the full process, unfortunately, but perhaps it illuminates some of the underlying aspects of how powerful such technology can be. It can be used almost like television within a game: you have one thing 'recording' and sending its view as a texture to an output surface.

     In many games you can be placed into the role of a different 'camera' (such as when you launch a remote recon drone in ArmA2), but what you're really doing is transferring the 'spirit' (if you will) of your character to this alternate 'character.' No one else can see what you're seeing except by also 'transferring' their spirit. With render to texture, you could instead have a camera monitor showing exactly what that recon drone is seeing without ever having to switch to some new view.
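The 'television within a game' idea above can be shown with a tiny software sketch (purely illustrative, no real graphics API; all names are hypothetical): the first pass draws a 'scene' into an offscreen buffer, and the second pass treats that buffer like any ordinary static texture when drawing the monitor. The monitor never knows its pixels came from a live camera.

```python
# Minimal software illustration of render-to-texture.

WIDTH, HEIGHT = 8, 8

def render_offscreen():
    """Pass 1: render a checkerboard 'scene' into an offscreen
    buffer instead of the screen. This buffer is the texture."""
    return [[(x + y) % 2 for x in range(WIDTH)] for y in range(HEIGHT)]

def draw_monitor(texture):
    """Pass 2: the in-game monitor quad simply samples the texture,
    exactly as it would sample static painted-on artwork."""
    return [[texture[v][u] for u in range(WIDTH)] for v in range(HEIGHT)]

# One frame: camera renders, monitor displays the result.
screen_region = draw_monitor(render_offscreen())
```

Re-running pass 1 each frame with a moving camera is what turns the 'static paint' into a live feed; the monitor-drawing code never changes. In real engines the offscreen buffer lives on the GPU (e.g. a framebuffer attachment or render target) rather than in a Python list.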