With 3.2 you give up a few things, but you gain support for older OS X versions. By using OGL 2.x you reintroduce lots of legacy crap into your render code; I don't think it's worth the effort.
With 3.3, you're in theory supported by all D3D10-level hardware (except for Intel on Windows, because reasons), and it has a few nice things (sampler objects, explicit attribute locations, etc.). You also pretty much guarantee that if a card can run OGL 3.3, it's either new hardware or old hardware with new drivers (new drivers = good thing, especially if doing OGL). If you like Macs (and I know you do), go 3.2+; otherwise I'd leave it at 3.3+.

Right now, my engine only supports Direct3D 10 / OpenGL 3.x level hardware and above. Lastly, I personally wouldn't consider D3D10 or OpenGL 3 hardware current generation either. Requiring D3D11-level hardware for a visually modest game is ridiculous, but for a more extravagant title, I think supporting legacy hardware will only hold the game back (and if the person buying a computer wants to play extravagant titles, they should know better). But considering that D3D11 allows you to run on D3D9-level hardware, there isn't much loss there (unless you're targeting WinXP).

As Frob stated, consider who you're targeting. If you are making a 2D (or simplistic 3D) game, don't be an # (like a couple of game companies I have seen) and require OpenGL 4 / DirectX 11. Heck, there are still lower-end PCs being sold today that only support the earlier generation graphics standards.
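To make the 3.3 "nice things" mentioned above concrete, here's a minimal sketch of explicit attribute locations (GLSL 330) and sampler objects. The names are illustrative only, and it assumes a 3.3 core context plus a loader (GLEW here) are already set up:

```cpp
// Illustrative sketch of two OpenGL 3.3 conveniences; assumes a 3.3 core
// context and a loader (GLEW in this example) are initialized elsewhere.
#include <GL/glew.h>

// 1) Explicit attribute locations in GLSL 330: no glBindAttribLocation
//    bookkeeping on the C++ side, the shader states its own layout.
static const char* kVertexSrc = R"(
#version 330 core
layout(location = 0) in vec3 a_position;
layout(location = 1) in vec2 a_texcoord;
out vec2 v_texcoord;
void main() {
    v_texcoord  = a_texcoord;
    gl_Position = vec4(a_position, 1.0);
}
)";

// 2) Sampler objects: filtering/wrap state is decoupled from the texture,
//    so the same texture can be sampled differently on different units.
GLuint makeLinearSampler() {
    GLuint sampler = 0;
    glGenSamplers(1, &sampler);
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glBindSampler(0, sampler);  // applies to whatever texture is bound to unit 0
    return sampler;
}
```

Neither of these exists in core 3.2, which is part of the trade-off being discussed.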
You will be cutting off a ***LOT*** of folks if you just concentrate on "current generation". Believe it or not, only the more 'wealthy' folks upgrade their PCs every year.

That's especially true if the target demographic has a machine that can handle a more modern API. The last thing I want to do is hold my engine back with legacy stuff, because that's just more maintenance than I believe it's worth. I can choose between any of the supported APIs just by setting a single flag (roughly as sketched below). I've been tailoring my rendering engine to D3D and core OpenGL to minimize performance loss.
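As an illustration of the "single flag" idea, here is a hypothetical sketch. None of these names come from the poster's engine; they just show one common way to hide the backend choice behind an interface:

```cpp
// Hypothetical sketch: select a render backend with one flag.
#include <memory>
#include <stdexcept>

enum class RenderAPI { D3D11, D3D9, OpenGL33, OpenGL21 };

struct Renderer {
    virtual ~Renderer() = default;
    virtual void drawFrame() = 0;
};

// Each backend lives behind the same interface.
struct D3D11Renderer : Renderer { void drawFrame() override { /* ... */ } };
struct GL33Renderer  : Renderer { void drawFrame() override { /* ... */ } };

std::unique_ptr<Renderer> createRenderer(RenderAPI api) {
    switch (api) {
        case RenderAPI::D3D11:    return std::make_unique<D3D11Renderer>();
        case RenderAPI::OpenGL33: return std::make_unique<GL33Renderer>();
        default: throw std::runtime_error("backend not compiled in");
    }
}
```

The rest of the engine only ever talks to the `Renderer` interface, so switching APIs really is just changing that one value.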
As much as I don't like the API design (most likely because I don't understand the design choices), it's in "the now", and that's what I need to be targeting.

For me, DX10/DX11 was a big disappointment compared to what I had been hoping for. Although I also found D3D10/11 to be disappointing in some areas, I'm actually beginning to like certain things better than D3D9. You cannot use the GPU for wider computation.
In the end, I was expecting that in DX11 one would be able to define rasterization rules (not at all), or have the pixel function read pixel properties against them (not at all); instead DX11 added fancy AAA antialiasing (which would be amazingly doable if the above had been implemented). (Stalker, for example.) And finally, a rendering engine is just more attractive if it can run its demos on DX9/OGL 2.0 and have them be eye candy.
In my experience, if you switch some games from DX9 to DX10, you notice minimal graphical advance with an enormous performance drop. I do not sample textures in the vertex function, I do not render to vertex buffers, I do not alter the device coordinate of a drawn pixel (I do a different trick for that instead), and I could go on. I have not needed a higher shader model; I instead reformed my rendering to be more sane (no 25 interpolated attributes for the pixel function and such).

For me, sticking to standard routines and building with them seems to make for faster software, even if I need to perform redundant draws and targets, than issuing some "amazing" instruction of DX11. The only thing I may need to drop DX9 for in the future is the shader model 3.0 limit. I myself find D3D10+ a trappy library that points you toward techniques that usually (nearly all of them) seriously damage performance - if you actually use them.

Totally this, but if you do not need to give up any feature, go as low as it gets. Define your target market and the answer will follow.
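On the earlier point about not passing 25 interpolated attributes to the pixel function: one common way to keep interpolator counts down is to pack several small values into a few vec4 varyings. A hypothetical GLSL 330 sketch (the attribute names are made up for illustration, not taken from anyone's engine):

```cpp
// Hypothetical sketch: pack small per-vertex values into fewer interpolants
// instead of declaring one varying per value. Names are illustrative only.
static const char* kPackedVertexSrc = R"(
#version 330 core
layout(location = 0) in vec3  a_position;
layout(location = 1) in vec2  a_uv0;
layout(location = 2) in vec2  a_uv1;
layout(location = 3) in vec2  a_uv2;
layout(location = 4) in float a_fog;

uniform mat4 u_mvp;

// Two vec4 interpolants instead of four separate varyings.
out vec4 v_uv01;    // xy = UV set 0, zw = UV set 1
out vec4 v_uv2Fog;  // xy = UV set 2, z = fog factor, w = spare

void main() {
    v_uv01      = vec4(a_uv0, a_uv1);
    v_uv2Fog    = vec4(a_uv2, a_fog, 0.0);
    gl_Position = u_mvp * vec4(a_position, 1.0);
}
)";
```

The fragment shader then unpacks the components it needs, which keeps the shader well inside SM 3.0-era interpolator limits.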