.NET (C#) and the Arma 3 Game engine

I don't need to Google to realize that this is nonsense, but I'll quote for you the most accepted answer from one of the links you posted:

Wow, you're an even worse nutcase than I thought. That's the MSDN description, and it's even RIGHT THERE IN IT.

The runtime can use native images from the cache instead (of) using the just-in-time (JIT) compiler to compile the original assembly.

It just gets rid of the JIT compilation. The runtime is still required.

Even though it's possible to compile a C# app to native code, the app will still make calls into the .NET Framework, which will still use JIT compilation.
The runtime can use native images from the cache instead (of) using the just-in-time (JIT) compiler to compile the original assembly.
Wow, you're an even worse nutcase than I thought.

Still, it's you who is full of BS.


Point is, it's still dependent on the framework, regardless of whether you use NGEN or not.

Wow, you're an even worse nutcase than I thought. That's the MSDN description, and it's even RIGHT THERE IN IT.

It just gets rid of the JIT compilation. The runtime is still required.

Batto is right. The JIT compiles an assembly just once, and the result is reused until it's recompiled (or the application is restarted). Both languages work the same way - both have some runtime that executes a kind of Intermediate Language: it compiles the code and runs the binary.

Java's problem is that for UI it's crap - slow and sluggish - and I'm not sure how good it would be for Arma scripting.

Still, I think C# would be a better option than Java because of its simplicity. However, I agree that C#'s garbage collector is poor (causing, for example, stalls on web servers in some cases).

Regarding overflow checks in C++... I guess you haven't noticed that I was comparing overflow checks in C# against C++. What I said was that it's pretty easy to overwrite memory in C++ (using arrays, pointers, or whatever else) and crash the application - try that in C#.


MulleDK19, batto

+1 infraction each for flamebaiting each other. Now let's try to get back to a mature discussion, or we close this thread.


The original question is a bit silly, since no one is likely to rewrite a very large codebase in a new language, particularly when

1) Your development team has lots of experience in the old language

2) It's 'common wisdom' that the new language is unsuitable for your application.

That said, someone made a very good point that there is no reason why you couldn't benefit from a change in language for new code/subsystems. The debate about managed vs unmanaged performance will probably never be settled by anything as prosaic as facts but probably the best analysis I've read is here http://www.codeproject.com/Articles/212856/Head-to-head-benchmark-Csharp-vs-NET.

If you can't be bothered to read the whole thing the basic conclusion is that well-written C++ is faster (10-30%) than well-written C#. The real question is whether you're capable of writing sufficiently good code for it to matter and whether that part of your code is even performance-critical.

IMHO the most effective optimisations fall into two categories: either a very small tight loop which is executed repeatedly or a rethink of a high-level algorithm. Very rarely do you conclude that your program is just 'too slow' across the board. For a small loop, well you'd probably rewrite it in assembler anyway. At the other end of the scale I've found the support offered by C# and the .Net framework actually far more conducive to experimenting with alternative algorithms or data structures than when I used to use C++.

The other 'performance' complaint that is also raised is the non-determinism introduced by the process of garbage collection. It's certainly true that this can be an issue. OTOH it is possible to structure critical sections of C# code to avoid creating large numbers of 'generation 1' objects, which are the worst case for GC. It takes some effort to do this, but somewhat less than the thought required to do 'manual' memory deallocation in an unmanaged language. (BTW, it is actually considerably faster to _allocate_ objects in a managed language due to the compaction of free memory into a single block.)

I recently had the experience of starting a completely new development (a comms stack) which had reasonably strict real-time requirements (<4ms response time). This is the kind of application where it's "obvious" that you should use C++. We started from the position that we would implement the 'control' layers in C# for productivity reasons and call out to C++ DLLs where performance was critical. 8 months later we haven't had to fall back to C++, and my belief is that we have been considerably more productive with the language and framework than we would have been if we'd used C++. (And at least an order of magnitude more productive than the corresponding functionality we have had to implement in embedded C.)

On a separate point, it's no accident that Intellisense works well for C#. If you read or watch interviews with the language designers they frequently state that 'ease of development' is as important for a modern language as any other feature and they often tweak language features (such as 'async') with a view to how well Intellisense will work.

Edited by sbsmac

