shinRaiden
Everything posted by shinRaiden
-
Do an exhaustive background check on your system first. My experience is AMD + nForce2, but it can be applied to P4 setups as well. IIRC, not all AMD64s are unlocked, only the FX series. Places like www.nforcershq.com have extensive lists of locked and unlocked procs. I haven't looked into the rumors about unlocked mobile chips, but I think the variable-speed stuff was multiplier related, so that would make sense.

If you're looking at the non-PCI-e mobo market, bear in mind that the AGP bus is red-lining in terms of power load. 256mb cards (without aux power) can be especially unstable. You may not detect it immediately, however, as flat 2d video from your desktop draws significantly less power than a fullscreen 3d-accelerated app.

RAM can also be a problem. On some earlier nForce2 chips on certain mobos (including a couple of mine), memory quantities of 1gb or more can require more power than the chip is capable of routing. Components not getting enough power can show up as CTDs or CTRs (crash-to-reboots). Isolating them can be difficult unless you had just done major upgrades or overclocking. Sometimes marginal systems will appear stable thermally and in normal activity, but will mysteriously crash well into a high-load application. In my case, OFP was crashing 5 minutes into play, SOF:II at 30-45 minutes, and Age of Empires after about 2-3 hours, because power draw crept up as resource loads grew and components ran hotter.
-
Actually, it's all a matter of how much time the map editor wants to spend on the map, and what the minimum specs of the target system are. I ran a proof-of-concept map the other day with 1 200 000 trees on it, but it CTD'd when exiting the mission editor back to the main screen after I was done flying around. The max viewdistance for that density (~20 trees per cell on a 12.8km map) is about 1200m flying before you get weird clipping. FPS was not a problem, though I only had flat terrain. With forests like that you could drop the viewdistance down to ~800m on foot.
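
As a rough sanity check of that density, here's a back-of-the-envelope sketch (assuming the stock 50m terrain cell spacing and a square 12.8km island; both are assumptions, not measured from that particular map):

    # Back-of-the-envelope tree density check (assumed values, not measured)
    map_size_m = 12_800                          # assumed 12.8km x 12.8km island
    cell_size_m = 50                             # assumed default OFP terrain grid spacing
    total_trees = 1_200_000

    cells_per_side = map_size_m // cell_size_m   # 256
    total_cells = cells_per_side ** 2            # 65 536 cells
    trees_per_cell = total_trees / total_cells   # ~18.3, i.e. roughly 20 per cell
    print(cells_per_side, total_cells, round(trees_per_cell, 1))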
-
My bad, the recount (and election) just got certified this moment, with Republican candidate for Governor Dino Rossi being certified as "Governor-Elect" by 42 votes. It is widely expected that the Democratic Party will call for at least a partial recount. Currently claiming poverty, they say they do not have the $700 000+ to recount the entire state, so they plan to recount selected precincts and counties. If they do so, the Republican Party is prepared to request a challenge recount for all the other precincts and counties, to make sure "every vote counts". If the Democrats do initiate a challenge hand-recount, it is expected to run through the week of Christmas. If that challenge recount conflicts with the current recounted results, the Democrats will get their money back, and state law will automatically order a THIRD all-precinct statewide recount, which should conclude about the second week of January, when whoever ends up being Governor is scheduled to be inaugurated. The possibility is very good that the next Governor of Washington State may very well not know that he or she is going to be Governor until the day before they're to take the oath of office and start work.
-
Wall Street Journal - John Fund. The Secretary of State will certify the election on Wednesday, Dec. 1st, after which the results will be contested and a statewide hand-recount will run from Dec. 6th through the 20th. Ironic that voters in Ukraine are taking their election so much more seriously than the election here in Washington State. They have a (dubious, I know) margin of several hundred thousand, and we're at 42. Yet we have no rioting, no campouts, just an apathetic populace that says "hmmph, whatever" and flips channels.
-
Your DDOS adventures are not worth one precious bit of my newly acquired bandwidth... yess... preciousss... we wouldn't want that wouldz we... no... yes, bad... yess preciouss...
-
No, we already have too many broken designs, shackled hardware, and fuzzy (fizzy? ...) software. You're right there, though, in that stuff like Deathrack runs much better on a 286 and is unplayable on a 486 (too fast). I wish there was a suitable DE-accelerator so that the good old games would still be playable. Layered interfaces with space-pucks. Seriously, the transcendent logic of it all... Better yet, get tennis elbow from reaching out and 'touching' your computer using a modded PowerGlove. All compilers inevitably lead back to assembly, which need not be nearly as binary in logic as the data it manipulates. But to see the sea one must first stop looking through a straw.
-
Sigh... how many young grasshoppers have already forgotten the lessons of the beowulf masters... He who wishes to cross the south bridge must answer me these questions three... What... is the computer? What... is the network? What... is the probability that we'll see 64-bit floating-point map data in OFP2 with at least micrometer-grade controllable precision across multiple defined terraingrid levels?

How does the computer decide what goes to #1 and what goes to #n, or #n-1 for that matter? That is an operation, and one that can be ignored in uniprocessor situations. Perhaps a closer analogy is better. C4 is t3h k3wl f3r n00bs, but shaped charges and strategically placed demolitions are much better. Researchers are experimenting with a deformable flight-surface mod of the F/A-18; basically, in n00b terms, the wings flap. How does this all work? Do they have an uber-rack of a bazillion Itaniums using quantum prediction streaming data via broadband laser? No, they use a 4-pack of 68040's. If you go down to your local PC junk store you can pull one out of an old Apple Mac IIfx for $1.95. The key is two parts:

1) A properly engineered OS and related operational limits. You don't have time in-flight to exclusively defrag a gigabyte swapfile. You need raw sensor input and flight-control output, and you need it now. If you want a robust X.400 directory services system and a secure network infrastructure, you can get a quad-proc Xeon with a couple gigs of RAM and Windows Server 2003, or Linux on a single P4 with ~768mb of RAM, or Novell NetWare 5 on a P3 500 with 256mb of RAM from the "free" box in the alley. There are all sorts of multimedia wonders that BeOS can do on half the hardware of Windows because of how it vectors data differently.

2) Bandwidth, baby... This is something all can understand. No pipes, hi ping, all = ownage. This is why HyperTransport and AMD moving the memory controller on-CPU have such wonderful results. Even the server manufacturers have a hard time putting obscene piles of memory on the boards; you run into problems with mobo traces and distance latencies. When you're working with traces the size of what runs between the CPU and your RAM sockets (DIP sockets for the old-schoolers), timing becomes a critical issue. Next time you crack open your PC, look closely at the mobo traces. Notice how those on the "inside" of the track have extra squiggles to make them about as long as the "outside" ones. That, and you can only make so many traces per layer, and increasing the number of layers reduces reliability and increases cost. If you wanted ultimate l33t bandwidth between your devices, ideally you'd want a massive mesh interconnect. That gives you two problems though - optical interconnects at the chip level are not ready yet, and that's a lot of routing to do and add in each component. Cray did it, but Cray could do pretty much whatever they wanted to, as the government was footing the bill.

Ok, let's say you have a rack of Cell blades with multiple direct interconnects. What about your apps? What do you plan to use it for... to play a single-threaded copy of NetHack in ASCII console mode? No, you'll be using it for something more fun, like linking functions of massively parallel-executable code, such as particulate analysis of volumetric data (real clouds in OFP3?) and so on. Imho these dying gasps by the legacy brute-force method of binary engineering are rather over-rated.
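
To make the "what goes to #1 and what goes to #n" question concrete, here's a minimal sketch (in Python, my choice of language, nothing to do with any engine) of chunking an embarrassingly parallel job across a handful of workers; the per-chunk "particle analysis" function is purely hypothetical:

    from multiprocessing import Pool

    def analyze_chunk(chunk):
        # Hypothetical per-chunk work: count "particles" above a threshold.
        return sum(1 for sample in chunk if sample > 0.5)

    def split(data, n):
        # The scheduling decision: which slice goes to worker #1 ... #n.
        size = (len(data) + n - 1) // n
        return [data[i:i + size] for i in range(0, len(data), size)]

    if __name__ == "__main__":
        # Fake volumetric samples, just to have something to chew on.
        volume = [((i * 37) % 100) / 100.0 for i in range(1_000_000)]
        with Pool(4) as pool:                       # e.g. a 4-pack of processors
            partials = pool.map(analyze_chunk, split(volume, 4))
        print(sum(partials))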
What is interesting is the amusing potential that binary parallel systems have in simulating the complexities needed to execute quantum logic. Sometimes I fear that our 1D linear obsession with binary logic has crippled society's ability to truly expand into utilizing 2D vector solutions, let alone the possibilities of 3D quantum logic. Sometimes feeble attempts to enrich data (such as XML et al) fly by in a haphazard fashion, oblivious to the flaws inherent in trying to mash it into a 1D linear regression. As noble as these efforts are, they barely even hint at how deep a hold the binary shackles have on logical expression. Ignore the hardware for now, think of the logic. Transcend the bliss. Enlighten through assimilation. Be Borg.
-
That may be all fine and dandy for you, but you're still quantified in the margin of "error" for accounting. However, if a million people decide they're not going to buy Madden 2005, that's viewed as a million single decisions, not a demographic shift. A million EA fans = general population trends, a million non-EA gamers = a million single instances of alternate buying preference. You see the faulty logic here? It's not limited to just EA. Emotional, mental, and physical "benefits" are not quantifiable under textbook accounting rules. Another case in point is the .com mis-accounting: swapping the operating and capital expenditures columns on the spreadsheets. The 'perks' common in various tech jobs are usually related to promotional expenses in recruitment, ie marketing. No one seriously does a quantification of quality-of-life from overhead benefits, because it's impractical. Frankly, using the same logic as many places I've worked at, they might as well rip out the bathrooms. Think about it: everything about them is an expense. Lights, water, sewer, fixtures, paper, cleaning. They are non-revenue-creating square footage, contracting out the cleaning involves security and fiscal headaches, and time spent there is clock time that employees are not in a quantifiable production position. So rip out the toilets, turn off the drinking fountains, and sell the coffee machines on eBay. Projected operations expenses will plummet, you can expand your data center into the stalls, stock price will soar, and you'll get a plum villa in the Bahamas. What more could you want?
-
That's one of the no-win situations you get yourself into when you're in these situations. When you go into the next interview, you've got a few choices for how to handle stuff like this:

1) My boss looked like the leftovers Dr. Frankenstein rejected, and ran the office with an iron fist. He gave me two techs' worth of work and no way to rely on the supply system. Then he said that there was no overtime, but that I still had to complete all the calls before 5pm. Well, how am I supposed to do that when I'm on the phone for two hours reporting a client for running a huge warranty scam, saving the vendor millions of dollars in falsified liabilities? It got so bad I was having panic attacks in my sleep that somehow I had forgotten or messed up a service call. How I handled it: Fortunately I had the presence of mind to only sign up for a three-month contract. I told the interviewer that I had fulfilled the contract terms and was looking for new opportunities.

2) Different job. First job out of high school, wild days of .com in late 1999. I got hired to "test stuff"; only when I showed up for work did they realize they forgot to tell me what I'd be testing. Turned out it was enterprise clustering systems for database servers processing billions and billions of dollars of transactions. No sweat. I signed on for a short contract to get some cash to "go see the world" and other things. Things went well and we parted on good terms. Another head-scratcher for HR. Again, the easy way out is that I was only on for a four-month contract, and that it coincided with the end of that product's development life-cycle. Key is to emphasize what I did there.

3) Got hired as a tech, discovered a month later that they were just trying to keep a "hot-spare" pool on stand-by to meet the impossible SLA marketing cooked up. Contract said 60-minute lunch, SLA said 20-minute response time to service requests. Weekly performance audits, daily automatic time-in-motion compilation - none of this was disclosed up front in the interview or orientation. Ended up on a chop-shop project where we were supposed to rebuild ~2500 machines. The vendor supplied 4 engineers; they only spoke Japanese. I got volunteered to be translator, vendor host, logistics coordinator, and lead engineer - all full-time jobs each - and on top of that had to be regularly cranking out more vendor certifications to meet the SLA. The last straw was that there apparently was an SOP (standard operating procedure) that said you only had 3 months as a temp to be placed as a full-time tech before you got let go. So my being stuck on that dead-end project meant my reward was having my boss take my badge on the way out the door. How I handled it: Emphasized the roles and activities, and explained that we concluded the contract at the end of the contract period.

-----------------------------------

The moral of all this is that non-engineers do not always appreciate what engineers have to say, and are often upset when engineers suggest "illogical" solutions. Customers often rant and scream about mountains of bugs and slipped shipping dates, and the common solution of this form of mis-"management" is to throw more hours, and sometimes bodies, at the problem. This results in an increase in bugs, extends sign-off dates, and bastardizes future project maintenance and development. If you start asking these kinds of specific questions, you'll either get cagey non-responses or emphatic denials. Get the interviewer's contact information.
Take the contract offer home and think about it for a few days. If it's an immediate opening offered specifically to you, it can wait a few days if it's legit. If there are any grey areas, or things you're concerned about, call the interviewers and the person who gave you the contract and let them know what terms are unacceptable and what securities you need in writing. Above all, no ship (sinking or not) - no matter how gilded the nameplate - is worth your soul, or your family's.
-
Several reasons:

1) People get too attached to their product, and put up with the demands out of personal interest in the product.

2) Fear of the "resume stain". In future interviews... "So, why'd you leave EA? Because you were a slacker?" Remember, it's HR management that you generally interview with.

3) Fear of uncertainty... "Hi honey, I'm home! I just told my boss where he could put my timecard and badge... say, we don't need to eat food and pay the mortgage, right?"
-
NY Times Business Section, Nov. 21, 2004. If you please, Mr. Scrooge, at this holiday season... a little dividend for the class-action filing solicitors? "Extreme Galley Slave - the nextGen reality TV show!" Back to work, chump! There's more mithril to mine out of Moria; ignore that Balrog trying to split you asunder. The NY Times implying that even the legendary miser Ebenezer Scrooge would, at this holiday season, be shocked by the "galley slave" treatment of the employees is not something that the markets will want to hear. As long as EA continues to brag about significant piles of revenue, the class-action lawyers will have ample reason to circle and close for the kill. A couple of possible outcomes of this: First, EA starts slashing jobs and outsourcing work to India. Productivity of bugs and counter-intuitive (to westerners) interfaces will skyrocket, and schedules will tank. On the other hand, EA just bought out Digital Illusions, so you can kiss BF-2 goodbye.
-
1) This is not OFP troubleshooting, this is Offtopic troubleshooting.
2) We need some system specs here: what version of Windows you are running on each machine, etc.
3) Network settings for your network (properly sanitized if you wish) and settings on each machine.
4) Intended LAN structure, ie what do you want to network and why.
5) Inventory of networking hardware - cards, switches, routers, cables.
Thanks.
-
You don't. About 80% of the way down (without word wrap) you might see some path data in these types of situations. That can sometimes help you narrow down the problem... if it has to do with some I/O operation. Sometimes, though, if it is an engine crash there's not a whole lot of usable user data for the community to look at.
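
If you'd rather not scroll for that path data by hand, a throwaway sketch like this will skim the report for Windows-style paths (the file name and location here are whatever your install actually uses; adjust as needed):

    import re

    # Hypothetical location of the report file; point it at wherever your copy lives.
    RPT_FILE = "flashpoint.rpt"
    PATH_PATTERN = re.compile(r"[A-Za-z]:\\[^\s\"']+")   # crude match for C:\...\something

    with open(RPT_FILE, errors="replace") as rpt:
        for line_no, line in enumerate(rpt, 1):
            if PATH_PATTERN.search(line):
                print(f"{line_no}: {line.rstrip()}")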
-
This can be a bit of a bear to track down, as the symptoms can be all sorts of things. I tried to fix a cop's Win 3.11 laptop once (from the back seat, reaching through the window... long story); the system was BSOD'ing on Windows shell loading. Digging around, I soon noticed I was getting read errors when trying to access certain files like win.ini and system.ini. Ran scandisk, found that both were on bad sectors. I figured I could probably re-write minimal versions by hand, so I said "I could try hacking a solution together" to get the more robust Win32s I/O drivers. (Note to self: never tell a cop you're about to hack his computer.) I abandoned that idea as the 16-bit scandisk came back with dozens more bad clusters. The poor guy was given a 486sx laptop with Win3.11... in late 2002. The problem with spotting this in OFP is that you're not likely to access the data other than through OFP the application. A generic scandisk may not always pick it up; sometimes you have to run the more detailed scan. Unfortunately, "data read error" can sound rather cryptic, as oftentimes our data in mod-making gets scrambled. It doesn't indicate whether it is a file I/O or an engine error. Was there any data in the config.bin or flashpoint.rpt?
-
This is because there is no installer for PBOx; it can't retain awareness of something it never had in the first place. In this case the file linking as described by Placebo is handled statically by Windows, instead of dynamically by the app.
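
For the curious, that "static" linking is just ordinary Windows file-association data in the registry. A hedged sketch of wiring one up by hand looks roughly like this; the extension, ProgID, and tool path are made up for illustration:

    import winreg

    # Everything below is hypothetical/illustrative; adjust extension,
    # ProgID, and tool path for whatever you actually use.
    EXT = ".pbo"
    PROG_ID = "Example.PBOFile"
    TOOL_CMD = r'"C:\Tools\pbotool.exe" "%1"'

    root = winreg.HKEY_CURRENT_USER            # per-user association, no admin rights needed
    base = "Software\\Classes\\"

    # Map the extension to a ProgID, then give the ProgID an "open" command.
    winreg.SetValue(winreg.CreateKey(root, base + EXT), "", winreg.REG_SZ, PROG_ID)
    winreg.SetValue(winreg.CreateKey(root, base + PROG_ID + "\\shell\\open\\command"),
                    "", winreg.REG_SZ, TOOL_CMD)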
-
Well, the recount in Washington State's Governor's race is pretty much done. All but one county have checked in, and the unofficial numbers from there say that the Republican candidate for Governor has won the machine recount by a whopping... 42 votes. Democrats have already begun motions to request a manual hand recount at their expense of probably 3/4 of a million dollars.
-
In the general election, Washington State Republican gubernatorial candidate Dino Rossi won by 261 votes. This triggered an automatic machine recount, and so far the recount has given him another 431 votes to his Democratic opponent's 394, widening his lead to 298 votes. In King County, however (57.75% for the Democratic candidate), the county elections board has been 'ballot enhancing' ballots that the machines rejected, filling in the bubbles or whatever it takes to manually count the ballots. GOP lawyers have thus far unsuccessfully argued that the manual manipulation constitutes an un-sanctioned hand recount, which the Democrats should be obligated to pay the ~$750 000 to cover, and to do statewide instead of in targeted precincts and counties. Thus far, the recount vote trends seem to be mirroring the election margins. A tentative conclusion to the recount is expected by the end of today (11-23) or early tomorrow (11-24); however, the Democratic Party has stated that if their candidate still has not won, they will go ahead and demand a second recount, done by hand, statewide. There have been no reports of 'ballot enhancing' in counties that use punch-cards; the laws are pretty clear that at least 2 of the 4 corners must be severed for the vote to count as punched. Reports of ballot manipulation are only coming in from King County, where the Elections Board director, a former Democratic Party leader, succeeded the previous director, who was forced out after sitting on ~50 000 ballots and mailing them too late in the last election.
-
There's plenty of space in the resource.cpp file to implement additional unitinfo properties and HUD characteristics. What would be nice is to be able to dynamically write those values, say to change the RGBA value or toggle whether they're displayed. Conversely, having grown up with poor eyesight, I've mentally grown to rely more on my hearing for situational awareness, and sight-touch for localized sensing. This of course is aggravated by the sound ranging issue: so many times I'll be driving around and hear a helicopter, hop out with an AA launcher, and about the time I get chumped I remember that the helicopter is miles away. Perhaps an additional config value, 0 to 1, defining what % of external noise is hearable (disabled when turned out or with doors opened) would help. Actually, it would need to be two values:

hearExternalSound : 0.75; // hearing is 75% of normal
hearExternalSoundWhenEngineOn : 0.5; // hearing is 50% when engine is on

Impacts and plinks would be exempt, ie always 100%. Maybe adding a doors grouping to the anims, so that if a member of the doors grouping is animated in a non-default state, the sound levels would be 100%.
-
Sigh, the red-headed stepchild of the metric debate... many parts of Europe use the comma (,) for noting the decimal place, and in America it's generally the period (.).
-
Some thoughts. First off, the terrain deformation in [Z]oldner is not persistently dynamic; if you go away long enough it flattens back out, or maybe that was one of its many, many bugs. The 'craters' they made, though, are extremely simple and could easily be netsynced. Basically, if you passed the params of the deformation from the server side, the clients could then add it to their loaded terrain (see the sketch at the end of this post).

Secondly, surprised no one's really covered this: [Z]oldner, like many other games now, blurs its LOD transitions. In OFP you either see the LOD or you don't, making for the nasty chop. What would make a whole lot of difference is if objects faded to transparent at the viewdistance, and the LOD transitions involved multiple layers blurred together. Unfortunately, a lot of games today do not properly handle the LOD'ing of dense vegetation. Take for example SOF II. You can see stuff in detached view that you can't in first-person, or even more so in scope view, because of the much lower LOD. If you're in scope view, your vision is obstructed and impenetrable objects 'appear'. Out of scope view, your vision is not blocked, and you can shoot without obstruction. Contrast that with OFP today. In OFP, your LOD is fixed, regardless of how 'strong' your scope may be. That needs to be corrected so that you see the higher LOD in scope view, but the fire geometry needs to stay consistent as it is now.

I also think it would be nice to have access to rudimentary shaders at the mod level, which could enhance the ability of mod-makers to create vintage film, NVG, thermal, flashbang, nuclear flashpoint, and other rendering effects beyond the limits of the particle and overlay system currently available.

There's already been some request for more destructible materials. Whether they are implemented or not, it would still be nice to be able to add a "non-destructible" LOD in certain cases. I guess this has more to do with the issues surrounding destroyed-object warping. [Z]oldner handles downed trees halfway okay: if you run into them they have a collision LOD you have to drive over, but if you hit them a second time they disappear. It would be nice if they could break, but if they don't, leave them as-is so we can build barricades and other mayhem.
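
Purely as an illustration of that "pass the params from the server" idea (nothing here is engine API; the crater shape, the message format, and the heightmap representation are all invented), it could be as small as:

    # Hypothetical crater netsync: the server sends a few numbers, and each
    # client applies the same deformation to its local copy of the heightmap.
    crater_msg = {"x": 120, "y": 87, "radius": 4, "depth": 1.5}   # invented units

    def apply_crater(heightmap, msg):
        cx, cy, r, depth = msg["x"], msg["y"], msg["radius"], msg["depth"]
        for y in range(cy - r, cy + r + 1):
            for x in range(cx - r, cx + r + 1):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                if d2 <= r * r and 0 <= y < len(heightmap) and 0 <= x < len(heightmap[0]):
                    # Simple bowl profile: deepest at the center, fading to the rim.
                    heightmap[y][x] -= depth * (1 - d2 / (r * r))

    terrain = [[10.0] * 256 for _ in range(256)]   # flat 256x256 test grid
    apply_crater(terrain, crater_msg)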
-
Well, adding additional RAM will help somewhat, as I mentioned above. Because of the situation you have with the IGP chip, I'm not sure how much of a performance boost you'll actually get out of it; you'd need to talk to someone with more experience in that area. What you probably have right now is 1x256mb + 1x128mb of RAM. I don't think you'd have to go to 1gb - although that would be nice - but adding RAM will impact your battery life and increase the system temperature. If you replaced the 128mb with a 512mb, that would give you 768mb, which given the situation of your laptop would in effect be comparable, I think, to what you'd experience with a 512mb or 640mb desktop. Before you go messing with hardware, though, you need to do some load tests, like looking at the process details in Task Manager, memory allocations, page faults, and more. Another good test would be to run the Direct3D test from the DirectX diagnostics tool with Fraps and see what the FPS looks like there, to get a sense of what level of OFP performance might be reachable.
-
Video cards on laptops, especially IGPs (video on the mobo, not a separate card), are normally not upgradeable. See the 82845G series specs and the DVMT (Dynamic Video Memory Technology) details.

Let's break it down like this: you currently have 384mb of RAM. 1024x768 at 32-bit will camp around a 16mb video buffer. You may see an allocation of 32mb in the DirectX tool; this is just smoke and mirrors to trick apps that check for available VRAM. The dynamic window Intel quotes is up to 64mb for systems with more than 128mb of RAM and driver revisions newer than PV1.1. If we assume that DVMT is conservative and only takes the 16mb, you're left with 368mb, and a 50/50 split makes for 184mb each for Windows and OFP before hitting the page file on the HD, at which point you can kiss any hope of playability goodbye, especially with the higher-latency laptop HDs. On the other hand, if DVMT decides to take the whole 64mb that it can, you're down to 320mb, and a 50/50 split leaves 160mb for OFP. That's not a whole lot of addons there: the default RES dta\hwtl folder is 129mb, plus another 217mb in RES Addons, 1mb of bin data, and about another 4mb of applications.

If you're not familiar with the technical details of how to aggressively streamline an XP installation for a low memory footprint (killing processes, limiting services, culling graphics effects, tuning performance variables), chances are good that you're running a 60/40 memory split in favor of Windows. At the very least, with what you've got to work with, imho you'd need 768mb because of the DVMT shared memory situation. That is not going to fix the problem, however, of bandwidth through the same memory pool, the limited computing power of the chipset and CPU, and other system latencies.

-edit- If you go to the Dell support website and enter the service tag from the bottom of your laptop, it will pull your system model specs from its soul-sucking database. Secondly, you can generate a DirectX diagnostics report that will produce the same information and more, but the video memory numbers will probably be unreliable.
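
To make the arithmetic above easier to poke at, here's the same budget as a tiny sketch (the 50/50 split and the 16mb/64mb DVMT figures are the assumptions discussed above, not measurements):

    # Rough RAM budget for a 384mb laptop with DVMT shared video memory.
    total_ram_mb = 384
    windows_share = 0.5          # assumed 50/50 split between Windows and OFP

    for dvmt_mb in (16, 64):     # best case vs. worst case video memory grab
        left = total_ram_mb - dvmt_mb
        for_ofp = left * (1 - windows_share)
        print(f"DVMT takes {dvmt_mb}mb -> {left}mb left, ~{for_ofp:.0f}mb for OFP")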
-
The video chipset in your model is not designed for handling graphics more "extreme" than Solitaire; that's what you get with a shared-memory Intel Extreme Graphics chipset. Secondly, it uses shared memory, so even if OFP had all of that 384mb to itself (which it won't, because Windows camps on most of it), you're still not factoring in the video 'card's' shared memory demands. They cut the cost by not putting any memory on the video chip, and just borrowed it from the system. Third, you've got a couple of major system bottlenecks. Instead of the video card minding its own business chewing up whatever the CPU throws at it, you've got the video chip and CPU fighting for the same system resources and processing, with the hard drive and sound chip wondering when it will be their turn. Additionally, as far as I can see, most of the CPUs for those models are bargain-bin Celerys - not too bad when you have a desktop that you can throw adequate cards and resources at, but here they're some pretty hard handcuffs. Fourth, you've got all the latencies that come with having a laptop. OFP can always use more CPU power, hence Kevbaz's comment. Unlike other games, you can't max out OFP on today's hardware; you'll only get lag, or maybe a CTD if you do something really creative.

But seriously, congratulations on getting OFP to come up on that machine. It's going to take some creative work to see if it's even going to be usable, but it's still worth a try. At the very least you'll understand your computer better.

For starters, you'll want to try some pretty draconian measures. See the other threads here for adding the "-nomap" option to change the memory dynamics. Open the Flashpoint preferences tool and set all parameters manually to their lowest level, and set your OFP display settings to something like 800x600x16, and Direct3D only. I'd be real antsy about trying HWTL on it. There's a lot of things you should do to tune Windows for a smaller system load; see some other places that have more detailed support on that stuff. Me personally, I'd kill every process and service possible and really go psycho on the machine; your mileage of course will vary. There *may* also be some driver settings for the video driver that will allow you to alter the system resource dynamics of the onboard chip and other display settings. You'd need to talk to someone online or in person who knows more about the specific options available for that card. Hope this helps a little.


