Command-line options to force googletest to run in single thread. Asked 6 years, 9 months ago; active 2 years, 9 months ago; viewed 7k times.

Not sure whether it's programmed that way or not, but I notice my son's computer with a Q loads levels much faster than my rig with an E running at 3.
His CPU is at the stock 2. I have the faster hard drive. To me, that points to it being the CPU.

PCMusicGuy said: Careful, almost every game these days is multi-threaded.

FSX will scale in excess of 8 CPUs. Microsoft FSX is great at utilizing multiple cores.
This will enable the engine's true multi-core support. It also gives an FPS increase. You might think this is overkill, but determinism is really great for multiplayer games. If you shoot a grenade you often see some latency, because the server sends lots and lots of position updates for the grenade.
With determinism you don't need to spam your internet connection with position updates. Having a multithreaded implementation that allows for FPS drops and absurdly high FPS without slowing down the game, while still being deterministic, is a difficult problem to solve, and hence the difficult solution.
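As a rough sketch of how this kind of determinism is usually achieved (illustrative names, not the actual implementation being discussed here), a fixed-timestep loop advances the simulation in identical ticks no matter how the elapsed frame time is sliced, so every machine computes the same grenade trajectory:

```cpp
// A fixed-timestep update loop: the simulation always advances in
// identical dt ticks, regardless of how fast the render thread runs.
// All names here are illustrative, not from the engine discussed above.
struct World {
    double grenade_y  = 100.0;   // height of the grenade
    double grenade_vy = 0.0;     // vertical velocity
    void step(double dt) {       // one deterministic tick
        grenade_vy -= 9.81 * dt;
        grenade_y  += grenade_vy * dt;
    }
};

// Consume elapsed frame time in fixed dt increments. dt is a power of
// two in this sketch so the accumulator arithmetic is exact.
void advance(World& w, double& accumulator, double frame_time,
             double dt = 1.0 / 64.0) {
    accumulator += frame_time;
    while (accumulator >= dt) {
        w.step(dt);
        accumulator -= dt;
    }
}
```

Whether one second arrives as a single long frame or as many short ones, the same sequence of ticks runs, so the end state is bit-for-bit identical.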
If you know of any other solution I'd really like to know about it; I find this stuff intriguing! The solution does not spin on Sleep(0); that would be a seriously bad implementation, imho. The site discusses how thread-switching may cause stuttering, and the two proposed solutions are partial thread synchronization and motion extrapolation. If you are using a custom physics engine, you'll just plug it into the update thread.
It is as easy to integrate as in any other single-threaded method. Usually you'll create a producer-consumer queue and generate tasks that your background threads will handle. If you are adding this to the solution from slapware, then the update thread will be the 'producer' and it will spawn a couple of background threads ('consumers') to calculate physics, AI and other CPU-related stuff. If you want to use your GPU optimally, you need one thread for all the drawing (heavy on the GPU, light on the bus) and one thread for updating models and textures (light on the GPU, heavy on the bus).
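The producer-consumer setup described above can be sketched as a minimal blocking task queue (illustrative names; this uses a mutex and condition variable for clarity, though a lock-free variant could expose the same interface):

```cpp
// Minimal producer-consumer task queue: the update thread pushes work
// (physics, AI, ...) and background worker threads pop and run it.
// Illustrative sketch, not taken from the engine being discussed.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

class TaskQueue {
public:
    void push(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();   // wake one waiting consumer
    }
    // Blocks until a task is available, then hands it to the caller.
    std::function<void()> pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !tasks_.empty(); });
        auto task = std::move(tasks_.front());
        tasks_.pop();
        return task;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
};
```

Each worker thread simply loops on `pop()()`; the update thread stays the single producer.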
Assuming you have only one GPU. And not to detract from the achievements of Civ4, a great game, but it is not one of the most complicated games for an engine to handle. Luckily, the complicated stuff from the proposed solution is only in the engine and not in the game. It is still possible to do everything you want without writing absurdly complicated code when creating a game. You appear to misunderstand what reality is. If you are using mutexes at all in your game engine, you have already fucked yourself beyond all hope of redemption.
You should NEVER implement any algorithm that doesn't use some form of lockless, or at least obstruction-free, implementation. Determinism is impossible. Even if floating-point operations and packets were perfectly reliable, lag ensures the world state will always be inconsistent with itself no matter what you do. The potential inconsistencies that can arise in a threaded actor-model concurrent system are extremely minor when handled properly, and as a result can simply be dealt with through the same correction algorithms that must deal with floating-point inconsistencies and errant packets.
I can make a concurrent game engine that is highly efficient and won't suffer from any significant FPS drops, yet is vastly simpler than what you propose here, because you solve the problem from the wrong end. The future of games will be in engines that use actor-based concurrency models: package object information into immutable packets, process them with a swath of worker threads using multiple lockless queues that have virtually zero overhead, and synchronize at a single point per frame.
This is simple, elegant, and vastly more robust than the hellish complexity you propose. I can even do it for a 2D engine, where the rendering queue is strictly ordered. A 3D game is even easier to pull off, because you can render things wildly out of order using the z-buffer and clipping spaces. This problem is not solved by creating separate threads that do specific things; it's solved by evaluating everything at the same time in small bursts.
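A minimal sketch of the kind of lockless queue such a design relies on, assuming one producer and one consumer per queue (illustrative code, not from any shipping engine):

```cpp
// Single-producer/single-consumer lock-free ring buffer. The producer
// only writes head_, the consumer only writes tail_, so no locks are
// needed; acquire/release ordering publishes the slot contents.
#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>   // N must be a power of two
class SpscQueue {
public:
    bool push(const T& item) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) & (N - 1);
        if (next == tail_.load(std::memory_order_acquire))
            return false;                      // queue is full
        buf_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& out) {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                      // queue is empty
        out = buf_[tail];
        tail_.store((tail + 1) & (N - 1), std::memory_order_release);
        return true;
    }
private:
    T buf_[N];
    std::atomic<std::size_t> head_{0}, tail_{0};
};
```

Wait-freedom on both sides is what keeps the per-frame overhead near zero; the one sync point per frame is then just the moment all workers' queues drain.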
So unless you are saying your engine is literally faster than every other engine in the entire industry, because Civ V's engine is, you really shouldn't be talking. And yes, that includes Unreal Engine 4, which simply does fancier things; its raw rendering speed can't beat Civ V's. I sense you too have a lot of knowledge about the subject, which makes this an interesting discussion.
I'll try not to turn this into some sort of flame war between two techniques, because I really like reading what you have to say :)

1. Mutexes: I totally agree with you. Mutexes should be avoided wherever possible.

2. Determinism: Determinism is possible. Floating-point errors occur, true, but if the floating-point errors are consistent then there isn't any problem. This is also possible cross-platform because floating-point operations are standardized. Most compilers even have flags that allow you to change the way floating point is handled.
Network issues are no problem either, unless you use UDP or some other unreliable protocol. TCP does not guarantee that packets arrive at your computer on time, but it uses a buffer and automatically asks for retransmissions if needed, so your application will receive all packets in order. If there are real problems (some packet is never received, even after asking for retransmissions) the connection will just be dropped.
Packet corruption is handled by checksums in both the IP packet header and the TCP packet header. The complexity of a solution is never a real issue for me, because I enjoy figuring out what the best approach is for my situation.
The technique I used is not as far-fetched as you might think. It is a technique called triple buffering (not to be confused with triple buffering on the video card). I'm not going to compare these engines; let's just say that most of these engines are pretty darn good.
Especially because I have written this implementation in c, lol. It is very difficult to look at a certain engine and determine what techniques it applies. They often limit themselves to saying things like "we support volumetric clouds" or "we have a task manager to optimally use all cores in your CPU" without diving into the specifics of how it is implemented. I wouldn't be surprised if these engines are also able to apply a form of triple buffering on render states. And the technique you describe seems pretty solid.
We actually agree on most topics, except for the one where I prefer separate 'task managers' for GPU and CPU, while you say that one task manager for both is good enough. I also use triple render-state buffering, but I probably wouldn't have implemented that if I didn't need determinism. A task manager with 50 workers for a dual-core CPU is silly, as you'd probably agree. There should be some balance between the number of workers and the number of cores.
The CPU and GPU contain a different number of cores, and they are connected through a bus that only allows one 'message' to be sent over it at a time (protected by mutexes in the video card driver). Since the bus is protected with mutexes, it would be silly to have more than one thread trying to send messages to the video card.
Because they are so different, it makes more sense to separate the queues so we are able to fine-tune them (perhaps even setting thread priorities). Oh, P.S.: determinism in games is not outrageously naive; it is used both for network games and for replay files, for example in StarCraft II. In my implementation, just like in Civ V, I queue rendering calls into a single thread that sends them to the GPU as fast as possible.
You'd do this no matter what technique you were using. The precalculations necessary are what the worker threads do. There are always as many worker threads as there are logical cores; for an i7 you'd have 8. Triple buffering is a standard technique, but my technique implements something akin to it on a more distributed level. Each actor is, by itself, considered immutable, so a copy of it is made upon which calculations are done, and when these calculations are finished, the result is committed by flipping a pointer to ensure atomic usage of information across all actors.
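The pointer-flip commit described above might look roughly like this (illustrative names; double-buffered here for brevity, where a triple-buffered variant would simply add a third slot so the writer never waits on a reader):

```cpp
// Per-actor state published by an atomic pointer flip: workers compute
// on a private copy, then commit with one atomic store, so readers
// always observe a complete, never-torn state. Single writer assumed.
#include <atomic>

struct ActorState {
    float x = 0.0f, y = 0.0f;
};

class Actor {
public:
    Actor() : current_(&buffers_[0]) {}
    // Reader side: grab the latest committed state.
    const ActorState* read() const {
        return current_.load(std::memory_order_acquire);
    }
    // Writer side: copy the current state, mutate the copy, then
    // commit it atomically -- the "flip".
    void update(float dx, float dy) {
        const ActorState* cur = current_.load(std::memory_order_relaxed);
        ActorState* next = &buffers_[cur == &buffers_[0] ? 1 : 0];
        *next = *cur;            // work on a private copy
        next->x += dx;
        next->y += dy;
        current_.store(next, std::memory_order_release);
    }
private:
    ActorState buffers_[2];
    std::atomic<const ActorState*> current_;
};
```

Readers pay one acquire-load per access and never block; the writer pays one copy plus one release-store per commit.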
I feel like I should point out that your implementation is a particularly horrendous complication of triple buffering. Everyone uses something like that; it's the only way to get anything done. You completely missed the last point I made on determinism: even if all floating-point calculations are executed perfectly and all packets arrive as expected, lag ensures that inconsistent situations will always crop up, and those inconsistencies will be several orders of magnitude more severe than any inconsistency that would occur due to out-of-order processing of objects within a single frame.
You are trying to solve a problem that isn't a problem. You seem to think that my approach significantly sacrifices determinism when it really doesn't.

Are there configuration files for the game, and if so, can you modify sensitive values? That's the only possible answer to your question.
And yes, there are games whose settings default to values that do not fully use high-end machines. Skyrim is one example of that, although the tuning can go to far higher ends on that title.
I'd advise at least 8 GB of RAM, which is sufficient for the large maps. I'd steer clear of any integrated graphics, as my game tends to crash on my laptop (integrated) but works well on my PC (dedicated graphics card). Additionally, multi-threading is an important tool, but it's only as good as the software developer's design for it.
There are all sorts of factors in play. It's not simply a matter of having extra cores moving data from point A to B.