Ok, long post ahead. I'm just going to dump all my current opinions and everything I know (or think I know) at once, so it's at least said and out there. I figure this is the time for anyone else with something to say to do the same, before details start getting carved in stone. A lot of things here rely on the way other things work.
uze6666 wrote:It sure sound cool, but realisticly I think it would be hard to have so many connections at once, yet alone finding so many other peoples willing to play the same thing in such a small community. I think it would be wiser to aim for simplicity with supporting only one client and one listeners socket at any time.
Yeah, this is a large task overall, and my thinking is moving firmly towards doing whatever is easiest while still accomplishing all the goals. Once there's a working product, you can always see which parts are good enough and upgrade what isn't. Do you think prototyping the protocol in Uzem might be worth thinking about?
uze6666 wrote:You'll need it anyway to discover other players.
The way most games do it is with a master server. All active game servers report to a known address: how many players are on, which ROM is being played, etc., and the master keeps track of them. Each game server periodically sends a heartbeat ("hey, I'm still alive, just so you know") so the master keeps it in the list. If the master doesn't hear from a server for a while, maybe it crashed or lost internet, so it gets removed from the list and people aren't trying to join a game that isn't there anymore. To find the master itself, a TCP connection to uzebox.org followed by "GET masterlist.txt" to extract the current master IP, or any other way to get it, would work.
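To make the heartbeat/timeout idea concrete, here's a toy sketch of the master server's bookkeeping. All the names, the timeout value, and the fixed-size table are my own assumptions, not any real protocol; the actual socket plumbing is left out entirely:

```c
#include <assert.h>
#include <string.h>

#define MAX_SERVERS 16
#define TIMEOUT_SEC 60   /* assumed: drop a server after 60s of silence */

typedef struct {
    char addr[32];       /* toy "ip:port" string identifying a game server */
    long last_heard;     /* time (seconds) of the last heartbeat */
    int  in_use;
} ServerEntry;

static ServerEntry servers[MAX_SERVERS];

/* register a new game server, or refresh an existing one, on heartbeat */
static void heartbeat(const char *addr, long now) {
    int free_slot = -1;
    for (int i = 0; i < MAX_SERVERS; i++) {
        if (servers[i].in_use && strcmp(servers[i].addr, addr) == 0) {
            servers[i].last_heard = now;   /* already listed: just refresh */
            return;
        }
        if (!servers[i].in_use && free_slot < 0) free_slot = i;
    }
    if (free_slot >= 0) {
        strncpy(servers[free_slot].addr, addr,
                sizeof servers[free_slot].addr - 1);
        servers[free_slot].addr[sizeof servers[free_slot].addr - 1] = '\0';
        servers[free_slot].last_heard = now;
        servers[free_slot].in_use = 1;
    }
}

/* drop servers that have gone quiet; returns how many are still listed */
static int prune(long now) {
    int alive = 0;
    for (int i = 0; i < MAX_SERVERS; i++) {
        if (!servers[i].in_use) continue;
        if (now - servers[i].last_heard > TIMEOUT_SEC) servers[i].in_use = 0;
        else alive++;
    }
    return alive;
}
```

A real master would run `prune()` on a timer and serve the surviving list to clients browsing for games; a dropped server simply re-registers with its next heartbeat.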
uze6666 wrote:I'm not so sure about the concept of just sending pad states, specially via UDP since you can loose packets unnoticed. This could desynchronize quickly and cause erratic and non-sensical movements of the player movement on the other console.
We should use TCP for everything to simplify getting this off the ground. TCP is like C: easy to use, and you can coordinate high-level stuff with good enough performance for most things. UDP is like assembly, basically a thin layer of paint over the raw IP packet, which is what all the higher-level protocols are actually built on. Writing a whole game in assembly would be hard to organize, but writing the mode 3 scanline routine in C would probably yield a less desirable result too.

TCP is a finite state machine optimized for general data streams where 100% throughput is what matters, and fluctuations in delay for any particular small piece of data don't (it buffers ahead for consistency). It still deals with all the loss, duplication (rare), and reordering behind the scenes, just in a generic way. All that sequencing, acknowledging, and asking for retransmission of lost data is terrible at short intervals like 1/60 second: TCP will not give you any data until it has everything in order up to that point, so every lost packet stalls the game by at least the round trip it takes to ask for and receive the retransmission (it does keep accepting new packets in the meantime, so you have them later). In a file no byte is more important than another, but in a reflex game the newest data is the most important.

So what I said made no sense, because I forgot to mention how to leverage UDP when the data is this small. The classic, easy trick is a tick ID at the start of each packet, so we know exactly which data we do and don't have (a custom solution to our specific goal). To get 100% game replication on both ends plus much better lost-packet tolerance and timing: every frame or so you send a new pad state to the module (offloading all further work away from the Uzebox), and the module concatenates the last nine or so previous states onto it. That is still an extremely small packet.
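Here's a toy sketch of that packet layout and the receiving side. Everything is my own naming, not a real Uzebox or adapter API, and it assumes ticks stay under 256 just to keep the demo log small:

```c
#include <assert.h>
#include <stdint.h>

#define HISTORY 10   /* each packet: the newest pad state plus the previous 9 */

/* states[HISTORY-1] is the state for tick `tick`, states[0] the oldest carried */
typedef struct {
    uint16_t tick;
    uint8_t  states[HISTORY];
} NetPacket;

typedef struct {
    int32_t last_applied;   /* highest tick applied so far, -1 at start */
    uint8_t applied[256];   /* toy log of applied pad states, indexed by tick */
} Receiver;

/* sender side: build a packet for `tick` (toy pad state = tick * 3) */
static void make_packet(NetPacket *p, uint16_t tick) {
    p->tick = tick;
    for (int i = 0; i < HISTORY; i++) {
        int32_t t = (int32_t)tick - (HISTORY - 1) + i;
        p->states[i] = (t >= 0) ? (uint8_t)(t * 3) : 0;
    }
}

/* receiver side: apply every carried state we haven't seen; returns the count.
   If tick jumps more than HISTORY past last_applied there's a hole, which is
   the "send me everything since tick N" fallback case, not handled here. */
static int receive_packet(Receiver *rx, const NetPacket *p) {
    int applied = 0;
    for (int i = 0; i < HISTORY; i++) {
        int32_t t = (int32_t)p->tick - (HISTORY - 1) + i;
        if (t < 0 || t >= 256 || t <= rx->last_applied)
            continue;   /* never existed, out of toy range, or already applied */
        rx->applied[t] = p->states[i];
        rx->last_applied = t;
        applied++;
    }
    return applied;
}
```

Note how a late or duplicate packet applies nothing (every tick it carries is already known), while a packet arriving after a loss silently fills the hole.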
Basically, imagine someone sends 10 packets, each holding the very latest state (at the time of sending) plus the previous 9. Packets go out with IDs 0,1,2,3,4,5,6,7,8,9 and on your end you receive 0,1,3,2,5,6,9. You lost 4, 7, and 8, and received 2 out of order (by the time it arrives it's worthless, since you already got 3 and moved on). But if you think about it, you had the information from those lost packets anyway as soon as the next one arrived. So the hiccup in your data stream only lasts until a new packet shows up, which on his end was sent just 1/60 second after the one you lost, and you never have to ask for and wait on a retransmission, which would take much longer.

IF for some reason you get major packet loss (more than 10 packets in a row), then you still need some mechanism to say "hey, I only know what has happened up to tick N, send me everything since then," and the adapter hides those details from the Uzebox. That relies on the Uzebox knowing that whenever the module doesn't give it information for a tick, it can't move on until it has it. You also know the other guy is doing the same. So even if he didn't experience the packet loss and got all your pad states up to the last one you sent, he won't get any new ones, because you won't send more until your situation is rectified (i.e. he tells you what he knows).

Does that make any kind of sense? I'm trying to think of a quick picture to show it better, but I'm at a loss for how to symbolize it. This is the simplest way to do it that would have any kind of performance. Basically any game will have some form similar to this:
Code: Select all
InitGame(); //start up the module too
while(1){
	//process normal game logic, once per frame...
}
It is very easy to change that general structure to:
Code: Select all
InitGame(); //start up the module too
while(1){
	otherpad = NewNetworkData(); //stalls until this tick's remote pad is in
	//process normal game logic
}
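The reason that tiny change is enough is determinism: if both consoles run the exact same game logic over the exact same sequence of pad states, in player order, the replicated worlds can't drift apart. A toy sketch of that property (my own names, nothing here is the Uzebox API, and the real loop would stall on WaitVsync until the remote pad arrived):

```c
#include <assert.h>
#include <stdint.h>

typedef struct { int x; uint16_t tick; } Game;  /* toy replicated game state */

/* pads are always passed in player order, so both consoles compute the
   same result regardless of which player is local */
static void step(Game *g, uint8_t pad_p1, uint8_t pad_p2) {
    g->x += (pad_p1 & 1) - (pad_p2 & 1);   /* any deterministic rule works */
    g->tick++;
}

/* run one console over `ticks` frames of recorded pad states; in the real
   thing each iteration would block until the tick's remote pad was known */
static Game run_console(const uint8_t *p1, const uint8_t *p2, int ticks) {
    Game g = {0, 0};
    for (int t = 0; t < ticks; t++)
        step(&g, p1[t], p2[t]);
    return g;
}
```

This is also exactly why a single missed pad state is fatal without the blocking: skip one input on one side and every tick after it diverges.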
Pad states are absolutely the laggiest way to do things; you feel all the latency, all the time. I only like pad states because we never have to guess what happened, or build a big list of information about all the objects that have changed or are currently visible (however the system was designed). The adapter couldn't help there: the Uzebox would have to build that data set, plus any predictions that give the appearance of no lag, because the adapter has no idea how the game actually works unless it can replicate parts of it by itself. Building that system individually for new games (it has to be fairly custom to each game's specifics, obviously) is a problem, and given that difficulty there's little prayer of anyone going back and retrofitting our existing library. It's really complicated and takes lots of RAM and cycles in general, and most games are pushing the hardware pretty tight as is.

The idea works great where possible, though. A good example is some of the new fighting games using GGPO or similar, which hide lag like this: run the tick even if you don't have new remote data yet. Make an entire copy of all relevant game variables, then show on screen whatever the prediction thinks will actually happen, as if it really did just happen. For each of those predicted ticks, make another copy, and keep repeating that until word of what really happened finally arrives from the authority (the server). If the predictions were right (for the number of game ticks it takes to cover your latency) and the world you were shown turns out to be what really happened after all, you have essentially defeated latency: it entirely appeared that your moves were confirmed instantly. If your opponent didn't predict correctly, though, why should he concede that you moved "faster than light" and accept a punch he never got to see?

In actuality it works pretty awesome. Prediction makes things seem smoother, but when a prediction is wrong you can only let it slide so far before you're showing the client something that's nothing like reality, at which point any further input he makes is based on such a wrong replication that he might as well put the controller down. At that point you're forced to correct it. You can "half correct" by blending what you now know is true with the failed prediction, so the transition is much smoother. That is ULTRA complex and a very fine balance, since an inaccurate view isn't worth any more than an outdated one. Smoothness vs. accuracy.
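The save-predict-rollback-replay cycle can be sketched in a few lines. This is a toy model of the GGPO-style idea, not anyone's real implementation; all names are mine, the "prediction" is just "the remote pad repeats its last known value," and the state is a single int instead of a whole game:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAXT 32

typedef struct { int x; } State;    /* stand-in for all relevant game vars */

typedef struct {
    State   history[MAXT];  /* state saved at the START of each tick */
    uint8_t local[MAXT];    /* our pad per tick (always known) */
    uint8_t remote[MAXT];   /* predicted or confirmed remote pad per tick */
    State   now;
    int     tick;
} Rollback;

static State sim(State s, uint8_t local, uint8_t remote) {
    s.x += (local & 1) - (remote & 1);  /* any deterministic rule works */
    return s;
}

/* run one tick now, guessing the remote pad repeats its last known value */
static void predict_tick(Rollback *rb, uint8_t local_pad) {
    uint8_t guess = rb->tick ? rb->remote[rb->tick - 1] : 0;
    rb->history[rb->tick] = rb->now;    /* snapshot for possible rollback */
    rb->local[rb->tick]   = local_pad;
    rb->remote[rb->tick]  = guess;
    rb->now = sim(rb->now, local_pad, guess);
    rb->tick++;
}

/* the authority's real remote input arrives for a past tick t:
   if our guess was wrong, rewind to t and resimulate up to the present */
static void confirm_tick(Rollback *rb, int t, uint8_t real_pad) {
    if (rb->remote[t] == real_pad) return;   /* prediction held, no cost */
    rb->remote[t] = real_pad;
    rb->now = rb->history[t];                /* rewind */
    for (int i = t; i < rb->tick; i++) {     /* replay with corrected input */
        rb->history[i] = rb->now;
        rb->now = sim(rb->now, rb->local[i], rb->remote[i]);
    }
}
```

Even in the toy you can see where the cost lives: a full state snapshot every tick, and a burst of resimulation on every misprediction, which is exactly the RAM and cycles the Uzebox doesn't have to spare.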
Mostly related: a long time ago I made a Bomberman clone using DirectX and was very eager to make it work over the internet. I spent quite a bit of time and frustration learning how to make sockets do what I wanted in Winsock, and I had read "Learn C++ in 24 days" maybe the year before, so you can imagine how I struggled. Not counting all that time learning the standard socket basics, I managed to make the simplest possible two-player, client-to-client, TCP, no-prediction, lockstep network setup pretty quickly. When I say it worked, I mean it never lost sync (TCP + lockstep keyboard states makes that automatic). But actually playing it over the internet was fairly bad: I had to push a button to turn a corner before I got there or else miss it, with stalls and fluctuating timing. Being an avid online FPS player back then, and still fascinated with the concept, I wasted a lot of time reading the source code to every FPS game I could find, especially influenced by John Carmack's Doom -> QuakeWorld -> Quake 2/3 network code evolution. Later I worked pretty heavily for two years on what I consider an impressive indie shooter, but it reached a point where it was too big and far beyond my skill level and my ability to generate the needed graphics, levels, etc. Long story short, I did manage to make: a master server to keep track of active game servers (which were all in my bedroom), an in-game server browser, and client/server UDP networking with client prediction and mechanisms for reliable messages. It played way better than my first attempt, but it was orders of magnitude more complex and difficult.
Ok, I'm done for now.