Uzenet

Topics regarding the Uzebox hardware/AVCore/BaseBoard (i.e: PCB, resistors, connectors, part list, schematics, hardware issues, etc.) should go here.
User avatar
D3thAdd3r
Posts: 3221
Joined: Wed Apr 29, 2009 10:00 am
Location: Minneapolis, United States

Re: Uzenet

Post by D3thAdd3r »

kivan117 wrote:I genuinely wish the people who made the stock 8266 firmware hadn't made it so awkward and verbose.
Could not agree more; it's a struggle I want to sidestep eventually. Once I get the high score server working at a basic level I will share some Uzebox-side code and compare notes. It's a kludge at best right now.
kivan117 wrote:Seems like a simple two minute fix to prevent every program that wants local times from duplicating the effort.
The format specifiers are cool because you can make minimal or verbose time strings however you like, but a variable format is ugly to string-hack. I found a more legit solution using gmtime(): time requests are going to need to send a signed 8-bit number to represent the difference from UTC. Time zone would be a nice thing to save to EEPROM, and seeing as there are only 24 (25?!) we only need a 1+4 bit number, and we could get away with ORing it into the MSbits of the name. It would also have the benefit of generally letting you know who should have lower latency due to proximity, for matchmaking. Things using the name/password from EEPROM would just have to remember to do &127 to extract the ASCII part only. I store card, music, etc. preferences over the top of the high score entries like that in my little game and it's easy to deal with.
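In C, that 1+4 bit scheme might look something like the sketch below: sign plus four magnitude bits spread across the MSbits of the first five name characters, recovered later with &127 on each character. All names here are hypothetical, not actual Uzenet code.

```c
#include <stdint.h>

/* Hypothetical sketch: pack a signed UTC offset (sign + 4 magnitude
   bits) into the MSbits of the first five 7-bit-ASCII name chars. */
void pack_tz(char *name, int8_t utc_offset)
{
    uint8_t bits = (utc_offset < 0 ? 0x10 : 0x00) |
                   ((uint8_t)(utc_offset < 0 ? -utc_offset : utc_offset) & 0x0F);
    for (uint8_t i = 0; i < 5; i++) {
        name[i] &= 0x7F;              /* clear any old MSbit          */
        if (bits & (1 << i))
            name[i] |= 0x80;          /* OR one bit into bit 7        */
    }
}

int8_t unpack_tz(const char *name)
{
    uint8_t bits = 0;
    for (uint8_t i = 0; i < 5; i++)
        if (name[i] & 0x80)
            bits |= (1 << i);
    int8_t mag = bits & 0x0F;
    return (bits & 0x10) ? -(int8_t)mag : (int8_t)mag;
}

/* Demo: pack offset -6 into a name, then recover both parts. */
int tz_demo(void)
{
    char name[7] = "ALICE ";
    pack_tz(name, -6);
    if ((name[0] & 0x7F) != 'A') return 99;  /* ASCII part intact? */
    return unpack_tz(name);                  /* the stored offset  */
}
```

The game code only ever sees `name[i] & 0x7F`, so existing string handling keeps working unchanged.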
User avatar
kivan117
Posts: 73
Joined: Sat Mar 14, 2015 5:43 am

Re: Uzenet

Post by kivan117 »

Been brainstorming a bit about the best way to handle the networking for Ghosty Ghost, and if I'm not careful it'll spiral out of control really fast with variables and buffers tracking what needs to be sent, what was sent last in case it needs resending, what reply I'm expecting so I know how to deal with incoming messages, etc. In your testing, has it been pretty common to only have part of a message available from the UART at one time? Also, aside from waiting for a timeout, is there any way to know the module is even there and functioning?

High level, my plan is basically to have a Uzenet controller/state machine. It gets a chance every so often (probably every ___ frames) to read and then send packets (in that order so I don't destroy incoming data). Basically the game just does what it already does and treats the Uzenet controller like a lazy mailman. Whenever the game needs a score sent, or a request for scores, it sets the contents of a buffer to the message and sets a bool letting the controller know there's a message needing to be sent. The rest (sending those messages, reading incoming messages and stripping useless AT command junk from them, setting game variables based on incoming score data) is all going to be handled by the Uzenet controller, completely independent of the game. The game will just assume that whatever data it has is up to date at all times. I plan on taking the lazy way out and having the game send every score > 0 every time, letting the server decide if they're worthy, and only asking for updated scores when the user requests to see online scores (show what we have now, because we assume we always have up-to-date data; send a request for new ones; update the data when it arrives, thus instantly displaying the updated scores).
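A minimal sketch of that lazy-mailman loop, with the UART and server reduced to plain byte counts so the state flow is visible. Every name here is hypothetical, not actual Ghosty Ghost code:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical "lazy mailman" controller: the game only sets
   msg_pending + msg_buf; the controller owns all UART traffic. */
enum uz_state { UZ_IDLE, UZ_AWAIT_REPLY };

struct uzenet {
    enum uz_state state;
    bool  msg_pending;     /* game raises this              */
    char  msg_buf[32];     /* game fills this               */
    int   expected;        /* bytes the reply must contain  */
    int   received;        /* bytes seen so far             */
    char  last_sent[32];   /* stands in for the real UART   */
};

/* Called every few frames: read first, then send, so incoming
   data is never clobbered by our own transmission. */
void uzenet_tick(struct uzenet *u, int new_bytes)
{
    u->received += new_bytes;                 /* "read" phase       */
    switch (u->state) {
    case UZ_IDLE:
        if (u->msg_pending) {                 /* "send" phase       */
            strcpy(u->last_sent, u->msg_buf);
            u->msg_pending = false;
            u->received = 0;
            u->state = UZ_AWAIT_REPLY;
        }
        break;
    case UZ_AWAIT_REPLY:
        if (u->received >= u->expected)       /* full reply is in   */
            u->state = UZ_IDLE;
        break;
    }
}

/* Demo: queue one message, then feed reply bytes in two chunks. */
int uzenet_demo(void)
{
    struct uzenet u = { UZ_IDLE, true, "SCORE 120", 5, 0, "" };
    uzenet_tick(&u, 0);              /* sends, starts waiting       */
    uzenet_tick(&u, 3);              /* partial reply: keep waiting */
    if (u.state != UZ_AWAIT_REPLY) return -1;
    uzenet_tick(&u, 2);              /* reply complete              */
    return u.state == UZ_IDLE ? 0 : -1;
}
```

The point of the sketch is the ordering: bytes are drained before anything is sent, and the game never blocks on the network.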

On the high-level side it seems like it won't be too hard; I've already done some testing with the chat program in the past and the basics make sense. But it's the low-level stuff relating to the AT command chatter that put me off from dealing with it back when I was working on the updated chat program. Any design tips for handling the incoming stuff?
User avatar
D3thAdd3r
Posts: 3221
Joined: Wed Apr 29, 2009 10:00 am
Location: Minneapolis, United States

Re: Uzenet

Post by D3thAdd3r »

kivan117 wrote:Also, aside from waiting for a timeout, is there any way to know the module is even there and functioning?
The shortest method is sending "AT\r\n"; you should quickly get back "OK\r\n". It's probably best not to do that while it is in the middle of something else, like scanning for APs (which takes quite a while), connecting, right after a send, etc. The newest firmware seems to recover from that, as far as I've seen, but the older firmware would say "Busy now..." and be worthless from there on until you did a hard reset. That was truly frustrating when trying to debug stuff before I knew what was going on; I find the newest firmware reliable.
kivan117 wrote:High level, my plan is to basically have a Uzenet controller/state machine.
All those details you mention: it sounds like we are doing the exact same concept. Mine is very lazy also, to avoid using too many cycles per frame. I removed a lot of the checks for things that should never fail because, well, they never seem to fail anymore, and it makes the code faster/smaller/simpler. The caveat of sending UART traffic while it's busy still remains, but my state machine makes sure it's working by verifying baud/functionality with "AT\r\n" at step 0; after that, things like "AT+CIOBAUD", "AT+CIPMUX=0", "AT+CIPMODE=x", even "AT+CIPSTART.." I treat as deterministic. I give a wait frame in between and "eat" whatever response it sent at the next step without checking. The only things I don't take for granted are network things like "AT+CIPSEND=..", "AT+CWJAP", "AT+CWLAP", so I do hard error checking there. If there were earlier errors, they will get caught there, since it won't send the data. My error handling is simply setting the state machine back to step 0, which resets the module and tries again. I think of it basically as a script, and it's reliable.
kivan117 wrote:Any design tips for handling the incoming stuff?
It is complex to write code that works with partial packets, since of course the data might not all be there until the next frame. I rely on the fact that the server will always send me an exact number of bytes per the request made, and it will never send extra data I didn't ask for, so you do have complete control over everything that comes in the form of "+IPD...", except for how long it takes to get it all. I propose not reading any data after a send until you have at least the correct number of bytes in the UART buffer, including the "+IPD.." bytes. Then you don't have to deal with partial data, and you can know if an error occurred simply by checking the first byte for anything besides 'S' in "SEND OK\r\n"; then eat "SEND OK\r\n+IPD...", then use the raw data you have left. I plan to do this until I find evidence it's not reliable, and I hope you do too, so we can determine which assumptions are safe to make and our lives get easier as community experience grows. This is still the experimental stage, after all! Also, you have to figure that if you do discover an error, there is not much better to do than simply start from the beginning. Again, I am not seeing this happen personally.
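Once the full byte count has arrived, the "check 'S', then eat the header" step could look roughly like this in C (a sketch of the idea above, not the real parser; the function name is made up):

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical sketch of the "wait, then eat" scheme: once the buffer
   holds the full expected response, check the first byte for the 'S'
   of "SEND OK", then skip past the "+IPD,<len>:" header to reach the
   raw payload. Returns NULL on any error. */
const char *eat_send_ok_ipd(const char *buf)
{
    if (buf[0] != 'S')                 /* anything else is an error  */
        return NULL;
    const char *ipd = strstr(buf, "+IPD");
    if (!ipd)
        return NULL;
    const char *colon = strchr(ipd, ':');
    return colon ? colon + 1 : NULL;   /* payload starts after ':'   */
}

/* Demo: a good response yields its payload; an error yields NULL. */
int ipd_demo(void)
{
    const char *p = eat_send_ok_ipd("SEND OK\r\n+IPD,5:HELLO");
    if (!p || strcmp(p, "HELLO") != 0) return 1;
    if (eat_send_ok_ipd("ERROR\r\n") != NULL) return 2;
    return 0;
}
```

Because the byte count is verified before this runs, the parse never has to handle a header split across frames.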

Edit: BTW, that doesn't cover all cases. For instance, we have to keep in mind that data bigger than 256 bytes seems to come in separate "+IPD" chunks split across a max size of 256 each; a simple byte counter should work to eat "+IPD" again, but I haven't needed it yet. Another thing to consider is that at any point in time you could get disconnected from wifi, the internet could go down, the server could go down, etc. Again, that should be very rare, but it will instantly send "UNLINKED" or whatever (I forget). I implemented a timeout while waiting for the number of bytes the payload should have, so if I have not received all the bytes I should get from the server for a while, I restart the whole thing, which should cover that. Error messages should never interrupt the middle of an "+IPD" but should wait until it's over, so data corruption should not be an issue. The server always disconnects a Uzebox instantly if it sends something with a bad format (like not enough bytes, or too many bytes for the num and len of the entries you specify), specifically to prevent issues; that should always indicate bad programming. Even with the artificial delays it currently has, I do not actually see the 8266 breaking packets in two or combining packets via Nagle's algorithm. That's weird, but I have not spent a lot of time watching it with Wireshark yet, nor sent much of a variety of data to it.
User avatar
D3thAdd3r
Posts: 3221
Joined: Wed Apr 29, 2009 10:00 am
Location: Minneapolis, United States

Re: Uzenet

Post by D3thAdd3r »

Some more study into the problem we will eventually run into when doing precise-timing networked stuff (basically using what other people found out): there is no trick with any currently available firmware. Sending small packets back to back results in a delay of 300-400 ms, even if the 8266 gets the ACK for the first one in <1 ms, because of the way messages are queued in the 8266 OS. It's very TCP-like, optimized for large packets, but not good for us. Finally I see an explanation for why the ping times to the 8266 are so lousy over a local network (which Nagle didn't explain for ICMP), but other people's statements lead me to believe we can get it down to 20 ms or less added on top of the normal latency you would get through your PC on ethernet, which would be extraordinary once I see it in real life.

Alec, I think you are the only one who could have a good idea on this. Do you think it would be possible to operate reliably at 115200 baud with a sufficient buffer, prolonging HSYNC periods if needed (would TVs still sync)? One cool use of Uzenet and the SPI RAM would be streaming music, akin to UzeAmp or maybe just added to it. If we could get >8 kB/s of bandwidth through reliably, it should be enough for reasonable quality. We could try to keep the 128 kB buffer filled to cover any lost packets, etc.; I don't think we'd even need to resort to UDP. Would that need to work using the non-inline mixer and filling the buffer each frame, or is it more complicated?
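Rough numbers for the streaming idea, with the stream rate being an assumption on my part: 115200 baud 8N1 gives 11520 B/s of raw UART throughput, and at an assumed ~8000 B/s audio data rate, a full 128 kB SPI RAM buffer would ride out a stall of roughly 16 seconds:

```c
/* Back-of-envelope buffer math for the streaming idea above.
   The 8000 B/s stream rate is an assumption, not a measured figure. */
int stall_seconds(void)
{
    const int spi_ram   = 128 * 1024;   /* 131072 bytes              */
    const int stream_Bs = 8000;         /* assumed audio data rate   */
    return spi_ram / stream_Bs;         /* whole seconds of cushion  */
}
```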
User avatar
kivan117
Posts: 73
Joined: Sat Mar 14, 2015 5:43 am

Re: Uzenet

Post by kivan117 »

The 8266 and I have gone round and round this weekend, and for now I give up. Without a hardware debugger or any way to see the UART traffic, it's too difficult to tell what is happening and where things are going wrong. Just a bunch of stabbing in the dark and hoping. On the upside, I have the visual interface aspect of the project pretty much together, and all I need is a working state machine to plug in to get this baby running. Hopefully you're having better luck than I did. The best I could get it to do was reliably recognize that there was/was not a Uzenet module. Even just connecting was iffy. I also realized, when I went to merge the networking code into the main gghost project... I'd previously been building against an old version of the kernel, so I had a great time running in circles with the makefile until I figured that out. :|

Whenever you get your project up and running I'll have to see how you did the code, because my way is not working. Conceptually it's a very simple cycle and it should work, but in reality it doesn't and I have no way of knowing exactly why since I can't see what's happening on the hardware.
User avatar
uze6666
Site Admin
Posts: 4801
Joined: Tue Aug 12, 2008 9:13 pm
Location: Montreal, Canada
Contact:

Re: Uzenet

Post by uze6666 »

D3thAdd3r wrote:Alec, I think you are the only one who could have a good idea on this. Do you think it would be possible to operate reliably at 115200 baud with a sufficient buffer, prolonging HSYNC periods if needed (would TVs still sync)? One cool use of Uzenet and the SPI RAM would be streaming music, akin to UzeAmp or maybe just added to it. If we could get >8 kB/s of bandwidth through reliably, it should be enough for reasonable quality. We could try to keep the 128 kB buffer filled to cover any lost packets, etc.; I don't think we'd even need to resort to UDP. Would that need to work using the non-inline mixer and filling the buffer each frame, or is it more complicated?
Technically, if you can process the bytes fast enough in the main loop, I can't see why it would not be reliable at 115200. The rule is a speed low enough to not have more than one byte received per scan line. For music streaming, I guess it could work with the vsync mixer, but that would be a lot of buffer copying. Though I did not give it a lot of thought, it seems a dedicated inline audio mixer would probably be a better choice.
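The one-byte-per-scanline rule can be checked with quick arithmetic: 8N1 framing costs 10 bits per byte, and NTSC HSYNC runs at roughly 15734 lines per second, so 115200 baud (11520 B/s) stays just under the line rate:

```c
/* Does a given baud rate respect the one-byte-per-scanline rule?
   8N1 framing is 10 bits per byte; NTSC HSYNC is ~15734 lines/s. */
int fits_one_byte_per_line(long baud)
{
    const long lines_per_sec = 15734;    /* NTSC scanline rate       */
    return (baud / 10) < lines_per_sec;  /* bytes/s vs lines/s       */
}
```

By this measure 115200 passes (11520 bytes/s) while the next standard step up, 230400, does not.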
User avatar
D3thAdd3r
Posts: 3221
Joined: Wed Apr 29, 2009 10:00 am
Location: Minneapolis, United States

Re: Uzenet

Post by D3thAdd3r »

uze6666 wrote:The rule is a speed low enough to not have more than one byte received per scan line. For music streaming, I guess it could work with the vsync mixer, but that would be a lot of buffer copying.
Ah, I didn't do the basic math to see that HSYNC at ~15 kHz should be enough for 115200 with a 512-byte buffer. Without blitting many RAM tiles it would seem OK on speed, or we could shorten the screen a bit if it came to it. I think SPI RAM would be needed due to fluctuating network throughput, but I am not able to "gut feel" estimate cycles like with mode 3, since I've never used it. An assembly solution does sound a lot more realistic; it's pretty interesting.
kivan117 wrote:Conceptually it's a very simple cycle and it should work, but in reality it doesn't and I have no way of knowing exactly why since I can't see what's happening on the hardware.
I agree it is frustrating. I have been "cheating" using the very slow version of Uzem with a "working" 8266, and the faster threaded version that does not work at all (except to show your UART streams at full speed). My mkII ISP went to hell; it works when it wants to, and lately it hasn't wanted to on multiple Uzeboxes. I had to stop; I was about to grab my BFH (a type of hammer) and have a freak-out on it... Anyway, even when it does work it is too slow programming each build, so today I did what I should have done 2 months ago and threw some productive hours at the threaded Uzem+8266. As usual it's "not done but closer®". I am postponing all finish work on the game until there is working emulation and the high score server is at least correct.

BTW, can you run any of the existing Uzenet tests, like Uzenetdemo (from GitHub) or chatter.hex from the attachment in this thread? If you can send "AT\r\n" and get "OK\r\n" back, then you have the baud rate right; if not, it would give pretty confusing results to a state machine. I don't know what firmware your module has, so it could be one of many default baud rates at startup. I'm assuming you have connected it to your home wifi at some point so that it remembers the credentials. I found a claim that sending "AT\r" (no '\n') repeatedly for a while would make certain firmwares auto-detect the baud rate you are using and switch to it, but I have not been able to try it on hardware yet. It must at least be possible, since the firmware upgrade mode does it.
User avatar
uze6666
Site Admin
Posts: 4801
Joined: Tue Aug 12, 2008 9:13 pm
Location: Montreal, Canada
Contact:

Re: Uzenet

Post by uze6666 »

Your latest post makes me think that the initialization procedure (and ideally Uzem) should account for different possible baud rates to be robust. At startup it should try the most common baud rates in order (i.e. 57600, 9600, 115200, etc.) until it receives an OK. The code should never assume a specific baud rate. Though I'm unsure it's a good idea, I could allow more flexibility in terms of more throughput (and more buffer) vs. more free RAM.
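That scan-until-OK init could be sketched like this, with the actual UART probe abstracted behind a function pointer since it depends on kernel specifics. Everything here (names, the rate list) is illustrative, not a finished implementation:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical rate list: common defaults, tried in order. */
static const uint32_t rates[] = { 57600, 9600, 115200, 38400 };

/* Try "AT\r\n" at each rate via the supplied probe; return the
   first rate that answered "OK\r\n", or 0 if none did. */
uint32_t detect_baud(bool (*probe_at)(uint32_t baud))
{
    for (size_t i = 0; i < sizeof rates / sizeof rates[0]; i++)
        if (probe_at(rates[i]))      /* got "OK\r\n" back?          */
            return rates[i];
    return 0;                        /* no module, or unknown rate  */
}

/* Example stub: pretend the module is listening at 9600. */
static bool demo_probe(uint32_t baud) { return baud == 9600; }
```

The probe itself would need a short timeout per rate, which is why a hard-wired single rate is so much simpler on hardware.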
User avatar
D3thAdd3r
Posts: 3221
Joined: Wed Apr 29, 2009 10:00 am
Location: Minneapolis, United States

Re: Uzenet

Post by D3thAdd3r »

uze6666 wrote:Your latest post makes me think that the initialization procedure (and ideally uzem) should account for different possible baud rates to be robust.
I got a chance to do some major refactoring on the ESP8266 emulation: organized more nicely, and in C++ now for no reason. It's thread based; you probably saw how slow the non-threaded one was. The UART aspect is currently emulated as an entirely separate device of sorts. It has two separate sets of circular Tx/Rx buffers, on the Uzebox side and the ESP8266 side, to allow for threads stalling/running at different speeds (it should be full speed with network activity on a fast PC). It also has a simple mechanism for "locking": things can always go into a device's UART buffer on its own side, but cannot move from the Tx buffer of one device to the Rx buffer of the other unless the other device/thread has specifically set a flag to allow it. The device then transmits one byte into the buffer and clears the flag, letting the other thread know it's done and new data is available. It's the best and fastest way I can think of; cycle-by-cycle emulation would be lovely but not realistic for speed.
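A single-threaded model of that flag handshake might look like the sketch below (the real emulator is threaded C++; this just shows the byte-moves-only-when-flagged rule, with made-up names):

```c
#include <stdint.h>
#include <stdbool.h>

#define RING 64

/* One side of the link: Tx/Rx rings plus the receiver's flag. */
struct side {
    uint8_t tx[RING], rx[RING];
    uint8_t tx_head, tx_tail, rx_head;
    bool    rx_ready;                /* receiver's "go ahead" flag  */
};

void tx_put(struct side *s, uint8_t b)       /* always allowed      */
{
    s->tx[s->tx_head++ % RING] = b;
}

/* Move one byte from 'from' Tx to 'to' Rx, honoring the flag;
   the transfer clears the flag so the receiver must re-arm it. */
bool transfer(struct side *from, struct side *to)
{
    if (!to->rx_ready || from->tx_tail == from->tx_head)
        return false;
    to->rx[to->rx_head++ % RING] = from->tx[from->tx_tail++ % RING];
    to->rx_ready = false;
    return true;
}

/* Demo: 'A' only crosses over once the receiver raises its flag. */
uint8_t handoff_demo(void)
{
    struct side uze = {0}, esp = {0};
    tx_put(&uze, 'A');
    if (transfer(&uze, &esp)) return 0;  /* must fail: flag unset */
    esp.rx_ready = true;
    if (!transfer(&uze, &esp)) return 0; /* now it must succeed   */
    return esp.rx[0];
}
```

In the threaded version the flag would need to be atomic, but the one-byte-per-grant discipline is the same.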

So my question in all that: do you have any ideas or pseudocode for how to roughly emulate different baud rates? I could see no possible way that was remotely fast; I think you would essentially have to resort to very low-level emulation of the UART, which I don't think would be very accurate anyway or make much of a difference (is one type of gibberish really better than another fake kind..). Right now I have some things in place, but not fully implemented, to screw the bytes up if the baud rates do not match. It doesn't take into consideration that there should actually be fewer bytes received if the Uzebox was running at a faster baud rate, etc. I also have not verified this, but I suspect the UART bits in the '644 relating to framing errors would get set very frequently on real hardware when receiving at the wrong baud rate. The real question: the UART in the kernel is pretty much canned up into the best product possible at this point, agreed? Do you think it is worth it to emulate such things, when it would only be useful for some advanced UART development (and I can't think of anything to develop further)? It seems to me so far that it's possible to get away with a lot of abstraction and still be functionally the same. I agree the final product should start at a specific baud rate and not work until the proper procedure has taken place to match baud, like the real hardware.
User avatar
uze6666
Site Admin
Posts: 4801
Joined: Tue Aug 12, 2008 9:13 pm
Location: Montreal, Canada
Contact:

Re: Uzenet

Post by uze6666 »

So your threaded code results in roughly the same byte rate as the real thing? Does it support a single rate, or does it use the UART baud registers?
D3thAdd3r wrote:So my question in all that: do you have any ideas or pseudocode for how to roughly emulate different baud rates? I could see no possible way that was remotely fast; I think you would essentially have to resort to very low-level emulation of the UART, which I don't think would be very accurate anyway or make much of a difference (is one type of gibberish really better than another fake kind..). Right now I have some things in place, but not fully implemented, to screw the bytes up if the baud rates do not match. It doesn't take into consideration that there should actually be fewer bytes received if the Uzebox was running at a faster baud rate, etc. I also have not verified this, but I suspect the UART bits in the '644 relating to framing errors would get set very frequently on real hardware when receiving at the wrong baud rate. The real question: the UART in the kernel is pretty much canned up into the best product possible at this point, agreed? Do you think it is worth it to emulate such things, when it would only be useful for some advanced UART development (and I can't think of anything to develop further)? It seems to me so far that it's possible to get away with a lot of abstraction and still be functionally the same. I agree the final product should start at a specific baud rate and not work until the proper procedure has taken place to match baud, like the real hardware.
One thing we could do is to use the eeprom to store the last set Uzenet uart speed. I think that would avoid the whole issue.
Post Reply