Sending "Large" Amounts of Data

Lap
Party member
Posts: 256
Joined: Fri Apr 30, 2010 3:46 pm

Sending "Large" Amounts of Data

Post by Lap »

So I've had basic multiplayer working fine for a few months and everything has been working well up until now. I had a simple system that involved clients loading up a map file from their own computer and the server would send them everything else they need. Recently, I decided to also allow sending of map files and I've noticed some problems when I start sending larger amounts of data.

I am occasionally getting dropped packets, even using TCP. I just wanted to run my plan by everyone.

Currently I:

-Break data up into relevant tables.
-Send tables smaller than 50,000 bytes directly.
-Break up tables larger than 50,000 bytes.
-Send the chunks and recombine them client-side.
-Have the client re-request any malformed tables.

However, tables smaller than 50,000 bytes occasionally get sent but never arrive. Since they never even start being received, the client doesn't know to re-request them.

Solutions?

1. Queue the sending of packets. Maybe sending all this data in the same frame with no delay is too much.
2. Have the client automatically re-request missing tables after a certain timeout.

Any suggestions?
bartbes
Sex machine
Posts: 4946
Joined: Fri Aug 29, 2008 10:35 am
Location: The Netherlands
Contact:

Re: Sending "Large" Amounts of Data

Post by bartbes »

Since dropped packets can't exist with TCP (TCP automatically requests retransmission), I'm assuming the error actually takes place when sending. Have you checked the return value of sock:send()?
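For reference, LuaSocket's tcp:send() reports failures through its return values rather than raising errors: on success it returns the index of the last byte sent, and on failure it returns nil, an error string (such as "timeout" or "closed"), and the index of the last byte that did go out. A hedged sketch of checking them (`checkedSend` and its logging are my own illustration):

```lua
-- Wrap sock:send() and inspect its three return values.
-- success: returns index of last byte sent (truthy)
-- failure: returns nil, err ("timeout"/"closed"), partial index
function checkedSend(sock, data)
	local sent, err, partial = sock:send(data)
	if sent then
		return true
	end
	print(("send failed: %s (only %d of %d bytes sent)")
		:format(err, partial, #data))
	return false, err, partial
end
```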
Lap
Party member
Posts: 256
Joined: Fri Apr 30, 2010 3:46 pm

Re: Sending "Large" Amounts of Data

Post by Lap »

bartbes wrote:Since dropped packets can't exist with TCP (TCP automatically requests retransmission), I'm assuming the error actually takes place when sending. Have you checked the return value of sock:send()?
I thought that was how TCP worked... until I noticed missing packets and figured I had just remembered incorrectly.

I just checked again, and I was only printing out what was being passed to LUBE, not what was actually getting sent. Printing the actual socket results:

Code: Select all

276 nil nil
5678 nil nil
nil timeout 0 <--------Problem packets
nil timeout 0 <--------Problem packets
If I fill the tables with more bytes of data, the "nil timeout 0" errors appear even earlier in the transfer. Once the first problem packet appears, any data sent that frame after it also fails, which makes me think I'm sending too much data too fast.
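When the failure is a "timeout", the third return value of sock:send() says how many bytes actually went out, and LuaSocket's send(data, i) accepts a starting index, so the transfer can be resumed from where it stopped instead of resending the whole chunk. A sketch under those assumptions (`sendAll` and the retry limit are my own illustration):

```lua
-- Keep sending until all of data has gone out, resuming after
-- the last byte that was actually sent whenever we hit a timeout.
function sendAll(sock, data, maxTries)
	local i, tries = 1, 0
	while i <= #data do
		local sent, err, partial = sock:send(data, i)
		if sent then
			return true  -- everything from i to the end went out
		elseif err == "timeout" then
			i = partial + 1  -- resume after the last byte sent
			tries = tries + 1
			if tries > (maxTries or 10) then
				return false, "gave up after " .. tries .. " tries"
			end
		else
			return false, err  -- e.g. "closed": not recoverable here
		end
	end
	return true
end
```

Note that a blocking retry loop like this stalls the frame; in a LÖVE game the same resume logic would normally be spread across love.update(dt) calls.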
bartbes
Sex machine
Posts: 4946
Joined: Fri Aug 29, 2008 10:35 am
Location: The Netherlands
Contact:

Re: Sending "Large" Amounts of Data

Post by bartbes »

I'm guessing the buffers can't be emptied fast enough.
Lap
Party member
Posts: 256
Joined: Fri Apr 30, 2010 3:46 pm

Re: Sending "Large" Amounts of Data

Post by Lap »

bartbes wrote:I'm guessing the buffers can't be emptied fast enough.
Is this a problem where I should go into LUBE and modify the basic functions, or is it something I should deal with by changing the amount and frequency of the data I send?
bartbes
Sex machine
Posts: 4946
Joined: Fri Aug 29, 2008 10:35 am
Location: The Netherlands
Contact:

Re: Sending "Large" Amounts of Data

Post by bartbes »

It's OS level, and even there I'm not sure changing a setting would actually improve anything; fighting the symptoms is rarely a good approach. I guess you should look into reducing traffic.
Lap
Party member
Posts: 256
Joined: Fri Apr 30, 2010 3:46 pm

Re: Sending "Large" Amounts of Data

Post by Lap »

Hhhmmmm... I guess I'll just have every send command go into a queue of some sort, and a couple of times a second it will send a few packets out. Since the sockets are actually returning timeouts, I could also simply have every packet that times out be automatically resent after a second.

Super lazy fix

Code: Select all

SendQueue = {}

function AdvancedSend(data, id)
	-- lube.server:send() returns nil on failure (e.g. a timeout),
	-- so queue the packet to be retried later
	if lube.server:send(data, id) == nil then
		print('adding to queue')
		table.insert(SendQueue, {data, id})
	end
end

LastSendQueue = 0

-- called from love.update(dt)
function ProcessSendQueue(dt)
	LastSendQueue = LastSendQueue + dt
	if LastSendQueue > 1 then
		LastSendQueue = 0
		-- drain into a local list first: packets that fail again
		-- get re-queued by AdvancedSend instead of looping forever
		-- (removing entries while iterating with ipairs skips items)
		local pending = SendQueue
		SendQueue = {}
		for _, v in ipairs(pending) do
			AdvancedSend(v[1], v[2])
		end
	end
end
Double solved:

I also noticed that large tables were often being cut in half, so I set up a system to re-request tables and combined it with the resends. Still, it seems like making every single lube.server:send() go through a queue is the only way to stop this problem at the source. There's always a chance that all the clients will request data at the same time and inevitably overload the server.

Final Overkill Solution

-Use TCP. [EDIT: There's still some problem I have with TCP where disconnecting from a server and trying to reconnect in the same session causes all incoming packets to get blocked on the client.]
-Recheck the size of large multi-packet tables to check for missing packets.
-Check for sending errors and resend.
-Allow clients to re-request specific tables.
-Have all sends go through a queue to prevent traffic jams.
