[ENet-discuss] Bandwidth/latency optimization
Jérémy Richert
jeremy.richert1 at gmail.com
Tue Dec 3 13:45:49 PST 2013
Hi all,
I am currently developing a multiplayer FPS, and I am wondering how to tune
ENet so that the network engine is as optimized as possible in terms of
bandwidth usage and latency.
I would like to have your thoughts on the following aspects:
----------------
1. Packet size
----------------
I was wondering whether it was better to send large packets or to divide
them into small packets.
Pros of large packets:
- Less overhead due to the protocols (8 bytes for UDP, 20 bytes for IPv4,
10 for ENet)
Pros of small packets:
- When a packet is lost, less data is lost
- When a reliable packet is lost, resending it costs less bandwidth
- Lower latency
From what I have read, most people agree that it is recommended to send
small packets to avoid packet fragmentation. This means that the application
has to ensure that the size of the data sent does not exceed the MTU. Also,
as the MTU depends on the route, some people recommend using packets
that will never be split, i.e. <= 576 bytes.
What is your experience on this point?
For now I have capped the data size at 1500 bytes, but I am thinking of
reducing it to improve latency. Does anyone know the typical packet loss rate?
----------------
2. Compression
----------------
Has anyone used compression in a network engine? If yes, which compression
algorithm? What was the average gain?
I have read that John Carmack used the Huffman compression in the Quake 3
network engine because it was well suited for network data compression, but
I still need to find some time to implement it in my program and do some
tests.
----------------
3. Channels
----------------
What is your network channel policy? How many channels do you use? What do
you send on each channel?
At the moment I have 2 channels: one for unreliable data (the bulk of the
traffic), and one for reliable data.
I chose this organization to avoid blocking the unreliable data while
waiting for an ACK on the reliable data. I am thinking of adding another
channel for high-priority reliable data, but I am not sure of the
benefit, as I already group the reliable data before sending it to limit
the blocking. It may be useful if the packet loss rate is too high.
----------------
4. Server cycle
----------------
I know this aspect depends on the game type, but I would be interested in
knowing how your applications work on this.
On my side, based on Valve's introduction to networking concepts (plus some
reading on the UT and Quake network engines), I have decided to implement
a 50-ms cycle on the server side. This means that the server only updates
the simulation and notifies the clients every 50 ms. In the meantime, it only
reads the network messages to empty the network event buffer and to handle
reliable packet sending.
On the client side, I have introduced a deliberate delay of 70 ms, which
goes unnoticed from the user's point of view but helps a lot with overall
fluidity, as the client will almost always already have the next world
update available (unless a packet is lost).
I will soon try to increase the server cycle time to 60 ms to save 10-20%
of bandwidth. I also plan to separate the display delay of the player (which
will stay at 70 ms) from the rest of the world (increased to 100 ms).
However, I am afraid of the impact on responsiveness.
Does anyone have a similar architecture? What is your experience with timings?
If not, has anyone already developed an FPS or another game highly dependent
on network speed? If so, what is your advice?
----------------
5. Other
----------------
If anyone has useful advice on how to improve the network engine of a
game/application, I would be pleased to hear it (well, to read it at least).
Thanks in advance for sharing your experience.
Best regards,
Jeremy Richert