[ENet-discuss] 1.2.2 and 1.3.0 *PRE*-release
M. Rijks
enet at forge.dds.nl
Thu May 20 16:00:01 PDT 2010
The 1.3 edition sounds excellent, Lee - it already includes three
wanna-haves for me. =)
I've tried catching up on range coders using a Wikipedia article, but
I have to admit that the inner workings are entirely unclear to me. :(
So I'll go with some questions whose answers I think I can handle:
1. What's the minimum packet size for this kind of compression to
become effective?
2. What kind of data is best compressed with it?
3. Is there a size penalty for attempting to compress data that can't
be compressed?
4. Do I understand correctly that compression must be enabled on the
hosts at both ends of a connection for this to work? Or are compressed
packets somehow flagged so ENet 1.3 can recognize and decompress them
on arrival? (See the sketch after these questions for what I mean by
"enabled".)
5. Wouldn't it have been more convenient to decide compression per
packet (in which case packets *would* need to be flagged, of course)?
I expect there is little sense in compressing very small packets,
especially since there may be overhead in both size and processing...
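(For context on question 4: my reading of the changelog further down is
that "enabling compression" on a host boils down to a single call, along
these lines. This is only a sketch from skimming the announcement, not
verified against the pre-release; the return-value convention in
particular is a guess.)

    #include <enet/enet.h>

    /* Sketch: enable the built-in range coder on a 1.3.0 host.
       Presumably both the client's host and the server's host need
       this, so each side can decompress what the other sends. */
    static int enable_builtin_compression (ENetHost * host)
    {
        /* I assume 0 means success here, as with most ENet calls. */
        return enet_host_compress_with_range_coder (host);
    }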
Thanks!
Martin
Quoting Lee Salzman <lsalzman1 at cox.net>:
> On packet sizes under ENet's default MTU (1400), the range coder beat
> gzip almost all the time, and was faster to boot. Of significant note
> is that I made the range coder table-less, i.e. it requires no table
> initialization before compressing a packet, never allocates, and
> operates within a small fixed amount of memory. I wanted to avoid any
> setup/teardown time per packet that would have detracted from
> performance.
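(Regarding the zlib comparison: the 1.3.0 changelog further down says
compression is also configurable through a general callback, so an
external compressor could presumably be plugged in for a head-to-head
test. The sketch below is my guess at that glue; the ENetCompressor
field names, the callback signatures, and the "return 0 to fall back to
sending uncompressed" convention are assumptions from skimming the
announcement, not verified against the pre-release headers.)

    #include <enet/enet.h>
    #include <zlib.h>
    #include <string.h>

    /* Gather the outgoing buffers and deflate them with zlib. Returning 0
       is assumed to mean "could not compress, send the packet as-is". */
    static size_t ENET_CALLBACK
    zlib_compress (void * context, const ENetBuffer * inBuffers,
                   size_t inBufferCount, size_t inLimit,
                   enet_uint8 * outData, size_t outLimit)
    {
        static enet_uint8 scratch [4096]; /* demo-sized, not thread-safe */
        size_t total = 0, i;
        uLongf destLen = (uLongf) outLimit;
        (void) context;

        if (inLimit > sizeof (scratch)) return 0;
        for (i = 0; i < inBufferCount; ++ i)
        {
            memcpy (scratch + total, inBuffers [i].data, inBuffers [i].dataLength);
            total += inBuffers [i].dataLength;
        }
        if (compress2 (outData, & destLen, scratch, (uLong) total, Z_BEST_SPEED) != Z_OK)
            return 0;
        return (size_t) destLen;
    }

    static size_t ENET_CALLBACK
    zlib_decompress (void * context, const enet_uint8 * inData, size_t inLimit,
                     enet_uint8 * outData, size_t outLimit)
    {
        uLongf destLen = (uLongf) outLimit;
        (void) context;

        if (uncompress (outData, & destLen, inData, (uLong) inLimit) != Z_OK)
            return 0;
        return (size_t) destLen;
    }

    /* Install the callbacks on a host; as with the range coder, both ends
       of the connection would have to use the same compressor. */
    static void use_zlib_compressor (ENetHost * host)
    {
        ENetCompressor compressor;
        memset (& compressor, 0, sizeof (compressor));
        compressor.compress = zlib_compress;
        compressor.decompress = zlib_decompress;
        enet_host_compress (host, & compressor);
    }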
>
> Lee
>
> Philip Bennefall wrote:
>> Hi Lee,
>>
>> Thank you very much for all the time and effort you are putting
>> into this library. I am about to integrate it into a game scripting
>> engine, and these new features will really come in handy. I'll be
>> testing it over the next few days, and will report any bugs or
>> issues that I find. I do have a quick question though. How does the
>> built-in ENet compressor compare to using an external library such
>> as zlib, both in terms of compression speed and ratios?
>>
>> Kind regards,
>>
>> Philip Bennefall
>> ----- Original Message ----- From: "Lee Salzman" <lsalzman1 at cox.net>
>> To: "Discussion of the ENet library" <enet-discuss at cubik.org>
>> Sent: Thursday, May 20, 2010 11:56 PM
>> Subject: [ENet-discuss] 1.2.2 and 1.3.0 *PRE*-release
>>
>>
>>> So, I was playing around with packet compression for Sauerbraten
>>> using an adaptive range compressor. It turned out not to work so
>>> well on Sauerbraten's data, because I had already quantized it so
>>> tightly that the gains were small. But since the packet compressor
>>> was still good on other data besides Sauerbraten's, and of higher
>>> quality than other similarly performing range compressors I could
>>> find, I decided to keep it in ENet.
>>>
>>> This also gave me a chance to break the protocol and introduce
>>> various things, so there will be a dual release: 1.2.2, which does
>>> not contain any protocol or API changes, and 1.3.0, which contains
>>> the packet compression changes amongst others. In 1.3.0 I also
>>> changed how session disambiguation works, to the effect that I cut
>>> down on the packet header size (by 4 bytes) unless the user enables
>>> checksums. Since I was free to break the API a bit, I made the
>>> channel limit something you specify at host creation, and also
>>> added a connect data field to connect events, since someone wanted
>>> that a while ago. Also noteworthy is that even in 1.2.2, packet
>>> checksums can be enabled by setting a callback, so they no longer
>>> break binary compatibility amongst same-numbered builds, which
>>> should make it easier on those Linux distributions that distribute
>>> ENet as a shared library.
>>>
>>> So now I think the feature set is mostly complete, and I would
>>> like people to test the pre-release packages to make sure there
>>> are no issues with them, after which I will do a real release if
>>> everything is okay.
>>>
>>> 1.2.2 pre-release: http://lee.fov120.com/enet-1.2.2-not-released.tar.gz
>>> 1.3.0 pre-release: http://lee.fov120.com/enet-1.3.0-not-released.tar.gz
>>>
>>> Note that CVS currently only contains the 1.2.2 changes. The 1.3.0
>>> pre-release was taken from my private Sauerbraten tree, and will
>>> only be stuffed into CVS when I am ready for the final release.
>>>
>>> Proposed ChangeLogs:
>>> ENet 1.3.0 (May 20, 2010):
>>>
>>> * enet_host_create() now requires the channel limit to be specified as
>>> a parameter
>>> * enet_host_connect() now accepts a data parameter which is supplied
>>> to the receiving host in the event data field for a connect event
>>> * added an adaptive order-1 range coder as a built-in compressor option
>>> which can be set with enet_host_compress_with_range_coder()
>>> * added support for packet compression configurable with a callback
>>> * improved session number handling to not rely on the packet checksum
>>> field, saving 4 bytes per packet unless the checksum option is used
>>> * removed the dependence on the rand callback for session number handling
>>>
>>> Caveats: This version is not protocol compatible with the 1.2 series or
>>> earlier. The enet_host_connect and enet_host_create API functions require
>>> supplying additional parameters.
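(For anyone else porting, my reading of those two API changes amounts to
roughly the following. The parameter positions are an assumption from
the announcement, not checked against the actual 1.3.0 headers.)

    #include <enet/enet.h>
    #include <stdio.h>

    static void connect_with_data (void)
    {
        ENetAddress address;
        ENetHost * client;
        ENetPeer * peer;

        if (enet_initialize () != 0) return;

        /* 1.3.0: the channel limit is now an explicit parameter. */
        client = enet_host_create (NULL /* client-only host */,
                                   1    /* one outgoing connection */,
                                   2    /* channel limit */,
                                   0, 0 /* unthrottled bandwidth */);
        if (client == NULL) return;

        enet_address_set_host (& address, "localhost");
        address.port = 1234;

        /* 1.3.0: the trailing data value is delivered to the other side in
           the data field of its ENET_EVENT_TYPE_CONNECT event, e.g. on the
           server:
               case ENET_EVENT_TYPE_CONNECT:
                   printf ("client connected, data = %u\n", event.data);
                   break;                                                   */
        peer = enet_host_connect (client, & address, 2 /* channels */, 42 /* data */);
        if (peer == NULL)
            fprintf (stderr, "no available peers\n");

        enet_host_destroy (client);
        enet_deinitialize ();
    }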
>>>
>>> ENet 1.2.2 (May 20, 2010):
>>>
>>> * checksum functionality is now enabled by setting a checksum callback
>>> inside ENetHost instead of being a configure script option
>>> * added totalSentData, totalSentPackets, totalReceivedData, and
>>> totalReceivedPackets counters inside ENetHost for getting usage
>>> statistics
>>> * added enet_host_channel_limit() for limiting the maximum number of
>>> channels allowed by connected peers
>>> * now uses dispatch queues for event dispatch rather than potentially
>>> unscalable array walking
>>> * added a no_memory callback that is called when a malloc attempt fails,
>>> such that if no_memory returns instead of aborting (the default behavior
>>> is to abort), the error is propagated to the return value of the API calls
>>> * now uses packed attribute for protocol structures on platforms with
>>> strange alignment rules
>>> * improved autoconf build system contributed by Nathan Brink allowing
>>> for easier building as a shared library
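(A quick sketch of how I expect the new 1.2.2 host additions to be used;
the field and function names are taken from the changelog above, but I
haven't compiled against the pre-release yet.)

    #include <enet/enet.h>
    #include <stdio.h>

    static void show_new_host_bits (ENetHost * host)
    {
        /* Cap how many channels connected peers may negotiate. */
        enet_host_channel_limit (host, 4);

        /* The new traffic counters live directly on the host. */
        printf ("sent %u bytes in %u packets, received %u bytes in %u packets\n",
                host -> totalSentData, host -> totalSentPackets,
                host -> totalReceivedData, host -> totalReceivedPackets);
    }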
>>>
>>> Caveats: If you were using the compile-time option that enabled checksums,
>>> make sure to set the checksum callback inside ENetHost to enet_crc32 to
>>> regain the old behavior. The ENetCallbacks structure has added new fields,
>>> so make sure to clear the structure to zero before use if
>>> using enet_initialize_with_callbacks().
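(And a sketch of the migration notes in that caveat: zeroing
ENetCallbacks before use and re-enabling checksums via the callback.
Again this is my reading of the announcement rather than verified code;
in particular I'm assuming the field is literally named "checksum" and
that no_memory takes no arguments.)

    #include <enet/enet.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void ENET_CALLBACK
    out_of_memory (void)
    {
        /* Returning here (instead of aborting) should make the failed ENet
           call report the error through its return value. */
        fputs ("enet: allocation failed\n", stderr);
    }

    int main (void)
    {
        ENetCallbacks callbacks;
        ENetAddress address;
        ENetHost * server;

        /* New fields were added to ENetCallbacks, so zero the struct first. */
        memset (& callbacks, 0, sizeof (callbacks));
        callbacks.no_memory = out_of_memory;
        if (enet_initialize_with_callbacks (ENET_VERSION, & callbacks) != 0)
            return EXIT_FAILURE;

        address.host = ENET_HOST_ANY;
        address.port = 1234;
        server = enet_host_create (& address, 32, 0, 0); /* 1.2.2 signature */
        if (server == NULL)
            return EXIT_FAILURE;

        /* Checksums are no longer a configure-time option; installing
           enet_crc32 as the checksum callback restores the old behavior. */
        server -> checksum = enet_crc32;

        /* ... run the host ... */

        enet_host_destroy (server);
        enet_deinitialize ();
        return EXIT_SUCCESS;
    }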