The UK Home Automation Archive


The UKHA-ARCHIVE IS CEASING OPERATIONS 31 DEC 2024



RE: xAP EOM identifier in xAP v1.3




How big are the buffers on your device?

What is it BTW? I'm sure we're all curious now (or have I missed/forgotten
an earlier post)?



From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Neil Wrightson
Sent: 17 September 2009 13:55
To: xAP_developer@xxxxxxx
Subject: RE: [xAP_developer] xAP EOM identifier in xAP v1.3





That's why I had length=nnnn    (4 digit field length, padded 0's as per my
header example)

Actually, I was thinking in HEX not decimal, so nnnn in hex allows up to 65535, not 9999.

This style of thing would work with TCP or UDP.

The UDP protocol does support smaller message sizes for devices that have
small buffers, e.g. a 256-byte buffer.

This would then mean that a typical large xAP message, say 1000 bytes,
would be sent in 4 separate UDP messages.



Regards,

Neil Wrightson.
Skype : Neil_Wrightson





_____

From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Edward Pearson
Sent: Thursday, 17 September 2009 10:47 PM
To: xAP_developer@xxxxxxx
Subject: RE: [xAP_developer] xAP EOM identifier in xAP v1.3



The trouble is that to generate such a message you need to know how long
it's going to be before you generate it and then inserting the length field
could change the length.  What you need is a double-<lf> message
terminator!

Are you actually seeing any cases of fragmentation?

From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Neil Wrightson
Sent: 17 September 2009 13:32
To: xAP_developer@xxxxxxx
Subject: RE: [xAP_developer] xAP EOM identifier in xAP v1.3



Just out of curiosity, how many things would break if the xAP header had a
message length?

I.e. add a length=nnnn line.

xap-header
{
v=12
hop=1
uid=FF123400
class=xap-temp.notification
source=ACME.thermostat.lounge
length=0128
}
temp.current
{
temp=25
units=C
}

In my embedded world, when I receive a UDP message, I also get a message
size. If I could compare this against the message size in the header area,
I would know whether I had a complete xAP UDP message or it had been
fragmented.

Apart from that, it also makes the loop decoding of the message a bit
easier.
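A minimal sketch of how Neil's fixed-width length field could be built (Python, purely illustrative; the header values are copied from his example below in the thread, the hex encoding follows his follow-up, and the function name is mine). Because the field is always exactly four hex digits, patching in the real value never changes the size it reports:

```python
def build_message(body_blocks: str) -> bytes:
    # Fixed-width, zero-padded 4-hex-digit length field: replacing the
    # placeholder with the real value never changes the overall size.
    template = (
        "xap-header\n{\nv=12\nhop=1\nuid=FF123400\n"
        "class=xap-temp.notification\nsource=ACME.thermostat.lounge\n"
        "length=0000\n}\n" + body_blocks
    )
    return template.replace(
        "length=0000", "length=%04X" % len(template), 1
    ).encode("ascii")

msg = build_message("temp.current\n{\ntemp=25\nunits=C\n}\n")
# A receiver can now compare the datagram size against the header field:
assert int(msg.decode("ascii").split("length=")[1][:4], 16) == len(msg)
```

This also shows why the "inserting the length changes the length" objection raised later in the thread only bites when the field is variable-width.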

Regards,

Neil Wrightson.
Skype : Neil_Wrightson

_____

From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Edward Pearson
Sent: Thursday, 17 September 2009 10:16 PM
To: xAP_developer@xxxxxxx
Subject: RE: [xAP_developer] xAP EOM identifier in xAP v1.3



Sigh.

You misread. I said (or at least intended to say) enforcing the transport
wrapper on >UDP< would break everything (obviously). My experience of it
on TCP was good.

There's no absolute need for the delimiter, but it would help implementers,
especially early adopters, and adoption is a good thing. If it is cruft,
it's only one little byte of it.

From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Patrick Lidstone
Sent: 17 September 2009 10:51
To: xAP_developer@xxxxxxx
Subject: Re: [xAP_developer] xAP EOM identifier in xAP v1.3



Cool!

I'm not sure the transport wrapper over TCP 'breaks everything' - I've used
it for TCP routing of xAP over the internet framed using the transport
wrapper for several years now (I even demoed it as a means of remotely
accessing my home intranet via xAP at the inaugural HA weekend meet many
moons ago), but perhaps I'm missing something from your requirements? All
the tcp hub has to do is strip off the start and end delimiter before
rebroadcast, and likewise frame any received udp messages with an <stx>,
<etx> if operating bi-directionally. Admittedly it's an additional
(minimal) cost that isn't present with the double-0A scheme, but the
benefits of being able to detect the start of frame, and of being able to
apply this approach to any async transport, not just tcp/ip, outweigh that
in my mind. And it is in the spec already ;-)
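The strip/frame round trip Patrick describes is small. A sketch (Python; assumes the usual ASCII STX/ETX values and omits the optional CRC, and a real hub would buffer partial frames rather than assume one whole frame per read):

```python
STX, ETX = b"\x02", b"\x03"  # conventional ASCII control values

def frame(udp_msg: bytes) -> bytes:
    """Wrap a message received on UDP before writing it to the TCP stream."""
    return STX + udp_msg + ETX

def unframe(chunk: bytes) -> bytes:
    """Strip the delimiters before rebroadcast on UDP.
    Assumes the chunk holds one complete frame."""
    start = chunk.index(STX) + 1
    return chunk[start:chunk.index(ETX, start)]

msg = b"xap-header\n{\nhop=1\n}\n"
assert unframe(frame(msg)) == msg
```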

Personally, I don't see a need for a delimiter at the end of a message in
UDP xAP. I'm sure I'm going to be outvoted, but look at all the other UDP
protocols out there - it's not common practice because it's not technically
necessary, and as such it's just adding unnecessary cruft to xAP!

Cheers
Patrick





2009/9/17 Edward Pearson <edward.mailgroup@xxxxxxx>

You're right, to get a connection in good time that's not statically
configured it'd need a request/offer message pair.

The experimental hub I have here is designed to address one specific
problem only - unreliability over WiFi.

It runs as a master/slave arrangement. In master mode it accepts incoming
tcp connections on port 3639. In slave mode (deployed on the WiFi machine)
it's configured (statically) with the IP address or hostname of the master
hub to which it connects over tcp. The tcp connection is used one-way only,
from master to slave. The master forwards all xAP messages it receives (on
udp) to all its local udp clients (as usual) and to any connected tcp
clients. The slave forwards all messages received on its tcp connection to
its local clients on udp. Works a treat. Now I can lie on the sofa
debugging Viewer v4 (it'll be out really, really soon now I promise) on my
WiFi laptop without wondering where 15% of the messages have gone.

I mention all this mainly (and at risk of getting back to Kevin's original
point of this thread, which I'm feeling guilty of hijacking horribly)
because I needed to address the problem of the segregation of messages in
a stream to get this to work. So I experimented with several schemes,
getting a good feel for the practical pros and cons of each. Initially I
went for the STX/ETX method from the transport wrapper since it's what's
in the existing spec (but opting out of the CRC). Later I used the double
LF termination. The transport wrapper is somewhat easier to work with. For
instance, it's easier to re-sync to a message start with the transport
wrapper: just wait for the next STX (a single byte), vs. needing to match
all of "xap-header<lf>" or "xap-hbeat<lf>". Message ends are similar: ETX
(1 byte) vs <lf><lf> (2 bytes). But all this is irrelevant in the udp
world. Introducing STX/ETX here would break everything while double <lf>
is nicely backwards compatible.
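The re-sync difference Edward is weighing up can be sketched like this (Python; function names are mine, purely for illustration): with STX framing you scan for one byte, while in a bare stream you have to hunt for a whole block-name prefix.

```python
def resync_stx(buf: bytes) -> int:
    # Transport wrapper: re-syncing means finding a single STX byte.
    return buf.find(b"\x02")

def resync_bare(buf: bytes) -> int:
    # Bare stream: re-syncing means matching a full block-name prefix.
    hits = [i for i in (buf.find(b"xap-header\n"), buf.find(b"xap-hbeat\n"))
            if i != -1]
    return min(hits) if hits else -1

noise = b"...torn fragment...\x02xap-header\n{\n}\n\x03"
assert resync_stx(noise) == noise.find(b"\x02")
assert resync_bare(noise) == resync_stx(noise) + 1  # header follows the STX
```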

(and finally, my point relevant to the thread start)

I don't think double <lf> is the right solution for Kevin's developers'
wireless app problem; it'd help them out somewhat but I think it more than
likely that it's 'papering over' a bug in or around their network stack.
But there is a valid need lurking in here; without message-level delimiting
in the original spec, implementers of xAP commonly run up against the
problem of knowing where the end of a message is while parsing messages.
The double <lf> definitely helps here. So given that it's easy to add,
helps some folks, eases adoption and causes no backwards compatibility
issues, I'm happy that it should go into the v1.3 spec.

From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Patrick Lidstone
Sent: 16 September 2009 12:00


To: xAP_developer@xxxxxxx
Subject: Re: [xAP_developer] xAP EOM identifier in xAP v1.3



I agree that checksums are not appropriate to TCP connections, the
transport
wrapper already designates them as optional.

I don't think hubs currently send heartbeats? If they do, it's not explicit
in the spec. (Mine doesn't). We also need a mechanism to differentiate
between multiple hubs (since most people will generally run at least one
hub per vm).

Is a TCP hub going to relay all traffic to all TCP endpoints? That won't
scale very well... but it's also not clear how the filtering etc. will
work. Administering filters manually works fine for the odd serial
segment, but it's not going to be viable for a large number of endpoints.

Heartbeats are broadcast every minute or so typically, so that's going to
result in start up times of up to two minutes for a tcp end point. Is that
acceptable? If not, do we reduce the new tcp hub heartbeat interval? Or is
there a mechanism for an endpoint to poke a hub and elicit a heartbeat?

And lastly, is there merit in some kind of referrer system to allow for
load-balancing and automated failover of tcp connections across more than
one hub? If so, how would this work?

Patrick

2009/9/15 Edward Pearson <edward.mailgroup@xxxxxxx>

I don't mean that packet fragmentation or re-ordering are myths - just that
when dealing with xAP at the IP stack (socket) interface you are dealing
with datagrams not data packets so you don't need to worry about those
packet aspects. There was a design decision to limit xAP datagrams to 1500
bytes to improve the likely coverage of correctly sent datagrams in a world
of IP stacks of varying quality. It's no more important than that. Readers
of the spec seem to attach more importance to it than it warrants (there's
the myth aspect). To program xAP you don't need to worry about
fragmentation and reordering - just keep your datagrams 1500 bytes or less
and let the stack do its thing.
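That "keep it under 1500 and let the stack do its thing" advice amounts to a one-line check at send time. A sketch (Python; the function name is mine, the 1500-byte limit is the design figure from this discussion, and port 3639 is the xAP port mentioned elsewhere in the thread):

```python
import socket

XAP_MAX = 1500  # design limit discussed in this thread

def send_xap(sock: socket.socket, msg: bytes,
             addr=("255.255.255.255", 3639)):
    # Refuse oversize messages at the application layer rather than
    # trusting every embedded stack on the LAN to reassemble fragments.
    if len(msg) > XAP_MAX:
        raise ValueError("xAP message too large: %d bytes" % len(msg))
    sock.sendto(msg, addr)
```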

Sometimes an app goes a bit wild (xAP news has occasionally been a useful
test source) and big xAP messages are generated - and they get delivered
too! It's normally the input buffer of the receiving app (eg hub) that
breaks first.

On the tcp framing, I'd suggest that implementing the CRC part (irrelevant
on a reliable stream) is a waste of most people's time; by far more use
will be made of tcp than any kind of serial/485/etc networks, so they'd be
sharing development of an implementation with nobody.

Discovery can be done simply with the existing hub heartbeat message. Just
need to agree on an extra block that advertises the port number of the tcp
service - which I assume, by default would be 3639.
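For illustration only (the tcp.service block and its field names are invented here, not anything agreed or in the spec), such an advertisement might piggyback on the ordinary hub heartbeat like this:

xap-hbeat
{
v=12
hop=1
uid=FF123400
class=xap-hbeat.alive
source=ACME.hub.server
interval=60
}
tcp.service
{
port=3639
}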

Having read the 802.11 spec, I now understand why broadcast udp from an AP
to a NIC is so unreliable. And ad-hoc mode is a real disaster!

From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Patrick Lidstone
Sent: 15 September 2009 14:15


To: xAP_developer@xxxxxxx
Subject: Re: [xAP_developer] xAP EOM identifier in xAP v1.3



Hi Edward,
Please see in-line responses...

2009/9/15 Edward Pearson <edward.mailgroup@xxxxxxx>

The double 0x0a terminator works for me, it's simple to implement - already
done for my stuff.

...but it isn't robust unless the message is sent as a single datagram, and
if it is sent as a single datagram, the os should deliver it as such -- and
if it doesn't, adding an eom marker isn't going to help.

For reliable streams, such as TCP, I generally frame the messages with STX
and ETX.

I thought all this business about 1500 bytes was somewhat urban myth (at
least as it applies to xAP). The data packet size is not what's important;
it's how the particular IP stack implementation deals with >datagrams<
that's the key.

I don't understand what you are saying here? The data packet size is
inextricably linked to the IP stack. What aspect is it that you consider to
be an urban myth?


I only have experience of the two most recent Windows stacks (XP and Vista)
which I agree are likely more capable than old embedded stacks. My
experiments with Wireshark show that those stacks will happily deal with
fragmentation and defragmentation. 64KB in a single datagram? No problem if
your receive buffer is long enough (even if you force a small MTU).

Yes, agreed, OSes perform fragmentation for you. The individual fragments
have a maximum size as determined by the MTU. For pragmatic, practical
reasons, xAP needs to define a maximum overall message size, and for
convenience's sake that was set as equal to the 'standard' MTU size.
Devices which use a smaller MTU should fragment and reassemble seamlessly
provided that the correct socket options have been set to define the
maximum UDP packet size. By electing to use a single MTU's worth of data,
we avoid the overhead associated with fragmentation and reassembly
(principally memory buffering), which is a good thing. When reassembled,
the os should deliver a datagram as a complete entity in one go
(irrespective of the mtu size, assuming that the datagram falls within the
maximum datagram size defined for the stack). If the sender doesn't send a
message as a single datagram, then the whole thing falls apart because
effectively you are then doing fragmentation and reassembly at the
application layer, and that won't work because the ordering across
datagrams is not guaranteed and individual datagrams may get lost.


If anything goes wrong (eg, missing fragment) the whole datagram is
dropped. I can't see how packet re-ordering can happen on a single LAN for
broadcast UDP - there are no multiple paths and no retry mechanism.
Certainly never observed it - I assume the stack would again just drop the
entire datagram.


Re-ordering of datagrams can occur for multiple reasons on a single lan. A
sender may send individual UDP packets in any order it chooses. This
commonly occurs with fragmented packets originating from a multi-threaded
sender, which can be interleaved with smaller, non-fragmented datagrams as
required (optimising transient memory use: as soon as they are sent, the
buffer can be released). A switch is not obliged to retransmit packets in
the order in which they are received. And most fundamentally, the specs do
not require UDP packets to be ordered, so even if you don't observe it, it
can happen, and sooner or later interoperability issues will arise if the
working assumption is made that they always arrive in order.

A bigger issue for me, and the reason for me experimenting a few times
with TCP stream serving from hubs, is datagram loss over WiFi networks.
This is greatly accentuated for UDP when you use broadcast (as we do) from
wired to wireless (fine the other way as it's not really a broadcast
packet till it gets to the AP) as the access point and NIC can no longer
apply their ack/nak procedure at the transport level. I commonly see xAP
datagram loss rates from wired to wireless connections of 20%. So I'd like
us to agree on a standard transport wrapper for TCP streams, which a lot
of platforms would find useful.

I'd suggest using the same framing as the "transport wrapper", as this
then allows for code to be shared across transports. If xAP was extended
to support TCP, then that should also include a formal discovery mechanism
by which the IP address/characteristics of the hub can be discovered (over
UDP xAP?)

Patrick


From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Patrick Lidstone
Sent: 14 September 2009 14:19
To: xAP_developer@xxxxxxx
Subject: Re: [xAP_developer] xAP EOM identifier in xAP v1.3



So the xAP delimiters for serial are defined in the 1.2 xAP spec here:
http://www.xapautomation.org/index.php?title=Protocol_definition#Transport_Wrapper

To the best of my knowledge, the maximum size of an ethernet frame is 1518
bytes, which leads to an IP MTU of 1500 bytes, and this is the size that
is adopted by the majority of operating systems. Internet routers (i.e.
ISPs) sometimes use an MTU of 576 bytes, but this wouldn't be relevant to
xAP since the traffic doesn't pass over the wider net, or if it does, it's
generally gatewayed via a TCP/IP connection.

If a device is receiving fragmented UDP packets, I think the same issues
arise as those related to extending the xAP message size beyond 1500
bytes - what happens if a fragment gets discarded?

If you take the scenario:

Part 1(a), Part 1(b), Part 2(a), Part 2(b), Part 3(a), Part 3(b)

--- first there is no guarantee that the parts will be delivered in order,
and second, if part 1(b) was dropped, and you were blindly assembling
messages based on the proposed double-0a terminator, you'd end up with a
message comprising parts 1(a), 2(a) and 2(b), which is not only obviously
corrupt, but also possibly larger than the maximum xAP message size,
blowing away your buffers.
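A toy illustration of that failure mode (Python; the part labels follow the scenario above, and the assembler is a deliberately naive double-0A splitter that trusts nothing was lost):

```python
def assemble(fragments):
    # Naive reassembly: concatenate fragments, then split messages on the
    # proposed double-0A terminator, trusting delivery was complete.
    stream = b"".join(fragments)
    return [m for m in stream.split(b"\n\n") if m]

part_1a = b"xap-header\n{\nhop=1\n}"   # message 1, first half
part_1b = b"\nbody\n{\na=1\n}\n\n"     # message 1, second half
part_2a = b"xap-header\n{\nhop=1\n}"   # message 2, first half
part_2b = b"\nbody\n{\nb=2\n}\n\n"     # message 2, second half

good = assemble([part_1a, part_1b, part_2a, part_2b])
assert len(good) == 2  # all parts arrive: two clean messages

bad = assemble([part_1a, part_2a, part_2b])  # part 1(b) dropped in transit
# Parts 1(a), 2(a) and 2(b) fuse into a single corrupt over-long message:
assert len(bad) == 1 and bad[0].count(b"xap-header") == 2
```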

I think the solution is probably to parse incoming messages on the fly,
byte-by-byte. You can then at least reset your state when you encounter
the xap-header, and if you count open and close curly braces, you can tell
when you have an apparently complete message. This won't solve the issue
of UDP fragments being potentially received out of order, but so long as
we are dealing with a single LAN, and fragmentation occurs at the
receiver, we will be ok I think.
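That byte-by-byte approach might look like this as a sketch (Python; the function name is mine, and it deliberately assumes a header plus exactly one body block, as in the temperature example earlier in the thread - with more blocks per message, brace counting alone can't find the end, which is exactly the ambiguity an EOM marker or length field would resolve):

```python
def feed(buf: bytearray, byte: int, out: list):
    # Hypothetical streaming parser: reset on a fresh xap-header,
    # count braces, emit when they balance.
    buf.append(byte)
    start = buf.rfind(b"xap-header")
    if start > 0:
        # A new header mid-buffer implies the previous message was torn;
        # discard the partial message and re-sync.
        del buf[:start]
    opens, closes = buf.count(ord("{")), buf.count(ord("}"))
    # Assumes header + one body block; multi-block messages would need
    # an EOM marker or length field to disambiguate.
    if opens >= 2 and opens == closes:
        out.append(bytes(buf))
        buf.clear()

buf, out = bytearray(), []
for b in b"xap-header\n{\nhop=1\n}\ntemp.current\n{\ntemp=25\n}\n":
    feed(buf, b, out)
assert len(out) == 1 and out[0].endswith(b"}")
```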

It is absolutely possible for UDP packets to be discarded, and the way we
deal with this in xAP is to accept that this can happen to xAP messages,
and layer application-level acknowledgements where knowing that a message
has been received is critically important - whether explicitly or
implicitly through status messages. There are various schemes that could
be adopted to allow a receiver to detect lost messages (e.g. sequence
numbering), but they quickly become quite onerous, and assume that the
originator keeps a copy of the original message or is able to reconstruct
it - which is non-trivial for many xAP nodes.

Perhaps you can share more of the specific details of the device(s) in
question (manufacturer, docs, o/s etc), and their behaviour, which seems a
bit anomalous?

Patrick

2009/9/13 Kevin Hawkins <yahoogroupskh@xxxxxxx>

One of the issues seems to be that there are conflicting views as to the
length of a UDP data packet payload. Some people cite 500 or 512
characters and some 1500. Regardless, in some low memory/performance
devices it is being reported that, even with UDP, packets are being
received from the buffer either truncated or appended back to back.
The latter I assume is due to the speed of the device in servicing the
buffer.

We have an opportunity to protect against this with v1.3 and the double
'0A' seems the most compatible. I would be loath to support anything
that wasn't backward compatible.

K


Patrick Lidstone wrote:
>
>
> I will dig it out - it included an optional checksum I think, and IIRC
> was framed by stx and etx (a kind of pseudo industry standard). I
> certainly used it with the PIC serial stuff and the xAP-serial bridge.
> Re.: long message truncation and concatenation: If we need to support
> messages that are larger than one UDP packet, then there are
> additional complexities and the proposed scheme won't work as intended
> as the ordering of UDP messages is not guaranteed. I'm happy to help
> refine a spec to support these capabilities, but it is moving away
> from the basic ethos of xAP somewhat, as devices would have to be able
> to buffer received messages, and that raises the bar considerably.
> Perhaps there is scope for co-existence of a long and standard message
> protocol though?
>
> Patrick
>
> 2009/9/13 Kevin Hawkins <yahoogroupskh@xxxxxxx>

>
>     ... Oh ... where is that in the spec ?  it might be all we need.
>
>        This is also tied in with some aspects of long message
>     truncation and concatenation of messages received in UDP receive
>     buffers though...
>
>      K
>
>     Patrick Lidstone wrote:
>     >
>     >
>     > The original xap spec provides extensions for framing a message
>     > over async serial which also delimit the start of the message -
>     > you don't need this 'hack' if you follow the original spec for
>     > non-UDP transports.
>     >
>     > Patrick
>     >
>     > 2009/9/13 Kevin Hawkins <yahoogroupskh@xxxxxxx>
>     >
>     >       We have been asked on several occasions how to detect the
>     >     end of a xAP message as there is no unique EOM character.
>     >     Typically in any reasonable sized packet structured transport,
>     >     eg UDP, the packet provides such an indication, but on systems
>     >     with small packets or non-packet transports, eg asynchronous
>     >     serial, this is not usable.
>     >
>     >       In discussing this with the specification team we must
>     >     consider backwards compatibility and so we do not wish to
>     >     alter the specification to include a unique EOM character.
>     >     What we do propose however is that xAP v1.3 will specify that
>     >     all messages should end with two consecutive chr(10)'s
>     >     immediately after the closing '}'
>     >
>     >     ie  ..... 7D 0A 0A
>     >
>     >      Some apps, even v1.2 ones, already do this.  We don't
>     >     believe this will cause any backwards compatibility issues
>     >     and it will always be unique within a xAP message.
>     >
>     >      So, in developing xAP v1.3 apps could you therefore append
>     >     two 0A's at the end of your messages, and of course handle
>     >     incoming messages containing such.
>     >
>     >       K
>     >
>     >
>     >     ------------------------------------
>     >
>     >     Yahoo! Groups Links
>     >
>     >
>     >        mailto:xAP_developer-fullfeatured@xxxxxxx
>     >
>     >
>     >
>     >
>     >
>     >
>
>
>


