
RE: xAP EOM identifier in xAP v1.3




> Is a TCP hub going to relay all traffic to all TCP endpoints? That won't
> scale very well...



Yes, it would forward all traffic. How does that create a scalability
problem, Patrick? Obviously there's more traffic, now O(n) rather than O(1),
but how many clients do you reckon you'd need before there'd be a problem,
and where do you think the problem might occur?
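
For illustration, the sort of fan-out loop a TCP-capable hub implies is
small (a sketch only; the names here are made up, not from any xAP
implementation):

    # Sketch only: forward each received xAP message to every connected
    # TCP client. Names (relay, clients) are illustrative.
    def relay(message: bytes, clients: list) -> None:
        for sock in list(clients):           # O(n) sends per message
            try:
                sock.sendall(message)
            except OSError:
                clients.remove(sock)         # drop broken connections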



> is there merit in some kind of referrer system to allow for load-balancing
> and automated failover of tcp connections across more than one hub?



I see no more reason for those kinds of features in a TCP-capable system
than there is in the current (one hub per VM, UDP forwarding) architecture.





From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Patrick Lidstone
Sent: 16 September 2009 12:00
To: xAP_developer@xxxxxxx
Subject: Re: [xAP_developer] xAP EOM identifier in xAP v1.3





I agree that checksums are not appropriate to TCP connections; the transport
wrapper already designates them as optional.

I don't think hubs currently send heartbeats? If they do, it's not explicit
in the spec. (Mine doesn't.) We also need a mechanism to differentiate
between multiple hubs (since most people will generally run at least one hub
per VM).

Is a TCP hub going to relay all traffic to all TCP endpoints? That won't
scale very well... but it's also not clear how the filtering etc. will
work.
Administering filters manually works fine for the odd serial segment, but
it's not going to be viable for a large number of endpoints.

Heartbeats are typically broadcast every minute or so, so that's going to
result in start-up times of up to two minutes for a TCP endpoint. Is that
acceptable? If not, do we reduce the new TCP hub heartbeat interval? Or is
there a mechanism for an endpoint to poke a hub and elicit a heartbeat?

And lastly, is there merit in some kind of referrer system to allow for
load-balancing and automated failover of tcp connections across more than
one hub? If so, how would this work?

Patrick

2009/9/15 Edward Pearson <edward.mailgroup@xxxxxxx>



I don't mean that packet fragmentation or re-ordering are myths - just that
when dealing with xAP at the IP stack (socket) interface you are dealing
with datagrams, not data packets, so you don't need to worry about those
packet aspects. There was a design decision to limit xAP datagrams to 1500
bytes to improve the likely coverage of correctly sent datagrams in a world
of IP stacks of varying quality. It's no more important than that. Readers
of the spec seem to attach more importance to it than it warrants (there's
the myth aspect). To program xAP you don't need to worry about fragmentation
and reordering - just keep your datagrams 1500 bytes or less and let the
stack do its thing.
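
As a minimal sketch of that advice (the broadcast address here is an
assumption; port 3639 is the one mentioned elsewhere in this thread):

    # Sketch: "keep your datagrams 1500 bytes or less and let the stack
    # do its thing" - one xAP message per UDP broadcast datagram.
    import socket

    XAP_MAX_BYTES = 1500
    XAP_PORT = 3639

    def send_xap(message: bytes) -> None:
        if len(message) > XAP_MAX_BYTES:
            raise ValueError("xAP message exceeds 1500 bytes")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            # Fragmentation, if any, is the IP stack's problem, not ours.
            s.sendto(message, ("255.255.255.255", XAP_PORT))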



Sometimes an app goes a bit wild (xAP news has occasionally been a useful
test source) and big xAP messages are generated - and they get delivered
too! It's normally the input buffer of the receiving app (eg hub) that
breaks first.



On the TCP framing, I'd suggest that implementing the CRC part (irrelevant
on a reliable stream) is a waste of most people's time; far more use will be
made of TCP than of any kind of serial/485/etc. networks, so they'd be
sharing development of an implementation with nobody.



Discovery can be done simply with the existing hub heartbeat message. We
just need to agree on an extra block that advertises the port number of the
TCP service - which, I assume, would by default be 3639.
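
Purely as a sketch of what that might look like (the extra block and key
names below are invented for illustration; only the idea of an additional
block in the hub heartbeat and the default port 3639 come from the
discussion above):

    xap-hbeat
    {
    ...existing hub heartbeat items...
    }
    tcp.service
    {
    port=3639
    }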



Having read the 802.11 spec, I now understand why broadcast UDP from an AP
to a NIC is so unreliable. And ad-hoc mode is a real disaster!



From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Patrick Lidstone
Sent: 15 September 2009 14:15


To: xAP_developer@xxxxxxx
Subject: Re: [xAP_developer] xAP EOM identifier in xAP v1.3





Hi Edward,
Please see in-line responses...

2009/9/15 Edward Pearson <edward.mailgroup@xxxxxxx>



The double 0x0a terminator works for me; it's simple to implement - already
done for my stuff.
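
For illustration, the kind of minimal receive-side handling that terminator
implies (a sketch only, not Edward's actual code):

    # Sketch: accumulate received bytes and treat 0x0A 0x0A as end-of-message.
    def split_on_double_lf(buffer: bytes):
        messages = []
        while b"\n\n" in buffer:
            msg, buffer = buffer.split(b"\n\n", 1)
            messages.append(msg + b"\n")   # restore the single trailing LF
        return messages, buffer            # plus any incomplete remainder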



...but it isn't robust unless the message is sent as a single datagram, and
if it is sent as a single datagram, the OS should deliver it as such -- and
if it doesn't, adding an EOM marker isn't going to help.

For reliable streams, such as TCP, I generally frame the messages with STX
and ETX.
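
A sketch of what that framing could look like (an assumption about the
approach, not Patrick's actual code):

    # Sketch of STX/ETX framing over a reliable (TCP) stream.
    STX, ETX = b"\x02", b"\x03"

    def frame(message: bytes) -> bytes:
        return STX + message + ETX

    def unframe(stream: bytes):
        # Yield each complete message found between an STX and the next ETX.
        start = stream.find(STX)
        while start != -1:
            end = stream.find(ETX, start + 1)
            if end == -1:
                break                       # incomplete frame, wait for more
            yield stream[start + 1:end]
            start = stream.find(STX, end + 1)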



I thought all this business about 1500 bytes was somewhat urban myth (at
least as it applies to xAP). The data packet size is not what's important;
it's how the particular IP stack implementation deals with >datagrams<
that's the key.


I don't understand what you are saying here. The data packet size is
inextricably linked to the IP stack. What aspect is it that you consider to
be an urban myth?


I only have experience of the two most recent Windows stacks (XP and Vista)
which I agree are likely more capable than old embedded stacks. My
experiments with Wireshark show that those stacks will happily deal with
fragmentation and defragmentation. 64KB in a single datagram? No problem if
your receive buffer is long enough (even if you force a small MTU).

Yes, agreed, OSes perform fragmentation for you. The individual fragments
have a maximum size as determined by the MTU. For pragmatic, practical
reasons, xAP needs to define a maximum overall message size, and for
convenience's sake that was set equal to the 'standard' MTU size. Devices
which use a smaller MTU should fragment and reassemble seamlessly, provided
that the correct socket options have been set to define the maximum UDP
packet size. By electing to use a single MTU's worth of data, we avoid the
overhead associated with fragmentation and reassembly (principally memory
buffering), which is a good thing. When reassembled, the OS should deliver a
datagram as a complete entity in one go (irrespective of the MTU size,
assuming that the datagram falls within the maximum datagram size defined
for the stack). If the sender doesn't send a message as a single datagram,
then the whole thing falls apart, because effectively you are then doing
fragmentation and reassembly at the application layer, and that won't work:
the ordering across datagrams is not guaranteed and individual datagrams may
get lost.


If anything goes wrong (eg, missing fragment) the whole datagram is
dropped.
I can't see how packet re-ordering can happen on a single LAN for broadcast
UDP - there are no multiple paths and no retry mechanism. Certainly never
observed it - I assume the stack would again just drop the entire datagram.


Re-ordering of datagrams can occur for multiple reasons on a single LAN. A
sender may send individual UDP packets in any order it chooses. This
commonly occurs with fragmented packets originating from a multi-threaded
sender, which can be interleaved with smaller, non-fragmented datagrams as
required (optimising transient memory use: as soon as they are sent, the
buffer can be released). A switch is not obliged to retransmit packets in
the order in which they are received. And most fundamentally, the specs do
not require UDP packets to be ordered, so even if you don't observe it, it
can happen, and sooner or later interoperability issues will arise if the
working assumption is made that they always arrive in order.



A bigger issue for me, and the reason for me experimenting a few times with
TCP stream serving from hubs, is datagram loss over WiFi networks. This is
greatly accentuated for UDP when you use broadcast (as we do) from wired to
wireless (fine the other way, as it's not really a broadcast packet till it
gets to the AP), because the access point and NIC can no longer apply their
ack/nak procedure at the transport level. I commonly see xAP datagram loss
rates from wired to wireless connections of 20%. So I'd like us to agree on
a standard transport wrapper for TCP streams, which a lot of platforms would
find useful.



I'd suggest using the same framing as the "transport wrapper", as this then
allows for code to be shared across transports. If xAP were extended to
support TCP, then that should also include a formal discovery mechanism by
which the IP address/characteristics of the hub can be discovered (over UDP
xAP?).

Patrick


From: xAP_developer@xxxxxxx [mailto:xAP_developer@xxxxxxx]
On Behalf Of Patrick Lidstone
Sent: 14 September 2009 14:19
To: xAP_developer@xxxxxxx
Subject: Re: [xAP_developer] xAP EOM identifier in xAP v1.3





So the xAP delimiters for serial are defined in the 1.2 xAP spec here:
http://www.xapautomation.org/index.php?title=Protocol_definition#Transport_Wrapper

To the best of my knowledge, the maximum size of an Ethernet frame is 1518
bytes, which corresponds to an IP MTU of 1500 bytes, and this is the size
adopted by the majority of operating systems. Internet routers (i.e. ISPs)
sometimes use an MTU of 576 bytes, but this wouldn't be relevant to xAP,
since the traffic doesn't pass over the wider net, or if it does, it's
generally gatewayed via a TCP/IP connection.

If a device is receiving fragmented UDP packets, I think the same issues
arise as those related to extending the xAP message size beyond 1500 bytes:
what happens if a fragment gets discarded?

If you take the scenario:

Part 1(a), Part 1(b), Part 2(a), Part 2(b), Part 3(a), Part 3(b)

--- first, there is no guarantee that the parts will be delivered in order,
and second, if part 1(b) were dropped and you were blindly assembling
messages based on the proposed double-0A terminator, you'd end up with a
message comprising parts 1(a), 2(a) and 2(b), which is not only obviously
corrupt but also possibly larger than the maximum xAP message size, blowing
away your buffers.

I think the solution is probably to parse incoming messages on the fly, byte
by byte. You can then at least reset your state when you encounter the
xap-header, and if you count opening and closing curly braces, you can tell
when you have an apparently complete message. This won't solve the issue of
UDP fragments being potentially received out of order, but so long as we are
dealing with a single LAN, and fragmentation occurs at the receiver, we will
be OK, I think.
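
A sketch of that idea (line-oriented rather than strictly byte-by-byte, for
brevity; the completeness test is an assumption for illustration, not
something taken from the spec):

    # Sketch: reset on a fresh xap-header, count braces, and report an
    # apparently complete message when they balance again.
    class XapAssembler:
        def __init__(self):
            self.lines, self.depth, self.blocks = [], 0, 0

        def feed_line(self, line: str):
            text = line.strip()
            if text.lower() == "xap-header" and self.depth == 0 and self.lines:
                # Stray or partial data followed by a new header: start over.
                self.lines, self.blocks = [], 0
            self.lines.append(line)
            if text == "{":
                self.depth += 1
                self.blocks += 1
            elif text == "}":
                self.depth = max(0, self.depth - 1)
                # Heuristic: header block plus at least one body block.
                if self.depth == 0 and self.blocks >= 2:
                    msg = "\n".join(self.lines)
                    self.lines, self.blocks = [], 0
                    return msg
            return None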

It is absolutely possible for UDP packets to be discarded, and the way we
deal with this in xAP is to accept that this can happen to xAP messages, and
to layer application-level acknowledgements where knowing that a message has
been received is critically important - whether explicitly or implicitly
through status messages. There are various schemes that could be adopted to
allow a receiver to detect lost messages (e.g. sequence numbering), but they
quickly become quite onerous, and assume that the originator keeps a copy of
the original message or is able to reconstruct it, which is non-trivial for
many xAP nodes.

Perhaps you can share more of the specific details of the device(s) in
question (manufacturer, docs, o/s etc), and their specific behaviour, which
seems a bit anomalous?

Patrick

2009/9/13 Kevin Hawkins <yahoogroupskh@xxxxxxx>

One of the issues seems to be that there are conflicting views as to the
length of a UDP data packet payload.  Some people cite 500 or 512 characters
and some 1500.  Regardless, on some low-memory/performance devices it is
being reported that, even with UDP, packets are being received from the
buffer either truncated or appended back to back.  The latter I assume is
due to the speed of the device in servicing the buffer.

We have an opportunity to protect against this with v1.3, and the double
'0A' seems the most compatible.  I would be loath to support anything that
wasn't backward compatible.

K


Patrick Lidstone wrote:
>
>
> I will dig it out - it included an optional checksum I think, and IIRC
> was framed by stx and etx (a kind of pseudo industry standard). I
> certainly used it with the PIC serial stuff and the xAP-serial bridge.
> Re.: long message truncation and concatenation: If we need to support
> messages that are larger than one UDP packet, then there are
> additional complexities and the proposed scheme won't work as intended
> as the ordering of UDP messages is not guaranteed. I'm happy to help
> refine a spec to support these capabilities, but it is moving away
> from the basic ethos of xAP somewhat, as devices would have to be able
> to buffer received messages, and that raises the bar considerably.
> Perhaps there is scope for co-existence of a long and standard message
> protocol though?
>
> Patrick
>
> 2009/9/13 Kevin Hawkins <yahoogroupskh@xxxxxxx>

>
>     ... Oh ... where is that in the spec?  It might be all we need.
>
>        This is also tied in with some aspects of long message truncation
>     and concatenation of messages received in UDP receive buffers
>     though...
>
>      K
>
>     Patrick Lidstone wrote:
>     >
>     >
>     > The original xAP spec provides extensions for framing a message over
>     > async serial which also delimit the start of the message - you don't
>     > need this 'hack' if you follow the original spec for non-UDP
>     > transports.
>     >
>     > Patrick
>     >
>     > 2009/9/13 Kevin Hawkins <yahoogroupskh@xxxxxxx>
>     >
>     >       We have been asked on several occasions how to detect the end
>     >     of a xAP message, as there is no unique EOM character.  Typically,
>     >     in any reasonably sized packet-structured transport (e.g. UDP)
>     >     the packet itself provides such an indication, but on systems
>     >     with small packets, or none at all (e.g. asynchronous serial),
>     >     this is not usable.
>     >
>     >       In discussing this with the specification team we must consider
>     >     backwards compatibility, and so we do not wish to alter the
>     >     specification to include a unique EOM character.  What we do
>     >     propose, however, is that xAP v1.3 will specify that all messages
>     >     should end with two consecutive chr(10)'s immediately after the
>     >     closing '}'
>     >
>     >     ie  ..... 7D 0A 0A
>     >
>     >      Some apps, even v1.2 ones, already do this.  We don't believe
>     >     this will cause any backwards compatibility issues and it will
>     >     always be unique within a xAP message.
>     >
>     >      So, in developing xAP v1.3 apps could you therefore append two
>     >     0A's at the end of your messages, and of course handle incoming
>     >     messages containing such.
>     >
>     >       K


