author	Tom Gundersen <teg@jklm.no>	2015-07-28 02:32:24 +0200
committer	Tom Gundersen <teg@jklm.no>	2015-08-03 14:06:58 +0200
commit	9df3ba6c6cb65eecec06f39dfe85a3596cedac4e (patch)
tree	48ed4bc61722465155aef8e7bc3cfd95e4307d57 /src/bootchart/svg.h
parent	240b589b143311fda721701312ec15021e96caf9 (diff)
resolved: transaction - exponentially increase retry timeouts
Rather than fixing this to 5s for unicast DNS and 1s for LLMNR, start at a tenth of those values and increase exponentially until the old values are reached. For LLMNR the recommended timeout for IEEE802 networks (which basically means all of the ones we care about) is 100ms, so that should be uncontroversial. For unicast DNS I have found no recommended value. However, it seems vastly more likely that hitting a 500ms timeout is caused by packet loss, rather than by the RTT genuinely being greater than 500ms, so taking this as a starting value seems reasonable to me.

In the common case this greatly reduces the latency due to normal packet loss. Moreover, once we get support for probing for features, this means that we can send more packets before degrading the feature level, whilst still allowing us to settle on the correct feature level in a reasonable timeframe.

The timeouts are tracked per server (or per scope for the multicast protocols), and once a server (or scope) receives a successful packet the timeout is reset. We also track the largest RTT observed for the given server (or scope), and always start our timeouts at twice the largest observed RTT.
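As an illustration of the scheme, here is a minimal, self-contained C sketch using the unicast-DNS bounds only. The type, function names, and constants below are hypothetical, not resolved's actual internals: the timeout doubles on every loss up to the old fixed ceiling, and a successful reply resets it to the initial value, bounded below by twice the largest RTT seen so far.

    /* Illustrative sketch only; Server, the function names and the
     * constants are hypothetical, not resolved internals. */
    #include <stdint.h>

    #define DNS_TIMEOUT_MIN_USEC (500 * 1000ULL)      /* a tenth of the old 5s value */
    #define DNS_TIMEOUT_MAX_USEC (5 * 1000 * 1000ULL) /* the old fixed 5s value */

    typedef struct Server {
            uint64_t resend_timeout_usec; /* current per-server retry timeout */
            uint64_t max_rtt_usec;        /* largest RTT observed so far */
    } Server;

    /* A freshly seen server starts with the smallest timeout. */
    static void server_init(Server *s) {
            s->resend_timeout_usec = DNS_TIMEOUT_MIN_USEC;
            s->max_rtt_usec = 0;
    }

    /* Called when a query to this server times out: back off
     * exponentially, but never beyond the old fixed value. */
    static void server_packet_lost(Server *s) {
            if (s->resend_timeout_usec <= DNS_TIMEOUT_MAX_USEC / 2)
                    s->resend_timeout_usec *= 2;
            else
                    s->resend_timeout_usec = DNS_TIMEOUT_MAX_USEC;
    }

    /* Called when a reply arrives: remember the largest RTT and reset
     * the timeout, starting it at twice the largest RTT ever observed
     * so that genuinely slow paths are not mistaken for packet loss. */
    static void server_packet_received(Server *s, uint64_t rtt_usec) {
            if (rtt_usec > s->max_rtt_usec)
                    s->max_rtt_usec = rtt_usec;

            s->resend_timeout_usec = DNS_TIMEOUT_MIN_USEC;
            if (s->resend_timeout_usec < 2 * s->max_rtt_usec)
                    s->resend_timeout_usec = 2 * s->max_rtt_usec;
            if (s->resend_timeout_usec > DNS_TIMEOUT_MAX_USEC)
                    s->resend_timeout_usec = DNS_TIMEOUT_MAX_USEC;
    }

A transaction would then arm its retry timer with resend_timeout_usec each time it (re)sends a query; LLMNR would apply the same logic per scope, with the 100ms/1s bounds instead.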
Diffstat (limited to 'src/bootchart/svg.h')
0 files changed, 0 insertions, 0 deletions