Here's an area of *DSL I don't understand. When I had a 2Wire, I could verify that the bit-loading graph changes frequently (I now have an NVG589).
Either actual data traffic, periodic auto-generated calibration-only data, or something else must be used to measure line characteristics and update the bit-loading: both the number of bits per frequency and (perhaps?) setting bits = 0 to exclude a frequency entirely.
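To make concrete what I mean by bit-loading, here's my rough mental model as a Python sketch, using the textbook gap approximation (bits per tone from measured SNR minus an SNR gap and a target margin). The 9.75 dB gap, 6 dB margin, 15-bit cap, and the bits_for_tone name are my own assumptions for illustration, not anything I've confirmed for the NVG589:

# Rough sketch of per-tone bit-loading via the textbook gap approximation;
# the actual DSLAM/modem algorithm is vendor-specific.
import math

SNR_GAP_DB = 9.75        # assumed gap for ~1e-7 BER with no coding gain
TARGET_MARGIN_DB = 6.0   # assumed target noise margin
MAX_BITS_PER_TONE = 15   # VDSL2 per-tone cap

def bits_for_tone(snr_db: float) -> int:
    """Bits loadable on one tone given its measured SNR in dB."""
    effective_db = snr_db - SNR_GAP_DB - TARGET_MARGIN_DB
    if effective_db <= 0:
        return 0                       # tone excluded: bits = 0
    bits = int(math.log2(1 + 10 ** (effective_db / 10)))
    return min(bits, MAX_BITS_PER_TONE)

print(bits_for_tone(45.0))   # clean tone -> 9 bits with these numbers
print(bits_for_tone(12.0))   # tone under a noise spike -> 0, i.e. excluded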
But... how fast and how often does it occur, and what are its limitations? How continuous and dynamic is it? Is it faster at adjusting the number of bits per frequency than at excluding a frequency entirely? During the reaction lag of this ongoing monitoring, the changed line characteristics will rack up greater FEC and/or CRC errors, right?
It's a misnomer to say the modem is re-syncing continuously, all the time, right? The ongoing dynamic adjustments during live use are somehow weaker or less capable than a real full "sync" (which happens at boot time and can be requested through the RG's GUI at any time), during which no user data transmits: clearly there's a major difference.
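For what it's worth, here's a toy sketch of the distinction I'm picturing; this is my assumption of how it works, not the actual on-line-reconfiguration procedure. The idea: the continuous "bitswap" can only shuffle bits between tones while keeping the total constant, so the link stays up at the same rate, whereas a full sync re-measures every tone and rebuilds the whole table from scratch, which is why it drops service:

def bitswap(bit_table, margins_db, degraded_tone, min_margin_db=3.0):
    """Shed one bit from a degraded tone onto the tone with the most spare
    margin; total bits (and thus the sync rate) stay unchanged."""
    if bit_table[degraded_tone] == 0:
        return bit_table   # nothing left to shed; only a retrain helps now
    # Rough rule of thumb: each extra bit costs about 3 dB of margin.
    recipient = max((t for t in range(len(bit_table)) if t != degraded_tone),
                    key=lambda t: margins_db[t])
    if margins_db[recipient] - 3.0 < min_margin_db:
        return bit_table   # no tone has room: rate must drop or line retrains
    bit_table[degraded_tone] -= 1
    bit_table[recipient] += 1
    return bit_table

bits = [8, 10, 6, 0]             # tone 3 was excluded at the last full sync
margins = [4.0, 12.0, 2.5, 0.0]  # tone 2 is now being hit by new noise
print(bitswap(bits, margins, degraded_tone=2))   # -> [8, 11, 5, 0]

If that model is right, it would explain why the live adjustments feel "weaker": they can only redistribute what the last full sync set up, never re-open an excluded tone.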
So exactly which calibration-like line features are specific to the real full sync, versus those that operate continuously and dynamically? Can a frequency get locked out during continuous calibration? If so, would it never be re-enabled until the next real full sync (requiring an interruption of service)?
If frequency lockouts only occur during the real full sync, and you have big noise that hops around to different frequencies over time, that's a major problem: the continuous calibration won't/can't do a good job?
Does it help, in theory, to determine when your line is experiencing a near-worst-case noise event (short of no signal altogether) and force a sync at that time, in an effort to capture a (hopefully) superset of the "bad" frequencies? The catch being that if you lose power, or ATT triggers late-night syncs once a month on a schedule, that sync likely won't capture the worst noise scenario.