
RE: RF Timing



Richard T,

My entire communication is based on 1904 and related to transporting CPRI. CPRI is all about moving samples for each antenna carrier. These samples are multi-bit and all have the same presentation time. CPRI is not a continuous stream of bits; it's a continuous stream of samples, and today that is typically a few hundred bits per presentation time. For 5G the sample rate looks set to increase to, say, 153.6 Msps, but there will still be a large number of bits (two orders of magnitude more for massive MIMO) which all have exactly the same presentation time.
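As a rough sketch of what this means in bits (the 30-bit sample width and the antenna-carrier count below are illustrative assumptions, not values taken from CPRI or 1914.1):

    # Illustrative only: sample width and antenna-carrier counts are assumptions.
    SAMPLE_BITS = 30  # e.g. 15-bit I + 15-bit Q per sample

    def bits_per_presentation_time(antenna_carriers, sample_bits=SAMPLE_BITS):
        # Every one of these bits shares exactly the same presentation time.
        return antenna_carriers * sample_bits

    print(bits_per_presentation_time(8))   # e.g. 8 AxCs -> 240 bits per presentation time
    # Massive-MIMO antenna-carrier counts simply scale this number up further.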

Best,

Richard M





-----Original Message-----
From: Richard Tse [richard.tse@xxxxxxxxxxxxx]
Sent: Thursday, April 21, 2016 01:57 AM Pacific Standard Time
To: 'stds-1904-3-tf@xxxxxxxx'
Subject: RE: RF Timing

Thanks RichardM, I did not realize that we had shifted the conversation to a non-continuous stream.

 

Are we talking about IEEE 1914.1-styled radio data now?  It is difficult to talk about this because nothing has been defined by 1914.1 yet.  But, here are my thoughts:

·         Just like CPRI has done, the 1914.1 function would need its own structures to convey any parallel timing information.  The RoE stream is serial and the presentation time is chronologically linear so it cannot, on its own, convey the same presentation time for multiple bits.  

·         RF is formed by a continuous stream of I/Q data.  To regenerate the continuous I/Q data stream, the 1914.1 RF recovery function would need an algorithm that fills in all the data that was deleted when the I/Q was mapped into Ethernet.  So, we have a continuous data stream again, just like CPRI. 

·         Because the 1914.1 recovered I/Q is a continuous stream, jitter on the RoE presented data will not corrupt the relative location of the I/Q samples. 

·         The time of presentation from RoE sets the phase of the recovered I/Q stream(s) and affects the recovered bit rate of the I/Q stream(s).

 

Regardless of whether or not my thoughts on 1914.1 are correct, I don’t believe timestamping at a bit-level granularity will ever be practical. 

·         As the maximum speed of timestamping logic goes up with technological advances, so will the bit rates.  The bit rate will always be faster. 

·         Work has been done (and implemented) for the IEEE 1588 standard to improve time alignment resolution to sub-nanosecond levels.  However, this is still only realizable on a small and very specialized network with custom components and better-than-SyncE physical syntonization, running at GE rates.  It could not currently be realized at a commercial level and, even with all its specialization and customization, would not satisfy bit-level timestamping resolution for 10GE rates, let alone the Nx25Gbps or Mx50Gbps rates used for 40GE and 100GE streams (see the sketch below).
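To make the resolution-vs-rate race concrete, a minimal sketch (the lane rates are standard Ethernet signalling rates; the 0.1 ns figure is just an assumed, optimistic sub-ns timestamp resolution):

    # Bit period at several line rates vs. an assumed 0.1 ns timestamp resolution.
    rates_gbps = {"10GE": 10.3125, "25G lane": 25.78125,
                  "50G lane": 53.125, "100G serial": 106.25}
    TS_RESOLUTION_NS = 0.1  # assumed, optimistic sub-ns resolution

    for name, gbps in rates_gbps.items():
        bit_period_ps = 1000.0 / gbps            # one unit interval in picoseconds
        bits_per_tick = TS_RESOLUTION_NS * gbps  # bits falling inside one 0.1 ns tick
        print(f"{name:12s} UI = {bit_period_ps:5.1f} ps, "
              f"~{bits_per_tick:4.1f} bits per 0.1 ns")

Even with 0.1 ns resolution, a single tick already spans more than one bit above 10GE rates, and the gap only widens with faster lanes.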

 

Rich

 

From: Richard Maiden [mailto:rmaiden@xxxxxxxxxx]
Sent: Thursday, April 21, 2016 8:44 AM
To: 'patrick diamond'; Richard Tse
Cc: 'stds-1904-3-tf@xxxxxxxx'
Subject: RE: RF Timing

 

Richard T,

Maybe I can see where the confusion is here. What we are transporting is not a continuous stream. Sure, the serdes will serialize and deserialize the data, but I don't think that's important. What we are sending (in most use cases) are samples. It's the presentation time of the samples that is important. Each sample will be something like 30 bits (15I + 15Q), and often we will have samples for more than one antenna and more than one carrier. Say we have 2 antennas and 2 carriers: we'd have 120 bits which all must have exactly the same presentation time. If we assume LTE 20MHz, the next set of samples needs to go out 1/30.72e6 s later (~32ns later), and all together. This is the kind of granularity we have to work with.
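Working those numbers through (this just recomputes the figures above):

    # The 2-antenna, 2-carrier LTE 20MHz example above, worked through.
    SAMPLE_BITS = 30                    # 15I + 15Q
    ANTENNAS, CARRIERS = 2, 2
    SAMPLE_RATE_SPS = 30.72e6           # per AxC, LTE 20MHz

    bits_per_group = ANTENNAS * CARRIERS * SAMPLE_BITS   # 120 bits, one presentation time
    sample_interval_ns = 1e9 / SAMPLE_RATE_SPS           # ~32.552 ns between groups
    print(bits_per_group, round(sample_interval_ns, 3))  # -> 120 32.552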

CPRI solves this by using the K28.5 and basic frames to chop up the data so that samples all arrive together, and this K28.5, along with HFN and BFN, acts as the indicator of time. Ultimately the K28.5 is what indicates where we are in the grouping of bits into samples. In RoE we can (but don't have to) chop up the data in a similar way; however, we do not have K28.5, HFN or BFN. We have to rely on timingInfo. If the timestamp cannot exactly (without rounding) tell the radio when to send out all those bits, together, the system won't work. It's not acceptable to round to the nearest 0.25ns.

Maybe I'm off base here, but while I think everything you say would be true for a continuous stream, that's not what we have.

Thanks

Richard M





-----Original Message-----
From: patrick diamond [pdseeker@xxxxxxx]
Sent: Wednesday, April 20, 2016 07:05 PM Pacific Standard Time
To: Richard Tse
Cc: stds-1904-3-tf@xxxxxxxx
Subject: Re: RF Timing

Richard

 

You are right on both points. Totally agree.

Pat Diamond


On Apr 20, 2016, at 21:55, Richard Tse <richard.tse@xxxxxxxxxxxxx> wrote:

First, my apologies to KevinB and RichardM for the multiple emails that they received from me as a result of the change in my email address. 

 

Kevin, my response to each of your points follows.

 

Your hypothetical example:

·         If the timestamping is done on a continuous bit stream, then the presentation time is also for a recovered continuous bit stream.  At the ingress side, if any data is removed or compressed after the ingress timestamp, it would have been done using some algorithm.  At the egress side, this data must be reinserted and decompressed by the analogous algorithm before it can be presented.  One should not use the presentation time to implement the reinsertion/decompression algorithm.     

·         Also, the jitter and wander on any clock and on any serial bit stream, regardless of bit rate, could cause variation over time of several nanoseconds (or much more for bad implementations), relative to a perfect clock.  This intrinsic behaviour would also prevent your hypothetical example of using the presentation time to reinsert the missing bits from working.

 

Presentation time accuracy:

·         The timestamping resolution just needs to be good enough to meet the timing accuracy requirements of the application (e.g. ~8ns for CPRI, ~65ns for 3GPP).  This requirement is independent of the bit rate of the datastream. 
Will radio timing requirements ever need to be better than 0.25ns?  Perhaps, but by then, I think that 1904.3 will have been replaced by some new standard.

·         A problem with using a timestamp with 30 bits of picoseconds is that this format is not directly compatible with the time counters that are based on 1 second time periods (GPS, PTP, NTP, UTC, TAI).
While the GPS and PTP time counters roll over when they reach 1,000,000,000ns (0x3B9ACA00 ns), the picosecond time counter would roll over when it reaches 1,000,000,000ps = 1,000,000ns (0xF4240 ns).  The picosecond-formatted implementation could not use the GPS/PTP time counter directly for RoE timestamping.  Instead, one would have to create a new picosecond time counter and align it to the GPS/PTP time counter (see the sketch after this list).
This is more of an inconvenience than a drop-dead problem, but I think it is an unnecessary inconvenience because such accuracy is not needed, as I explained above.

·         I think we agreed that 1ms, not 1µs, could be the maximum transit time.  Your picosecond counter could satisfy this 1ms limit, but not much more. 
While I believe this limit could remain true even after 1914.1 defines the radio protocol layering split, we cannot be 100% sure of this.  The HARQ mechanism, which is the cause of the current max latency limit, could be placed at the radio by 1914.1.  In this case, some other mechanism would become the limiter for the max latency and this limit could be much higher.  This would ease the latency and PDV requirements of the RoE network and/or increase the physical size limit of the RoE network.
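For what it's worth, a minimal sketch of the alignment step described in the picosecond-counter bullet above (the rollover values are taken from that bullet; the function name is just illustrative):

    # A PTP/GPS-style counter rolls over at 1e9 ns (1 s); the proposed picosecond
    # counter would roll over at 1e9 ps (1 ms), so it cannot be used directly and
    # must instead be derived from, and aligned to, the PTP/GPS counter.
    PS_ROLLOVER = 1_000_000_000          # 1e9 ps = 1 ms

    def ps_counter_from_ptp_ns(ptp_ns_of_second):
        # Derive the aligned picosecond counter from the PTP nanoseconds field.
        return (ptp_ns_of_second * 1000) % PS_ROLLOVER

    print(ps_counter_from_ptp_ns(123_456_789))   # -> 456789000 ps into the current ms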

 

 

Rich

 

 

From: Jouni Korhonen [mailto:jouni.korhonen@xxxxxxxxxxxx]
Sent: Wednesday, April 20, 2016 8:54 PM
To: Kevin Bross
Cc: Richard Tse; Richard Maiden; stds-1904-3-tf@xxxxxxxx
Subject: Re: RF Timing

 

I still do not get the need for this. For radio phase alignment the current tightest system-level requirement is 65ns. Assuming that everything gets tighter (as we have seen), say 5x, we would still be at >10ns. And you would always be doing the alignment on a certain frame boundary over a longer period, not per each sample. If we could even get close to 1ns for real, I would say most of the radio people would be extremely happy.

 

If rate alignment is the concern, then again I do not think there is a need to look at individual samples. You would average over some longer period, and over some frame structure at minimum.

 

What kind of equipment are you looking at to be able to provide timestamping services that are sub-ns accurate in reality?

 

Last, the fronthaul latency requirements out there are way more than 1us, so having a max 1us window is not enough.

 

- Jouni

 

 

 

On 19 Apr 2016, at 22:30, Bross, Kevin <kevin.bross@xxxxxxxxx> wrote:

Richard,
 
I agree, insofar as the LPF is able to maintain the proper bit spacing.  However, I don’t think the LPF will be able to tell which bit position some data is supposed to start on…
·        Hypothetical example:  suppose there’s a period of successive 0x00 bytes that is suppressed from the fronthaul to handle quiescent periods.  At some point after all these zeroes, there will be a non-zero byte.
o   When the next set of data is supposed to go out on the radio, you need to know when the first bit for that next byte is supposed to go out on the radio
o   If you send the next byte out one bit position early, the radio will interpret the data incorrectly, as bits will effectively be shifted left
o   If you send the next byte out one bit position late, the radio will also interpret the data incorrectly, as bits will effectively be shifted right
·        I think you need to be able to specify the presentation time to sufficient accuracy that the RoE device can know the theoretically precise time for when that leading bit should go out.
o   For each ¼ ns period specified by the current timestamp granularity, there could be ~6 different bit positions that would match a given timestamp.  How is the RoE node supposed to know which of these 6 different bit positions is the right time to push out the first bit of that new radio data?
o   With future 100 Gbps links, there would be ~25 possible bit positions for each timestamp.
o   This is why I proposed re-defining the units of the RoE timestamp to be in picoseconds.  This still allows specifying a time up to 1 µs in the future (which everyone seemed to agree was well beyond the expected transit time for any 4G or 5G fronthaul), while still providing enough precision to specify the timestamp for future links up to 1 Tbps (see the sketch below).
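A quick sanity check of the bit-position ambiguity at 0.25 ns granularity (the line rates are illustrative; the ~6 figure above presumably corresponds to a ~25 Gbps rate):

    # Bit positions that fall inside one 0.25 ns timestamp tick at various rates.
    TICK_NS = 0.25
    for gbps in (10, 25, 100, 1000):
        print(f"{gbps:>5} Gbps: ~{TICK_NS * gbps:.1f} bit positions per tick")
    # With 1 ps units instead, even a 1 Tbps link has only ~1 bit position per tick.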
 
--kb
 
------------------------------------------------------------------
Kevin Bross                         Modular Systems Architect
2111 NE 25th Ave, M/S: JF3-466      Phone: 503-696-1411
Hillsboro, OR  97124                mailto:kevin.bross@xxxxxxxxx
=================================================================== 
 
From: stds-1904-3-tf@xxxxxxxx [mailto:stds-1904-3-tf@xxxxxxxx] On Behalf Of Richard Tse
Sent: Thursday, April 14, 2016 12:05 AM
To: Maiden, Richard (Altera) <rmaiden@xxxxxxxxxx>; stds-1904-3-tf@xxxxxxxx
Subject: RE: RF Timing
 
Richard:

The “push out” of the radio data at the presentation time is a timing event that affects the PLL that regenerates the radio’s clock.  This PLL will have an LPF to average out the jitter of these “push out” events.
 
Rich
 
From: stds-1904-3-tf@xxxxxxxx [mailto:stds-1904-3-tf@xxxxxxxx] On Behalf Of Richard Maiden
Sent: Wednesday, April 13, 2016 3:58 PM
To: stds-1904-3-tf@xxxxxxxx
Subject: RF Timing
 
Hi all,
 
Please find a short presentation attached where I try to frame the requirements from an RF perspective. Ultimately I’d argue that the only thing that matters is that the appropriate sample is put on the air at the appropriate time. The block which determines this is the egress buffer (FIFO) in the radio. This FIFO has two controls on the egress side: clock and read address. There are so many ways to generate these signals that I think this topic is out of scope for 1904.3. We just need to make sure that the packet definitions do not box us in by precluding any particular mechanism.
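For illustration only, a toy model of that egress FIFO, where a sample group is read out once local time reaches its presentation time; this is just one of many possible mechanisms and not something 1904.3 would need to define:

    # Toy model: sample groups wait in the egress FIFO until their presentation time.
    from collections import deque

    class EgressFifo:
        def __init__(self):
            self.fifo = deque()              # entries: (presentation_time_ns, sample_group)

        def write(self, presentation_time_ns, sample_group):
            self.fifo.append((presentation_time_ns, sample_group))

        def read(self, local_time_ns):
            # Pop the next group only if its presentation time has been reached.
            if self.fifo and self.fifo[0][0] <= local_time_ns:
                return self.fifo.popleft()[1]
            return None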
 
Kevin brings up a good point with the accuracy of our timestamp field. Its granularity means that if it is used regularly (rather than as a one-time start mechanism), it will be tricky to ensure that we don’t introduce jitter every time we receive a timestamp. Effectively we’d have a moving time-quantization error. For LTE 20MHz, for example, we have 30.72MSps for each AxC, or ~32.5520833 ns per sample interval. Right now our timestamp granularity is 0.25ns. If we perform a one-time timestamp, we’ll have some quantization error on when we actually send, but that would not be a problem. However, on the next presentation-time timestamp we will likely have a different quantization error, and so we will in effect jump forwards or backwards in time on when we actually transmit that sample, and that’s a problem. Adding more granularity (fractional bits) would reduce this error, but I’m not sure how far we’d have to go.
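A small sketch of that moving quantization error, using the numbers above (ideal presentation times at 30.72 MSps rounded onto a 0.25 ns grid):

    # The ~32.552083 ns sample period beats against the 0.25 ns timestamp grid,
    # so the rounding error is not a fixed offset -- it wanders sample to sample.
    SAMPLE_PERIOD_NS = 1e9 / 30.72e6     # ~32.552083 ns
    GRANULARITY_NS = 0.25

    for n in range(6):
        ideal = n * SAMPLE_PERIOD_NS
        sent = round(ideal / GRANULARITY_NS) * GRANULARITY_NS
        print(f"sample {n}: ideal {ideal:9.4f} ns, sent {sent:9.2f} ns, "
              f"error {sent - ideal:+8.4f} ns")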
 
Thanks,
 
Richard
 
 
Richard Maiden
ALTERA (an Intel Company)
101 Innovation Drive
San Jose, CA 95134
 
Tel:  +1 (949) 382-5402