How the sausage is made: Chorus Fibre Lab tour

I've long had a fascination with how infrastructure works - all the bits of kit and equipment we take for granted, but most people don't even notice.

Water delivery? Check. Hydro power stations? Sign me up. Transmission grids? Sure. Wikipedia the heck out of that.

The Cobb Dam in Takaka, just up from where The Gathering was held

(image from Jillian Hancock)

So, I jumped at the opportunity to have a look around the Chorus Fibre Lab.

For those not in NZ, Chorus was split out from Telecom NZ, which was the old "Post Office" up until the late 80s. In 2011, Telecom was split into retail (Telecom, now Spark) and wholesale (Chorus), with Chorus running the copper network. When the fibre rollout got underway in the late 2000s, Chorus was a major player in the build-out in a number of areas, including Auckland.

The Fibre Lab was built before the fibre rollout really kicked off, as a way to explain to people (ahem, mostly politicians) why we needed a fibre-to-the-home (FTTH) network with that much bandwidth.

As beta testers of the new Quic network rollout, we were offered an exclusive opportunity to get a tour and to ask a bunch of, well, super geeky questions of people who, I presume, are used to very non-geeky questions.

First up, thanks to Brent and Bobby for the tour. It was awesome :)

The lab itself is a combination of "this is what you can do in your home" - think big 8K TVs, WiFi 7, lots of IoT - and "this is how we roll out a network". There is a working, production exchange in one of the rooms, and lots of other networks to compare - 5G, 4G, Starlink, fibre, etc.

The part I found the most interesting, though, was the fibre build-out and what all the bits are. I have a really good mental model of layer 3 - IP, how a packet goes from my router to the wider internet. The bit I'm missing is how it gets - physically - from my router to the ISP. I can SEE the cable outside on the road, but what happens there?

So that's what I'm going to focus on here - let's trace things from my router through to the BNG (Broadband Network Gateway) at Quic, which is the next hop for an IP packet.

Quic and Vetta are kinda the same, in terms of network

Step 1 - Getting to the exchange

So, I live on Waiheke, which has about 9,000 full-time residents, peaking at around 45,000 in summer. We have a phone exchange here, as most places this size do. The exchange is in Belgium St, about 700m from my house, right next to Woolworths and the Auckland Council offices.

Fun fact: Chorus have a spare, unused 768-fibre cable - about 5cm across - running from Waiheke to Auckland in case the Waiheke exchange burns down. Backups of backups. Lots of redundancy when it takes weeks or months to deploy physical gear.

It's kinda obvious when you go past - but my fibre line doesn't run straight there.

First hop is from my ONT (Optical Network Termination - the box on the wall) to the power pole outside. The connection points on the pole make hooking up a house easy, but they're wired directly back to the pit (more on that in a second), where a single input fibre is split into 16.

Photo by ssamjh

Tracing that back to the bottom of my road is an "Air Blown Fibre Flexibility Point" (AFFP), or pit - for me it looks like a manhole / water-mains cover. It holds a stack of fibre connections and splitters, and connects back to the exchange over a smaller group of fibres.

Photo by ssamjh

The metal bits at the top of the tray - which are very small - split a single fibre into 16 fibres, and one of those 16 is "mine".

From https://community.fs.com/article/how-to-design-your-ftth-network-splitting-level-and-ratio.html - Chorus only does 1:16 splits

Up until now, all of this is passive. It can get flooded (the AFFPs are weatherproofed) and it doesn't matter - unlike an old copper cabinet, which would need replacing.
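
A neat aside on the physics: a passive splitter just divides the light, so each leg of a 1:16 split gets at most 1/16th of the power. A quick back-of-the-envelope sketch (standard optics maths and typical datasheet figures - my numbers, not Chorus's):

```python
import math

# A passive 1:N splitter divides optical power N ways, so each leg
# loses 10*log10(N) dB just from the split, plus a little excess loss.
def splitter_loss_db(ways: int, excess_db: float = 1.0) -> float:
    return 10 * math.log10(ways) + excess_db

# Chorus uses 1:16 splits (from the tour); the ~1dB excess loss is a
# typical datasheet figure, not a Chorus number.
loss = splitter_loss_db(16)
print(f"1:16 split costs roughly {loss:.1f} dB")  # ~13.0 dB

# GPON optics are budgeted for ~28dB end to end (ITU-T Class B+),
# so a 13dB split still leaves plenty for kilometres of fibre.
print(f"Budget left after the split: ~{28 - loss:.0f} dB")
```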

Step 2 - Exchange to ISP

This reaches the OLT - the Optical Line Terminal. Up until now, I have a single, split piece of glass from my house to the exchange. I share this piece of glass with my neighbours - up to 15 of them (a 1:16 split) - and it arrives at a single port in the exchange. The port is set up to handle all of our ONTs in different time slots: I get 125 microseconds, my neighbour gets the following 125 microseconds, and so on.
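
To make the time-slicing concrete, here's a toy version of the idea (my own illustration - real GPON grants upstream slots dynamically based on demand, rather than a fixed rotation):

```python
# Toy round-robin TDMA: 16 ONTs each get a 125-microsecond slot in turn.
# (Illustrative only - a real OLT allocates upstream grants dynamically,
# based on each ONT's reported queue depth.)
SLOT_US = 125
ONTS = [f"ont-{n}" for n in range(16)]

def slot_owner(time_us: int) -> str:
    """Which ONT is allowed to transmit at a given microsecond?"""
    slot = (time_us // SLOT_US) % len(ONTS)
    return ONTS[slot]

for t in (0, 125, 250, 2000):
    print(f"t={t}us -> {slot_owner(t)} may transmit")
```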

This is also how some other shared network media work - you get some time on the line to yourself, and the time is divided between all the users. Others use a collision-detection-and-retransmission scheme instead (classic coax Ethernet did this).

A single port on an OLT can handle a number of customers (it depends on how it's configured - 64 is common internationally, but Chorus uses 16 per port), and its core function is to handle both TDM (time-division multiplexing) and FDM (frequency-division multiplexing) over a single piece of fibre.

TDM is used within a single service - think up to gigabit (GPON) - so I share a single frequency with all my neighbours on the same port/fibre line. The time slots are only needed on the upstream side: downstream, the OLT simply broadcasts to everyone, and each ONT picks out the frames addressed to it.
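
Worth pausing on what the sharing means for bandwidth. Standard GPON runs the shared fibre at 2.488Gbps down and 1.244Gbps up (those rates are from the ITU-T G.984 spec, not something from the tour), so the worst case with 16 busy households looks like this:

```python
# Standard GPON line rates (ITU-T G.984) - spec figures, not tour figures.
GPON_DOWN_GBPS = 2.488
GPON_UP_GBPS = 1.244
SPLIT = 16

# Worst case: all 16 ONTs saturating the fibre at once.
print(f"down, all busy: {GPON_DOWN_GBPS / SPLIT * 1000:.0f} Mbps each")  # ~156
print(f"up,   all busy: {GPON_UP_GBPS / SPLIT * 1000:.0f} Mbps each")   # ~78

# In practice everyone rarely peaks together, so statistical
# multiplexing lets a single user burst to their full plan speed.
```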

FDM breaks it out between different services - GPON (up to 1G) vs XGPON (Hyperfibre 2/4/8G) - which operate at different frequencies.

You can think of normal fibre operating on red light, while Hyperfibre is on green light. The fibre itself can carry both "colours" at the same time without conflict.

The different ONTs (normal vs Hyperfibre) can't see the other colours, so you can use both at the same time without interference.
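
The "colours" are really different infrared wavelengths. For the curious, the standard wavelength plan looks like this (ITU-T figures - my addition, the tour didn't go into wavelengths):

```python
# Standard PON wavelength plan (nanometres) - ITU-T figures, added here
# for illustration.
WAVELENGTHS_NM = {
    "GPON":   {"downstream": 1490, "upstream": 1310},  # up to 1G plans
    "XG-PON": {"downstream": 1577, "upstream": 1270},  # Hyperfibre
}

# Four distinct wavelengths, so both generations coexist on one fibre:
for tech, plan in WAVELENGTHS_NM.items():
    print(f"{tech}: {plan['downstream']}nm down / {plan['upstream']}nm up")
```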

But everyone on a single Gig Fibre port can "see" each other's traffic - sort of. The light physically reaches each ONT in the group, but each ONT (and the source OLT port) is set up so that they can only use their specific slice of time. And most likely there is some encryption in here too, where my ONT and the OLT share a key - so my neighbour can't decrypt my traffic even if they wanted to.

Which is likely another reason why the ONT is not user-configurable.
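
And for the curious: there is encryption - GPON specifies AES-128 on the downstream (ITU-T G.984.3). Here's a minimal sketch of why a neighbour's ONT can't read my frames; the key exchange and counter handling are heavily simplified, not the real GPON procedure:

```python
# Sketch of per-ONT downstream encryption - simplified, not the actual
# GPON key-exchange or counter construction from ITU-T G.984.3.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

my_key = os.urandom(16)   # key shared between the OLT and *my* ONT
counter = os.urandom(16)  # GPON actually derives this from frame counters

frame = b"a downstream frame addressed to my ONT"
enc = Cipher(algorithms.AES(my_key), modes.CTR(counter)).encryptor()
ciphertext = enc.update(frame)

# My neighbour's ONT physically receives these bytes, but without
# my_key they're just noise. With the key, they decrypt cleanly:
dec = Cipher(algorithms.AES(my_key), modes.CTR(counter)).decryptor()
assert dec.update(ciphertext) == frame
```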

The exchange is where we first see powered, network-looking gear.

3 OLTs - enough for about 6000 customers

"My" port might be in the top left of that top piece of gear. It's a single fibre in, and goes into the rest of the OLT for routing/sending.

In terms of power, each of these OLTs (there are 3 in this picture) consumes about 2kW, which is a normal kettle running at full tilt.

Sounds like a lot, but each one can handle around 2000 customers, so the per-customer power usage is about 1W, which is, to be honest, bugger all. 1/8th of a fairly dim LED lightbulb.

In a simpler world (and keep in mind that I don't have any knowledge of where Quic has their gear), each ISP would have gear in every exchange, and once traffic comes out of my port and is reassembled into "my" data, it'd be sent off to the ISP's gear and onto their network.

However, there are 80-odd ISPs of various sizes, and a lot of exchanges in New Zealand, so there isn't enough space (or, likely, capital) to put everyone's gear in every exchange. So Chorus has a number of Points of Interchange, where ISPs can put their own gear and provide backhaul and connectivity from there. You can see where Quic have theirs on PeeringDB.

Chorus also have a service called Tail Extension Services, which allows an ISP to have gear in only a few POIs and have traffic sent to them - handy, as the video on that page shows, for areas where you have very few customers. These obviously cost the ISP more, as they are not providing the backhaul themselves.

From the IP point of view, my next hop is the Quic BNG. All of this so far has happened below layer 3 (IP). If you want a simpler mental model: the Chorus network is one big Ethernet, like the one in your house with many interconnected switches, moving frames from machine to machine. Except on a regional scale, and with a lot more control over how data moves.
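
One way to picture it: everything Chorus does lives in the Ethernet/VLAN wrapping around my IP packet. A sketch with scapy - the MACs and addresses are placeholders, and while VLAN 10 is commonly cited for Chorus UFB handover, treat the exact tag as an assumption:

```python
# What the handover traffic roughly looks like: an ordinary IP packet
# wrapped in Chorus's Ethernet, built with scapy for illustration.
from scapy.all import Ether, Dot1Q, IP

frame = (
    Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")  # placeholder MACs
    / Dot1Q(vlan=10)  # VLAN 10 is commonly cited for Chorus UFB handover -
                      # treat the exact tag as an assumption
    / IP(src="192.0.2.10", dst="203.0.113.1")  # documentation addresses
)

# Chorus's gear only ever looks at the Ethernet/VLAN layers; the first
# device to make a layer 3 (IP) decision is the ISP's BNG.
frame.show()
```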

This means that Quic - and every other ISP - doesn't need to have gear in every single exchange, and Chorus doesn't have to host hundreds of rack units of ISPs' hardware. It's elegant, and knowing how it works, I don't begrudge Chorus (and the other WSPs) their monthly fee. We have it pretty good here - the model might not be perfect, but it could be a whole lot worse.

So in summary, the flow:

  • Your router
    • Your ONT
    • A 1:16 split back to the exchange
    • The exchange
    • Various bits of the Chorus network, ending in an Ethernet handover point for your ISP
  • Your ISP's BNG
  • 'teh Interntz

How big IS the network?

Ah yes, the big question. How much can this network handle?

The answer is... well... more than we use right now. Or more than when Fortnite releases a big update.

On a normal day, the network hits about 3.5Tbps at around 8 or 10pm. When Fortnite released their latest patch, it peaked at 5.2Tbps, which is a new record. For comparison, when we were all at home during the first COVID lockdown in 2020, the peak was around 2Tbps.

When asked, Chorus can't tell us what the network capacity is, because it's basically a distributed network. The path from Waiheke is monitored, but if it happens to be overloaded, that doesn't affect the path from Mt Eden. So an overall network "capacity" isn't a meaningful metric - only congestion (or how close a given link is to congestion) matters. And that's being monitored 24/7.

But the current peak of 5.2Tbps is not even close, I'm told. There's a lot of life in this (not at all) old girl yet.

For comparison, New Zealand's major external link - the Southern Cross Cable - has a lit capacity of 92Tbps, and the Hawaiki cable is 120Tbps. SCC was only expected to have 20Tbps in 2020, but the technology at each end keeps getting upgraded, which adds more capacity without having to touch the actual cable. Chorus and the other WSPs could do the same locally - hence why 5Tbps is likely not even touching the sides of the physical capacity of the network. They might have to upgrade the optics at each end (the ONT and the ports in the OLT), but the cables have a huge capacity available, with plenty of built-in redundancy.
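
The trick, in both cases, is wavelength-division multiplexing: you add capacity by lighting more wavelengths, or by putting faster transceivers on each one. Rough illustrative numbers (my assumptions, not SCC's actual channel plan):

```python
# How endpoint upgrades grow a cable's capacity without touching the
# glass. Channel counts and rates are illustrative assumptions, not the
# Southern Cross Cable's actual configuration.
def fibre_pair_tbps(channels: int, gbps_per_channel: int) -> float:
    return channels * gbps_per_channel / 1000

for gen, (ch, rate) in {
    "early DWDM":      (40, 10),   # 40 x 10G  = 0.4 Tbps per pair
    "100G era":        (80, 100),  # 80 x 100G = 8 Tbps per pair
    "modern coherent": (80, 400),  # 80 x 400G = 32 Tbps per pair
}.items():
    print(f"{gen:>16}: {fibre_pair_tbps(ch, rate):.1f} Tbps per fibre pair")

# Same fibre, ~80x more capacity - all from upgrading the ends.
```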

One of the TVs had a big graph showing some network-wide stats, most going from the start of the rollout to now

Some of the takeaways:

  • The average connection speed on the network is about 444Mbps. Given that the plans are 50, 300 and 950, that average implies a lot more households have gigabit than you might guess (there's a quick sanity check after this list). Very few have Hyperfibre, and the majority are on 300/100.
  • Monthly usage rises with plan speed, but is generally flattening out
    • 10Mbps ADSL used about 10GB a month (politicians: why do we need to roll out fibre then!?)
    • 50Mbps connections use around 270GB; 300Mbps around 530GB; 950Mbps about 1TB; and Hyperfibre hits around 3.3TB a month.
  • The peaks of the second COVID lockdown became the normal everyday throughput about 18 months later, but the over-investment for the Rugby World Cup (2019) meant that the network was more than prepared for the first lockdown.
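
That 444Mbps average is worth a quick sanity check. If you assume, purely hypothetically (Chorus didn't share the actual plan mix), that everyone is on either 300 or 950, the average pins down the gigabit share:

```python
# Back-of-the-envelope: what share of gigabit plans would produce a
# 444Mbps average? The two-plan mix is a simplifying assumption -
# Chorus didn't share the real distribution.
AVG, LOW, HIGH = 444, 300, 950

# avg = LOW*(1-x) + HIGH*x  =>  x = (avg - LOW) / (HIGH - LOW)
gigabit_share = (AVG - LOW) / (HIGH - LOW)
print(f"implied gigabit share: {gigabit_share:.0%}")  # ~22%

# So "majority on 300" and a 444 average are consistent: roughly a
# fifth of connections on gigabit drags the average up that far.
```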

All up, it was a great tour, and lovely to geek out with a bunch of like-minded folks.

Nic Wise

Auckland, NZ