Personal
It has been difficult for me since my father passed. It isn’t that I interacted with him that much; it is that the safety net is gone. In addition, it turns out that my brother is pretty darn evil.
This is as close as I’ve come to talking about his actions in public.
In the midst of this, a client I work with stepped up as a friend. The 30 minutes of dumping and ranting made things a little better.
Thank you my friend.
What you don’t know (Nerd)
I started doing network administration in the 1985 time frame. We were using 10base2 and X.25. Most of our equipment communicated with the mainframes via 9600 baud connections.
Having blazing fast 5Mb connections was spectacular. We used NFS extensively.
Our long haul communications were done via a 56Kbit connection.
When I started work in Maryland, we were still using 10base2 with a few 10baseT hubs. It was “fast enough”.
Later, some of our machines started showing up with high-speed networking, 100baseT. With jumbo packets, we were starting to get there.
Still later, we started using fiber, which got us up to around 155Mb. This meant that, for the first time, our network was faster than our local drives. Using NFS was no longer a bottleneck for those machines that were fiber-attached to each other.
The house network has been a 1Gbit network for a few years. I found out in the last couple of weeks that my primary machine is actually 2.5Gbit. Unfortunately, all the switches and routers in the house top out at 1Gbit.
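If you want to check what your own Linux boxes actually negotiated, the link speed is sitting right in sysfs. A quick sketch (ethtool reports the same number); it just walks the interfaces, so there are no names to fill in:

```python
# Report what speed each NIC actually negotiated, straight from Linux
# sysfs (the same value `ethtool <iface>` shows as "Speed").
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    try:
        mbit = int((iface / "speed").read_text().strip())
    except (OSError, ValueError):
        continue  # down or virtual interfaces don't report a speed
    print(f"{iface.name}: {mbit} Mbit/s")
```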
Today I installed my first 10Gbit switch. It has four 2.5Gbit RJ45 ports and two 10Gbit SFP+ ports. Three Ceph nodes are plugged into it. Those nodes will get NIC upgrades in the future to take them up to 10Gbit speeds.
I have one SFP+ module; it is a 10Gbit RJ45 transceiver. This means the new switch can connect back to the main house switch at 10Gbit, though the main house switch only supports 1Gbit today.
So what is the plan? I will be deploying a dual network system in the house. The server boxes/nodes will have 10Gbit NICs in them, each with two ports. One will connect to the high-speed network, the other to the 1Gbit network.
The 10Gbit net will handle all the Ceph and Docker traffic. Locally mounted Ceph file systems will use the loopback connection, or they will be attached over the 10Gbit network.
This will make the Ceph file systems seem much faster.
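For the curious, Ceph already has knobs for exactly this kind of split: in ceph.conf, public_network carries client and mount traffic while cluster_network carries OSD replication and recovery traffic. A minimal sketch of the fragment I expect to end up with; the subnet is a placeholder standing in for whatever the 10Gbit segment actually becomes:

```python
# Sketch: generate the [global] fragment of ceph.conf that pins Ceph
# traffic to a specific subnet.  The subnet below is a placeholder.
import configparser
import sys

TEN_GBIT_NET = "10.10.10.0/24"   # placeholder: the new 10Gbit storage subnet

conf = configparser.ConfigParser()
conf["global"] = {
    # Per the plan above, all Ceph traffic rides the 10Gbit network.
    # public_network could instead stay on the 1Gbit subnet if clients
    # only ever mount over that side.
    "public_network": TEN_GBIT_NET,
    "cluster_network": TEN_GBIT_NET,
}

conf.write(sys.stdout)
```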
This will be accomplished with three of the 4+2 switches (four 2.5Gbit RJ45 ports plus two SFP+ ports each) and one 8-port SFP+ switch. It should all just work.
Except that I had to learn all about fiber, because I’ve decided to connect these switches with fiber. After far too many pages of documentation, I settled on LC-to-LC connectors on OM4 cable. Some of the cables will be rated for outdoor, underground use, which is basically an armored cable. The others will be properly rated for the areas they are in.
According to my reading, these OM4 cables should be good to around 40Gbit with the right transceivers and switches.
It is all Trump’s Fault
I’m getting disgusted by leftist idiots thinking that everything is Trump’s fault. Somebody shoots Trump? It is his nasty words and tweets that are the cause. Somebody sets up an ambush for Trump? His fault for pointing out that illegal immigrants are eating pets.
Trump is doing a meet and greet at a grocery store. The lady checking out loses track of the total and goes over budget. Trump peels off some bills and hands them to the cashier to take care of that lady’s shortfall, as well as those of others in the store.
The left accuses him of buying votes.
The Supreme Court respects the law; the outcome-driven leftists on the court spit and sputter, and the left screams that Trump, who is too stupid to tie his shoes, foresaw these cases and picked justices to rule this way, at his bidding.
If Trump were to run into a burning building to save a child, the media and the left would scream that he was stealing jobs from hardworking firefighters.
In the same vein, a group representing the immigrants in Springfield, Ohio has filed suit against Trump and Vance for defaming the poor, hardworking immigrants.
Note, they are “legally” in Springfield because they entered the US via a port of entry and claimed asylum.
Take a look at Dominic Bianchi v. Anthony Brown, No. 21-1255 (4th Cir.), to see the probable quality of these immigrants.
Assassination Attempts
This is getting old. Trump is currently averaging two assassination attempts per month. That count does include Iran posting a fantasy about how they are going to use their super high-tech equipment to kill Trump.
Skills
Ally was doing her look through Craigslist and such when she noticed that somebody was giving away a floor loom.
We are now the proud owners of a 4-shaft, 6-treadle, 40″ floor loom in excellent shape. We will need to replace the reed, get some shuttles, make a raddle, and then dress the loom.
I will need to dig up my weaving books and likely purchase a few. Ally wants to make some period dishrags and a Hudson Blanket. Both of those sound like fun projects.
It Wasn’t My Fault!
I’ve been fighting some new infrastructure and deployment things. On physical premises, we use physically different networking gear for isolation and redundancy. If we want to get fancy, we can set up VPCs and pretend that one physical network is multiple logical networks.
I’ve been using VLANs to accomplish the isolation I want.
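On the Linux side there isn’t much magic to a VLAN; it is just an 802.1Q tagged sub-interface sitting on top of a physical port. A minimal sketch using the stock iproute2 commands, with the interface name, tag, and address all being placeholders:

```python
# Sketch: create an 802.1Q tagged sub-interface so one physical port can
# sit on an isolated VLAN.  Interface name, VLAN ID, and address are
# placeholders; adjust for the real topology.
import subprocess

PARENT = "eth1"            # physical NIC carrying the tagged traffic (placeholder)
VLAN_ID = 100              # 802.1Q tag (placeholder)
ADDRESS = "10.100.0.5/24"  # address on the isolated segment (placeholder)

vlan_if = f"{PARENT}.{VLAN_ID}"

cmds = [
    ["ip", "link", "add", "link", PARENT, "name", vlan_if,
     "type", "vlan", "id", str(VLAN_ID)],
    ["ip", "addr", "add", ADDRESS, "dev", vlan_if],
    ["ip", "link", "set", vlan_if, "up"],
]

for cmd in cmds:
    subprocess.run(cmd, check=True)
```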
In the cloud, I would like to use VPCs. The datacenter I use doesn’t support VPCs, but they do offer VLANs, so I chose to use them.
The magic of their VLAN implementation is that you create VLANs on the fly. In the GUI, you say “attach a VLAN to interface ETH1”. It then asks you to name the VLAN. Any other node that attaches using the same name is added to the same VLAN.
The downside is that there is no explicit method to delete a VLAN. When every node that was using a VLAN detaches from it, the VLAN is deleted.
For testing, I have a script that deletes all my nodes and all the volumes associated with those nodes. This only takes a few minutes to run.
After I verify that the nodes and volumes are gone, I can start the Ansible playbooks to provision the needed nodes, configure them, boot them, configure the OS, install Ceph on four nodes and Docker Swarm on three, and then install the database engines.
Pretty cool. The process of provisioning an instance includes saying that I want a VLAN with a particular tag.
When I ran the playbooks, everything worked correctly, except that one of my nodes refused to talk to the other nodes on the VLAN.
After escalating, it turns out that some nodes were attached to the old VLAN, which was in the process of being deleted, and the others were in the new VLAN.
It is my belief this was caused by a race condition. Some nodes requested the VLAN while the old one was still being deleted and were attached to it; the other nodes made the same request after the deletion completed and were granted a brand-new VLAN with the same name as the old one.
*ARGH.* That was many wasted hours.
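The likely fix on my side is to make the teardown step block until the provider says the old VLAN is completely gone before the playbooks ask for that name again. A hypothetical sketch; vlan_exists() is a stand-in for whatever their API actually exposes, not a real client call:

```python
# Sketch: close the race by waiting for the old VLAN to finish deleting
# before any node asks for that name again.  vlan_exists() is a stand-in
# for the provider's real API, not their actual client.
import time

def vlan_exists(name: str) -> bool:
    """Placeholder: ask the datacenter API whether a VLAN with this name
    is still attached to any node (i.e. not yet fully deleted)."""
    raise NotImplementedError("wire this to the provider's API")

def wait_for_vlan_gone(name: str, timeout: int = 300, poll: int = 10) -> None:
    deadline = time.monotonic() + timeout
    while vlan_exists(name):
        if time.monotonic() > deadline:
            raise TimeoutError(f"VLAN {name!r} still present after {timeout}s")
        time.sleep(poll)

# In the rebuild script: tear everything down, wait, then provision.
# wait_for_vlan_gone("storage-vlan")   # placeholder VLAN name
```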
When was your last range day? What did you take with you?