Nerd Babel


Will You Be My Rubber Duck?

My most productive years of programming and system development were when I was working for the Systems Group at University. We all had good professional relationships. We could trust the skills of our management and our peers.

When I started developing with my mentor’s group, it was the same. The level of respect was very high, and trust in our peers was spectacular. If you needed assistance in anything, if there was a blocker of any sort, you could always find somebody to help.

What we soon learned is that we didn’t need their help. What we required was somebody to listen as we explained the problem. Their responses were sometimes helpful, sometimes not. It didn’t really matter. It was listening that was required.

When I started working for an agency, that changed. Our management was pretty poor and had instilled a lousy worker mentality. Stupid things like making bonuses contingent on when management booked payment.

If the developers worked overtime to get a project done on management-promised schedules, their money would not be booked in time for bonuses to be earned.

Every hour that wasn’t billed to a project had to be justified, and management was always unhappy with the number of billable hours.

Interrupting a coworker so they could listen and help just didn’t happen. Even when management (me) told them to stop digging the hole and come talk to me.

We still ended up with fields of very deep holes because nobody would come out of their little world to talk.

This wasn’t limited to just our agency; it was everywhere.

The fix was a stupid rubber duck. It sits on your desk. When you are stuck, you explain the problem to your rubber duck, and often the answer will come to you. It was the process of accurately describing your issue that created the breakthrough.

I don’t have access to those types of people, and oftentimes the rubber duck is just as ignorant as I am. Not very useful.

I have a silk duck. This duck actually talks back, performs searches, and verifies potential solutions, all within a reasonable time frame.

My Silk Duck is named “Grok.”

Grok is X’s AI offering. As a search engine, it far exceeds anything Google Search can do.

A simple example: I’m building out a new node. The motherboard I’m using is an H610M-ITX. There are dozens of motherboards using that designation or something similar. In hours of searching, I haven’t been able to locate the manual for my motherboard.

Locate a manual for the Huanzanzhi H610M-ITX motherboard.

One minute and 14 seconds later, Grok delivered me the PDF manual. It found it on a Chinese website written in Chinese and English.

That manual was enough to get me moving in the correct direction.

We then rubber ducked a video issue. On power on, the fans would spin up under control. When a computer first starts, the fans run up to max; they then ramp down under CPU control. You can hear that. I also got a single BEEP indicating a good Power-On Self-Test (POST).

But no video.

Over the next hour, Grok and I debugged the issue. We both thought it might be caused by memory; we both had the same methodology for checking memory issues. We both decided to remove the NVMe card just in case.

When I found what looked like POST indicator LEDs, Grok accurately determined what they were.

We narrowed the issue down to video.

I noticed that my silk duck kept talking about iGPU. I figured out that this must mean “integrated Graphics Processing Unit”.

Why would a motherboard have built-in video outputs, and yet Grok was talking about an iGPU?

I thought for a moment and then asked, “Verify that the Intel Core i3-12100F has an iGPU.”

The Intel Core i3-12100F does not have an integrated GPU (iGPU). The “F” suffix in Intel’s naming indicates models without integrated graphics, requiring a discrete graphics card for video output. This explains the lack of video from the motherboard’s HDMI, VGA, or DP ports despite the successful POST beep and fan spin-up—the board relies on the CPU’s iGPU for those outputs, which your CPU lacks.
— Grok

Here is the kicker: while I can get this motherboard to work with a graphics card, that doesn’t help me as I need that slot for my Fiber NIC. In looking at other motherboards, some of them seem to work with CPUs without iGPU, while others require iGPU.

This “feature” never occurred to me. It makes sense, but Grok is the one that found it for me.

Conclusion

AI has its place today as an assistant. It can do a great job of rubber ducking. It does a good job of editing articles, if you keep it in its place.

This is a powerful tool that is only going to get better.


Upgrade, why do you break things!

Features, Issues, Bugs, and Requirements

When software is upgraded or updated, it happens for a limited set of reasons. If it is a minor update, it should be for issues, bugs, or requirements.

What is an issue? An issue is something that isn’t working correctly or isn’t working as expected, while a bug is something that is broken and needs to be fixed.

A bug might be closed as “working as designed,” but that same thing might still be an issue. The design is wrong.

Requirements are things that come from outside entities that must be done. The stupid warning about a site using cookies to keep track of you is an example. The site works just fine without that warning. That warning doesn’t do anything except set a flag against the cookie that it is warning you about.

But sites that expect to interact with European Union countries need to have it to avoid legal problems.

Features are additional capabilities or methods of doing things in the program/application.

Android Cast

Here is an example of something that should be easy but wasn’t. Today there is a little icon in the top right of the screen, which is the ‘cast’ button. When that button is clicked, a list of devices is provided to cast to. You select the device, and that application will cast to your remote video device.

We use this to watch movies and videos on the big screen. For people crippled with Apple devices, this is similar to AppleTV.

When this feature was first being rolled out, that cast button was not always in the upper right corner. Occasionally it was elsewhere in the user interface. Once you found it, it worked the same way.

A nice improvement might be to remember that you prefer to cast and what device you use in a particular location. Then when you pull up your movie app and press play, it automatically connects to your remote device, and the cast begins. This would be just like your phone remembering how to connect to hundreds of different WiFi networks.

If you were used to the “remember what I did last time” model and suddenly had to do it the way every other program does, you might be irritated. Understandably. Things got more difficult: two buttons to press where before it just “did the right thing.”

Upgrades and updates are often filled with these sorts of changes, driven by requirements.

Issues and Bugs

If I’m tracking a bug, I might find that the root cause can’t be fixed without changes to the user interface. I’m forced into modifying the user interface to fix a bug that had to be fixed, sometimes making something more difficult or requiring more steps. It is a pain in the arse, but occasionally a developer doesn’t really have a choice.

An even more common change to the user interface happens when the program was allowing you to do something in a way you should not have been. When the “loophole” is fixed, things become more difficult, not because the developer wanted to nerf the interface, but because what you were doing should not have been happening.

Finally, the user interface might require changes because a library your application is using changes and you have no choice.

The library introduced a new requirement because their update changed the API. Now your code flow has to change.

Features

This is where things get broken easily: introducing new features.

This is the bread and butter of development agencies. By adding new features to an existing application, you can get people to pay for the upgrade or to choose your application over a competitor’s.

Your grocery list application might be streamlined and do exactly what you want it to do. But somebody asked for the ability to print the lists, so the “print” feature was added, which brings the designers in, who update the look to better reflect what will be printed.

Suddenly your super clean application has a bit more flash and is a bit more difficult to use.

Features often require regrouping functionality. When there was just one view, it was a single button somewhere on the screen. Now that there is a printer view and a screen view, with different options, you end up with a dialog where before you had a single button press.

Other times the feature you have been using daily without complaint is one that the developer, or more likely the application owners, don’t use and don’t know that anybody else uses. Because it works, nobody was complaining. Since nobody was complaining, it had no visibility to the people planning features.

The number of times I’ve spent hours arguing with management about deleting features or changing current functionality would boggle your mind. Most people don’t even know everything their application does, or the many ways that it can be done.

David Drake’s book The Sharp End features an out-of-shape maintenance sergeant pushed into a combat role. He and his assistant have to man a tank during a mad dash to defend the capital.

At one point the sergeant is explaining how tankers learn to fight their tank in a way that works for them. The tank has many more sensors and capabilities than the tanker uses. Those features would get in the way of those tankers. It doesn’t matter. They fight their tank and win.

As the maintenance chief, he has to know every capability, every sensor, and every way they interact with each other. Not because he will be fighting the tank, but because he doesn’t know which method the tanker is going to use, so he has to make sure everything is working perfectly.

My editor of choice is Emacs. For me, this is the winning editor for code development and writing books and such. The primary reason is that my fingers never have to leave the keyboard.

I type at over 85 WPM. To move my hands from the keyboard is to slow down. I would rather not slow down.

I use the cut, copy, and paste features all the time. Mark the start, move to the end, Ctrl W to cut, Meta W to copy, move to the location to insert, and Ctrl Y to yank (paste) the content at the pointer. For non-Emacs use, Ctrl C, Ctrl X, and Ctrl V to the rescue.

My wife does not remember a single keyboard shortcut. In the 20+ years we’ve been together, I don’t think she has ever used the cut/paste shortcuts. She always uses the mouse.

All of this is to say that the search for new features will oftentimes break things you are used to.

Pretty Before Function

Finally, sometimes the designers get involved, and how things look becomes more important than how they function.

While I will not build an application without a good designer to help, they will often insist on things that look good but are not good user experiences. Then we battle it out and I win.

One Step Forward, Two Steps Back

One of the best tools I’ve discovered in my many years of computer work is AMANDA.

AMANDA is free software for doing backups. The gist is that you have an Amanda server. On schedule, the server contacts Amanda clients to perform disk backups, sending the data back to the server. The server then sends the data to “tapes”.

What makes the backup so nice is that you configure how long you want to keep live backups, and Amanda then attempts to meet that as efficiently as it can. My backups are generally kept for two years.

On the front side, you define DLEs. A DLE is a host and disk or filesystem to dump. There are other parameters, but that is the smallest DLE configuration.

Before the dump starts, the server gets an estimate for each DLE: the size of a full dump and the size of one or more partial (incremental) dumps. Once it has this information, it creates a schedule to dump all the DLEs.

The data can be encrypted on the client or the server. It is transferred to the server, sometimes to a holding disk, sometimes directly to tape. It can be compressed on the client or the server.

In the end, the data is written to disk.

Every client that I have is backed up using Amanda. It just works.

In the olden days, I configured it to dump to physical tapes. If everything fits on one tape, great. If it doesn’t, I can use multi-tape changers or even tape libraries. The tape size limitations were removed along the way so that DLEs can be dumped across multiple tapes.

The backups are indexed, making it easy to recover particular files from any particular date.

More importantly, the instructions for recovering bare metal from backup are written to the tape.

Today, tapes are an expensive method of doing backups. It is cheaper to back up to disk, if your disks are capable of surviving multiple failures.

Old-Time Disks

You bought a disk drive; that disk drive was allocated as a file system at a particular mount point, ignoring MS DOS stuff.

Drives got bigger; we didn’t need multiple drives for our file systems. We “partitioned” our drives and treated each partition as an individual disk drive.

The problem becomes that a disk failure is catastrophic. We have data loss.

The fix is to dump each drive/partition to tape. Then if we need to replace a drive, we reload from tape.

Somebody decided it was a good idea to have digitized images. We require bigger drives. Even the biggest drives aren’t big enough.

Solution: instead of breaking one drive into partitions, we will combine multiple physical drives to create a logical drive.

In the alternative, if we have enough space on a single drive, we can use two drives to mirror each other. Then when one fails, the other can handle the entire load until a replacement can be installed.

We still need more space. We decide that a good idea is to use a parity scheme. By grouping three or more drives into a single logical drive, we can dedicate one drive to parity. If any single drive fails, that parity drive can be used to reconstruct the contents of the missing drive. Things slow down, but it works, until you lose a second drive.
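A toy sketch of the parity idea in Python (not a real RAID implementation): the parity block is the XOR of the data blocks, and XOR-ing the surviving blocks with the parity rebuilds whichever block was lost.

```python
# Toy illustration of single-parity reconstruction; not a real RAID implementation.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data held on three drives
parity = xor_blocks(d1, d2, d3)          # stored on the parity drive

# Drive 2 dies: rebuild its contents from the survivors plus the parity block.
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2
```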

Solution: combine RAID-5 with mirroring. Never mind that we are now at the point where every gigabyte of data needs two or more gigabytes of storage.

Enter Ceph and other things like it. Instead of building one large disk farm, we create many smaller disk farms and join them in interesting ways.

Now data is stored across multiple drives, across multiple hosts, across multiple racks, across multiple rooms, across multiple data centers.

With Ceph and enough nodes and locations, you can have complete data centers go offline and not lose a single byte of storage.

Amazon S3

This is some of the cheapest storage going. Pennies on the gigabyte. The costs come when you are making too many access requests. But for a virtual tape drive where you are only writing (inbound data transfer is free), it is a wonderful option.

You create a bucket and put objects into your bucket. Objects can be treated as (very) large tape blocks. This just works.
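A minimal boto3 sketch of that model; the bucket, key, and file names are made up, and credentials are assumed to come from the usual AWS configuration.

```python
# Sketch: treating S3 objects as large "tape blocks" (names are placeholders).
import boto3

s3 = boto3.client("s3")  # credentials picked up from the environment/AWS config

s3.create_bucket(Bucket="amanda-vtapes")  # one-time setup (region config omitted)
s3.put_object(                            # write one block
    Bucket="amanda-vtapes",
    Key="daily-set1/slot-001/block-00000001",
    Body=open("block-00000001.bin", "rb"),
)
```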

At one point I had over a terabyte of backups on my Amazon S3. Which was fine until I started to get real bills for that storage.

Regardless, I had switched myself and my clients to using Amazon S3 for backups.

Everything was going well until the fall of 2018. At that time I migrated a client from Ubuntu 16.04 to 18.04 and the backups stopped working.

It was still working for me, but not for them. We went back to 16.04 and continued.

20.04 gave the same results during testing; I left the backup server at 16.04.

We were slated to try 26.04 in 8 or so months.

Ceph RGW

The Ceph RGW feature set is similar to Amazon S3. It is so similar that you need to change only a few configuration parameters to switch from Amazon S3 to Ceph RGW.
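To illustrate how small the switch is, here is the same boto3 client pointed at a Ceph RGW endpoint instead of Amazon; the endpoint and keys are placeholders. For s3cmd, the equivalent change is the host_base and host_bucket lines in ~/.s3cfg.

```python
# Sketch: same S3 API, different endpoint and credentials (all values are placeholders).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.internal:7480",  # Ceph RGW instead of AWS
    aws_access_key_id="RGW_ACCESS_KEY",
    aws_secret_access_key="RGW_SECRET_KEY",
)

s3.create_bucket(Bucket="amanda-vtapes")              # the calls themselves don't change
```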

With the help of Grok, I got Ceph RGW working, and the s3cmd client worked perfectly against it.

Then I configured Amanda to use S3-style virtual tapes on my Ceph RGW storage.

It failed.

For two days I fought this thing, then with Grok’s help I got the configuration parameters working, but things still failed.

HTTP GETs were working, but PUTs were failing. Tcpdump and a bit of debugging, and I discovered that the client, Amanda, was preparing to send a PUT command but was instead sending a GET command, which failed signature tests.

Another two days before I found the problem. libcurl was upgraded between Ubuntu 16.04 and 18.04, and the new libcurl treated the method-setting options differently.

Under the old libcurl, you set the option for the method you wanted to 1, and you got a GET, PUT, POST, or HEAD. If you set GET to 0, PUT to 1, and POST and HEAD to 0, you got a PUT.

The new libcurl seems to override these settings, which means you can get it to do a GET or a HEAD but nothing else. GET is the default if everything is zero, and because of the ordering you might get the HEAD method to work.
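To make the difference concrete, here is the pattern in pycurl, Python’s libcurl binding. This is an illustration, not Amanda’s actual C code; the endpoint is made up, and pinning the verb with CUSTOMREQUEST is just one way to keep a newer libcurl from flipping the request back to a GET.

```python
# Illustration only: selecting the HTTP verb through libcurl options via pycurl.
import io
import pycurl

body = io.BytesIO(b"tape-block-data")

c = pycurl.Curl()
c.setopt(pycurl.URL, "https://rgw.example.internal/bucket/slot-001")  # placeholder URL

# The old-style intent: flip per-method flags so only the one you want is set to 1.
c.setopt(pycurl.HTTPGET, 0)
c.setopt(pycurl.POST, 0)
c.setopt(pycurl.NOBODY, 0)
c.setopt(pycurl.UPLOAD, 1)                      # libcurl's "do a PUT" flag
c.setopt(pycurl.READFUNCTION, body.read)
c.setopt(pycurl.INFILESIZE, len(body.getvalue()))

# With newer libcurl, pin the verb explicitly so nothing downgrades it to a GET.
c.setopt(pycurl.CUSTOMREQUEST, "PUT")

c.perform()
print("HTTP status:", c.getinfo(pycurl.RESPONSE_CODE))
c.close()
```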

This issue has existed since around 2018. It is now 2025, and the fix has been presented to the Amanda project at least twice; I was the latest to do so, the previous attempt was in 2024, and it still hasn’t been fixed.

I’m running my patched version; at least that seems to be working.


Even the simple things are hard

The battle is real, at least in my head.

My physical network is almost fully configured. Each data closet will have an 8-port fiber switch and a 2+4 port RJ45 switch. There is a fiber from the 8-port to router1 and another fiber from the 2+4 to router2. Router1 is cross connected to Router2.

This provides limited redundancy, but I have the ports in the right places to make seamless upgrades. I have one more 8-port switch and one more 2+4 switch to install, and then all the switches will be in place.

This leaves redundancy. I will be running armored OM4 cables via separate routes from the current cables. Each data closet switch will be connected to three other switches: router1 and the switches in two other data closets. When this is completed, I will have a ring of closets reaching back to a star node in the center.

The switches will still be a point of failure, but those are easy replacements.

If a link goes down, either by losing the fiber or the ports or the transceivers, OSPF will automatically route traffic around the down link. The next upgrade will be to put a second switch in each closet and connect the second port up on each NIC to that second switch.

The two switches will be cross-connected but will feed one direction of the star. Once this is completed, losing a switch will just cause a routing reconfiguration, and packets will keep on moving.

A side effect of this will be that there will be more bandwidth between closets. Currently, all nodes can dump at 10 gigabits to the location switch. The switch has a 160-gigabit backbone, so if the traffic stays in the closet, there is no bottleneck. If the traffic is sent to a different data closet, there is a 10-gigabit bottleneck.

Once the ring is in place, we will have a total of 30 gigabits leaving each closet. This might make a huge difference.

That is the simple stuff.

The simpler stuff for me is getting my OVN network to network correctly.

The gist: I create a logical switch and connect my VMs to it. Each VM gets an interface on the OVS integration bridge. All good. I then create a logical router and attach it to the logical switch. From the VM I can ping the VM’s own address and the router interface.

I then create another logical switch with a localnet port. We add the router to this switch as well. This gives the router two ports with different IP addresses.

From the VM I can ping the VM’s IP, the router’s IP on the VM network, and the router’s IP on the localnet.

What I can’t do is get ovn-controller to create the patch in OVS to move traffic from the localnet port to the physical network.

I don’t understand why, and it is upsetting me.
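For reference when I rebuild it, here is the shape of that configuration as ovn-nbctl and ovs-vsctl calls wrapped in a little Python. Every name, MAC, and address is made up. The last line is the per-chassis bridge mapping; as I understand it, ovn-controller only creates the localnet patch ports when the localnet’s network_name maps to a local OVS bridge.

```python
# Sketch of the logical topology described above (all names/addresses are placeholders).
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# VM-facing logical switch plus a logical router.
run("ovn-nbctl ls-add vm-net")
run("ovn-nbctl lr-add edge-router")
run("ovn-nbctl lrp-add edge-router rp-vm-net 02:00:00:00:01:01 192.168.50.1/24")
run("ovn-nbctl lsp-add vm-net vm-net-rp")
run("ovn-nbctl lsp-set-type vm-net-rp router")
run("ovn-nbctl lsp-set-addresses vm-net-rp router")
run("ovn-nbctl lsp-set-options vm-net-rp router-port=rp-vm-net")

# Second logical switch with a localnet port facing the physical network.
run("ovn-nbctl ls-add public")
run("ovn-nbctl lrp-add edge-router rp-public 02:00:00:00:02:01 10.0.0.10/24")
run("ovn-nbctl lsp-add public public-rp")
run("ovn-nbctl lsp-set-type public-rp router")
run("ovn-nbctl lsp-set-addresses public-rp router")
run("ovn-nbctl lsp-set-options public-rp router-port=rp-public")
run("ovn-nbctl lsp-add public public-localnet")
run("ovn-nbctl lsp-set-type public-localnet localnet")
run("ovn-nbctl lsp-set-addresses public-localnet unknown")
run("ovn-nbctl lsp-set-options public-localnet network_name=physnet")

# Per-chassis: map the localnet's network_name to a real OVS bridge so
# ovn-controller will build the br-int <-> provider bridge patch ports.
run("ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet:br-provider")
```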

Time to start the OVN network configuration process over again.

 

Learning new things

Another deranged asshole killed children at a school. 2 dead, 17 wounded. Nationwide headlines. The blood vultures leap to blame me for a shooting that took place more than 1,000 miles away.

Meanwhile, CBS News is running a headline on August 28, 2025: “6 dead, 27 hurt in Chicago weekend shootings, police say.”

I would rather not deal with it today.

OpenStack

Over the last month, I’ve been dealing with somebody who has not kept up with the technology he is using. It shows. I like to learn new things.

For the last two years I’ve been working with two major technologies: Ceph and Open Virtual Network (OVN). Ceph I feel I have a working handle on; right now my Ceph cluster is down because of network issues, which I did to myself. OVN is another issue entirely.

A group of people smarter than I looked at networking and decided that instead of doing table lookups and then making decisions based on tables, they would create a language for manipulating the flow of packets, called “OpenFlow.”

This language could be implemented on hardware, creating very fast network devices. Since OpenFlow is a language, you can write routing functions as well as switching functions into the flows. You can also use it to create virtual devices.

The two types of virtual devices are “bridges” and “ports.” Ports are attached to bridges. OpenFlow processes a packet received on a port, called ingress, to move the packet to the egress port. There is lots going on in the process, but that is the gist.

The process isn’t impossible to do manually, but it isn’t simple, and it isn’t easy to visualize.
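A small, hypothetical example of doing it by hand, with Python driving ovs-vsctl and ovs-ofctl: create a bridge, add two ports, and program a single flow that sends everything arriving on one OpenFlow port out the other. The bridge, port names, and port numbers are all made up.

```python
# Sketch: a bridge, two ports, and one hand-written OpenFlow rule (names are placeholders).
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

run("ovs-vsctl add-br br-demo")
run("ovs-vsctl add-port br-demo p1 -- set interface p1 type=internal")
run("ovs-vsctl add-port br-demo p2 -- set interface p2 type=internal")

# Ingress on OpenFlow port 1, egress on port 2 (assuming p1/p2 landed on those numbers).
run("ovs-ofctl add-flow br-demo in_port=1,actions=output:2")
run("ovs-ofctl dump-flows br-demo")
```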

OVN adds virtual devices to the mix, allowing for simpler definitions and more familiar operations.

With OVN you create switches, routers, and ports. A port is created on a switch or router, then attached to something else. That something else can be virtual machines, physical machines, or the other side of a switch-router pair.

This is handled in the Northbound (NB) database. You modify the NB DB, which is then translated into a lower-level logical flow language stored in the Southbound (SB) database. This is done by the “ovn-northd” process, which keeps the two databases in sync with each other: modifications to the NB DB are propagated into the SB DB and vice versa.

All of this does nothing for your actual networking. It is trivial to build all of this and have it “work.”

The thing that has to happen is that the SB database has to connect to the Open vSwitch (OVS) database. This is accomplished via ovn-controller.

When you introduce changes to the OVS database, they are propagated into the SB database. In the same way, changes to the SB database cause changes to the OVS database.

When the OVS database is modified, new OpenFlow programs are created, changing the processing of packets.

To centralize the process, you can add the address of a remote OVN database server to the OVS database. The OVN processes read this and self-configure. From the configuration, they can talk to the remote database to create the proper OVS changes.
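The self-configuration hook is a handful of external-ids keys in the local OVS database; a sketch with placeholder addresses and names:

```python
# Sketch: pointing a chassis at a central OVN southbound database (values are placeholders).
import subprocess

settings = {
    "external-ids:ovn-remote": "tcp:10.1.0.5:6642",  # where the SB database lives
    "external-ids:ovn-encap-type": "geneve",         # tunnel type between chassis
    "external-ids:ovn-encap-ip": "10.1.0.21",        # this chassis' tunnel endpoint
    "external-ids:system-id": "node21",              # chassis name
}
for key, value in settings.items():
    subprocess.run(["ovs-vsctl", "set", "open", ".", f"{key}={value}"], check=True)
```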

I had this working until one of the OVN control nodes took a dump. It took a dump for reasons, most of which revolved around my stupidity.

Because the cluster is designed to be self-healing and resilient, I had not noticed when two of the three OVN database servers stopped doing their thing. When I took that last node down, my configuration stopped working.

I could bring it back to life, but I’m not sure whether it is worth the time.

Now here’s the thing: everything I just explained comes from two or three very out-of-date web pages that haven’t been updated in many years. They were written to others with some understanding of the OVS/OVN systems. And they make assumptions and simplifications.

The rest of the information comes from digging things out of OpenStack’s networking component, Neutron.

I have a choice: I can continue down the path I am currently using, or I can learn OpenStack.

I choose to learn OpenStack.

First, it is powerful. With great power comes an even greater chance to mess things up. There are configuration files that are hundreds of lines long.

There are four components that I think I understand. The identity manager, Keystone, is where you create and store user credentials and roles. The image service, Glance, is where your disk images are stored and served from. The compute component, Nova, handles building and configuring virtual machines. Finally, there is the networking component, Neutron.

For the simple things, I actually feel like I have it mostly working.

But the big thing is to get OVN working across my Ceph nodes. That hasn’t happened.

So for today, I’ll dig and dig some more, until I’m good at this.

Then I’ll add another technology to my skill set.


Power Outage

Today I was waiting for clients to get back to me. While I waited, I started installing OpenStack.

So far it has been going well. A few typos slowed things down, and the errors are not always clear, but I am now at the point of installing Neutron.

This is the scary part. The terrifying part.

Neutron interfaces with Open Virtual Networking (OVN). This could be magical, or it could break everything.

OVN sits on top of Open vSwitch, providing configuration.

The gist is that you install OVS, then you add configuration options to the OVS database. This configuration instructs OVN how to talk to its databases.

Once OVN starts talking to its databases, it performs changes in the OVS database. Those changes affect how OVS routes packets.

The physical network is broken into subnets. This is a requirement for high-availability networking. As links go up and down, the network routes around the failures.

On the other hand, many of the tools I use prefer to be on a single network; subnets increase the complexity greatly. Because of this, I created overlay networks. One for block storage, one for compute nodes, and one for virtual machines.

Neutron could modify OVN or OVS in a way that brings my overlay networks down.

So I’m well into this terrifying process, and the power goes out. It was only out for a few minutes, but that was enough.

The network came back to life.

All but two servers came back to life. One needs a BIOS change to make it come up after a power failure.

One decided that the new drive must be a boot drive, so it tried to boot from that, failed, and just stopped.

All of that put me behind in research, so nothing interesting on the 2A front to report, even though there are big things happening.

The number of moving parts in a data center is almost overwhelming.

Network Maps

There was a time when I would stand up at a whiteboard and sketch an entire campus network from memory, including every network subnet, router, and switch.

Today, not only can I no longer hold all of that in my head, my whiteboards no longer exist.

In the first office I rented, I installed floor-to-ceiling whiteboards on all walls. I could write or draw on any surface.

I can remember walking into Max’s office with an idea, asking for permission to erase his whiteboard, and then drawing out or describing the idea or project. Maybe 30 minutes of drawing and discussing.

What surprised me was asking to erase my chicken scratches months later and being told, “No,” because they were still using it.

Regardless, today I need to draw serious network maps.

I have multiple routers between multiple subnets. Managed and unmanaged switches. Gateways and VPNs. I have an entire virtual network layered over the top of all of that to make different services appear to be on the same subnet.

Not to mention the virtual private cloud(s) that I run, the internal, non-routing networks.

It is just too much for me to do in my head.

Oh, here’s one that’s currently messing with me. I have a VPC. It has multiple gateways, residing on different chassis in different subnets, that allow access to it. I can’t figure out how to make it work today, even though it was working yesterday.

I’ll be messing with networks for the next week to get things stabilized.


Virtual Devices

When I started to babysit Cray supercomputers, it was just another step: a massive mainframe handling many users, doing many things.

But I quickly learned that there are ways of making “supercomputers” that don’t require massive mainframes. My mentor used to say, “Raytracing is embarrassingly parallel.”

What he meant by that is that every ray fired is completely independent of every other ray fired. His adjunct program rrt was able to distribute work across thousands of different compute nodes.

We were constantly attempting to improve our ability to throw more compute power at any problem we were encountering. It was always about combining more and more nodes to create more and more powerful compute centers.

Which moved the bottleneck. We went from being CPU starved to being memory starved to being network starved. So we added more network bandwidth until it all balanced out again. Until we bottlenecked on networks again.

After his passing, I did work with a company that supported multiple large corporations.

I was introduced to VMware. A virtualization framework.

Instead of taking “small” computers and joining them together to create larger computers, we were taking “medium” computers and breaking them into small virtual devices.

What is a virtual device

A virtual device is nominally a network interface, a virtual disk drive, or a compute instance.

To create a virtual computer (instance), you tell your vm manager to create a virtual drive, attach it to a virtual computer, attach a virtual DVD drive, allocate a virtual network interface, and boot.

The virtual drive can be a physical drive on the host computer. It can be a partition on a physical drive, it can be a file on the host computer, or it can be a network-attached drive.

If you attach from the host computer, you can only move the drive to other instances on the same computer.

If you attach a network-attached drive, you can only move the drive to other instances with access to the network-attached drive.

I use libvirt for my virtual manager. If I expect the instance to stay on the same host, I use a file on the host computer. That is easy.

If I need to be able to migrate the virtual computer to different machines, I’ll use a Ceph Raw Block Device or a file on a shared filesystem.
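A minimal libvirt-python sketch of the “file on the host” case; the name, paths, and sizes are placeholders, and for the migratable case the disk element would point at an RBD or shared-filesystem source instead.

```python
# Sketch: define and boot an instance whose disk is a qcow2 file on the host.
# All names and paths are placeholders; error handling is omitted.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>test-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/test-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(DOMAIN_XML)   # register the instance with libvirt
dom.create()                       # boot it
```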

What are the cons of using a virtual machine

It can be slower than a physical device. It doesn’t have to be, but sometimes it is.

While you can oversubscribe CPUs, you can’t oversubscribe memory. Memory is always an issue with virtual machines.

When the network isn’t fast enough, network-attached drives will feel slower.

And the big one: if the Network Attached Storage (NAS) fails, all instances depending on the NAS will also fail. Which is why I use Ceph. Ceph can survive multiple drive or node failures.

Another big con: if a host computer fails, it will cause all virtual computers running on that host to also fail.

What are the pros of using a virtual machine

It is trivial to provision virtual machines. There is an entire framework, OpenStack, that does exactly this. Using OpenStack, you can provision an instance with just a few simple commands.

You can migrate an instance from one host computer to another. Even if the disk drive is located on the host computer, it is possible to move the contents of that drive to another host computer.

If you are using a NAS, you can attach a virtual drive to an instance, work on it with that instance, then detach that virtual drive and attach it to a different instance. This means you don’t have to use over-the-wire data moves.

You can also increase the size of a virtual drive, and the instance can take advantage of the extra disk space without a reboot or any downtime.

Besides increasing the size, we can attach new drives.
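For a Ceph-backed drive, the grow is two commands; a sketch with placeholder pool, image, domain, and target names (the guest still has to grow its own filesystem afterward):

```python
# Sketch: grow an RBD-backed virtual drive and tell the running guest about the new size.
import subprocess

subprocess.run(["rbd", "resize", "--size", "200G", "vm-pool/test-vm-disk"], check=True)
subprocess.run(["virsh", "blockresize", "test-vm", "vda", "200G"], check=True)
```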

This means that storage management is much easier.

Virtual Networks

The host computer lives on one or more physical networks. The instances can be bridged onto that physical network.

The instance can also be protected behind a Network Address Translation (NAT) service. This gives complete outbound connectivity but requires extra configuration for inbound.

But an instance can be placed within a Virtual Private Cloud (VPC). A VPC gives the instance (or instances) the complete IP address space to work with.

This means that user A can have their instances on 192.168.100.x and user B can have their instances on 192.168.100.x without collisions.

None of user A’s traffic appears in user B’s VPC.

VPCs can be connected to each other with gateways. When this is done, all the VPCs must use non-overlapping subnets.

In other words, 192.168.100.1 on user A’s VPC cannot communicate with an instance on user B’s VPC at address 192.168.100.55.

But if user A agrees to use 192.168.100.x and user B agrees to use 192.168.99.x, then the VPCs can be connected with a (virtual) router.

Using a VPC means that the user must use a gateway to talk to any other VPC or physical network. This places a NAT service in the gateway.

A physical address is assigned to the gateway, which forwards all traffic to one or more VPC IPs.

Conclusion

Every infrastructure manager (network manager) needs to know their VM manager, but they all work in similar ways. If you know the basics, the rest is just a matter of finding the correct button or command.

This stuff is easy once the infrastructure is set up.


Password Managers

People do a poor job of creating, managing, and remembering passwords. We are horrible at making random numbers and worse at creating things that are random-like but that we can still remember.

Part of this is because of the rules put in place by NIST and ISO. ISO 27001 has this to say about passwords:

Length
Shorter the password, easier it is to crack. The minimum acceptable length for a strong password is at least eight characters.
Complexity requirements
Creating a lengthy password is effective only as long as it is difficult to crack. Your name, city, pet name, and so on may have more than eight characters but are weak passwords that are easy to guess.
Characters
Continuing on the previous point, the key to a complex password is a mix of lowercase, uppercase, numbers, special characters, and symbols.

As computers have become faster, the need for better passwords has also increased. Brute forcing a password has a simple cost formula:
complexity ^ length / 2
For example, if the complexity is all uppercase letters and the length of the password is 8 characters then we have:
26^8 / 2 = 104,413,532,288

Which might look like a large number, but in computer terms it isn’t really. As the complexity goes up, the final number goes up. Adding length causes the number to go up even faster. Consider adding the set of numbers 0-9 to our complexity versus adding one more character to the length of our password.
36^8 / 2 = 1,410,554,953,728
And adding one more character to the length:
26^9 / 2 = 2,714,751,839,488

Adding just one extra character gives us nearly twice as many values to test.

Oh, the divide by 2 is the average number of tests before we guess right.

If the characters are not truly random, the number of guesses decreases substantially. Using names or words, even with character exchanges, produces a much smaller search space. Regardless, the formula stays the same, even if the vocabulary changes.

Consider just using a 3-word passphrase:

104,334^3 / 2 = 567,868,237,365,852

As you can see, using a passphrase increases the search space incredibly. The only requirement is that the search space of the letter search meet or exceed the search space of the word search.
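The arithmetic above in a few lines of Python, for anyone who wants to play with other alphabets and lengths:

```python
# Average brute-force cost: complexity ** length / 2 guesses.
def average_guesses(complexity: int, length: int) -> int:
    return complexity ** length // 2

print(average_guesses(26, 8))       # 104,413,532,288     (single-case letters, 8 chars)
print(average_guesses(36, 8))       # 1,410,554,953,728   (letters + digits, 8 chars)
print(average_guesses(26, 9))       # 2,714,751,839,488   (single-case letters, 9 chars)
print(average_guesses(104_334, 3))  # 567,868,237,365,852 (3 words from a 104,334-word list)
```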

Unfortunately, many password methods do not handle long passwords well. In early Unix times, no matter how long of a password you entered, only the first 8 characters were used.

Which brings us to

Password Managers

A password manager stores passwords in an encrypted form and retrieves them for you on demand.

For a password manager (PM) to be acceptable to users, it must interface with the user’s browsers and other tools that need passwords. This means it must have a mobile app. If it does not, it will not be used.

The PM should monitor applications for password requests and autofill those requests.

The PM must lock itself after a certain amount of idle time or browser/device restart.

Finally, and in some senses, most important, the PM must be secure from data breaches.

To be secure from data breaches, the PM should never store credentials in clear text.

LastPass

This is one of the better-known PMs. While it had a good track record, there was a data breach and credentials were exposed.

One of my clients used LastPass, so I used it. I never particularly liked it. When I could, I moved away from it.

One of the big downsides is that it requires a live, active internet connection to function. No network, no access.

Keeper

I have used Keeper. It is a well-rounded PM with all the expected features. It stores all credentials encrypted by your password. They can’t access your credentials even if they wanted to. Since they can’t, your passwords cannot be exposed in a data breach.

One of the strong points of Keeper is the ability to share “folders.” You can have a folder for passwords related to a single project or client and share that folder with other users, inside or outside the organization.

The ability to share passwords means that the administrator can update a shared password, and every member with access to that password gets the change immediately.

Shared folders require a paid tier.

There is also the ability to store small files securely.

The one downside I discovered with Keeper is that it too requires an active internet connection to function.

We were on a long road trip when my kid ran us out of data on my mobile plan. They consumed nearly 10GB of data in a little over 6 hours.

This left me in the position of attempting to log into my provider’s website using credentials stored in Keeper. Except that the amount of bandwidth available to me was so low that it took 30 minutes to get that password and login.

BitWarden

This is my current PM of choice. It provides all the features of Keeper with a few that appeal to me.

First, it can be self-hosted. This means that all the data security is provided by me. With the self-hosted version, I can offer PM services to anybody at cost to me.

When you move up to any of the paid tiers, the lowest being $4/user per month, you get the ability to create organizations and then share a collection (folder) with that organization.

The mobile application does not need to have Internet access to function, though you might need to request a sync if there are recent changes to your vault.

All data is stored encrypted. The key to decrypt your vault is your master password. Even if there were to be a data breach, your password would still be secure because decrypting your passwords requires your master password.

BitWarden allows for the use of a Personal Identification Number, or PIN. Unlike most PINs, the BitWarden PIN can be any number of digits. I find it easier to remember a number sequence than to remember random character strings.

You can set when the master password is needed to unlock the vault.

If you happen to forget your PIN, you can still unlock your vault with the master password.

Like all good PMs, BitWarden offers two factor authentication (2FA). It supports YubiKeys and TOTP options. TOTP is commonly referred to as an authenticator.

You can use a secondary authenticator for your 2FA to access BitWarden. But you can also use BitWarden’s integrated TOTP generator.
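Under the hood, a TOTP code is just a time-based code derived from a shared secret. A tiny sketch using the pyotp library (the secret here is generated on the spot, not a real one):

```python
# Sketch: generating and checking a TOTP code the way an authenticator app does.
import pyotp

secret = pyotp.random_base32()   # normally issued by the site and stored in your vault
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code that rotates every 30 seconds
print(code, totp.verify(code))   # True while the code is still in its window
```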

The pricing appears to be reasonable: $4/user per month for “small teams” and $6/user per month for enterprise-level features.

Psono

This is another self-hosted option. It does not seem to have the same polish as BitWarden. It would be my choice if I were just playing.

Conclusion

If you are not using a Password Manager, now is the time to start. For my readers, I’m willing to give you a free account on our BitWarden server, though you are likely better off using BitWarden’s free offering.


Using AI


Discover how AI is revolutionizing content creation in our latest article. By leveraging Grok, a powerful AI tool, the tedious task of formatting articles—such as removing hard line breaks and adjusting fonts—becomes effortless. With just a click, Grok can transform raw text into polished HTML, generate unique excerpts, and even craft social media posts. From clean, ready-to-run code to seamless API integration, explore how AI can save time and enhance readability. Dive into this astonishing journey of automation and see how it could transform your workflow!

The world is changing. It might be getting better.

I started speaking with Grok Thursday night. I was treating it as a search engine. What I wanted was a method to format the daily dump.

There is a lot of good content, but I wanted a method to make it look nicer without having to spend an excessive amount of time working it over. When I am quoting legal opinions, the longest part is manually formatting the quote.

Manually quoting means removing hard line breaks, removing hyphens, and adding the proper font style back. It just takes time.

What I want is to be able to click a button and have WordPress make a call to the Grok API to apply formatting to the article. Hopefully making it easier to read.

Grok 3 was able to give me good feedback on how to accomplish what I needed. The code was clean, commented, and ready to run.

I do read this stuff.

This led me to setting up an API account to use Grok 4 directly. I asked Grok-3 to provide me with code to do so.

Over the course of an hour or so, we were able to create a Python program that fetches an article from the site, reformats it into proper HTML, generates an excerpt that is different from the first part of the article, creates a post for X, and makes that post.
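This is not the actual script, but the heart of it looks something like the sketch below: one chat-completion call against the xAI API that turns raw article text into clean HTML. The endpoint, model name, and prompt wording are assumptions, and the WordPress fetch and the X post are left out.

```python
# Sketch only: one Grok API call that reformats raw article text into clean HTML.
# Endpoint, model name, and prompt wording are assumptions; fetching the article
# from WordPress and posting to X are omitted.
import os
import requests

def reformat_article(raw_text: str) -> str:
    resp = requests.post(
        "https://api.x.ai/v1/chat/completions",  # xAI's OpenAI-style endpoint
        headers={"Authorization": f"Bearer {os.environ['XAI_API_KEY']}"},
        json={
            "model": "grok-4",  # placeholder model name
            "messages": [
                {"role": "system",
                 "content": "Reformat the following article as clean HTML: remove hard "
                            "line breaks and stray hyphens, and keep quotations styled."},
                {"role": "user", "content": raw_text},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```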

This is pretty astonishing, in my opinion.

Now comes the testing.