I’m at the Fort this weekend (if you’re in the area, come on over and visit!), presenting life in the early spring in a cold environment. I’ll be staying all weekend, with no running water (it’s turned off until all danger of frost is gone) and little electricity (the gift shop has some). I decided that the food I was going to make should reflect the environment I’ll be in, and so these meals are ones that conceivably could have been served at the Fort in the spring of 1750.
Soup Meagre
I’ve adapted this from Hannah Glasse’s recipe of 1765. I find it amusing how closely it resembles the Green Soup that I made a couple of weekends ago for a Viking reenactment I did. There’s never much food in the spring, and what you can get your hands on has to “make do” until you can plant and harvest crops. It’s a tough time of year! This is a very plain soup, but with the seasonings, would probably have been quite the treat. Early greens in New England would include ramps, asparagus, watercress, fiddleheads, dandelion greens, and things we consider weeds like stinging nettle, onion grass, and dock.
Ingredients:
2 tbsp butter
1 onion, finely chopped
2 celery stalks, cut into 1- to 2-inch lengths (“half as long as your finger”)
6-8 oz mixed greens (spinach, lettuce, arugula, etc.), chopped if large
3 tbsp parsley, finely chopped
1 tbsp flour
2 to 4 cups broth
1/2 tsp salt, plus more to taste
1/2 tsp black pepper, plus more to taste
1/4 tsp ground mace and/or nutmeg
Melt the butter in a large kettle or Dutch oven over medium heat. When the bubbling has subsided, add the onions and cook for about five minutes, until transparent.
Add the celery, greens, and parsley, stir, and cook over medium heat for about 10 minutes. Sprinkle the flour over the greens and stir to blend. Add the broth, salt, pepper, and mace, and stir well. Simmer the soup over medium-low heat for about 30 minutes.
Taste and add more salt and pepper, if desired. Serve warm, with bread if you have it.
Notes from Mistress Allyson: If you want to add a bit of protein to this meal (something that would have been in high demand in the 1750s in spring), try some beans or a bit of salt pork. Beans get added right before simmering. Salt pork should go in with the butter at the beginning.
When working with git, there are several areas we might be discussing. The most important areas are “working directory”, “staging”, “local repository”, and “remote repository”.
A repository without a working directory is called a “bare repository”.
The working directory is a location where your files live, with a reference back to the repository. It is possible to extract the files without keeping a reference to the repository.
Most working directories (and this is what we will assume for our discussion) keep the bare repository within the .git directory (folder).
A remote repository is normally a bare repository located somewhere apart from your local working directory. This could be a different directory on the local machine, or it could be located on a remote, network connected system.
Creating a Local Version of a Remote Repository
This is what the remote repository looks like. A pretty simple version.
We don’t copy it, we clone it. The implication is that what we have in the local version is the same as what is in the remote version.
git clone ssh://git@github.com/author/book.git
This creates a directory named "book", clones/copies the bare repository from GitHub into "book/.git", and creates a remote reference named "origin" within "book/.git/refs/remotes". Under "origin" it keeps a copy of all the branches that are in the remote repository; in our example, just "master".
The clone command then checks out the working directory into "book". This would be the files "chapOne.md", "chapTwo.md", and "chapThree.md". It creates a file in "book/.git/refs/heads" named master with the commit hash (identifier) of "0ccd79797".
These two look the same, but notice that the last two commits have different values/hashes. That is because they really are different commits.
Since you are done with your edit, you attempt to send your changes back to the remote repository named "origin" via a push command: git push origin master. This fails because the remote now has commits you don't have; accepting the push would leave two diverging versions of the repository, and there can be only one.
To correct this, you first fetch an updated copy of the repo.
We do another fetch; there is nothing to do, as nothing else has been added. We then push our commits back to the remote repository: git push origin master.
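Assuming the branch is "master", as in the example above, the whole round trip looks roughly like this (a sketch, not an exact transcript):

git fetch origin              # bring down the commits the remote has that you don't
git merge origin/master       # merge them into your local master
git fetch origin              # nothing new this time
git push origin master        # now the remote accepts the push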
Because I’m not the program, there might be some small ordering issues in the final commit.
The point of all this is that the magic happens behind the scenes. Git can do most merges with no assistance from you. In the rare cases where there is a merge conflict, it is relatively easy to merge the changes manually.
A merge conflict happens when two commits modify the same line of code. In your version, you had "Ciliorys hat" originally. You modified it to be "Billy-Bobs hat". Your editor had changed it to "Cilory's hat".
Now you have two edits to the same line. Git says, “You figure it out.” and shows you the two versions of the line, in context. You can pick one version or the other, or put in an entirely different version.
You choose the third option and put “Billy-Bob’s hat”. The world is good.
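For the curious, the conflicted region of the file looks something like this while it is unresolved (the file and branch names here are just for illustration):

<<<<<<< HEAD
Billy-Bobs hat
=======
Cilory's hat
>>>>>>> editor

You replace those five lines with the single line you actually want, "Billy-Bob's hat", then run git add chapOne.md and git commit to finish the merge.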
Conclusion
git is powerful. This discussion barely touches on the power of git.
There is an entire process of modifying code by “forking” a repository. When you are finished with your modifications, you can contribute them back to the original repository with a “Pull Request”.
Git has multiple methods of inserting code review and other tools into the process.
It is so powerful that it can be used to create a full wiki on the fly, with the raw files served as wiki pages.
There is a method of doing a binary subdivision to find bugs that were introduced in the past (git bisect). There is a method of tracking who introduced an errant line of code (git blame).
There are tools for pulling a commit out of the middle of a branch and applying it to a different branch, without taking the rest of the modifications (git cherry-pick).
In general, there are only about a dozen commands that a user needs to know to work with git.
If you would like to work with git, there are communities ready to help you, and there are multiple cloud providers that will let you host your repo on the web.
My introduction to source code control came at University. The name of the program was “update”. It took an “update deck” which described lines to remove, by line number, and lines of code to insert.
This format allowed us to inspect the code that was actually being changed, as well as the surrounding code. Every line of code I wrote for the Systems Group that was installed went through three levels of code review and QA testing before going live in the system.
Having those change decks helped in the review process. As a side note, the author’s initials were attached as a note to the right of every line of code we modified. Easy stuff.
After a change deck was accepted, it became part of the “installed version” of the software.
One of the powerful features of working with change decks is that two (or more) people could be working on the same piece of code and unless their changes overlapped, they could be applied independently.
RCS
When I left University, I started working with the BRL CAD project. This introduced me to the RCS system.
RCS was something like “update” but not quite. And you didn’t think in terms of “change decks”. That was handled behind the scenes.
You had a directory (folder) in which you had your code. You also had hidden files that stored the RCS history of the code.
By default, files were stored read-only. You could read them, you could compile from them, but you could not modify them.
To modify a file, you needed to first check out the file. When you checked out a file, it was “locked” to you and nobody else was allowed to modify the file.
You made the changes you wanted to the checked out files, then you tested. When you were happy that your code worked, you checked in the file you had checked out.
This is great when modifying a single file, but if you are modifying more than one file to accomplish your fix or enhancement, you have to check in each file in a separate operation.
There was no linkage between the files to indicate that all the changed files needed to be processed as a gestalt.
When you were ready to make a release, you had to do some magic to mark each file as being part of that particular tag. Then, at a later time, you could check out that entire tree and work on it as if it was the day of the release.
RCS did magic behind the scenes to figure out the “delta” between the checked out code and the original. This was equivalent to the “update deck” I was used to from University Days.
To work in a collaborative methodology, you would have a single "working directory" with everybody on the team having read/write privileges to the directory. If you were working across multiple machines, each machine had to use the same shared directory via a network file system (NFS at the time).
At one point, I was working on BRL CAD on my home machine. I did not have enough space on the drive to copy the entire RCS tree to my local drive, so I was using NFS over a 28.8k dial-up modem.
Compile times ran about 3 days. And if anybody changed one of the “big” include files, I would have to start the build over again.
If you were working on a copy of the source code, you would extract a patch file from RCS to submit back to the master RCS directory.
It felt easy at the time, but it wasn't as easy as it seemed. We just didn't know what we didn't know.
CVS
CVS was the first major paradigm change in source code control for us. The basic use was the same as with RCS, but they had changed the layout.
You now had an explicit directory, CVS, which contained the history files. When you checked out files, the lock was done in the CVS directory.
In addition, you could check out the files read-only (no lock) from the CVS directories onto a remote system, then check out with a lock, edit on the remote system, and check in your changes.
This was a game changer. We no longer required a network file system.
Unfortunately, we had some of the same issues as we had with RCS. The main one being that only one person could check out/lock a file at a time. With team members working nearly 24 hours per day, it was a pain when the morning dude wasn’t available at 2237 to release a lock.
SVN
SVN solved most of the known problems with CVS. It had the concept of a remote repository, it allowed multiple people to work on the same file at the same time, and it had better branch and tag capabilities.
All in all, it was a vast improvement.
The two primary weaknesses were no gestalt for files and very slow check out of branches and tags away from the main trunk.
I remember using SVN. I had to use it just a couple of weeks ago. I don't think I ever fell in love with it. It was a step-wise improvement over CVS.
git
Git is my favorite source control system. I understand that there is another SCS, but I can't recall its name at this point. I've not used it.
Git changed the paradigm we use for changing the repository. Whereas all the previously discussed SCSs work on a file-by-file basis, git works on a "commit" basis.
Even if you are working in a collaborative environment, you work on your personal repository (repo). We will get to collaborative environments shortly.
In the simplest form, you create a “working directory” which you populate with your code. That could be a book, a program, an application, or a web page. It doesn’t matter. Git doesn’t care what the files contain, only that they be text files.
Git can work with binary files, but that is not our focus.
Once you have your initial contents, you create your repo with git init. With this magic command, git creates all the required files to track the history of your project.
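Run from the top of the working directory, that first step, plus the natural follow-up of recording the initial contents, looks like this:

git init                      # creates the .git directory and its history files
git add .                     # stage everything currently in the directory
git commit -m "Initial commit"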
Let’s say you are working on a book. You have placed each chapter of the book in a separate file. One of your characters is named Cillary Hlinton. Your editor tells you that the name is just too close to a real person, and he would rather not be sued. He asks you to change the character’s name.
Under update, RCS, CVS and SVN, you would check out individual files, change the name to “Billy Boy” and then check in your changes. When you have made all the changes, you are happy.
The issue is that Chapter One is on revision 44, Chapter Two is on revision 37, and Chapter Three is on revision 48. How do you figure out the revision from just before you made the changes?
With git, you do not check out files and lock them. Instead, all files are ready for you to modify. You just edit the files and change the name.
Now you have chapters one, two, and three that have been modified. You group them into a single commit by adding them to the staging area. git add chap1.md chap2.md chap3.md
You can do this in one git add or several, in one session or multiple sessions. At some point, you will be satisfied with your collection of changed files.
At that point, you commit the changes. You will be required to supply a message.
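For our book example, that commit, with its required message, might look like this:

git commit -m "Rename Cillary Hlinton to Billy Boy"

Git records the message, the author, the date, and the exact state of every staged file as a single commit.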
Each of the following circles represents a commit.
Before Name change
After the name change
If we want to see the version before the name change, we can check out commit 4. When we do, all the files are changed back to the version before adding your name changes.
This makes it easy to find one particular point where the state of the book is one way and in the next commit, all the changes have taken place across the entire book.
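At the command line, that looks roughly like this (the hash is a placeholder; use whatever git log shows for the commit you want):

git log --oneline             # find the hash of the commit just before the rename
git checkout <hash>           # every chapter file reverts to that point in time
git checkout master           # and this brings you back to the latest version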
The other major improvement that git brought was fast branches.
Branches
Here we see two branches added to the repository. The first, "HEAD", is a special reference rather than a true branch. It represents the commit associated with the working directory. It is manipulated implicitly instead of explicitly.
"master" was the default branch name until it was deemed "rrracist", so some repos now use "main" instead of "master".
This ability to create branches rapidly allows us to make and destroy branches at will.
We are going to create a new branch, “editor” for our editor to work on. Meanwhile, you are continuing work on chapter four.
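Creating and switching to that branch is quick; either of these forms works:

git branch editor             # create the branch at the current commit
git checkout editor           # and switch to it
git checkout -b editor        # or do both in one step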
Editor and Master branches
And here is where git shows another of its powers, the merge. With the "master" branch checked out, we merge the editor branch, fixing all the little grammar and spelling errors. git checkout master; git merge editor
After Merge
With this merge completed, the master branch contains all the work done in the editor branch, but the editor branch does not have any of the new work done on master. To synchronize the editor branch with the master branch we do git checkout editor; git merge master.
After merging master into editor branches
If there is no more editing to be done, it is acceptable to delete the editor branch. No code will be lost.
Because the ability to branch and merge is so quick and powerful, it is normal procedure to start a new branch for each issue being addressed in a project. When the issue is resolved, the new code is merged into master or discarded.
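A sketch of that cleanup, assuming the editor branch has already been merged as above:

git checkout master
git merge editor              # a no-op if everything is already merged
git branch -d editor          # delete the branch label; the commits themselves remain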
Remote Repositories
Is a tale for another time.
Conclusion
If you can use a source code control system to track your work and changes, do so. It makes life so much easier in the long term.
I’m watching the snow melt outside. It’s SLOWER than watching paint dry. Ah well. It’ll be gone soon, and then I can get to work on outdoor stuff. For now, it’s time to plan the outdoor garden space, and decide which things are getting direct sowed. In other words, which things go right into the ground (or raised bed/outdoor container/plant tower/etc) versus those that get started indoors because they’re too delicate for the cooler weather?
The first seeds that I’ll be direct sowing will be radishes, beets, carrots, peas, and spinach. These are all hardy crops, and they like the cold and damp that come along with early spring and late fall. They’re also staples around here. Well, not the beets so much. I like them, but most of the rest of the family doesn’t. That’s fine; more for me.
You’ll note that the beets and carrots and radishes are all what we call “root crops.” This means the edible part is under the ground. Generally speaking, for early spring crops you want to look for ones that say, “Plant seed outdoors as soon as the soil can be worked.” This means that a late frost in the spring won’t destroy your plants, and that’s a very good thing when you live in the northern part of America, or any part of Canada. Most root crops can be planted early, but always check the seed packets (or online if you don’t have the packets).
Before you can sow seeds directly into the soil outdoors (regardless of whether it’s in the ground, in a raised bed, or in a container of some kind), you have to prepare the garden bed. This takes several stages, and is best started as soon as you can get into your garden area. I can’t yet, because we still have snow deep enough to cause issues and I’m not shoveling out the garden. You can speed this up by covering your garden beds with black plastic each fall right before the snow flies. This cuts down on weeds, and also allows the beds to warm up earlier. Once your beds are defrosted and workable, you can begin planting. This is one of the main joys of any kind of raised bed.
The short of this is that I’ve been building PCs for years. They are LEGO blocks. You make sure the parts will fit together, and it all just works.
As an example, I “knew” that LGA sockets were for Intel CPUs. Last night I learned that LGA just means the motherboard socket has the pins. PGA means the CPU holds the pins.
How did I learn this? I was researching AMD CPU sockets and learned that the AM4 socket was of the PGA style, while the AM5 socket is of the LGA type.
I didn’t know what I didn’t know.
We run a local data center. It is still a work in progress. We have enough disk space, but not enough redundancy. We have some compute servers, but not enough.
We try to do some upgrade every month, trying to improve things. The last improvement was another node in the Ceph Cluster.
After spending weeks researching, I found a 4 bay NAS enclosure that took Mini-ITX motherboards. This felt just about perfect.
It uses a flex-style power supply, which is sized for the actual load of 4 HDDs and a motherboard; 350 watts is what I went with. Thus, it draws less power than older machines.
Finding a Mini-ITX board was another research hell. What I wanted was a MB with 4 SATA 3.0 ports, 1 or more SFP+ ports, one gigabit Ethernet port, at least 16 GB of memory, and NVMe support for 512 GB of storage.
I couldn’t find one. I haven’t given up, but I haven’t found one yet.
After searching, I found a Mini-ITX MB with an LGA 1155 socket, 4 SATA 2.0 ports, a 10/100 Ethernet port, 2 DDR3 slots (16 GB), and a PCIe slot.
This might seem low end, but it meets our needs. Spinning HDDs can't even saturate SATA 2.0's 3 Gb/s; we would only need SATA 3.0 if we were using SSDs.
The 10/100 is useless for moving data, but meets our needs for a management port. All in all, a good choice.
When all the parts arrived, I couldn’t get the MB installed. The fan was too tall. I got a better cooler that was a low profile style. When that came in, I installed the board. It was painfully tight getting everything in. Took me over an hour to get all the cables hooked up just right.
Everything went well until I went to put the cover back on. At that point, I found the cover didn’t fit “because the case had the motherboard too close to the edge.”
I fixed that in the machine shop. Grinders and cut off wheels to the rescue.
Everything goes together.
After everything is configured and running, I slap a drive into the case and it works. Wonderful. Final step? Install the SFP+ network card.
It doesn’t line up. The damn thing doesn’t line up with the slot in the back.
After mulling it over for way too long, I made the cut-out in the back wider and moved the standoffs. Machine shop to the rescue.
Except I had a bad network card. Easily fixed via a replacement. No big deal.
After over a month of fighting this thing, making massive changes to the case, and taking it entirely apart to get the motherboard in, the machine is now in production.
Yesterday the motherboard for an upgrade arrived. The case I bought to hold it had the PCI slot moved over. This looks like it will all just work.
Except that when I go to install the MB, I can’t get it to fit into the case. No big deal, I’ll take this case apart too.
But the board doesn’t line up. It doesn’t line up with the standoffs. It doesn’t line up with the back slot. It doesn’t even line up with the onboard I/O baffle.
At that point, I measured my Mini-ITX board. It should be 170 mm x 170 mm. This board is not. It is 0.8 inches too wide. It isn't a Micro-ATX, nor is it a Mini-ITX. It is some non-standard PoS.
I’m spitting mad at this point. I’ll put everything back in boxes until the new MB arrives. When it does arrive, I’ll be able to retire an older box that has been holding this data center back.
Everything now fits.
It wasn’t the case that was the issue with the last build. It was the motherboard. Time to update the reviews I wrote.
If you live in any of the Plant Hardiness Zones that are 1a through 6b, then you need to know how to start your seedlings indoors. This is something that can be a lot of fun, but it’s a lot of work as well. Doing it right takes effort and time. The end results are worth it, though! Of course, you could simply buy “starts” (i.e., seedlings) at your local farm store, but what if TEOTWAWKI has happened, and there are no more farm stores? That’s right, you need to know how to do this.
There are various methods for starting seeds, but the one I’m going to talk about today is indoor sowing. The basics of it are fairly simple: fill containers with soil, add seeds, care for them, and voila, you’re ready to plant as soon as the ground is warm enough. This can give you as much as 45 days of extra growing time for vegetables, and that gets important when you’re in New England or any of the northern states.
Common plants to start indoors include tomatoes, broccoli, cauliflower, leeks, eggplant, kale (and other large, leafy greens), sweet and hot peppers, cabbage, most flowers, and most herbs. This is obviously not an exhaustive list, but I picked the most common ones to start indoors. Tomatoes are definitely the most popular, with peppers coming in a close second. All of these plants will transplant well from indoors to your outdoor garden later in the spring and early summer.
On the other hand, there are a variety of plants that should NOT be started indoors. The reasons vary, but generally speaking it’s because they either grow quickly, have incredibly sensitive roots and will die if transplanted, or they like the cold. Common plants that go direct to the garden include beans, beets, carrots, radishes, some lettuces, peas, squash, corn, spinach, and root crops like potatoes and sweet potatoes.
The first thing you’re going to need is a list of what you want to grow. For a typical first garden, I suggest the following: bush beans, peas (either snap peas or shelling, your choice), tomatoes, cucumbers, zucchini (if your family likes them), broccoli, kale and/or cabbage, spinach, and one or more of beets, carrots, potatoes, sweet potatoes, and winter squash. It seems like a small list to me, and it may seem huge to you, but this is a very small but decent kitchen garden for a first year. Add to that your herbs, and you have the beginning of a new hobby that will engulf your life.
Yesterday, the Supreme Court heard oral arguments in this case.
It is difficult to actually conceive of how long the battle for our Second Amendment rights has been going on. It started in 1792 and continues to this day.
In the founding era, there were a number of racist and religious exceptions. These were designed to keep arms out of the hands of Negros, mixed-race people, Indians, members of certain religions, and other "deplorables". By the 1870s, all of these exceptions were found to be unconstitutional, leaving very few infringements that would pass constitutional muster.
At this time, temporarily denying the right to people who have been adjudicated violent in a court of law is the only one I know of. See the Opinion in United States v. Rahimi, 602 U.S. ___ (2024).
In the early 1900s, New York City decided to ignore the Constitution and passed the Sullivan Act. The Sullivan Act was designed to disarm those that would stand up to the corrupt bosses who controlled the city. They used a permitting system.
They claimed that this was constitutional because some people did get permits, and everybody could beg the government for that permission slip. This continued until 2022, when the Bruen decision shut down the corrupt NYC permitting scheme. For all of 10 seconds.
The Bruen response bill attempted to create a statewide “sensitive” places replacement.
After the Sullivan Act, the infringers decided to ban handguns, machine guns, and short shotguns. They did this by placing a tax on these guns that was so outrageous that The People could no longer afford them.
They did not accomplish this. What they got instead was a functional ban on Short Barreled Rifles, Short Barreled Shotguns, Machine guns, and Silencers. By 1936, this was the accepted law of the land.
Using a saying that had not yet been published, in the late 1960s the infringers took advantage of a crisis to stop mail order gun sales. The GCA of 1968 created FFLs and required in person sales of firearms.
The claim was that those FFLs wouldn’t sell to bad people.
When bad things kept happening, they tried more gun control. Mostly permitting schemes that made it nearly impossible for The People to get permission.
Using another crisis, they got the Brady Act passed. Thank goodness the NRA fought for some level of a win. The original intention was to create a system where buyers would have to get permission from the government for any gun purchase.
This was in the form of a "background check" with no limit on how long it took or how intrusive it might be. The NRA got the NICS system for us, along with a "not denied means proceed" default. It put the onus on the government to complete the check rapidly.
In 1986, we got a win with a poison pill. This was the Firearm Owners Protection Act. It was designed to protect firearm owners from being persecuted by the ATF.
There was a time when describing the internal workings of a machine gun was being construed by the ATF as manufacturing a machine gun. Selling a gun or two could get you sent to prison for not having an FFL. It was bad. There are stories of ATF agents hanging around gun shows seeking people to arrest or FFLs to bust for trivial things.
The bad part of the Firearm Owners Protection Act was the Hughes Amendment. The infringers had realized that the NFA had outlived its usefulness.
In 1934, the $200 surcharge for transferring a machinegun was unreachable for most of The People. When an M3 machinegun was selling for under 30 dollars, $200 was nearly impossible. An ad for a Colt M16 shows a price of $236.00 plus $5.00 for shipping. By the mid-1980s, the price was around $1800.
At $1800, a $200 surcharge wasn’t as bad.
One of the problems that started happening after 1986, when the NFA was closed to new machineguns, was a price boost of $200 every time an NFA item changed hands.
Consider buying a silencer today. The can costs $500 + $200 tax. If you want to sell the can, you would like to get $700 to recover your costs. Now, this doesn't work. Given the choice of a used can for $700 + $200 tax or a new can for $500 + $200 tax, you buy new. This keeps the cost of silencers down.
After 1986, there were no new machineguns. This means that every transfer increases the cost of that gun by at least $200.
At this point, the infringers moved to stop the sale of all firearms. The method they decided on was to sue firearm retailers and manufacturers out of business.
What they did was they found a bloody victim and then sued the FFL that sold the gun. They knew they would not win the case, but the cost of litigation was punishment enough.
In 2005, bipartisan legislation was passed to stop this lawfare. The Protection of Lawful Commerce in Arms Act (PLCAA) was designed to protect entities in the lawful commerce in arms from frivolous lawsuits.
And it worked.
Until Sandy Hook.
They sued Remington Arms because it owned Bushmaster, which manufactured the rifle that the asshole used to murder children and teachers.
What they claimed was that Bushmaster produced ads that caused the asshole to decide to murder his mother, steal her keys to the safe, open the safe, steal the AR-15 within, drive the car he stole from his mother to the school, and there murder children and teachers.
It was all the fault of the manly man ads that Bushmaster used to sell guns.
The lower state court dismissed the case based on the PLCAA. It was appealed up to the Connecticut Supreme Court, which decided the case could move forward. That was appealed to the U.S. Supreme Court, which denied cert.
Remington was bleeding money, and this case didn’t help. They went bankrupt. The hull of the company had no assets and no people. The insurance companies were on the hook for the money involved in the suit.
They settled. No gun people were involved in that disaster. It was a purely money motivated decision.
Which brings us to this case. Sorry for this long history.
Mexico was approached by the usual suspects. They filed in Massachusetts claiming that all the gun manufacturers were causing horrible things in Mexico.
The argument goes something like this:
The Cartels get guns from an illegal gun dealer. That illegal gun dealer purchased that gun from an illegal gun smuggler. The illegal gun smuggler purchased the gun from a straw purchaser. The straw committed felonies when they filled out the 4473 and when they sold the gun. The FFL knows that some of the guns he sells are being sold to straw purchasers. The distributor knows that the retailer knows that he is selling some guns to straw purchasers. The manufacturer knows that they are selling to distributors that know that the FFL is selling some guns to straw purchasers.
Therefore, the gun manufacturer is guilty of aiding and abetting murder in Mexico.
Yeah, it is that bad.
The lawyer for the petitioners (the good guys) gave his opening statement explaining this. He then argued that the path between the crime and the manufacturer has too many intermediate steps to make the manufacturers responsible. This is known as "proximate cause" analysis.
He didn’t say anything about PLCAA.
Thomas started the questioning. The conservatives asked the right types of questions.
Then Sotomayor stepped up to the plate. And asked good questions. Not great, but good.
After Gorsuch and Barrett, Kagan asked questions. Again, not great, but good.
Then the surprise of the day.
Jackson started asking questions. And her leading question was, “Why wasn’t this stopped by PLCAA?”
It was a Good question.
I'm looking forward to reading the court's opinion. At this point, I find myself thinking that this may be a 9-0 opinion.
When I started writing regularly for Miguel, I took it upon myself to cover legal cases. Since that time, I've learned more than I really wanted to about our justice system.
As my mentor used to say, "The justice system is just a system." Being a systems person, I could look at cases through the lens of my experience analyzing large systems.
One of the first things I noticed was that most people reporting on cases didn’t provide enough information for us to look up what was actually written or said.
CourtListener.com has come to my rescue for most legal filings in the federal system. If you know the court and the docket number, you can find the case on CourtListener.
Once you have the docket located, you can start reading the filings. These are stored as PDFs. Most of my PDF tools allow me to copy and paste directly from the PDF.
What isn't available on CourtListener is Supreme Court dockets. I've talked to Mike and others; the issue seems to be something about scraping the Supreme Court website, as well as other stuff. I'm not sure exactly what.
I want to be able to keep up on all the current cases in the Supreme Court: what their status is, what has been filed, the entirety of each case. I'm not concerned about most of the cases, but often it is easier to get everything than a selected portion.
To this end, I have code that uses patterns to pull cases from the Supreme Court docket without having a listing of cases.
This tool will get search capabilities and other features shortly; for now, it works well enough.
I am using PySide6, which is a Python binding for the Qt framework. For the most part, I'm happy with this framework. There are parts I don't like, which I work around.
My most recent success was figuring out how to allow me to click on hyperlinks in text to bring up my PDF viewer. This was not as simple as I wanted it to be, but it is working.
The other night, I wanted to write about a current case. I had the case docket in my tool. I pulled up the docket, clicked on the link, and John Roberts’ order popped up in my viewer, exactly as it should.
I started writing. Went to pull the quote and nothing.
Copy and paste does not seem to be functional in my tool.
Which takes me to the rant: which @#$)*&@$) coordinate system should I be using to get the right text?!
Qt is built around widgets. Every widget has its own coordinate system. In addition, there is the global (screen) coordinate system.
Each widget also has a paintEvent() handler, which is where it paints itself.
To start the process, I capture mousePress, mouseMove, and mouseRelease events. While the mouse button is down, I draw a rectangle from the place clicked to the current location of the mouse.
I attempt to draw the rectangle and nothing shows up on the screen.
Through debugging code, I finally figured out that I am not updating the right widget.
The QPdfView widget properly renders the PDF document in a scrollable window. I have made a subclass of QPdfView so I am catching all paint events. But even though I’m telling the system that I have to redraw (update) my widget, there are no paint events being sent to my widget.
Turns out that my widget only gets paint events when the framing content needs to be redrawn; i.e., if the scroll bar changes, then I get a paint event. Once I figured this out, I was able to tell the viewport that it should update, and things started working.
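Here is a minimal sketch of that arrangement, assuming PySide6 6.x. The class and attribute names are mine for illustration, not anything from the actual tool:

from PySide6.QtCore import QRect, Qt
from PySide6.QtGui import QPainter, QPen
from PySide6.QtPdfWidgets import QPdfView

class SelectablePdfView(QPdfView):
    """QPdfView subclass that rubber-bands a selection rectangle."""

    def __init__(self, parent=None):
        super().__init__(parent)
        self._origin = None    # press position, in viewport coordinates
        self._current = None   # latest drag position, in viewport coordinates

    def mousePressEvent(self, event):
        self._origin = event.position().toPoint()
        self._current = self._origin
        super().mousePressEvent(event)

    def mouseMoveEvent(self, event):
        if self._origin is not None:
            self._current = event.position().toPoint()
            # As noted above, update() on the view itself is not enough; the
            # scroll area only repaints on frame changes, so ask the viewport.
            self.viewport().update()
        super().mouseMoveEvent(event)

    def mouseReleaseEvent(self, event):
        if self._origin is not None:
            self._current = event.position().toPoint()
            self.viewport().update()
        super().mouseReleaseEvent(event)

    def paintEvent(self, event):
        super().paintEvent(event)    # let QPdfView render the page first
        if self._origin is not None and self._current is not None:
            # In a QAbstractScrollArea subclass, paint events are for the viewport.
            painter = QPainter(self.viewport())
            painter.setPen(QPen(Qt.red, 1, Qt.DashLine))
            painter.drawRect(QRect(self._origin, self._current).normalized())
            painter.end()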
So now I can draw a frame on the screen. But what I want is to get the text from within that frame.
I asked the QPdfDocument for a new selection from point_start to point_end. It tells me nothing is selected.
Where do I currently sit? I have my frame in my PDFViewer coordinate system. I have the PDF document in a different coordinate system. The PDF coordinate system is modified by the scroll bars or viewport. The scroll bars and scroll area modify the actual coordinate system of the viewport contents.
Somehow, I need to figure out which of these coordinate systems is the right coordinate system to use to get the text highlighted by my mouse.
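For what it is worth, here is the mapping I would try first. It is an untested sketch; the assumptions (single-page mode, a render scale of zoomFactor times logical DPI over 72, page origin at the document margins, and that getSelection() wants page points) are mine and may be exactly where this falls apart:

from PySide6.QtCore import QPointF

def viewport_to_page_point(view, vp_point):
    # Undo the scrolling: viewport coordinates -> scrolled-content coordinates.
    x = vp_point.x() + view.horizontalScrollBar().value()
    y = vp_point.y() + view.verticalScrollBar().value()

    # Strip the blank margin QPdfView keeps around the page.
    margins = view.documentMargins()
    x -= margins.left()
    y -= margins.top()

    # Divide out the zoom to get back to page coordinates (points, 72 per inch).
    scale = view.zoomFactor() * view.logicalDpiX() / 72.0
    return QPointF(x / scale, y / scale)

# Then hand two such points to the document:
#   selection = document.getSelection(page_number, start_pt, end_pt)
#   print(selection.text())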
Planning out your garden beds is important, because where you put your plants matters. Some plants can’t go near one another. Others love to be close together and help one another. It’s a complex dance, and you need to learn a lot to do a good job at growing enough food to at least supplement your stores.
Luckily, garden beds can be made out of anything. As I mentioned last week, I have beds made out of planks (sort of the standard, and one I actually would no longer suggest), buckets, bins, and tires. Some folks will tell you that tires leach chemicals that can get into your veg, but I have not seen any real evidence of that. Most of the leachable chemicals in the rubber are gone long before tires end up in landfill (which is where you can usually find them, often for free). All items used to grow stuff in should get a good wash before use, and anything small enough to allow it should get at least a rinse every year. I find using Dr. Bronner’s soaps (peppermint or tea tree) work best because they’re biodegradable, won’t harm your plants, and are concentrated so you don’t need a lot.
My garden, circa 2015.
In-Ground Garden
If you have a very large, square (or rectangular) sized patch, you may want to just till it up and use it as-is. It would be a miniature farm field, basically. With no sides, it takes longer to warm up in the spring, but it allows you to rearrange your garden each year (which is good, as you don’t want to plant the same thing in the same space, year after year). When making a very large garden of this sort, you will need to put down rocks, stones, or planks of wood to walk along between rows. While you can just leave the ground as it is, you will find that weeds come up very quickly and will threaten to overtake the whole garden. Also, walking on the dirt compresses it in ways that can negatively affect your plants. Walking on boards or beams, or on a brick path, will keep the garden from being compressed so much, while also keeping weeds down.
Generally, you want to make an in-ground garden into rows and/or blocks, depending on what you’re growing. Vegetables like peas, beans, and tomatoes are best planted in rows. Potatoes, squash, and corn do better in blocks. You can plan out the garden to keep companion plants together, and keep your veggie foes apart.
Requirements for an in-ground garden are a large, regular shaped space with enough sun, and the ability to till the soil in some way. While tilling can be done by hand, it’s not easy. You can rent or purchase a rototiller at most hardware stores these days, and there are expensive ones and cheap ones.
This past weekend, I had the pleasure of attending, vending at, and cooking at the Northfolk Nightmarket in Phillipston, MA. This was its first year, and wow, it was amazing. I did pretty well, and I had a blast. Since this event is Viking themed (though “fantasy” Viking more than historical, they delved into the mythology of Beowulf in a day-long roving play), I decided to both dress as and cook as a Viking woman would. That meant coming up with meals that could have been served in Grylla’s mead hall. I decided to make a pork roast with apples, and a green soup. The soup was delicious, but the pork… It was divine. The following was food for about four or five people (but we were hungry from being out in the cold all day).
Ingredients:
1.5 lb pork loin, plain
4 apples, rough chopped
2 red onions, rough chopped
24 oz beer or ale (light, NOT dark)
salt, pepper, oregano, marjoram, about 1/4 tsp each
1 tbsp dried rosemary
Get your fire quite hot and make a good bed of coals to cook in (alternatively, set your oven to 350° F). Over a quick flame (stove burner set to medium high), heat up some olive oil and toss in the apples and onions. Saute them until they begin to soften, but before they start to crumble. Place the pork loin over the vegetables, and sprinkle with the salt, pepper, oregano, and marjoram. Add in the beer, a little at a time so it doesn’t bubble over, until the pork is almost covered (you may need to add more beer later if you don’t cover your pot). Sprinkle the rosemary liberally over the top of the roast, and pop it over the coals for 2 hours.
Check on your pork every 30 minutes or so (or every time a patron asks you what you’re cooking and why does it smell so damn good?), turning it so that every side spends time under the liquid. If the liquid boils out, add more beer or some broth. Continue to cook until the roast is ready to fall apart when poked with a fork. If you’re cooking it in the oven, cook for 2 hours at 350°, then an hour or so at 250° while lidded, for the best result.
Remove the pork from the liquid and slice into coins. Using a slotted spoon, pull out the apples and onions and serve them alongside the pork, with a side of rice.
Notes:
I used old apples I’d found forgotten in our crisper drawer. They looked like apples that had been sitting around since autumn, which worked well for my event. Because of that, they were a little older, a little softer, and a little sweeter than a fresh apple. I highly recommend this, because the result was incredible. This came out moist, and absolutely bursting with the flavor of the beer and rosemary. It has a little bit of a sweet immediate taste, with a lovely savory flavor that hits you after.
If you can, I really do recommend cooking this one in cast iron over a fire. It was really easy, and it was very showy for when people came walking by. But the smell of it, and the slight background taste of smoke and ash, just really came together.
I will also say, we didn’t eat it with rice when we were at the market. We ate it with our fingers, dribbling juices into the snow and ice at our feet, and giving no f*’s. LOL… It was just so good!