Where did zachrys.org go?

Headline: If you want some information that was on the site, something small and specific, I’d be happy to see if I can find it. But zachrys.org has no plans to be up and running again.

What was zachrys.org?

It was a site I started in the mid-90s to learn more about web technologies, and in 2001 I rewrote the entire site in a technology called .NET.

I thought I would accomplish two things at once:

  1. Make the contents of the book, THE ZACHRY FAMILY TREE “SOUTHERN BRANCH” By Clare H. Zachry available online.
  2. Learn about web technologies.

The site required a login to protect people who were still living, and registered users could input additional information they found.

The site was based on the GEDCOM 5.5 data model, a model that was as popular as it was complex.

Who is Karl Zachry in this context?

To start with, I am in the book and was fortunate enough to be one of the individuals who received a copy from Clare. I also have managed software development for more than 30 years. While my degree is in Geophysical Engineering, I have an aptitude for software development.

After watching many managers who didn’t understand the technologies they managed make colossal mistakes, I realized I needed to go back to my technical roots. This was a project that let me combine the desire to understand the technology with providing this information to fellow members of the extended Zachry family.

I will state up front I know technology. I am NOT a genealogist. I know very little about how to gather records and provide proof.

So what happened to the site?

The short version: years ago I lost the source code, and then my hosting provider upgraded their network in April 2020; after that upgrade we could never get the site running again.

Which is understandable, since the last release I had (which added the security) was in 2005. I was quite proud that software I wrote in 2005 was still running 15 years later.

I did manage to download the database before all that happened, so I do have the data. But GEDCOM is not a human-readable format.
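
To give a feel for why, here is what GEDCOM 5.5 data looks like (a made-up individual for illustration, not a record from the actual database):

```
0 @I1@ INDI
1 NAME John /Zachry/
1 BIRT
2 DATE 12 MAR 1850
2 PLAC Georgia, USA
1 FAMS @F1@
```

Every line is a level number, an optional cross-reference ID, a tag, and a value, and records point at each other through IDs like @F1@, so you can’t just read it top to bottom like a document.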

In addition, other things changed in my life. When I started the site I just pushed paper around and needed a project in order to learn. In 2005 I moved into roles where I write software on a daily basis, so I am no longer motivated to write even more software in my spare time.

But I did start to work on the site again, and while doing research I saw how far ancestry sites have come since I started writing mine in the 90s. I suspect that all the content that was on my website is available on other websites with infinitely more resources.

Closing

I would like to thank all those that contributed new information to the site and gave me encouragement. I look back fondly at those times.

Sincerely – Karl

Changed to Open Live Writer

Headline: I’ve finally moved from Windows Live Writer to Open Live Writer

Why the change?

I’ve been working to finally modernize my blog which has been part of the reason I’ve not posted much. I did not have an SSL Certificate for the site, so most search engines would just totally ignore me.

There are quite a few other changes that I’ll mention in another post, but as part of this I needed to set up my Windows Live Writer again and kept facing issues. I looked at many other authoring tools, but mainly ones that would let me author offline. Probably half the time I’m writing a blog post I’m disconnected.

So I gave Open Live Writer a try again, and I really can’t tell whether I’m in Windows Live Writer or Open Live Writer… But I can set up Open Live Writer on both my machines. I never could get Windows Live Writer to do that.

Many Thanks

I’m so thankful that capable contributors worked to carry the code and capabilities of Windows Live Writer forward.

Here is what it looks like:

[screenshot of Open Live Writer]

Unicorn Series–Automated Testing

Headline: The only real way to be Agile is if you have healthy automated testing. (See Unicorn Series Overview for other topics.)

Results

Let’s start with some results of the practices described here:

  • Quality: Customer reported bugs range from 1 to 10 per year, despite thousands of users
  • Cost: There is no dedicated testing team
  • Speed: Our objective (usually met) is to fix a bug within 1 day

Yes – better, faster, cheaper.

Overview

The most agile teams seem to have short sprints. Ours are 1 week. Yes – 1 week. On any given week we could release a new version of the software fully tested and ready to go.

I once worked on a team that required 2 months of testing after the last line of code was changed. (Ok… we all know that at least a few lines of code change each week…)

Why did it take so long to perform the tests? They were mostly manual tests.

Automated Testing Is Required

So the only way to make sure you can release is to complete your test plans, which is only fast when the tests are automated.

Our Automated Tests

We have several layers of automated tests:

  • C# Unit Tests
  • Jasmine (via Chutzpah) TypeScript tests for AngularJS code.
  • Web User Interface Tests (via Selenium)
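
As a sketch of the middle layer, a Jasmine-style TypeScript spec might look like the one below. To keep the sketch self-contained, the `describe`/`it`/`expect` helpers are hand-rolled stand-ins for what Jasmine (run through Chutzpah) actually provides, and `formatDepth` is a hypothetical function, not one from our code base:

```typescript
// Hypothetical unit under test: format a reservoir depth for display.
function formatDepth(meters: number): string {
  if (!isFinite(meters)) {
    throw new Error("depth must be a finite number");
  }
  return `${meters.toFixed(1)} m`;
}

// Minimal stand-ins for Jasmine's describe/it/expect so this file runs alone.
function describe(name: string, body: () => void): void {
  console.log(name);
  body();
}
function it(name: string, body: () => void): void {
  body();
  console.log(`  passed: ${name}`);
}
function expect(actual: string) {
  return {
    toBe(expected: string): void {
      if (actual !== expected) {
        throw new Error(`expected "${expected}" but got "${actual}"`);
      }
    },
  };
}

describe("formatDepth", () => {
  it("rounds to one decimal place", () => {
    expect(formatDepth(1234.56)).toBe("1234.6 m");
  });
  it("keeps whole numbers readable", () => {
    expect(formatDepth(100)).toBe("100.0 m");
  });
});
```

The real specs look essentially like this body, minus the stand-ins; the point is that each behavior is pinned down by a fast, repeatable check.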

All of the C# and TypeScript tests run with every CI (Continuous Integration) build. After each successful build, we run a CD (Continuous Deployment) release that runs all the Web tests before swapping the slots in the Azure environment. After every commit a developer makes, all tests are run from these three layers, so we catch failures almost immediately.

After every commit a developer makes, all tests are run…

These automated tests (C# and TypeScript) cover more than 90% of the code base.

Other Benefits

Other real benefits from this strategy include:

  • It is fairly easy to refactor code as requirements change with this level of test coverage. (See A Strategic Case for Unit Testing)
  • It is faster and easier for developers new to the team to contribute because 1) the unit tests document the code, and 2) they can code with limited fear of breaking things.

I’ll work to be a bit more active on the Unicorn Series in the months ahead.

Abner Zachry 1932-2019

In Loving Memory of

Abner S. Zachry III 1932 – 2019

Abner Shelton Zachry, III passed away peacefully on Friday, January 11, 2019 in Denver, CO at the age of 86. He was born to Abner and Ethelbert (Herring) Zachry on May 5, 1932 in Fort Worth, Texas. He grew up in Texas and New Mexico, and graduated from Carlsbad H.S. in 1950. He served in the army and later received both his bachelor’s and master’s degrees from Texas A&M University. On August 4, 1956, Ab married the love of his life, Nellie Lee Jefferies, at St. Andrew’s Presbyterian Church in Houston, Texas. They were married for 57 years.
Ab loved trying new things – tennis, golf, photography, gardening, music, and computers to name just a few. His children were inspired by his lifelong love of learning. As a devoted public school teacher, he passed this love on to his students in Brenham, TX; Crane, TX; Kermit, TX; and Grand Junction, CO.
Ab was preceded in death by his beloved Nellie. He is survived by son, Karl Scott Zachry and wife, Lael Wiseman of Denver, CO; daughter, Karen Louise Gardner and husband, Wayne of San Antonio, TX; eight grandsons, five great-grandchildren, and his brother, “Pete” Zachry.
In lieu of flowers, the family prefers memorial contributions be made in his memory to the Denver Hospice at
https://thedenverhospice.org/giving/give-donate/.

Static Code Analysis with .NET Core 2.1

Headline: I prefer Microsoft.CodeAnalysis.FxCopAnalyzers over StyleCop.Analyzers.

Why?

Here are the two primary reasons:

  1. I can use rulesets I’ve used in the past
  2. I can share a ruleset across many projects for consistency
  3. Ok… there are tons of overly restrictive, or at the very least very different, rules in StyleCop.Analyzers.

Conventions

I’ll just refer to these as StyleCop and FxCop for this post. The FxCop analyzers follow very much the same philosophy as the original FxCop.

Use Existing Rulesets

For almost 10 years now I’ve used either “Microsoft All Rules” (AllRules.ruleset) or “Microsoft Managed Recommended Rules” (ManagedRecommendedRules.ruleset). The first is for code projects and the second for test projects (because we like to use unit test names with underscores for readability).

I could not find a way to use existing rulesets with StyleCop.

I have also read Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (Microsoft Windows Development Series), 2nd Edition, at least twice. If you read it, you fully understand the reasoning behind all the rules.

Sharing Rules Across Projects

I have one solution with over 30 projects. StyleCop uses a tree structure under the dependencies to select which rules you want. I could not find a way to “Save” that information and then reuse it across different projects.

Where there is no graphical interface for selecting rulesets for static code analysis (as with .NET Standard projects), you can simply put something like this in your project file (notice line 6):

From .csproj
  1.   <PropertyGroup>
  2.     <TargetFramework>netcoreapp2.0</TargetFramework>
  3.     <Description>Low level library for basic coding.</Description>
  4.     <NeutralLanguage>en-US</NeutralLanguage>
  5.     <Copyright />
  6.     <CodeAnalysisRuleSet>..\..\AllRules.ruleset</CodeAnalysisRuleSet>
  7.   </PropertyGroup>

So it is quite easy to share a common ruleset across projects.
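
Since <CodeAnalysisRuleSet> is an ordinary MSBuild property, one way to share it across all the projects without editing each .csproj is a Directory.Build.props file at the solution root, which MSBuild 15 (Visual Studio 2017) picks up automatically. A sketch, assuming the ruleset sits next to the solution file (the file layout here is an assumption, not my actual solution):

```xml
<Project>
  <PropertyGroup>
    <!-- Every project under this folder inherits the shared ruleset. -->
    <CodeAnalysisRuleSet>$(MSBuildThisFileDirectory)AllRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
</Project>
```

A project can still override the property locally, because settings in the project file win over Directory.Build.props.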

Different Standards?

I won’t belabor the 3rd point… It’s a matter of opinion and I’m sure that there are people putting lots of hours in trying to make the world of coding a better place. But with all the thought that went into Framework Design Guidelines, and then more than a decade of use for these rules, I’m not ready to unleash a new set on existing code bases.

My experience shows that if you change the rules on a code base with 200,000 lines of code, you will generate thousands if not tens of thousands of code analysis warnings. At that point everyone just ignores them, and the serious ones get lost in the noise.

Steps I Took

  1. I used NuGet to install Microsoft.CodeAnalysis.FxCopAnalyzers in all my projects.

image

   2. I then unloaded the projects and added the <CodeAnalysisRuleSet> line shown above.

   3. Then I just ran Code Analysis on the solution and all my warnings (put there on purpose to test) showed up!

Summary

Thank you Microsoft for developing Microsoft.CodeAnalysis.FxCopAnalyzers for .NET Core. It’s not as easy to use as Static Code Analysis in the past, but better than any alternatives I could find.

Since 2008 I’ve participated in many code bases that have zero code analysis warnings (unless suppressed in Source with a valid Justification) when using the Microsoft All Rules. And our code is much better for it. Thanks!

Getting a UPS package at Lombard Gate

UPS is great about delivering, but you do need to pick up your packages in a timely manner.

I signed up for UPS My Choice. It’s free and it’s great. So now I get an email that looks like this:

image

And if I go to that tracking number, I can sign up to be notified!

image

Clicking on the “Notify me with Updates” I get this:

image

I choose “All Packages”… There are quite a few options… and you can choose E-mail or SMS Text. The most important for me is “Delivery Confirmation”… Because then I know to go down and pick up the package.

image

I hope this helps…

Cheers,

Karl

Unicorn Series Overview

Headline: All the modern software development practices really do make significant business impact.

Contents

Disclaimer: As usual, there is no information in this series that would be considered proprietary information for C&C Reservoirs.

This is the overview and index for a series on software development practices the team I’m leading has been using for the past several years. But let me set some context.

This is called the “Unicorn Series” because several people who saw our practices said, “You’re a unicorn!” A unicorn is a mythical creature that is said to exist, but no one has ever really seen one. Sometimes when someone says that they mean, “I don’t believe you.” But since I was able to demonstrate these practices, it clearly meant, “Wow! People say they do this, but no one really does all this!” (Someone else’s words, not mine.)

The overall objective was to take a content delivery and analysis software system for C&C Reservoirs and "cross the chasm" – make it easy to use and appeal to the majority of the market. The team was given the liberty to re-write the application from the ground up. Now DAKS™ IQ is fast, easy to use, and high quality.

The key results are:

  • Better to use: In only 15 months we transitioned 100% of our customers from the previous software to DAKS IQ.
  • Available: Using cloud technology the uptime for this year so far is 99.95%.
  • Quality: The current rate of customer reported bugs is about 1 bug per month.
  • Fast: The use of cloud computing to place the application near the user, along with other techniques, leaves most users with the impression that it is a desktop application rather than a web application.

I started with key customer benefits which we all know are critical for business success. But there are other business benefits that aren’t customer facing. These help with costs and lead to the benefits above:

  • Unit Test Code Coverage: The code has greater than 90% coverage by unit tests.
  • Cloud Strategy: We don’t even own a server! We use Visual Studio Team Services (VSTS) for our source control, builds, and releases. And we use Azure for our deployments.
  • Continuous Deployment Builds: Each build runs all the unit tests, some web tests, and then deploys the build to a test site after every single check-in.
  • Mobile First: We started with a responsive application that would work on all devices from the first deployment.
  • Previews: We had 6 preview releases soliciting customer feedback before our first official release.
  • One Week Sprints: The team moves fast and is really ready to release on any day, but most certainly each week by the sprint meeting.
  • Technology Stack: We used the latest technologies. There are too many to mention here, but I’ll cover that in detail.
  • Automated Releases: When a release is kicked off all the databases are updated, each server around the globe is updated, and this is done with swapping slots to eliminate/minimize downtime.
  • Feature Focus: Each feature is hidden behind a feature flag, and any module can be created from a collection of features. This means that we can release the software with brand new features that customers don’t yet see.
  • No Branching: Due to the ability to construct features on the main branch and not release them there are no branches! And it has been that way for more than 2.5 years.
  • No Dedicated Testing Team: We do test, but various users in the company put it through the paces. Due to the extensive automated testing we don’t need a dedicated team like many software development teams.
  • Focus on Usability: We continually look at the usability for each feature and the consistency of the conceptual model that allows the user to easily navigate DAKS IQ.
  • Virtual Team: We had team members in Houston, Denver, San Francisco, Seattle, and China. Use of Visual Studio Team Services (VSTS), Skype for Business, and Slack allowed us to work together very effectively.
  • Security: We focused on security from the beginning. We don’t even store passwords and all content is delivered over https.
  • One Team: There isn’t a maintenance or bug fix team. We all work on features and what few bugs there are. The person most capable of fixing the bug works on it, which means that for the most part, the person that wrote the bug fixes the bug.
  • Dogfooding: We use DAKS IQ internally to capture the data for each of our fields and reservoirs.
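
The "Feature Focus" and "No Branching" points above rest on feature flags: code ships on the main branch but stays invisible until its flag is on. A minimal sketch in TypeScript (the flag names and registry here are hypothetical, not the DAKS IQ implementation):

```typescript
// Hypothetical feature-flag registry: a feature's code is merged and tested,
// but stays hidden until its flag is enabled for a module or customer.
const enabledFeatures = new Set<string>(["basic-search"]);

function isEnabled(feature: string): boolean {
  return enabledFeatures.has(feature);
}

function availableTools(): string[] {
  const tools = ["basic-search"];
  // The new analysis tool is on the main branch, but customers don't see it.
  if (isEnabled("advanced-analysis")) {
    tools.push("advanced-analysis");
  }
  return tools;
}

console.log(availableTools()); // only ["basic-search"] until the flag flips

enabledFeatures.add("advanced-analysis"); // e.g. enabled for a new module
console.log(availableTools());
```

Because unreleased work hides behind flags like this, everyone can commit to the main branch and release any week without long-lived feature branches.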

I’ll explore many of our team practices including those mentioned that allow us to achieve these results. Let me know what you find the most useful.

Cheers,

Karl

Getting Started with Git in VS 2017

This talks about some basic steps for getting started with Git repositories in Visual Studio 2017. This doesn’t talk about overall usage, but rather just how to get some things set up.

Visual Studio 2017 already comes with basic support for Git. In this post I’m using Visual Studio 2017.3.3.

The first thing I did was create a Git repository (repo) in VSTS (Visual Studio Team Services). You can see the + New repository option in the image below. I created the one called Playground.

image

After that I connected to that Repo and Cloned it.

image

After connecting to a Git repo then I see this on the Team Explorer tab:

image

When I click on Install, I see this disclaimer:

image

Then clicking that install, I go to a page that seems to download the right version for me just by landing on the page. I pressed Run.

image

Then when the Install started I clicked Next 9 times (took the default each time).

I then went to this page to get Git LFS: https://git-lfs.github.com/ 

And I clicked the similar link:

image

After that I did a Sync with the VSTS repo and all was well. Ok… That’s a lie. This process caused some local changes so the pull didn’t work. I had to go to the command prompt and type “git checkout .” to undo all the changes (3 of them) to my local copy. Then the Sync worked.

Some key notes…

– I had already added the items I wanted to track from another machine and committed those changes to the remote (VSTS) repo.

– If this is your first machine for the project that is using git LFS then you will need to run the track command to add them. As you can see, if I run just “track”, it shows which items I’m tracking with LFS:

image

You can add other files by running the git lfs track command, for example:

git lfs track "*.png"

adds the right information to the .gitattributes file.

Windows Live Writer 2012 in June 2017

Someday OpenLiveWriter will replace Windows Live Writer (I hope)… But for now I use the Paste As Visual Studio Code plug-in, and at the time of this writing, there’s nothing like that for OpenLiveWriter.

Fortunately, at the bottom of this page is a location where you can download wlsetup-all.exe: https://answers.microsoft.com/en-us/windowslive/forum/livemail-wlinstall/windows-essentials-2012-microsoft-offline/0dbbd92a-991c-48d7-8157-26decd351da8

Then, there are many posts that show this command to actually install it without having the application call out to the web for updates, which results in an error message. Here is that command (all one line):

wlsetup-all.exe /AppSelect:Writer /q /log:C:\temp\Writer.Log /noMU /noHomepage /noSearch

I finally have a location where I’m keeping this file… Those guys did this right the first time.

No Startup Sound for Windows 10

At least by default it doesn’t seem there is a startup sound. This is GREAT!

I travelled this week and have no idea how many times I heard the Windows 7 startup sound. Why do you need a sound? The only real reason I can think of is because the startup is so slow. The user has gone on to other tasks and you need to let them know, “Hey! I’ve actually done something now.”

But Windows 10 on my Surface Pro 4 is pretty fast for booting. It’s so nice to open it up in a meeting and not have to lunge for the volume so you don’t disturb the meeting.

Stated a different way, LOTS of people this week disturbed others as they started up their devices. They respond with a sheepish and embarrassed smile, but it’s not really their fault – it’s Microsoft. Or was. But not any more.

Thank you, Microsoft, for realizing that just starting a device isn’t worth announcing to the entire room or airplane.

Yours truly,

An Introvert