The Academic IT Tech
https://garrettyamada.com/

It’s 2020, And I’m Still Getting Plain Text Credentials Via Email
August 4, 2020

Today, I noticed something concerning: I received an automated email invoice from my landscaping company (which I won’t name here, out of courtesy) containing a plain-text copy of my credentials for their payment portal.

This had happened before, but previous emails contained an auto-generated set of credentials, so I had left those in place for a while to minimize the “blast radius” if their system were ever compromised.

I had recently changed my username and password (to my email address and a randomized password generated with 1Password), thinking the change would both tell the system to use the new email address for future correspondence and perhaps flag the system to stop sending my credentials via email.

Neither of those things happened. The latest email invoice I received went to the previous email address on file, and my new credentials were still sent to me in plain text.

An Investigation

The payment portal itself is suspiciously vague about what third-party companies are responsible for its maintenance and card processing, so I decided to investigate. You may notice I didn’t have a lot to go on:

[Screenshot: the payment portal page, with almost nothing to identify who runs it]

A cursory once-over turns up absolutely no information. The company running this portal is not named in the only valid link on the page (the Terms and Conditions). “Privacy” is not a link, and neither is “Web 2” (whatever that means).

A call to the phone number listed goes nowhere; it’s invalid. I mean, 555-1212? Come on, what is this, a cheesy Hollywood movie?

The paragraph about “contact us by email by clicking here”? The linked email is “[email protected]”. That sure isn’t going anywhere, either.

Since the information provided on the page itself is completely bogus, I next turned to the domain name. The domain for this payment portal is “manageandpaymyaccount.com”. A WHOIS lookup returns exactly what I expected: the registration is private, masked by GoDaddy.
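
For anyone who wants to follow along, these lookups are just two quick commands, assuming you have the standard whois and dig utilities installed (output omitted here):

whois manageandpaymyaccount.com
dig manageandpaymyaccount.com A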

A dig query returns two A records, both pointing to a CDN called “Incapsula”. That’s not one of the more common CDNs, so I looked a bit further. The site is HTTPS-enabled (at least there’s that), so a quick search on Censys for “manageandpaymyaccount.com” returns this:

[Screenshot: Censys results for manageandpaymyaccount.com]

Nothing out of the ordinary so far (except that Incapsula apparently runs their CDN using IIS - really?). Next I dug into the first certificate in the chain:

[Screenshot: the first certificate in the chain, listing additional domains]

Bingo! Now we’re getting somewhere. Some other interesting domains are listed here. As it turns out, “serviceautopilot.com” belongs to Service Autopilot, the SaaS company my lawn care company uses to manage and run their business. A quick dig shows “backtell.net” doesn’t point anywhere, but Google tells us Backtell, LLC is the registered parent company of Service Autopilot, both based out of Richardson, TX.

Some further searching turns up a couple of interesting things:

  • A search on Spyse appears to show 312 subdomains of manageandpaymyaccount.com; there are likely at least that many companies using this payment portal.
  • There is one other documented public mention of this practice of sending plain text passwords, on Reddit:

[Screenshot: the Reddit comment about Service Autopilot emailing plain-text passwords]

The Reddit comment is from two years ago. Service Autopilot has been sending plain-text credentials for their payment portal – a portal they apparently went to rather surprising lengths to hide that they operate – for at least two years, possibly much longer.

So what’s next?

Before writing this post, I reached out via email to the owner of my lawn care company, recommending he change payment processors and pointing out that sending passwords in the clear is non-compliant with PCI-DSS standards.

I also reported the issue using Service Autopilot’s contact form.

Upon receiving notification of my report, my lawn care company responded as follows:

“Thanks for bringing this to my attention, we are going to make some changes moving forward with our current way of emailing usernames and passwords.  These emails containing the username and password will serve as a temporary username and password for first time clients and we will encourage our clients to change their password/username any time they receive this email.  We will not continue resending them monthly as we have been doing which should ensure a level of protection to our clients.”

This is a good start, as switching payment systems and business process flows is not an overnight process; they’re definitely limited by Service Autopilot’s technology though.

Service Autopilot responded with the following:

“Thank you for contacting us.  I am glad to assist you.  You are correct. SA is not PCI compliant, that is why we do not store credit card information. We store the token only.  We recommend that when you email the client portal login information to your customers you advise them to change the password after they login.

As for safety concerns, you can advise them there is no credit card information stored in the portal, only customer name, address and phone.  If someone were to access a customer's account there is no financial information stored on the site.”

This was very vague. Aside from the fact that PCI compliance also generally applies to the merchant, they seemed very cavalier about protecting their customers’ addresses and phone numbers. (Not to mention that being able to send passwords back in plain text means they’re stored in a recoverable form, not hashed!) When pressed on protecting PII, and on whether they would alter their portal to address this lax security practice, they stated:

“I have not heard of any upcoming changes. I will pass your feedback along for review. I have submitted product suggestion ticket (ticket #) on your behalf.”

Thanks for nothing!

Important Notes

  • In the course of this investigation, I only accessed publicly available data to determine and verify ownership of domains and companies involved in this situation.
  • Both companies involved were notified of the security issue in advance of publication.
  • No computer systems were accessed without authorization.

SSH User Certificates With Azure AD & Smallstep
June 17, 2020

As I work through evaluating better security for my team's infrastructure, one of the areas I saw as ripe for automation is implementing SSH certificates. SSH certificates can provide a smoother experience than simple RSA keys, but they normally require some specific domain knowledge about PKI. Certificates and PKI can be complicated; sometimes they create more headaches than they solve.

Enter Smallstep – they have created a great tool called step-ca for running a really simple certificate authority, and an excellent CLI called step for interacting with it. If you initialize your CA with SSH support enabled, like so:

step ca init --ssh

then it can issue both user and host certificates for use in your environment. Once you have a working CA, you can configure your ca.json with a new provisioner. Provisioners are the backbone of step-ca; they are how one is authorized to generate a certificate. Smallstep has built out an OIDC (OpenID Connect) provisioner which works with both Google and Azure AD.

[Note: Azure AD is Microsoft's cloud-based identity service (IDaaS); it is not the same as traditional Active Directory or ADFS.]

To set up an Azure AD provisioner, you'll need to first register Smallstep in your Azure AD tenant as an application. This can be done from the Azure CLI:

az ad app create --display-name "Smallstep SSH" --reply-urls https://127.0.0.1:10000

Port 10000 here is the local port the step CLI listens on when it launches a web browser tab for sign-in; it can be set in our ca.json, as we'll see in a moment. For specifics on the app registration and the credential details you need, see https://smallstep.com/docs/sso-ssh/azure-ad/.

After the app is registered, we'll add a provisioner to our step-ca configuration's ca.json.

{
	"type": "OIDC",
	"name": "AzureAD",
	"clientID": "your-app-id",
	"clientSecret": "your-client-secret",
	"configurationEndpoint": "https://login.microsoftonline.com/your-tenant-id/v2.0/.well-known/openid-configuration",
	"admins": [
		"[email protected]"
	],
	"domains": [
		"test.com"
	],
	"listenAddress": ":10000",
	"claims": {
		"maxTLSCertDuration": "8h0m0s",
		"defaultTLSCertDuration": "2h0m0s",
        "disableRenewal": true,
        "enableSSHCA": true
	}
}

Once ca.json is updated, restart step-ca. Then, from the client machine (assuming you have bootstrapped step), run:

step ssh login [email protected] --provisioner "AzureAD"

This will launch a web browser tab where you sign in to Azure AD, which then returns a token to step. step uses that token to request an SSH certificate from your CA and, if one is issued, adds the certificate to your ssh-agent.
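
If you want to sanity-check that the certificate actually landed in your agent, a plain OpenSSH listing is enough (assuming a standard ssh-agent setup):

ssh-add -l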

That's all there is to it for setting up user certificates with the Azure AD provisioner. For more details, see the Smallstep documentation. Happy SSH-ing!

Setting Up GitHub Actions Self-Hosted Runners
June 2, 2020

Today we set up a self-hosted runner for the newly-out-of-beta GitHub Actions. The process was relatively smooth, but since we were setting up a runner on Windows, as usual, there were a couple of bumps.

Bump #1: Custom Actions

We were setting up a runner to test the Terraform action first, and quickly discovered something that does not immediately stand out in the self-hosted runner documentation: to use most actions, you must install Node.js.

Luckily, that was one command away:

choco install nodejs

Bump #2: The Shell

As our luck would have it, the Terraform action uses the bash shell by default for all platforms (including Windows). On Windows, GitHub Actions uses the "Git Bash" shell. We already had that installed, so we were in good shape, right?

Wrong.

At least when installed via Chocolatey, Git Bash does not get added to the PATH by default. You'll need to add the directory below to your PATH:

C:\Program Files\Git\bin
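
One way to do that persistently, sketched here in PowerShell (run elevated; adjust the path if Git lives somewhere else on your runner):

# Append Git's bin directory to the machine-wide PATH if it isn't there already
$gitBin = 'C:\Program Files\Git\bin'
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
if ($machinePath -notlike "*$gitBin*") {
    [Environment]::SetEnvironmentVariable('Path', "$machinePath;$gitBin", 'Machine')
}

Note that running processes keep their old environment, so the runner service itself needs a restart to pick up the new PATH.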

With these two small changes, you should find using your new Windows self-hosted runner with GitHub Actions to be much more straightforward!

Windows Update Analytics Using Update Compliance
June 2, 2020

For many years, my team has been using Windows Server Update Services (WSUS) to manage and control distribution of Windows Updates to our endpoints. Recently we decided that given our limited personnel resources, we should move to a more “hands-off” solution for Windows Updates. We set out to migrate our endpoints to Microsoft’s Windows Update For Business product.

Windows Update For Business pulls updates directly from Microsoft without a middleman WSUS server needed for management. “But can you control the flow of updates? Can you still get reports?” Yes, and yes.

Windows Update For Business allows you to develop “deployment rings” that roll out updates to different sets of endpoints over varied periods of time. For example, a set of your endpoints might get updates as soon as they are available (Ring 0), another set 3-5 days later (Ring 1), and another set within 7-10 days (Ring 2). All of these groupings and deferment times can be adjusted, and if there is a problematic monthly rollup released, you can “pause” updates across all rings for an administrator-defined time period.
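
If you're setting the rings with direct policy registry values rather than Intune or Group Policy, a "Ring 1"-style deferral boils down to a couple of entries – a rough sketch using the documented Windows Update for Business policy names (the 5-day figure is just an example):

# Example: defer quality (monthly) updates by 5 days for machines in "Ring 1"
$wuPolicy = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
New-Item -Path $wuPolicy -Force | Out-Null
Set-ItemProperty -Path $wuPolicy -Name 'DeferQualityUpdates' -Value 1
Set-ItemProperty -Path $wuPolicy -Name 'DeferQualityUpdatesPeriodInDays' -Value 5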

Perhaps most importantly for us, we wanted similar or better reporting capabilities for getting data about the deployment status of Windows Updates each month. Microsoft offers a free solution for this called “Update Compliance”, which collects the Windows Update telemetry data and organizes it into nice, admin-friendly dashboards accessible via the Azure portal. It provides default reports on Feature Update status and Security Update status, and additional reports can be created using Azure’s Log Analytics tools.

You can learn more about how to set up Windows Update For Business and how to configure Update Compliance on Microsoft’s documentation site. Rolling out these solutions to our environment was very simple, and was complete within about 2 weeks post-testing. It was satisfying to finally send our WSUS server to the grave!

<![CDATA[Implementing "Least Privilege" for Endpoints]]>https://garrettyamada.com/implementing-least-privilege-for-endpoints/5e63fcf5f5dc0b0012ea30dcTue, 23 Jul 2019 01:27:19 GMT

In "The Protection of Information in Computer Systems", two MIT researchers define the principle of least privilege like this:

"Every program and every user of the system should operate using the least set of privileges necessary to complete the job."

This principle is difficult to adhere to in academia, where higher privileges are often necessary for end users to perform their jobs, especially with regard to research. We've attempted to solve this problem in both our Windows and macOS environments, with varying degrees of success.

There are several products on the market for "endpoint privilege management", one of which we currently use. Our use of this tool in a highly agile academic environment has surfaced several points of interest to the wider IT community:

  • No matter how many policies we may create to elevate certain binaries and runtimes, there will always be new ones
  • Utilizing software application vendors who practice good security and sign their code is important
  • Many developers provide separate applications which update their primary application, and these updater apps are often unsigned and run processes you do not expect
  • Some applications may provide no secure way to target them with a policy, which may necessitate elevation by checksum (see the sketch after this list)
  • Many applications start several child processes, some of which may also require elevated privileges
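
On the checksum point: tools in this space typically let you target a binary by its hash, and generating one for an unsigned updater is a one-liner (the path here is illustrative):

# Compute the SHA-256 hash of an unsigned binary for use in an elevation policy
Get-FileHash -Algorithm SHA256 'C:\Program Files\ExampleVendor\Updater.exe'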

Hopefully, these lessons learned provide some guidance to IT pros evaluating how they should implement this principle in their environment. If you want to chat about endpoint security, come find me in the MacAdmins Slack – I'm @gyamada619.

Lightning-Quick Maintenance With Puppet Bolt
June 23, 2019

In my quixotic quest for the perfect configuration management tool™️, I recently came across Puppet's Bolt.

As I read more into it, it seemed like an excellent solution to the problem I needed to solve that week, which was an accelerated rollout of Substance Designer and Painter's 2019 versions to our macOS lab environment.

I didn't want to have to create a custom payload-free pkg script to remove the old version, wait for that to deploy, and then finally deploy the standard vendor-provided installer to production. So, Bolt to the rescue!

Bolt behaves in much the same way as Ansible, in that its primary purpose is to run scripts, commands, or "plans" (similar to Ansible "playbooks"). We used Bolt and an inventory file with the FQDNs of our lab machines to deploy a quick script to uninstall the old versions of Substance Designer and Painter – and, boy, was it fast! Bolt connected to all ~50 hosts and uninstalled the applications in under 15 seconds.

The command looked something like this:

bolt script run myscript.sh --nodes "maclab"
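
For context, "maclab" is just a group in the Bolt inventory file – a minimal sketch of what ours looked like (hostnames and the SSH user are placeholders; see Bolt's inventory docs for the full schema):

groups:
  - name: maclab
    nodes:
      - maclab-01.example.edu
      - maclab-02.example.edu
    config:
      transport: ssh
      ssh:
        user: labadmin
        run-as: root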

Bolt's flexibility and ease of use mean this is one tool that's getting prime placement in our CPE toolbox. If you already have a repository of Bolt tasks & plans that you use in your environment, I'd love to see it! Come find me in the MacAdmins Slack – I'm @gyamada619.

Learning Chef In 30 Days: Busboy to Sous Chef
May 2, 2019

My spare cycles for the month of April were dedicated to learning all I could about Chef, an infrastructure-as-code configuration management tool. On April 1st, I knew next to nothing about how to write and structure cookbooks, the basic building block of using Chef.

Now, today, May 1st, I'm confident that I could put together a cookbook to do almost anything, given enough time to implement it.

Cookbooks are Chef Infra's core offering: write a cookbook for configuring a single server or thousands of them, and the chef-client will run it again and again to always ensure your nodes are still set up exactly the way you intended.

Learning Chef can look daunting at first – but don't fret! There are a lot of resources out there to help. I began learning Chef with a course on Lynda (Learning Chef). This set me up to understand the basics without introducing topics like kitchen just yet.

From there, I found a low-stakes application server in our organization that had not yet been automated with Chef. I started with this thought in mind:

If I were going to set up this server from scratch, what would I need to install and what settings would I need to configure to make this server operate exactly the same as it does now?

Once I'd done that, I began crafting a basic recipe to set up the server. A typical cookbook is made up of recipes and attributes. Attributes are like global variables, or preferences, the values of which get used in the recipes.

In my (Windows) recipe, I installed a Chocolatey package and set some very specific values in the registry. For this application (PDQ Deploy) that was all I needed to do.
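
Boiled down, the recipe is only a couple of resources – roughly like this sketch (the package name and registry values here are placeholders, not the real ones; the actual cookbook is linked below):

# Install the application from our internal Chocolatey repository
# (package and registry names below are placeholders)
chocolatey_package 'exampleapp'

# Enforce the app's settings on every chef-client run
registry_key 'HKEY_LOCAL_MACHINE\SOFTWARE\ExampleVendor\ExampleApp' do
  values [{ name: 'ExampleSetting', type: :dword, data: 1 }]
  recursive true
  action :create
end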

Here's that cookbook over on GitHub.

Next, I challenged myself to locate a use case that the community at large might use as a dependency in other cookbooks to accomplish one step in a larger recipe.

This kind of cookbook exports a Chef custom resource. In this case, I built a resource that can perform an installation and configuration action for DelProf2.

And here are the results of that effort!

Hopefully this helps someone start down the path of learning Chef to benefit themselves and their organization's infrastructure.

And if you're already using Chef, perhaps I'll see you at ChefConf this year in Seattle!

Automating Chocolatey Package Development With Azure DevOps
April 13, 2019

As we began to roll out Chocolatey in our organization, we realized we needed to ensure consistency in the process of package creation and distribution to clients. To do this, we utilized Chocolatey package templates, custom tests for the .nuspec and chocolateyinstall.ps1, and some custom code to copy binaries down from our file share for the final choco pack and choco push steps. The best method we found to automate this turned out to be Azure DevOps's excellent CI/CD feature set.

The Chocolatey Package Template

We started by creating the package template. Our needs were very minimal, and the template we created reflected that. Here's the entire .nuspec:

<?xml version="1.0" encoding="utf-8"?>

<!-- Do not remove this test for UTF-8: if “Ω” doesn’t appear as greek uppercase omega letter enclosed in quotation marks, you should use an editor that supports UTF-8, not this one. -->
<package xmlns="http://schemas.microsoft.com/packaging/2015/06/nuspec.xsd">

  <metadata>

    <id>_REPLACE_</id>
    <version>_REPLACE_</version>

    <!-- == SOFTWARE SPECIFIC SECTION == -->
    <!-- This section is about the software itself -->

    <title>_REPLACE_</title>
    <authors>_REPLACE_</authors>
    <summary>_REPLACE_</summary>
    <description>_REPLACE_</description>

  </metadata>

  <files>
    <!-- Don't touch anything below this line -->
    <file src="tools\**" target="tools" />
  </files>

</package>

After creating the simple .nuspec, we customized the standard chocolateyinstall.ps1 to include a section like this (we want to embed the binaries inside our Chocolatey packages, ensuring that when choco install is run on a remote client outside our firewall, it can still install the package):

# To embed the binaries, place them in inside of the tools directory
$fileLocation = Join-Path $toolsDir '_REPLACE_'

# Replace with full name of binary below (example.msi)
$binaryfile = "\\share.yourdomain.com\chocolatey\_REPLACE_"

Writing Some Tests

Validating the Chocolatey packages being uploaded to the internal repository was important to us, so we wrote some tests – one for the metadata and a different check run against the chocolateyinstall.ps1.

Since every package starts from the template we designed, we can verify the metadata has been filled in simply by checking that the string _REPLACE_ no longer appears.

For the chocolateyinstall.ps1, we decided to start by checking for a specific mistake we'd seen – forgetting to uncomment the necessary silent arguments for the actual installer (the .exe or .msi).
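
Conceptually that check is just another Select-String pass, along these lines (a sketch, not our exact silentarg.yml; it assumes the template's silentArgs line is what got left commented out):

# Fail the build if any chocolateyinstall.ps1 still has its silentArgs line commented out
$installscripts = Get-ChildItem -Path "$Env:BUILD_SOURCESDIRECTORY" -Recurse -Filter 'chocolateyinstall.ps1'
foreach ($script in $installscripts){
    if (Select-String -Path $script.FullName -Pattern '^\s*#\s*silentArgs' -Quiet){
        Write-Error -Message "silentArgs is still commented out in $($script.FullName)."
    }
}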

Download Binary For Build Steps

Once the build pipeline was running (after tests had passed) we needed the build agent to be able to download the binary and drop it inside the tools directory before actually building the package. We accomplished this by:

  1. Using a self-hosted build agent in Azure DevOps
  2. Ensuring the path to the binary was inside our chocolateyinstall.ps1 template (see above, in the template section)
  3. Adding a Copy-Item step to copy from that path to the local clone of our Chocolatey package repository (sketched below)
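
That last step is about as simple as it sounds – a sketch of the idea (the paths here are placeholders; in practice the pipeline derives them from the branch name and the chocolateyinstall.ps1):

# Copy the installer from the file share into the package's tools directory before choco pack runs
$binarysource = '\\share.yourdomain.com\chocolatey\example.msi'
$toolsdir     = Join-Path $Env:BUILD_SOURCESDIRECTORY 'examplepackage\tools'
Copy-Item -Path $binarysource -Destination $toolsdir -Force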

Tying It All Together In Azure DevOps

To keep our build pipeline modular, we broke out these steps into individual Azure Pipelines templates, and stored these templates inside a "build tools" repository separate from the main Chocolatey package mono-repo.

The build pipeline runs on a commit to any pkg/ branch name, and the pipeline keys on the branch name to know what package directory to run checks and build steps against.

For example, here is the metadata.yml for the _REPLACE_ check:

steps:
- powershell: |
    $packagestocheck = Get-ChildItem -Path "$Env:BUILD_SOURCESDIRECTORY" -Recurse -Include *.nuspec,*.ps1
    foreach ($pkg in $packagestocheck){
        $metadatastatus = Select-String -Path $pkg -Pattern '_REPLACE_'
        if (!$metadatastatus){
            Write-Output "All metadata is valid for $pkg."
        }
        else{
            Write-Error -Message "String _REPLACE_ is still in $pkg. Please input valid data on the following lines: `n $metadatastatus."
        }
    }
    
  displayName: 'Check for _REPLACE_'
  errorActionPreference: stop

And here is how it all comes together in the main Chocolatey package repo's azure-pipelines.yml:

# This build pipeline tests and builds Chocolatey packages pushed to any branch called pkg/* 
# Built packages are pushed to the internal Chocolatey server.

trigger:
- pkg/*
- pkgupdate/*

pool:
  name: Default

resources:
  repositories:
    - repository: buildtools
      type: github
      name: nameofrepo

steps:
- template: metadata.yml@buildtools  
- template: silentarg.yml@buildtools
- template: getbinary.yml@buildtools
- template: build_publish.yml@buildtools

Assuming all of the tests pass, the resulting compiled nupkg is pushed to the internal Chocolatey package repository and is then available for clients to install!

If you're interested in more detail on how some of this came together or how we integrated this workflow into infrastructure that mostly exists on-premise, you can find me over on the #chocolatey channel in the MacAdmins Slack. Happy automating!

Agility With ChatOps - Inventory Management
March 30, 2019

The Problem

Inventory (asset management) can be a pain to keep up with. With the exponential growth in configuration and management tools, IT professionals have more agents than ever running on the computers we manage.

The question for us became:

How do we create an easy way to remove a computer that's being decommissioned from all of these tools?

Especially from tools that don't have an automated pruning system of their own. As we implement systems to help us accomplish more specialized tasks (remote access, software package deployment, etc.), stale records have to be pruned – most often in tools that are licensed per-node.

The ChatOps Solution

We began creating a PoshBot module called nuke to perform this task for us. It's not quite finished yet – we're still waiting on API access to a particular tool – but its existing individual functions dutifully remove computers from various other tools we use.

We were able to use PowerShell code to remove records of computers from our ticket system's inventory CMDB, SCCM, Munki, and (newly minted) Gorilla.

Nearly all of this is done via a documented API or, in the case of Gorilla and Munki, by removing a manifest from a simple Git repository.
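
The Git-backed ones are the simplest of the bunch – removing a manifest amounts to a delete-commit-push, something like this sketch (function and path names are illustrative, not the actual nuke module):

function Remove-ClientManifest {
    # Delete a decommissioned computer's manifest from the Munki/Gorilla repo and push the change
    param(
        [Parameter(Mandatory)][string]$ComputerName,
        [string]$RepoPath = 'C:\repos\munki'
    )
    $manifest = Join-Path $RepoPath "manifests\$ComputerName"
    if (Test-Path $manifest) {
        Remove-Item $manifest
        git -C $RepoPath commit -am "Decommission $ComputerName"
        git -C $RepoPath push
    }
}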

Even if a tool you're using doesn't have an easy, well-documented REST API, there is likely a way you can automate actions in it. Making it as easy as possible for service desk positions to perform these tasks through automation will save your organization lots of time and effort!

What Is Client Platform Engineering?
March 23, 2019

Definition

One of the trending job titles in Big Tech I've noticed (and you may have too) is "Client Platform Engineer" as part of an "Endpoint Engineering" or "Client Engineering" team. But what does this position do?

In short: a Client Platform Engineer builds, tests, and deploys solutions to manage a fleet of "clients", or endpoints, at scale.

The Client Platform Engineers at companies like Facebook take tools like Chef, which were originally conceived as server/infrastructure configuration management tools, and put them to use to help solve another difficult engineering challenge:

As computing at large companies continues to grow and scale, how can we manage, and (more importantly) secure, the hundreds or thousands of endpoint devices?

What's Driving This Trend?

Many software products that helped IT administrators manage and secure endpoints have not scaled to meet the needs of these larger organizations over the last 3-5 years. Products like SCCM and PDQ Deploy, while robust, were primarily designed with on-premises endpoints in mind – desktop computers which never leave the corporate network.

Writing code, rather than using a pre-compiled vendor-provided solution, to configure endpoints at scale can help an organization ensure that company devices have a consistent baseline of software, policies, and security measures in place. But perhaps most importantly to technology-focused organizations, this code can be iterated on in an Agile development cycle and stored in version control, just like the software development process in other parts of the organization.

Ultimately, a Client Engineering team's role should be to find the best solution – whether that be an MDM, or writing code for a configuration management tool like Chef or Puppet, or both – and design and implement it to help the company maximize the overall productivity of its workforce.

Install SketchUp Extensions for All Users
March 23, 2019
[Header image: a model in SketchUp.]

The Problem


Yesterday we were asked to find a way to install some specific SketchUp extensions for all users of some lab computers. At first, we could not locate an easy way to do so.

SketchUp's user forums noted it was storing extensions in:

C:\Users\%USERNAME%\AppData\Roaming\SketchUp\SketchUp 2018\SketchUp\Plugins

But the documentation did not list a way to manually place the extension files in such a way as to install them for all users, not just the current user.

Obviously manually adding extensions for an entire lab via the Extension Manager was impractical, and using a logon script of some kind to copy the extensions to the AppData folder was not our preference.

The Solution

We located a post that indicated we could use the %ProgramData% folder to accomplish this.

  1. Create a Plugins folder inside %ProgramData%\SketchUp\SketchUp 2018\SketchUp\
  2. Place the extracted contents of your .rbz archives inside it. (An .rbz is just a renamed zip archive, so you can use 7-Zip to extract it.)

Once the extracted contents are inside the Plugins folder you created, launch SketchUp. You'll now see your extensions listed in the Extension Manager!
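
To push this out to a whole lab, the same two steps script nicely – a rough sketch in PowerShell (the source path is a placeholder; note that Expand-Archive insists on a .zip extension, so we copy each .rbz first):

# Create the all-users Plugins folder and extract each .rbz (really just a zip) into it
$pluginDir = Join-Path $env:ProgramData 'SketchUp\SketchUp 2018\SketchUp\Plugins'
New-Item -ItemType Directory -Path $pluginDir -Force | Out-Null
Get-ChildItem 'C:\Deploy\SketchUpExtensions\*.rbz' | ForEach-Object {
    $zipCopy = Join-Path $env:TEMP ($_.BaseName + '.zip')
    Copy-Item $_.FullName $zipCopy -Force
    Expand-Archive -Path $zipCopy -DestinationPath $pluginDir -Force
    Remove-Item $zipCopy
}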

Fix Trust Relationships for Macs Bound to Active Directory Using Centrify
March 20, 2019

The Problem

Sometimes computers bound to an Active Directory domain lose their trust relationship with it. This causes the computer (at least on Windows) to report:

"The trust relationship between this workstation and the primary domain failed".

While this exact scenario was occurring on one of our Macs bound with Centrify, we didn't know it, because the macOS loginwindow does not display these types of error messages.

The Solution

The first thing we usually try in this scenario is resetting the "computer machine password". This is the password that the computer itself uses to transparently authenticate to the domain in the background when a user logon occurs. But how could we do this using Centrify?

Using adkeytab, of course! (I kid. This is not an obvious name for this tool.) That said, running the command below should reset the computer machine password and restore the trust relationship.

adkeytab -r -u domainadminusername
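
If you want to verify the fix (or confirm the broken trust in the first place), Centrify's status command is a quick way to check whether the Mac still sees itself as joined – assuming a standard Centrify DirectControl install:

adinfo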

Exploring Docker for Windows
February 11, 2019
[Header image: the Docker logo. © 2019 Docker Inc.]

The Problem


One of the core services educational IT administrators are asked to deliver and manage is software licensing at scale. Many software titles in academia do not utilize cloud-based licensing, and for use on an enterprise scale require a hosted license server.

In the academic world, this kind of license server typically lives on-premise.  

One of the inherent difficulties this scenario presents is how best to conserve resources, both compute and financial, and consolidate licensing for many titles.

It's hard to justify the cost of 10+ (for example) virtual machines or physical servers to run each individual vendor's license manager. This makes it equally hard to maintain high availability for licensing services in mid-size IT units with limited resources.

The resulting solution is typically 1-2 servers which host licenses for many titles at once. The obvious downside is any update to license files, or other maintenance, creates downtime for licensing on all of the software titles.

This is, clearly, a significant interruption in service to customers that we want to avoid.

The Solution

Enter Docker. Docker allows applications to run inside "containers" on the same host operating system, while still utilizing the host's kernel and other drivers. Docker containers are small, lightweight, and can be easily controlled with a CLI (Command Line Interface).

This creates the separation between services (read: license management software for each vendor) that we need in order to update individual vendors' software without taking down the entire host – simply stop the container, pull the new version, and start it back up!

Robust, stable Docker containers with near feature-parity to their Linux counterparts only became available in Windows Server 2019, which comes with Docker support built-in.

But Does It Work, Really?

To test this, I created a Dockerfile (instructions that Docker uses to build your application) and used it to create a container image for Autodesk's FlexLM-based license server. It works well!

The container was able to serve licenses just like a full-featured physical server or VM would, but with less overhead and separation from other software vendors' license management software.
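
Running it looks roughly like any other container workflow – a sketch (the image tag is a placeholder, and the ports are the usual lmgrd/adskflex defaults, so adjust them to match your license file):

docker build -t autodesk-licensing .
docker run -d -p 27000-27009:27000-27009 -p 2080:2080 autodesk-licensing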

To test my image, and view other open source container images, check out the Autodesk License Container Repository.

Gorilla, Munki & Other Apes
December 16, 2018
[Header image: Scott Menville and Greg Cipes in Teen Titans Go! (2013)]

The Problem


Desktop & laptop support technicians and IT administrators from many backgrounds and organizations have come to rely on Munki, an open-source project from Walt Disney Animation Studios, for distributing and updating the software tools their employees use on the organization's Mac fleet.

Munki uses a simple client-server model to distribute software, just like the many websites we visit every day. Clients talk to a server, requesting a predefined list of items (in this case, software and scripts that are to be installed or run on the client computer), and the server delivers those items to the client over the protocol we use every day: HTTP/HTTPS.

Munki, however, is not compatible with Windows – and so IT professionals turn to many other vendors providing their own takes on how to distribute software at scale for the Windows platform. This creates fragmentation in both the toolset and the mental model of software distribution.

The Solution (Well, My Current Favorite Solution)

Over the past month, I've become a contributor to the Gorilla project, with the goal of helping it deliver the same functionality we've come to expect from Munki, but for the Windows platform.

Gorilla uses most of the same terminology and mappings as Munki:

  • It is, at its core, just a web server & a client app
  • The server hosts manifests, catalogs, and packages
  • The manifests determine what software is installed
  • The catalogs list all available software

Gorilla is written in the Go programming language, making it fast, flexible, and powerful – and, potentially later, cross-platform.

The key differences for right now:

  • There is not a "makecatalogs" tool for when a package is imported
  • The manifests, catalogs, etc. are written in YAML rather than XML
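
To make that concrete, a manifest ends up looking something like this – an illustrative sketch only (item names are placeholders, and the exact keys are documented in the Gorilla README):

name: lab-workstations
managed_installs:
  - GoogleChrome
  - 7zip
included_manifests:
  - site_default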

But Does It Work, Really?

It really does! It's important to note that Gorilla is currently in beta (as of this writing/post update, Feb. 11, 2019).

As Gorilla continues to improve, we hope to use this tool to continue our efforts to implement IaC (Infrastructure as Code) in our organization, even for endpoint management.

To automate some of the manual work currently involved in importing a package into Gorilla's catalogs, I've written gorillaimport, a work-in-progress PowerShell module. (Pull requests happily accepted!)

I encourage you to give Gorilla a shot, and please submit issues, feedback and pull requests to help get Gorilla to 1.0!

Agility With ChatOps - Software Deployment
November 7, 2018
[Header image: the PDQ Deploy logo.]

The Problem


Giving Tier 1 (and "1.5") technicians – who, in our environment, are often student workers – the ability and access to deploy our software packages to workstations can be difficult to maintain and can create significant overhead.

We currently make use of PDQ Deploy to install software on our Windows workstations for one-off deployments. We had to manage access to the deployment console, ensure it was installed on our student workers' workstations, and ensure the console software was kept up to date alongside the PDQ Central Server.

PDQ is very easy to use (at least, in comparison to other software deployment tools), but is another tool among many to learn.

The ChatOps Solution

With our discovery of PoshBot, we wondered, could we integrate PDQ Deploy into our daily Slack-based workflows? As it turned out, we could – through the use of PDQ's CLI (Command Line Interface).

For this first post highlighting our use of PoshBot, I'm going to showcase a part of the main function we wrote as a PowerShell module to deploy packages using PDQ Deploy. It's really simple, and really powerful!

function deploy {
    [cmdletbinding()]
    [PoshBot.BotCommand(
        CommandName = 'deploy',
        Aliases = ('push'),
        Permissions = 'run'
    )]
    param(
        [Parameter(Position = 0)][string]$package,
        [Parameter(Position = 1)][string]$target
    )

    # Hand the deployment off to the PDQ Deploy CLI and capture its output
    $pkgpush = pdqdeploy deploy -Package $package -Targets $target | Format-List | Out-String -Width 80

    # Send the result back to the chat channel as a PoshBot card
    New-PoshBotCardResponse -Type Normal -Text $pkgpush
}

PoshBot lives on the server we run as our PDQ Central Server, which allows it to take advantage of the Active Directory service account that PDQ Deploy uses to perform its own deployment tasks.

Credit to Kyle Levenick for help coding & testing the script.

But Does It Work, Really?

It does! (With some caveats.) The command is run like this:

!deploy -package AutoCAD 2019 -target 127.0.0.1

This format seems intuitive – until you realize that the -package parameter is dependent upon knowing the exact names of all of the packages.

This forced us to work on standardizing a package name format (which is, of course, not a bad thing), but was more work than we initially planned.

Please feel free to build on our formatting and improve on our module ideas, and send your feedback my way!
