<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Kevin Sandy</title>
    <link>https://kevinsandy.com/</link>
    <description>Thoughts, musings, ramblings, and rants</description>
    <pubDate>Fri, 17 Apr 2026 05:09:55 +0000</pubDate>
    <image>
      <url>https://i.snap.as/IC0yYUyI.png</url>
      <title>Kevin Sandy</title>
      <link>https://kevinsandy.com/</link>
    </image>
    <item>
      <title>Synology DiskStation User Mapping</title>
      <link>https://kevinsandy.com/synology-diskstation-user-mapping?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I have a Synology DiskStation providing file services to my home and lab networks. It works great as-is for SMB access, but NFS access was problematic because the automatic UID / GID generation didn&#39;t match the IDs used by my Linux systems. Since I already store Unix attributes in Active Directory, I needed the DiskStation to respect those.&#xA;&#xA;&lt;!--more--&gt;&#xA;&#xA;The first step to achieve this is to update the Samba configuration (/etc/samba/smb.conf) on your DiskStation. Adding the configuration below will get Samba to use the Active Directory attributes. I use 100000-199999 for my user and group IDs. If you use different values you may need to adjust it a bit. If you don&#39;t yet have Unix attributes assigned to your Active Directory users, check out Assigning Unix Attributes to Active Directory Objects for how I&#39;ve gone about that.&#xA;&#xA;[global]&#xA;    idmap config * : backend=tdb&#xA;    idmap config * : range=3000-7999&#xA;    idmap config DIGITALLOTUS : backend=ad&#xA;    idmap config DIGITALLOTUS : range=100000-199999&#xA;    idmap config DIGITALLOTUS : schema_mode=rfc2307&#xA;    idmap config DIGITALLOTUS : unix_nss_info=yes&#xA;    idmap config DIGITALLOTUS : unix_primary_group=yes&#xA;&#xA;Once that is in place, restart your DiskStation. After it&#39;s up, you can check the user ID by running id user@corp.example.com and see that... it&#39;s still showing the automatically generated ID? That&#39;s actually expected at this point because of some of the DiskStation internals. If you run wbinfo -i &#34;user@corp.example.com&#34;, which will query Samba directly, you should see the right information.&#xA;&#xA;So, how do we now get the DiskStation to recognize the updated values? We have to clear its cached mappings. 
You can do that by running the command below.&#xA;&#xA;find /volume1/@accountdb \( -type f -o -type l \) -delete&#xA;&#xA;After running that command, you should be able to rerun id user@corp.example.com and see the right attributes. I did all this prior to setting up my shares and permissions. If you already have shares and permissions set up, you&#39;ll likely need to reapply your permissions to get them working with the new ID values.&#xA;&#xA;#activedirectory #diskstation]]&gt;</description>
      <content:encoded><![CDATA[<p>I have a Synology DiskStation providing file services to my home and lab networks. It works great as-is for SMB access, but NFS access was problematic because the automatic UID / GID generation didn&#39;t match the IDs used by my Linux systems. Since I already store Unix attributes in Active Directory, I needed the DiskStation to respect those.</p>



<p>The first step to achieve this is to update the Samba configuration (<code>/etc/samba/smb.conf</code>) on your DiskStation. Adding the configuration below will get Samba to use the Active Directory attributes. I use 100000-199999 for my user and group IDs. If you use different values you may need to adjust it a bit. If you don&#39;t yet have Unix attributes assigned to your Active Directory users, check out <a href="./assigning-unix-attributes-to-active-directory-object">Assigning Unix Attributes to Active Directory Objects</a> for how I&#39;ve gone about that.</p>

<pre><code class="language-ini">[global]
    idmap config * : backend=tdb
    idmap config * : range=3000-7999
    idmap config DIGITALLOTUS : backend=ad
    idmap config DIGITALLOTUS : range=100000-199999
    idmap config DIGITALLOTUS : schema_mode=rfc2307
    idmap config DIGITALLOTUS : unix_nss_info=yes
    idmap config DIGITALLOTUS : unix_primary_group=yes
</code></pre>

<p>Once that is in place, restart your DiskStation. After it&#39;s up, you can check the user ID by running <code>id user@corp.example.com</code> and see that... it&#39;s still showing the automatically generated ID? That&#39;s actually expected at this point because of some of the DiskStation internals. If you run <code>wbinfo -i &#34;user@corp.example.com&#34;</code>, which will query Samba directly, you should see the right information.</p>

<p>So, how do we now get the DiskStation to recognize the updated values? We have to clear its cached mappings. You can do that by running the command below.</p>

<pre><code class="language-bash">find /volume1/@accountdb \( -type f -o -type l \) -delete
</code></pre>

<p>After running that command, you should be able to rerun <code>id user@corp.example.com</code> and see the right attributes. I did all this prior to setting up my shares and permissions. If you already have shares and permissions set up, you&#39;ll likely need to reapply your permissions to get them working with the new ID values.</p>
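
<p>If you do need to fix up existing ownership, something like the sketch below can help locate the affected files. This is a hedged example, not part of my original setup: the share path and IDs are placeholders, and the actual <code>chown</code> is left commented out.</p>

<pre><code class="language-shell"># Hypothetical sketch: find files still owned by the old auto-generated UID.
# SHARE and OLD_UID are placeholder values; adjust them for your system.
SHARE=$(mktemp -d)              # stand-in for a share such as /volume1/myshare
touch &#34;$SHARE/demo.txt&#34;
OLD_UID=$(id -u)                # stand-in for the old auto-generated UID
# Preview which files would be re-owned:
find &#34;$SHARE&#34; -user &#34;$OLD_UID&#34; -print
# Then apply the new AD-based UID (requires root), e.g.:
# find &#34;$SHARE&#34; -user &#34;$OLD_UID&#34; -exec chown -h 101110 {} +
</code></pre>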

<p><a href="https://kevinsandy.com/tag:activedirectory" class="hashtag"><span>#</span><span class="p-category">activedirectory</span></a> <a href="https://kevinsandy.com/tag:diskstation" class="hashtag"><span>#</span><span class="p-category">diskstation</span></a></p>
]]></content:encoded>
      <guid>https://kevinsandy.com/synology-diskstation-user-mapping</guid>
      <pubDate>Thu, 15 Dec 2022 12:55:32 +0000</pubDate>
    </item>
    <item>
      <title>Assigning Unix Attributes to Active Directory Objects</title>
      <link>https://kevinsandy.com/assigning-unix-attributes-to-active-directory-object?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I run Active Directory to manage my users and groups. Most of my servers run Linux, and I also run a Synology DiskStation that serves files via NFS and CIFS. To keep file permissions and ownership consistent, I assign static UID and GID values to my Active Directory users and groups. Rather than manually assigning UID and GID values, I created a PowerShell script to do it for me.&#xA;&#xA;&lt;!--more--&gt;&#xA;&#xA;$objectBase = &#34;ou=Digital Lotus,dc=corp,dc=digitallotus,dc=com&#34;&#xA;$idRangeBase = 100000&#xA;$primaryGid = 101110&#xA;$loginShell = &#34;/bin/bash&#34;&#xA;$homeDirectoryBase = &#34;/users&#34;&#xA;&#xA;Get-ADObject `&#xA;        -LDAPFilter &#34;(&amp;(|(objectClass=user)(objectClass=group))(!objectClass=computer))&#34; `&#xA;        -SearchBase &#34;$objectBase&#34; `&#xA;        -Properties objectClass,objectSid,uidNumber,gidNumber,sAMAccountName,loginShell,unixHomeDirectory,primaryGroupID | ForEach {&#xA;        &#xA;    $sAMAccountName = $_.sAMAccountName&#xA;    $objectRid = ($_.objectSid -split &#34;-&#34;)[-1]&#xA;    $idNumber = $idRangeBase + $objectRid&#xA;&#xA;    if ( $_.objectClass -eq &#34;user&#34; ) {&#xA;        if ( -not $_.uidNumber ) {&#xA;            Write-Host &#34;Adding uidNumber $idNumber to $sAMAccountName&#34;&#xA;            $_ | Set-ADObject -Add @{uidNumber=$idNumber}&#xA;        }&#xA;        if ( -not $_.gidNumber ) {&#xA;            Write-Host &#34;Adding gidNumber $primaryGid to $sAMAccountName&#34;&#xA;            $_ | Set-ADObject -Add @{gidNumber=$primaryGid }&#xA;        }&#xA;        if ( -not $_.loginShell ) {&#xA;            Write-Host &#34;Adding loginShell $loginShell to $sAMAccountName&#34;&#xA;            $_ | Set-ADObject -Add @{loginShell=$loginShell}&#xA;        }&#xA;        if ( -not $_.unixHomeDirectory ) {&#xA;            $homeDirectory = &#34;$homeDirectoryBase/$sAMAccountName&#34;&#xA;            Write-Host &#34;Adding unixHomeDirectory $homeDirectory to $sAMAccountName&#34;&#xA; 
           $_ | Set-ADObject -Add @{unixHomeDirectory=$homeDirectory}&#xA;        }&#xA;    }&#xA;&#xA;    if ( $_.objectClass -eq &#34;group&#34; -and -not $_.gidNumber ) {&#xA;        Write-Host &#34;Adding gidNumber $idNumber to $sAMAccountName&#34;&#xA;        $_ | Set-ADObject -Add @{gidNumber=$idNumber}&#xA;    }&#xA;&#xA;}&#xA;&#xA;The objectBase variable is the base of the search for users and groups, and idRangeBase is the starting value for the IDs. The Active Directory object&#39;s relative ID is added to idRangeBase to create the actual UID or GID number.&#xA;&#xA;#activedirectory #powershell]]&gt;</description>
      <content:encoded><![CDATA[<p>I run Active Directory to manage my users and groups. Most of my servers run Linux, and I also run a Synology DiskStation that serves files via NFS and CIFS. To keep file permissions and ownership consistent, I assign static UID and GID values to my Active Directory users and groups. Rather than manually assigning UID and GID values, I created a PowerShell script to do it for me.</p>



<pre><code class="language-powershell">$objectBase = &#34;ou=Digital Lotus,dc=corp,dc=digitallotus,dc=com&#34;
$idRangeBase = 100000
$primaryGid = 101110
$loginShell = &#34;/bin/bash&#34;
$homeDirectoryBase = &#34;/users&#34;

Get-ADObject `
        -LDAPFilter &#34;(&amp;(|(objectClass=user)(objectClass=group))(!objectClass=computer))&#34; `
        -SearchBase &#34;$objectBase&#34; `
        -Properties objectClass,objectSid,uidNumber,gidNumber,sAMAccountName,loginShell,unixHomeDirectory,primaryGroupID | ForEach {
        
    $sAMAccountName = $_.sAMAccountName
    $objectRid = ($_.objectSid -split &#34;-&#34;)[-1]
    $idNumber = $idRangeBase + $objectRid

    if ( $_.objectClass -eq &#34;user&#34; ) {
        if ( -not $_.uidNumber ) {
            Write-Host &#34;Adding uidNumber $idNumber to $sAMAccountName&#34;
            $_ | Set-ADObject -Add @{uidNumber=$idNumber}
        }
        if ( -not $_.gidNumber ) {
            Write-Host &#34;Adding gidNumber $primaryGid to $sAMAccountName&#34;
            $_ | Set-ADObject -Add @{gidNumber=$primaryGid }
        }
        if ( -not $_.loginShell ) {
            Write-Host &#34;Adding loginShell $loginShell to $sAMAccountName&#34;
            $_ | Set-ADObject -Add @{loginShell=$loginShell}
        }
        if ( -not $_.unixHomeDirectory ) {
            $homeDirectory = &#34;$homeDirectoryBase/$sAMAccountName&#34;
            Write-Host &#34;Adding unixHomeDirectory $homeDirectory to $sAMAccountName&#34;
            $_ | Set-ADObject -Add @{unixHomeDirectory=$homeDirectory}
        }
    }

    if ( $_.objectClass -eq &#34;group&#34; -and -not $_.gidNumber ) {
        Write-Host &#34;Adding gidNumber $idNumber to $sAMAccountName&#34;
        $_ | Set-ADObject -Add @{gidNumber=$idNumber}
    }

}
</code></pre>

<p>The <code>objectBase</code> variable is the base of the search for users and groups, and <code>idRangeBase</code> is the starting value for the IDs. The Active Directory object&#39;s relative ID is added to <code>idRangeBase</code> to create the actual UID or GID number.</p>
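
<p>To make the arithmetic concrete, here is a small shell sketch (the SID below is made up) that mirrors what the script does: take the final hyphen-separated component of the SID as the RID and add it to the range base.</p>

<pre><code class="language-shell"># Hypothetical SID; only the final component (the RID) matters here.
SID=&#34;S-1-5-21-1111111111-2222222222-3333333333-1110&#34;
RID=&#34;${SID##*-}&#34;                 # strip everything up to the last hyphen -&gt; 1110
ID_RANGE_BASE=100000
echo $((ID_RANGE_BASE + RID))    # 100000 + 1110 = 101110
</code></pre>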

<p><a href="https://kevinsandy.com/tag:activedirectory" class="hashtag"><span>#</span><span class="p-category">activedirectory</span></a> <a href="https://kevinsandy.com/tag:powershell" class="hashtag"><span>#</span><span class="p-category">powershell</span></a></p>
]]></content:encoded>
      <guid>https://kevinsandy.com/assigning-unix-attributes-to-active-directory-object</guid>
      <pubDate>Sun, 27 Nov 2022 15:29:05 +0000</pubDate>
    </item>
    <item>
      <title>Customizing Code Blocks on Write.as</title>
      <link>https://kevinsandy.com/customizing-code-blocks-on-write-as?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I&#39;m loving the experience on Write.as so far. But, there was one thing bothering me - I wanted to make a couple of adjustments to the code blocks on my posts. Specifically, I wanted to enable horizontal scrolling and add a copy button.&#xA;&#xA;&lt;!--more--&gt;&#xA;&#xA;Luckily, Write.as provides the option to enter custom CSS and JavaScript to add to your blog. I wouldn&#39;t want to make an entirely different theme this way (though several people have), but it&#39;s perfect for making small modifications like these.&#xA;&#xA;The relevant CSS customizations are below. These enable horizontal scrolling of the code block, and modify the layout and styling of the copy button that will get dynamically added.&#xA;&#xA;#post #post-body pre {&#xA;  overflow-x: auto;&#xA;  position: relative;&#xA;}&#xA;#post #post-body pre code {&#xA;  white-space: pre;&#xA;}&#xA;pre button {&#xA;  position: absolute;&#xA;  top: 4px;&#xA;  right: 4px;&#xA;  opacity: 0.5;&#xA;  border: none;&#xA;  font-family: monospace;&#xA;}&#xA;pre button:hover {&#xA;    opacity: 1.0;&#xA;}&#xA;&#xA;Here&#39;s the JavaScript code to dynamically add a copy button to the code blocks.&#xA;&#xA;if ( navigator.clipboard ) {&#xA;  let preBlocks = document.querySelectorAll(&#34;pre&#34;);&#xA;  for ( const preBlock of preBlocks ) {&#xA;    let codeBlock = preBlock.querySelector(&#34;code&#34;);&#xA;    if ( ! codeBlock ) { continue }&#xA;    let button = document.createElement(&#34;button&#34;);&#xA;    button.innerText = &#34;Copy&#34;;&#xA;    preBlock.appendChild(button);&#xA;    button.addEventListener(&#34;click&#34;, async () =&gt; {&#xA;      await navigator.clipboard.writeText(codeBlock.innerText);&#xA;      button.innerText = &#34;Copied&#34;;&#xA;      setTimeout(() =&gt; {button.innerText=&#34;Copy&#34;;}, 3000);&#xA;    });&#xA;  }&#xA;}&#xA;&#xA;Part of what I really like about Write.as is that the limited customization is just enough to let me tweak things as needed. 
With Jekyll and similar options, I feel compelled to try customizing everything instead of just writing.&#xA;&#xA;#writeas]]&gt;</description>
      <content:encoded><![CDATA[<p>I&#39;m loving the experience on Write.as so far. But, there was one thing bothering me – I wanted to make a couple of adjustments to the code blocks on my posts. Specifically, I wanted to enable horizontal scrolling and add a copy button.</p>



<p>Luckily, Write.as provides the option to enter custom CSS and JavaScript to add to your blog. I wouldn&#39;t want to make an entirely different theme this way (though several people have), but it&#39;s perfect for making small modifications like these.</p>

<p>The relevant CSS customizations are below. These enable horizontal scrolling of the code block, and modify the layout and styling of the copy button that will get dynamically added.</p>

<pre><code class="language-css">#post #post-body pre {
  overflow-x: auto;
  position: relative;
}
#post #post-body pre code {
  white-space: pre;
}
pre button {
  position: absolute;
  top: 4px;
  right: 4px;
  opacity: 0.5;
  border: none;
  font-family: monospace;
}
pre button:hover {
    opacity: 1.0;
}
</code></pre>

<p>Here&#39;s the JavaScript code to dynamically add a copy button to the code blocks.</p>

<pre><code class="language-javascript">if ( navigator.clipboard ) {
  let preBlocks = document.querySelectorAll(&#34;pre&#34;);
  for ( const preBlock of preBlocks ) {
    let codeBlock = preBlock.querySelector(&#34;code&#34;);
    if ( ! codeBlock ) { continue }
    let button = document.createElement(&#34;button&#34;);
    button.innerText = &#34;Copy&#34;;
    preBlock.appendChild(button);
    button.addEventListener(&#34;click&#34;, async () =&gt; {
      await navigator.clipboard.writeText(codeBlock.innerText);
      button.innerText = &#34;Copied&#34;;
      setTimeout(() =&gt; {button.innerText=&#34;Copy&#34;;}, 3000);
    });
  }
}
</code></pre>

<p>Part of what I really like about Write.as is that the limited customization is just enough to let me tweak things as needed. With Jekyll and similar options, I feel compelled to try customizing everything instead of just writing.</p>

<p><a href="https://kevinsandy.com/tag:writeas" class="hashtag"><span>#</span><span class="p-category">writeas</span></a></p>
]]></content:encoded>
      <guid>https://kevinsandy.com/customizing-code-blocks-on-write-as</guid>
      <pubDate>Wed, 09 Nov 2022 15:10:19 +0000</pubDate>
    </item>
    <item>
      <title>ADFS and HAProxy</title>
      <link>https://kevinsandy.com/adfs-and-haproxy?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I use pfSense for my home networks. For externally available services, I prefer to use the HAProxy package rather than setting up port forwarding. This allows me to do various checks and modifications prior to passing the traffic to my internal servers. However, when I set up HAProxy to pass traffic to my ADFS server it wasn&#39;t working.&#xA;&#xA;&lt;!--more--&gt;&#xA;&#xA;I could see the web traffic getting to ADFS, but it wasn&#39;t being handled properly. After some investigation, I found that ADFS has some quirks with how it handles web traffic. Specifically, it requires the incoming HTTPS request to include SNI data indicating the name of the ADFS site. This isn&#39;t enabled by default in HAProxy, so I had to modify my configuration.&#xA;&#xA;If you are directly editing an HAProxy config file, you&#39;ll want to add sni str(your.adfs.hostname) to the backend server line. If you&#39;re using the pfSense HAProxy package, you&#39;ll want to edit your backend configuration and add that to the Per server pass thru option under Advanced settings.&#xA;&#xA;I only have a single ADFS instance, so I have health checks disabled. If you&#39;re using health checks, you&#39;ll also need to add the &lt;a href=&#34;http://docs.haproxy.org/2.6/configuration.html#check-sni&#34; target=&#34;_blank&#34;&gt;check-sni&lt;/a&gt; option.&#xA;&#xA;#adfs #haproxy #pfsense]]&gt;</description>
      <content:encoded><![CDATA[<p>I use pfSense for my home networks. For externally available services, I prefer to use the HAProxy package rather than setting up port forwarding. This allows me to do various checks and modifications prior to passing the traffic to my internal servers. However, when I set up HAProxy to pass traffic to my ADFS server it wasn&#39;t working.</p>



<p>I could see the web traffic getting to ADFS, but it wasn&#39;t being handled properly. After some investigation, I found that ADFS has some quirks with how it handles web traffic. Specifically, it requires the incoming HTTPS request to include SNI data indicating the name of the ADFS site. This isn&#39;t enabled by default in HAProxy, so I had to modify my configuration.</p>

<p>If you are directly editing an HAProxy config file, you&#39;ll want to add <code>sni str(your.adfs.hostname)</code> to the backend server line. If you&#39;re using the pfSense HAProxy package, you&#39;ll want to edit your backend configuration and add that to the <em>Per server pass thru</em> option under <em>Advanced settings</em>.</p>
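
<p>For reference, a minimal backend stanza might look like the sketch below. This is an illustrative fragment, not my actual config: the backend name, server address, and hostname are placeholders, and certificate verification is elided for brevity.</p>

<pre><code class="language-haproxy">backend adfs
    mode http
    # sni sends the ADFS hostname in the TLS handshake to the backend;
    # check-sni does the same for health-check connections (if enabled)
    server adfs1 192.0.2.10:443 ssl verify none sni str(adfs.example.com) check-sni adfs.example.com check
</code></pre>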

<p>I only have a single ADFS instance, so I have health checks disabled. If you&#39;re using health checks, you&#39;ll also need to add the <a href="http://docs.haproxy.org/2.6/configuration.html#check-sni" target="_blank">check-sni</a> option.</p>

<p><a href="https://kevinsandy.com/tag:adfs" class="hashtag"><span>#</span><span class="p-category">adfs</span></a> <a href="https://kevinsandy.com/tag:haproxy" class="hashtag"><span>#</span><span class="p-category">haproxy</span></a> <a href="https://kevinsandy.com/tag:pfsense" class="hashtag"><span>#</span><span class="p-category">pfsense</span></a></p>
]]></content:encoded>
      <guid>https://kevinsandy.com/adfs-and-haproxy</guid>
      <pubDate>Mon, 07 Nov 2022 18:24:54 +0000</pubDate>
    </item>
    <item>
      <title>Using objectGUID in Shibboleth IdP</title>
      <link>https://kevinsandy.com/using-objectguid-in-shibboleth-idp?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I was setting up a new Shibboleth IdP in my lab to run some tests. When integrating it with my Active Directory domain, I wanted to use the objectGUID attribute as a unique user identifier. It meets the requirements of being unique and not reusable, while also not exposing any undesirable information like objectSID would.&#xA;&#xA;&lt;!--more--&gt;&#xA;&#xA;The problem is that objectGUID is stored as a binary attribute, so it must be handled differently in the IdP. Shibboleth has the ability to automatically handle binary attributes by listing them in a &lt;BinaryAttributes&gt; tag, but it does so by base64 encoding them. This isn&#39;t ideal because some SAML attributes, like &lt;a href=&#34;https://www.switch.ch/aai/support/documents/attributes/edupersonuniqueid/&#34; target=&#34;_blank&#34;&gt;eduPersonUniqueId&lt;/a&gt;, only allow alphanumeric characters.&#xA;&#xA;The solution I came up with was to allow Shibboleth to do the binary conversion, but then transform the base64 data to hex codes. 
The result is a user identifier that is unique, not reusable, and entirely composed of alphanumeric values.&#xA;&#xA;Here are the relevant parts of my attribute-resolver.xml setup.&#xA;&#xA;&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt;&#xA;&lt;AttributeResolver&#xA;        xmlns=&#34;urn:mace:shibboleth:2.0:resolver&#34; &#xA;        xmlns:xsi=&#34;http://www.w3.org/2001/XMLSchema-instance&#34; &#xA;        xsi:schemaLocation=&#34;urn:mace:shibboleth:2.0:resolver http://shibboleth.net/schema/idp/shibboleth-attribute-resolver.xsd&#34;&gt;&#xA;&#xA;    &lt;!-- ========================================== --&gt;&#xA;    &lt;!--      Attributes                            --&gt;&#xA;    &lt;!-- ========================================== --&gt;&#xA;&#xA;    &lt;AttributeDefinition id=&#34;eduPersonPrincipalName&#34; xsi:type=&#34;Scoped&#34; scope=&#34;%{idp.scope}&#34;&gt;&#xA;        &lt;InputAttributeDefinition ref=&#34;userIdentifier&#34;/&gt;&#xA;    &lt;/AttributeDefinition&gt;&#xA;&#xA;    &lt;AttributeDefinition id=&#34;eduPersonUniqueId&#34; xsi:type=&#34;Scoped&#34; scope=&#34;%{idp.scope}&#34;&gt;&#xA;        &lt;InputAttributeDefinition ref=&#34;userIdentifier&#34;/&gt;&#xA;    &lt;/AttributeDefinition&gt;&#xA;&#xA;    &lt;AttributeDefinition id=&#34;userIdentifier&#34; xsi:type=&#34;ScriptedAttribute&#34; dependencyOnly=&#34;true&#34;&gt;&#xA;        &lt;InputDataConnector ref=&#34;activeDirectory&#34; attributeNames=&#34;objectGUID&#34;/&gt;&#xA;        &lt;Script&gt;&#xA;            &lt;![CDATA[&#xA;            var base64guid = objectGUID.getValues().get(0);&#xA;            var hexguid = &#39;&#39;;&#xA;            for ( var i=0; i &lt; base64guid.length; i++ ) {&#xA;                var hex = base64guid.charCodeAt(i).toString(16);&#xA;                if ( hex.length == 1 ) {&#xA;                    hex = &#39;0&#39; + hex;&#xA;                }&#xA;                hexguid += hex;&#xA;            }&#xA;            userIdentifier.addValue(hexguid);&#xA;            ]]&gt;&#xA;        &lt;/Script&gt;&#xA;    &lt;/AttributeDefinition&gt;&#xA;&#xA;    &lt;!-- ========================================== --&gt;
&#xA;    &lt;!--      Data Connectors                       --&gt;&#xA;    &lt;!-- ========================================== --&gt;&#xA;&#xA;    &lt;DataConnector&#xA;            id=&#34;activeDirectory&#34;&#xA;            xsi:type=&#34;LDAPDirectory&#34;&#xA;            ldapURL=&#34;%{idp.attribute.resolver.LDAP.ldapURL}&#34;&#xA;            baseDN=&#34;%{idp.attribute.resolver.LDAP.baseDN}&#34;&#xA;            principal=&#34;%{idp.attribute.resolver.LDAP.bindDN}&#34;&#xA;            principalCredential=&#34;%{idp.attribute.resolver.LDAP.bindDNCredential}&#34;&#xA;            useStartTLS=&#34;%{idp.attribute.resolver.LDAP.useStartTLS}&#34;&#xA;            connectTimeout=&#34;%{idp.attribute.resolver.LDAP.connectTimeout}&#34;&#xA;            responseTimeout=&#34;%{idp.attribute.resolver.LDAP.responseTimeout}&#34;&gt;&#xA;        &lt;FilterTemplate&gt;&#xA;            &lt;![CDATA[&#xA;            %{idp.attribute.resolver.LDAP.searchFilter}&#xA;            ]]&gt;&#xA;        &lt;/FilterTemplate&gt;&#xA;        &lt;BinaryAttributes&gt;objectGUID&lt;/BinaryAttributes&gt;&#xA;        &lt;ReturnAttributes&gt;givenName sn displayName mail objectGUID&lt;/ReturnAttributes&gt;&#xA;    &lt;/DataConnector&gt;&#xA;&#xA;&lt;/AttributeResolver&gt;&#xA;&#xA;#shibboleth]]&gt;</description>
      <content:encoded><![CDATA[<p>I was setting up a new Shibboleth IdP in my lab to run some tests. When integrating it with my Active Directory domain, I wanted to use the objectGUID attribute as a unique user identifier. It meets the requirements of being unique and not reusable, while also not exposing any undesirable information like objectSID would.</p>



<p>The problem is that objectGUID is stored as a binary attribute, so it must be handled differently in the IdP. Shibboleth has the ability to automatically handle binary attributes by listing them in a <code>&lt;BinaryAttributes&gt;</code> tag, but it does so by base64 encoding them. This isn&#39;t ideal because some SAML attributes, like <a href="https://www.switch.ch/aai/support/documents/attributes/edupersonuniqueid/" target="_blank">eduPersonUniqueId</a>, only allow alphanumeric characters.</p>

<p>The solution I came up with was to allow Shibboleth to do the binary conversion, but then transform the base64 data to hex codes. The result is a user identifier that is unique, not reusable, and entirely composed of alphanumeric values.</p>

<p>Here are the relevant parts of my attribute-resolver.xml setup.</p>

<pre><code class="language-xml">&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt;
&lt;AttributeResolver
        xmlns=&#34;urn:mace:shibboleth:2.0:resolver&#34; 
        xmlns:xsi=&#34;http://www.w3.org/2001/XMLSchema-instance&#34; 
        xsi:schemaLocation=&#34;urn:mace:shibboleth:2.0:resolver http://shibboleth.net/schema/idp/shibboleth-attribute-resolver.xsd&#34;&gt;

    &lt;!-- ========================================== --&gt;
    &lt;!--      Attributes                            --&gt;
    &lt;!-- ========================================== --&gt;

    &lt;AttributeDefinition id=&#34;eduPersonPrincipalName&#34; xsi:type=&#34;Scoped&#34; scope=&#34;%{idp.scope}&#34;&gt;
        &lt;InputAttributeDefinition ref=&#34;userIdentifier&#34;/&gt;
    &lt;/AttributeDefinition&gt;

    &lt;AttributeDefinition id=&#34;eduPersonUniqueId&#34; xsi:type=&#34;Scoped&#34; scope=&#34;%{idp.scope}&#34;&gt;
        &lt;InputAttributeDefinition ref=&#34;userIdentifier&#34;/&gt;
    &lt;/AttributeDefinition&gt;

    &lt;AttributeDefinition id=&#34;userIdentifier&#34; xsi:type=&#34;ScriptedAttribute&#34; dependencyOnly=&#34;true&#34;&gt;
        &lt;InputDataConnector ref=&#34;activeDirectory&#34; attributeNames=&#34;objectGUID&#34;/&gt;
        &lt;Script&gt;
            &lt;![CDATA[
            var base64guid = objectGUID.getValues().get(0);
            var hexguid = &#39;&#39;;
            for ( var i=0; i &lt; base64guid.length; i++ ) {
                var hex = base64guid.charCodeAt(i).toString(16);
                if ( hex.length == 1 ) {
                    hex = &#39;0&#39; + hex;
                }
                hexguid += hex;
            }
            userIdentifier.addValue(hexguid);
            ]]&gt;
        &lt;/Script&gt;
    &lt;/AttributeDefinition&gt;

    &lt;!-- ========================================== --&gt;
    &lt;!--      Data Connectors                       --&gt;
    &lt;!-- ========================================== --&gt;

    &lt;DataConnector
            id=&#34;activeDirectory&#34;
            xsi:type=&#34;LDAPDirectory&#34;
            ldapURL=&#34;%{idp.attribute.resolver.LDAP.ldapURL}&#34;
            baseDN=&#34;%{idp.attribute.resolver.LDAP.baseDN}&#34;
            principal=&#34;%{idp.attribute.resolver.LDAP.bindDN}&#34;
            principalCredential=&#34;%{idp.attribute.resolver.LDAP.bindDNCredential}&#34;
            useStartTLS=&#34;%{idp.attribute.resolver.LDAP.useStartTLS}&#34;
            connectTimeout=&#34;%{idp.attribute.resolver.LDAP.connectTimeout}&#34;
            responseTimeout=&#34;%{idp.attribute.resolver.LDAP.responseTimeout}&#34;&gt;
        &lt;FilterTemplate&gt;
            &lt;![CDATA[
            %{idp.attribute.resolver.LDAP.searchFilter}
            ]]&gt;
        &lt;/FilterTemplate&gt;
        &lt;BinaryAttributes&gt;objectGUID&lt;/BinaryAttributes&gt;
        &lt;ReturnAttributes&gt;givenName sn displayName mail objectGUID&lt;/ReturnAttributes&gt;
    &lt;/DataConnector&gt;

&lt;/AttributeResolver&gt;
</code></pre>
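
<p>As a sanity check outside the IdP, the same character-level transform can be reproduced in a shell: hex-encode the character codes of the base64 string, just as the scripted attribute does with <code>charCodeAt</code>. The base64 value below is made up for illustration.</p>

<pre><code class="language-shell"># Hypothetical base64 value standing in for an encoded objectGUID.
B64GUID=&#34;3q2+78r+ur4=&#34;
# Print the hex code of each character (mirrors charCodeAt + toString(16)).
printf &#39;%s&#39; &#34;$B64GUID&#34; | od -An -tx1 | tr -d &#39; \n&#39;; echo
</code></pre>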

<p><a href="https://kevinsandy.com/tag:shibboleth" class="hashtag"><span>#</span><span class="p-category">shibboleth</span></a></p>
]]></content:encoded>
      <guid>https://kevinsandy.com/using-objectguid-in-shibboleth-idp</guid>
      <pubDate>Sat, 05 Nov 2022 13:20:55 +0000</pubDate>
    </item>
    <item>
      <title>Using My Documents Folder with Obsidian</title>
      <link>https://kevinsandy.com/using-my-documents-folder-with-obsidian?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I recently started using &lt;a href=&#34;https://obsidian.md&#34; target=&#34;_blank&#34;&gt;Obsidian&lt;/a&gt;, and I wanted to achieve a seemingly simple goal - to point it at my Documents folder on my Mac and iPad. I really wanted the simplicity of having a single area for things rather than multiple areas to manage. As usual, it turned out to be more involved, but I was able to get it working.&#xA;&#xA;&lt;!--more--&gt;&#xA;&#xA;I was using the iCloud Desktop &amp; Documents Folders option to keep all the contents of my Documents folder in iCloud. At first, I thought I could just point Obsidian to my Documents folder as-is. That worked on macOS, but not on the iOS or iPadOS apps. It appears those are restricted to only access the Obsidian iCloud folder.&#xA;&#xA;After some thinking, I figured out how I could make it work. It&#39;s pretty simple, and it still keeps my Documents folder saved in iCloud.&#xA;&#xA;1. Disable the Desktop &amp; Documents Folder iCloud option&#xA;2. Copy existing iCloud Documents to my local Documents&#xA;3. Move my local Documents folder to the Obsidian iCloud folder&#xA;4. Link my local Documents folder to the one I moved into Obsidian&#xA;&#xA;Step 1 is done by going to your iCloud settings in macOS and de-selecting the option. Here are the terminal commands for the other steps. 
They assume that you don&#39;t already have an Obsidian vault named Documents.&#xA;&#xA;cd ~&#xA;# sync iCloud Documents to local Documents&#xA;rsync -av ~/Library/Mobile\ Documents/com~apple~CloudDocs/Documents/ Documents/&#xA;# allow moving / removing the Documents folder&#xA;chmod -a &#39;group:everyone deny delete&#39; Documents&#xA;# move current documents to Obsidian&#39;s iCloud Drive area&#xA;mv Documents ~/Library/Mobile\ Documents/iCloud~md~obsidian/Documents/&#xA;# link Documents to the new location&#xA;ln -s ~/Library/Mobile\ Documents/iCloud~md~obsidian/Documents/Documents .&#xA;&#xA;I&#39;ve been running this way for a few days, and so far it&#39;s working out exactly as I&#39;d hoped. When I create a new project folder, it&#39;s immediately available for Obsidian notes, Terminal work, Finder, and anything else I want to use.&#xA;&#xA;#macos #obsidian]]&gt;</description>
      <content:encoded><![CDATA[<p>I recently started using <a href="https://obsidian.md" target="_blank">Obsidian</a>, and I wanted to achieve a seemingly simple goal – to point it at my Documents folder on my Mac and iPad. I really wanted the simplicity of having a single area for things rather than multiple areas to manage. As usual, it turned out to be more involved, but I was able to get it working.</p>



<p>I was using the iCloud <em>Desktop &amp; Documents Folders</em> option to keep all the contents of my Documents folder in iCloud. At first, I thought I could just point Obsidian to my Documents folder as-is. That worked on macOS, but not on the iOS or iPadOS apps. It appears those are restricted to only access the Obsidian iCloud folder.</p>

<p>After some thinking, I figured out how I could make it work. It&#39;s pretty simple, and it still keeps my Documents folder saved in iCloud.</p>
<ol><li>Disable the <em>Desktop &amp; Documents Folder</em> iCloud option</li>
<li>Copy existing iCloud Documents to my local Documents</li>
<li>Move my local Documents folder to the Obsidian iCloud folder</li>
<li>Link my local Documents folder to the one I moved into Obsidian</li></ol>

<p>Step 1 is done by going to your iCloud settings in macOS and de-selecting the option. Here are the terminal commands for the other steps. They assume that you don&#39;t already have an Obsidian vault named Documents.</p>

<pre><code class="language-sh">cd ~
# sync iCloud Documents to local Documents
rsync -av ~/Library/Mobile\ Documents/com~apple~CloudDocs/Documents/ Documents/
# allow moving / removing the Documents folder
chmod -a &#39;group:everyone deny delete&#39; Documents
# move current documents to Obsidian&#39;s iCloud Drive area
mv Documents ~/Library/Mobile\ Documents/iCloud~md~obsidian/Documents/
# link Documents to the new location
ln -s ~/Library/Mobile\ Documents/iCloud~md~obsidian/Documents/Documents .
</code></pre>
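
<p>If you want a quick sanity check afterwards, these read-only commands (using the same paths as above) show where Documents now points:</p>

<pre><code class="language-sh"># show the symlink itself rather than the directory it points to
ls -ld ~/Documents
# print the link target
readlink ~/Documents
</code></pre>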

<p>I&#39;ve been running this way for a few days, and so far it&#39;s working out exactly as I&#39;d hoped. When I create a new project folder, it&#39;s immediately available for Obsidian notes, Terminal work, Finder, and anything else I want to use.</p>

<p><a href="https://kevinsandy.com/tag:macos" class="hashtag"><span>#</span><span class="p-category">macos</span></a> <a href="https://kevinsandy.com/tag:obsidian" class="hashtag"><span>#</span><span class="p-category">obsidian</span></a></p>
]]></content:encoded>
      <guid>https://kevinsandy.com/using-my-documents-folder-with-obsidian</guid>
      <pubDate>Sat, 29 Oct 2022 13:05:16 +0000</pubDate>
    </item>
    <item>
      <title>CloudFormation Import Errors</title>
      <link>https://kevinsandy.com/cloudformation-import-errors?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I spent way more time than expected getting a few different resources imported to existing CloudFormation stacks. It seems like it should be easy, but there were a couple issues I ran into along the way that I hadn&#39;t seen documented anywhere.&#xA;&#xA;Conditional Resources&#xA;&#xA;Error: The following resource types are not supported for resource import&#xA;Solution: Remove unused conditional resources from template&#xA;&#xA;I needed to import an existing EC2 instance into a stack. The import failed, complaining that several resource types weren&#39;t able to be imported. The only resource in the template that didn&#39;t already exist was the instance itself, so why the errors?&#xA;&#xA;After some trial and error, I found that it was due to using conditions. There were several resources defined in the stack to be conditionally created based on the parameters - things like creating either an application load balancer or a network load balancer. It appears that on import, CloudFormation doesn&#39;t check the conditions and was failing because some of those conditional resources weren&#39;t of an importable type.&#xA;&#xA;After manually removing all the unused conditional resources, the stack imported successfully.&#xA;&#xA;Dynamic References&#xA;&#xA;Error: Template body in template URL of resource [ResourceName] doesn&#39;t match with the actual template&#xA;Solution: Remove dynamic references from the template to import&#xA;&#xA;I had a stack that needed to be imported into the root stack. When trying to do this, CloudFormation failed because the contents of the template given in the root stack didn&#39;t match the contents of the template currently in use. But... it did! I downloaded both, and confirmed they were exactly the same!&#xA;&#xA;The issue was that the template used dynamic references for some values. It seems like CloudFormation was comparing the new template, with unresolved references, to the current template with references resolved.&#xA;&#xA;To work around this, I manually updated the template to use static values in place of the dynamic references.&#xA;&#xA;#aws]]&gt;</description>
      <content:encoded><![CDATA[<p>I spent way more time than expected getting a few different resources imported to existing CloudFormation stacks. It seems like it should be easy, but there were a couple issues I ran into along the way that I hadn&#39;t seen documented anywhere.</p>



<h2 id="conditional-resources">Conditional Resources</h2>

<table class="no-border">
<tr><td><i>Error</i></td><td>The following resource types are not supported for resource import</td></tr>
<tr><td><i>Solution</i></td><td>Remove unused conditional resources from template</td></tr>
</table>

<p>I needed to import an existing EC2 instance into a stack. The import failed, complaining that several resource types weren&#39;t able to be imported. The only resource in the template that didn&#39;t already exist was the instance itself, so why the errors?</p>

<p>After some trial and error, I found that it was due to using conditions. There were several resources defined in the stack to be conditionally created based on the parameters – things like creating either an application load balancer or a network load balancer. It appears that on import, CloudFormation doesn&#39;t check the conditions and was failing because some of those conditional resources weren&#39;t of an importable type.</p>

<p>After manually removing all the unused conditional resources, the stack imported successfully.</p>
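
<p>For anyone who hasn&#39;t done an import before, it happens through an import change set. Here&#39;s a rough sketch of the CLI flow – the stack name, change set name, and file names are placeholders to adjust for your own stack:</p>

<pre><code class="language-sh"># create an import change set; resources.json lists the identifiers
# of the existing resources (e.g. the EC2 instance ID)
aws cloudformation create-change-set \
    --stack-name my-stack \
    --change-set-name import-instance \
    --change-set-type IMPORT \
    --resources-to-import file://resources.json \
    --template-body file://template.yaml

# review the change set, then execute it
aws cloudformation execute-change-set \
    --stack-name my-stack \
    --change-set-name import-instance
</code></pre>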

<h2 id="dynamic-references">Dynamic References</h2>

<table class="no-border">
<tr><td><i>Error</i></td><td>Template body in template URL of resource [ResourceName] doesn&#39;t match with the actual template</td></tr>
<tr><td><i>Solution</i></td><td>Remove dynamic references from the template to import</td></tr>
</table>

<p>I had a stack that needed to be imported into the root stack. When trying to do this, CloudFormation failed because the contents of the template given in the root stack didn&#39;t match the contents of the template currently in use. But... it did! I downloaded both, and confirmed they were exactly the same!</p>

<p>The issue was that the template used dynamic references for some values. It seems like CloudFormation was comparing the new template, with unresolved references, to the current template with references resolved.</p>

<p>To work around this, I manually updated the template to use static values in place of the dynamic references.</p>
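
<p>To illustrate the kind of swap I mean, here&#39;s a before and after – the SSM parameter path and AMI ID below are made-up examples, not values from my actual template:</p>

<pre><code class="language-yaml"># before: value resolved from SSM Parameter Store at deploy time
ImageId: &#39;{{resolve:ssm:/example/base-ami}}&#39;

# after: temporary static value used for the import
ImageId: ami-0abc1234def567890
</code></pre>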

<p><a href="https://kevinsandy.com/tag:aws" class="hashtag"><span>#</span><span class="p-category">aws</span></a></p>
]]></content:encoded>
      <guid>https://kevinsandy.com/cloudformation-import-errors</guid>
      <pubDate>Wed, 26 Oct 2022 12:41:56 +0000</pubDate>
    </item>
    <item>
      <title>Moved to Write.as</title>
      <link>https://kevinsandy.com/moved-to-write-as?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I’ve been putting off publishing a new site for at least six months. I finally decided to put this up here as is and add to it as I have time. Don’t expect too much, it’s mostly just a spot for me to have a written record of things I want to be able to refer to later.]]&gt;</description>
      <content:encoded><![CDATA[<p>I’ve been putting off publishing a new site for at least six months. I finally decided to put this up here as is and add to it as I have time. Don’t expect too much, it’s mostly just a spot for me to have a written record of things I want to be able to refer to later.</p>
]]></content:encoded>
      <guid>https://kevinsandy.com/moved-to-write-as</guid>
      <pubDate>Thu, 20 Oct 2022 17:23:03 +0000</pubDate>
    </item>
  </channel>
</rss>