FreeIPA Identity Management planet - technical blogs

April 12, 2019

William Brown

Using Rust Generics to Enforce DB Record State

Using Rust Generics to Enforce DB Record State

In a database, entries go through a lifecycle which represents what attributes they have, what their database record keys are, and whether they have passed schema checking.

I’m currently working on a (private in 2019, public in July 2019) project which is a NoSQL database written in Rust. To help us manage the correctness and lifecycle of database entries, I have been using advice from the Rust Embedded Group’s Book.

As I have mentioned in the past, state machines are a great way to design code, so let’s plot out the state machine we have for Entries:

Entry State Machine

The lifecycle is:

  • A new entry is submitted by the user for creation
  • We schema check that entry
  • If it passes schema, we commit it and assign internal IDs
  • When we search for the entry, we retrieve it by those internal IDs
  • When we modify the entry, we need to recheck its schema before we commit it back
  • When we delete, we just remove the entry.

This leads to a state machine of:

                    |
             (create operation)
                    |
                    v
            [ New + Invalid ] -(schema check)-> [ New + Valid ]
                                                      |
                                               (send to backend)
                                                      |
                                                      v    v-------------\
[Committed + Invalid] <-(modify operation)- [ Committed + Valid ]        |
          |                                          ^   \       (write to backend)
          \--------------(schema check)-------------/     ---------------/

This is a bit rough - The version on my whiteboard was better :)

The main observation is that we are focused only on the commitability and validity of entries - not on where they are or whether the commit was a success.

Entry Structs

So to make these states work we have the following structs:

struct EntryNew;
struct EntryCommitted;

struct EntryValid;
struct EntryInvalid;

struct Entry<STATE, VALID> {
    state: STATE,
    valid: VALID,
    // Other db junk goes here :)
}

We can then use these to establish the lifecycle with functions similar to this:

impl Entry<EntryNew, EntryInvalid> {
    fn new() -> Self {
        Entry {
            state: EntryNew,
            valid: EntryInvalid,
            ...
        }
    }

}

impl<STATE> Entry<STATE, EntryInvalid> {
    fn validate(self, schema: Schema) -> Result<Entry<STATE, EntryValid>, ()> {
        if schema.check(&self) {
            Ok(Entry {
                state: self.state,
                valid: EntryValid,
                ...
            })
        } else {
            Err(())
        }
    }

    fn modify(&mut self, ...) {
        // Perform any modifications on the entry you like, only works
        // on invalidated entries.
    }
}

impl<STATE> Entry<STATE, EntryValid> {
    fn seal(self) -> Entry<EntryCommitted, EntryValid> {
        // Assign internal id's etc
        Entry {
            state: EntryCommitted,
            valid: EntryValid,
        }
    }

    fn compare(&self, other: Entry<STATE, EntryValid>) -> ... {
        // Only allow compares on schema validated/normalised
        // entries, so that checks don't have to be schema aware
        // as the entries are already in a comparable state.
    }
}

impl Entry<EntryCommitted, EntryValid> {
    fn invalidate(self) -> Entry<EntryCommitted, EntryInvalid> {
        // Invalidate an entry, to allow modifications to be performed
        // note that modifications can only be applied once an entry is created!
        Entry {
            state: self.state,
            valid: EntryInvalid,
        }
    }
}

Importantly, this allows us to control when we apply search terms, send entries to the backend for storage, and more. The benefit is that this is all checked at compile time: you can never send an entry to a backend that has not been schema checked, you can never run comparisons or searches on entries that aren’t schema checked, and you can only modify or delete something once it has been created. For example, other parts of the code now have:

impl BackendStorage {
    // Can only create if no db IDs are assigned, i.e. it must be new.
    fn create(&self, ..., entry: Entry<EntryNew, EntryValid>) -> Result<...> {
    }

    // Can only modify IF it has been created, and is validated.
    fn modify(&self, ..., entry: Entry<EntryCommitted, EntryValid>) -> Result<...> {
    }

    // Can only delete IF it has been created and committed.
    fn delete(&self, ..., entry: Entry<EntryCommitted, EntryValid>) -> Result<...> {
    }
}

impl<STATE> Filter<STATE> {
    // Can only apply filters (searches) if the entry is schema checked. This has an
    // important behaviour, where we can schema normalise. Consider a case-insensitive
    // type, we can schema-normalise this on the entry, then our compare can simply
    // be a string.compare, because we assert both entries *must* have been through
    // the normalisation routines!
    fn apply_filter(&self, ..., entry: &Entry<STATE, EntryValid>) -> Result<bool, ...> {
    }
}
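
To make the compile-time guarantee concrete, here is a small self-contained sketch of the same idea (names simplified from the post; the backend and schema check are stand-ins, not the real project code):

struct EntryNew;
struct EntryValid;
struct EntryInvalid;

struct Entry<STATE, VALID> {
    state: STATE,
    valid: VALID,
    attrs: Vec<(String, String)>,
}

impl Entry<EntryNew, EntryInvalid> {
    fn new() -> Self {
        Entry { state: EntryNew, valid: EntryInvalid, attrs: Vec::new() }
    }
}

impl<STATE> Entry<STATE, EntryInvalid> {
    // Stand-in for the real schema check.
    fn validate(self) -> Result<Entry<STATE, EntryValid>, ()> {
        Ok(Entry { state: self.state, valid: EntryValid, attrs: self.attrs })
    }
}

struct Backend;

impl Backend {
    // The signature alone enforces "new and validated".
    fn create(&self, _entry: Entry<EntryNew, EntryValid>) {}
}

fn main() {
    let backend = Backend;
    let entry = Entry::new();
    // backend.create(entry);  // error[E0308]: mismatched types - the schema check can't be skipped
    let entry = entry.validate().unwrap();
    backend.create(entry);
}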

Using this with Serde?

I have noticed that when we serialise the entry, the valid/state fields are not compiled away - because they have to be serialised, the compiler can’t eliminate them even though they carry no content.

A future cleanup will be to have a serialised DBEntry form such as the following:

struct DBEV1 {
    // entry data here
}

enum DBEntryVersion {
    V1(DBEV1)
}

struct DBEntry {
    data: DBEntryVersion
}

impl From<Entry<EntryNew, EntryValid>> for DBEntry {
    fn from(e: Entry<EntryNew, EntryValid>) -> Self {
        // assign db id's, and return a serialisable entry.
    }
}

impl From<Entry<EntryCommited, EntryValid>> for DBEntry {
    fn from(e: Entry<EntryCommited, EntryValid>) -> Self {
        // Just translate the entry to a serialisable form
    }
}

This way we still have the zero-cost state on Entry, but we are able to move to a versioned serialised structure, and we minimise the run time cost.
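
As a rough sketch of that boundary (assuming the serde and serde_json crates with the derive feature; the field contents here are purely illustrative):

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct DBEV1 {
    // entry data here - a single illustrative field
    attrs: Vec<(String, String)>,
}

#[derive(Serialize, Deserialize)]
enum DBEntryVersion {
    V1(DBEV1),
}

#[derive(Serialize, Deserialize)]
struct DBEntry {
    data: DBEntryVersion,
}

fn main() -> Result<(), serde_json::Error> {
    let db_entry = DBEntry {
        data: DBEntryVersion::V1(DBEV1 {
            attrs: vec![("name".to_string(), "demo".to_string())],
        }),
    };
    // Only the versioned DB form is serialised; the zero-sized typestate markers
    // on Entry never appear in the stored representation.
    println!("{}", serde_json::to_string(&db_entry)?);
    Ok(())
}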

Testing the Entry

To help with testing, I needed to be able to shortcut and move between any state of the entry so I could quickly make fake entries, so I added some unsafe methods:

#[cfg(test)]
unsafe fn to_new_valid(self) -> Entry<EntryNew, EntryValid> {
    Entry {
        state: EntryNew,
        valid: EntryValid
    }
}

These allow me to set up and create small unit tests where I may not have a full backend or schema infrastructure, so I can test specific aspects of the entries and their lifecycle. It’s limited to test runs only, and marked unsafe. It’s not “technically” memory unsafe, but it’s unsafe from the view of “it could absolutely mess up your database consistency guarantees”, so you have to really want it.

Summary

Using state machines like this really helped me to clean up my code and make stronger assertions about the correctness of what I was doing for entry lifecycles. It also means I have more faith that, when I and future contributors work on the code base, we’ll have compile-time checks to ensure we are doing the right thing - preventing data corruption and inconsistency.

April 12, 2019 02:00 PM

April 07, 2019

William Brown

Debugging MacOS bluetooth audio stutter

Debugging MacOS bluetooth audio stutter

I was noticing that audio to my bluetooth headphones from my iPhone was always flawless, but I started to notice stutter and drops from my MBP. After exhausting some basic ideas, I was stumped.

So, to the duck duck go machine: I searched for known bluetooth issues. Nothing appeared.

However, I then decided to debug the issue - thankfully there was plenty of advice on this matter. Press shift + option while clicking the bluetooth icon in the menu bar, and you get a debug menu. You can also open Console.app and search for “bluetooth” to see all the bluetooth related logs.

I noticed that when the audio stutter occurred, the following pattern was observed:

default     11:25:45.840532 +1000   wirelessproxd   About to scan for type: 9 - rssi: -90 - payload: <00000000 00000000 00000000 00000000 00000000 0000> - mask: <00000000 00000000 00000000 00000000 00000000 0000> - peers: 0
default     11:25:45.840878 +1000   wirelessproxd   Scan options changed: YES
error       11:25:46.225839 +1000   bluetoothaudiod Error sending audio packet: 0xe00002e8
error       11:25:46.225899 +1000   bluetoothaudiod Too many outstanding packets. Drop packet of 8 frames (total drops:451 total sent:60685 percentDropped:0.737700) Outstanding:17

There was always a scan just before the stutter started. So what was scanning?

I searched for the error related to packets, and there were a lot of false leads. From weird apps to dodgy headphones. In this case I could eliminate both as the headphones worked with other devices, and I don’t have many apps installed.

So I went back and thought about what macOS services could be the problem, and I found that AirDrop periodically scans for other devices in order to send and receive files. Disabling AirDrop from the sharing menu in System Preferences cleared my audio right up.

April 07, 2019 02:00 PM

April 02, 2019

William Brown

GDB autoloads for 389 DS

GDB autoloads for 389 DS

I’ve been writing a set of extensions to make debugging 389-ds a bit easier. Thanks to the magic of python, writing GDB extensions is really easy.
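
To give a sense of how little code is involved, here is a minimal sketch of a custom command (a hypothetical ds-hello, not one of the shipped extensions), which you could load with source hello_ds.py inside gdb:

# hello_ds.py - a minimal sketch of a custom GDB command in Python
import gdb

class DSHello(gdb.Command):
    """Print a short greeting, in the style of the ds- command namespace."""

    def __init__(self):
        super(DSHello, self).__init__("ds-hello", gdb.COMMAND_USER)

    def invoke(self, argument, from_tty):
        gdb.write("hello from a ds- extension\n")

DSHello()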

On OpenSUSE, when you start your DS instance under GDB, all of the extensions are automatically loaded. This will help make debugging a breeze.

zypper in 389-ds gdb
gdb /usr/sbin/ns-slapd
GNU gdb (GDB; openSUSE Tumbleweed) 8.2
(gdb) ds-
ds-access-log  ds-backtrace
(gdb) set args -d 0 -D /etc/dirsrv/slapd-<instance name>
(gdb) run
...

All the extensions are under the ds- namespace, so they are easy to find. There are some new ones on the way, which I’ll discuss here too:

ds-backtrace

As DS is a multithreaded process, it can be really hard to find the active thread involved in a problem. So we provided a command that knows how to fold duplicated stacks, and to highlight idle threads that you can (generally) skip over.

===== BEGIN ACTIVE THREADS =====
Thread 37 (LWP 70054))
Thread 36 (LWP 70053))
Thread 35 (LWP 70052))
Thread 34 (LWP 70051))
Thread 33 (LWP 70050))
Thread 32 (LWP 70049))
Thread 31 (LWP 70048))
Thread 30 (LWP 70047))
Thread 29 (LWP 70046))
Thread 28 (LWP 70045))
Thread 27 (LWP 70044))
Thread 26 (LWP 70043))
Thread 25 (LWP 70042))
Thread 24 (LWP 70041))
Thread 23 (LWP 70040))
Thread 22 (LWP 70039))
Thread 21 (LWP 70038))
Thread 20 (LWP 70037))
Thread 19 (LWP 70036))
Thread 18 (LWP 70035))
Thread 17 (LWP 70034))
Thread 16 (LWP 70033))
Thread 15 (LWP 70032))
Thread 14 (LWP 70031))
Thread 13 (LWP 70030))
Thread 12 (LWP 70029))
Thread 11 (LWP 70028))
Thread 10 (LWP 70027))
#0  0x00007ffff65db03c in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007ffff66318b0 in PR_WaitCondVar () at /usr/lib64/libnspr4.so
#2  0x00000000004220e0 in [IDLE THREAD] connection_wait_for_new_work (pb=0x608000498020, interval=4294967295) at /home/william/development/389ds/ds/ldap/servers/slapd/connection.c:970
#3  0x0000000000425a31 in connection_threadmain () at /home/william/development/389ds/ds/ldap/servers/slapd/connection.c:1536
#4  0x00007ffff6637484 in None () at /usr/lib64/libnspr4.so
#5  0x00007ffff65d4fab in start_thread () at /lib64/libpthread.so.0
#6  0x00007ffff6afc6af in clone () at /lib64/libc.so.6

This example shows that there are 17 idle threads here (look at frame 2), which all share the same trace.

ds-access-log

The access log is buffered before writing, so if you have a coredump, and want to see the last few events before they were written to disk, you can use this to display the content:

(gdb) ds-access-log
===== BEGIN ACCESS LOG =====
$2 = 0x7ffff3c3f800 "[03/Apr/2019:10:58:42.836246400 +1000] conn=1 fd=64 slot=64 connection from 127.0.0.1 to 127.0.0.1
[03/Apr/2019:10:58:42.837199400 +1000] conn=1 op=0 BIND dn=\"\" method=128 version=3
[03/Apr/2019:10:58:42.837694800 +1000] conn=1 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0001200300 dn=\"\"
[03/Apr/2019:10:58:42.838881800 +1000] conn=1 op=1 SRCH base=\"\" scope=2 filter=\"(objectClass=*)\" attrs=ALL
[03/Apr/2019:10:58:42.839107600 +1000] conn=1 op=1 RESULT err=32 tag=101 nentries=0 etime=0.0001070800
[03/Apr/2019:10:58:42.840687400 +1000] conn=1 op=2 UNBIND
[03/Apr/2019:10:58:42.840749500 +1000] conn=1 op=2 fd=64 closed - U1
", '\276' <repeats 3470 times>

At the end, the repeating character shows that the log is “empty” in that segment of the buffer.

ds-entry-print

This command shows the in-memory entry. It can be common to see Slapi_Entry * pointers in the codebase, so being able to display these is really helpful to isolate what’s occurring with the entry. Your first argument should be the Slapi_Entry pointer.

(gdb) ds-entry-print ec
Display Slapi_Entry: cn=config
cn: config
objectClass: top
objectClass: extensibleObject
objectClass: nsslapdConfig
nsslapd-schemadir: /opt/dirsrv/etc/dirsrv/slapd-standalone1/schema
nsslapd-lockdir: /opt/dirsrv/var/lock/dirsrv/slapd-standalone1
nsslapd-tmpdir: /tmp
nsslapd-certdir: /opt/dirsrv/etc/dirsrv/slapd-standalone1
...

April 02, 2019 01:00 PM

March 24, 2019

Alexander Bokovoy

Lost in (Kerberos) service translation?

A year ago Brian J. Atkisson from Red Hat IT filed a bug against FreeIPA asking to remove a default [domain_realm] mapping section from the krb5.conf configuration file generated during installation of a FreeIPA client. The bug is still open and I’d like to use this opportunity to discuss some less known aspects of a Kerberos service principal resolution.

When an application uses Kerberos to authenticate to a remote service, it needs to talk to a Kerberos key distribution center (KDC) to obtain a service ticket to that remote service. There are multiple ways an application could construct the name of a service, but in a simplistic view it boils down to taking the remote service’s host name and attaching it to a service type name. Type names are customary and really depend on the established tradition for the protocol in use. For example, browsers universally assume that a component HTTP/ is used in the service name; to authenticate to the www.example.com server they would ask a KDC for a service ticket for the HTTP/www.example.com principal. When an LDAP client talks to an LDAP server ldap.example.com and uses SASL GSSAPI authentication, it will ask the KDC for a service ticket for ldap/ldap.example.com. Sometimes these assumptions are written down in a corresponding RFC document, sometimes not, but they assume both client and server know what they are doing.

There are, however, a few more moving parts at play. The host name part of a service principal might come from an interaction with a user. For a browser, this would be the server name from a URL entered by the user, and the browser would need to construct the target service principal from it. The host name part might be incomplete in some cases: if you only have a single DNS domain in use, server names are unique in that domain and your users might find it handy to address a server by only the first label of its DNS name. Such an approach was certainly very popular among system administrators who relied on the Kerberos library’s ability to expand the short name into a fully qualified one.

Let’s look into that. The Kerberos configuration file, krb5.conf, allows us to say, for any application, that a hostname passed down to the library needs to be canonicalized. This option, dns_canonicalize_hostname, allows us to say “I want to connect to a server bastion” and let libkrb5 expand that to the bastion.example.com host name. While this behavior is handy, it relies on DNS. The downside of disabling hostname canonicalization is that short hostnames will not be canonicalized and might not be recognized in requests to the KDC. Finally, there is the possibility of DNS hijacking. For Kerberos, cases where DNS responses are spoofed aren’t too problematic, since the fake KDC or the fake service wouldn’t gain much knowledge, but even in a normal situation the latency of DNS responses might be a considerable problem.

Another part of the equation is to find out which Kerberos realm a specified target service principal belongs to. If you have a single Kerberos realm, it might not be an issue; by setting the default_realm option in krb5.conf we can make sure a client always assumes the only realm we have. However, if there are multiple Kerberos realms, it is important to map the target service principal to the target realm on the client side, before a request is issued to a KDC.
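
Both of these knobs live in the [libdefaults] section of krb5.conf; a minimal fragment (values illustrative) looks like this:

[libdefaults]
   default_realm = EXAMPLE.COM
   dns_canonicalize_hostname = true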

There might be multiple Kerberos realms in existence at any site. For example, FreeIPA deployment provides one. If FreeIPA has established a trust to an Active Directory forest, then that forest would represent another Kerberos realm. Potentially, even more than one as each Active Directory domain in an Active Directory forest is a separate Kerberos realm in itself.

The Kerberos protocol defines that the realm in which the application server is located must be determined by the client (RFC 4120 section 3.3.1). The specification also defines several strategies for how a client may map the hostname of the application server to the realm it believes the server belongs to.

Domain to realm mapping

Let us stop and think a bit at this point. A Kerberos client has full control over deciding which realm a particular application server belongs to. If it decides that the application server is from a different realm than the client itself, then it needs to ask for a cross-realm ticket granting ticket from its own KDC. Then, with the cross-realm TGT in possession, the client can ask the KDC of the application server’s realm for the actual service ticket.

As a client, we want to be sure we are talking to the correct KDC. As mentioned earlier, relying too heavily on DNS is not always a particularly secure choice. As a result, the krb5 library provides control over how a particular hostname is mapped to a realm. The search mechanism for a realm mapping is pluggable and by default includes:

  • registry-based search on WIN32 (does nothing for Linux)
  • profile-based search: uses [domain_realm] section in krb5.conf to do actual mapping
  • dns-based search that can be disabled with dns_lookup_realm = false
  • domain-based search: it is disabled by default and can be enabled with realm_try_domains = ... option in krb5.conf

The order of search is important. It is hard-coded in the krb5 library and depends on what operation is performed. For realm selection it is hard-coded that profile-based search is done before DNS-based search, and domain-based search is done last.

When a [domain_realm] section exists in krb5.conf, it will be used to map a hostname of the application server to a realm. The mapping table in this section is typically built up from host and domain maps:

[domain_realm]
   www.example.com = EXAMPLE.COM
   .dev.example.com = DEVEXAMPLE.COM
   .example.com = EXAMPLE.COM

The mapping above says that www.example.com would be explicitly mapped to the EXAMPLE.COM realm, all machines in the DNS zone dev.example.com would be mapped to the DEVEXAMPLE.COM realm, and the rest of the hosts in the DNS zone example.com would be mapped to EXAMPLE.COM. This mapping only applies to hostnames, so a hostname foo.bar.example.com would not be mapped to any realm by this scheme.

Profile-based search is visible in the Kerberos trace output as a selection of the realm right at the beginning of a request for a service ticket to a host-based service principal:

[root@client ~]# kinit -k
[root@client ~]# KRB5_TRACE=/dev/stderr kvno -S cifs client.example.com
[30798] 1552847822.721561: Getting credentials host/client.example.com@EXAMPLE.COM -> cifs/client.example.com@EXAMPLE.COM using ccache KEYRING:persistent:0:0
...

The difference here is that for a service principal not mapped with profile-based search there will be no assumed realm and the target principal would be constructed without a realm:

[root@client ~]# kinit -k
[root@client ~]# KRB5_TRACE=/dev/stderr kvno -S ldap dc.ad.example.com
[30684] 1552841274.602324: Getting credentials host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@ using ccache KEYRING:persistent:0:0

DNS-based search is activated when the dns_lookup_realm option is set to true in krb5.conf and profile-based search did not return any results. The Kerberos library will issue a number of DNS queries for TXT records starting with _kerberos; these help it discover which Kerberos realm is responsible for the DNS host of the application server. The Kerberos library will perform these searches for the hostname itself first and then for each domain component in the hostname, until it finds an answer or has processed all domain components.

If we have www.example.com as a hostname, then the Kerberos library would issue a DNS query for the TXT record _kerberos.www.example.com to find the name of the Kerberos realm of www.example.com. If that fails, the next try will be for the TXT record _kerberos.example.com, and so on, until all DNS components are processed.
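
For example, a quick way to see what the client would find is to run the same queries by hand (output illustrative):

$ dig +short TXT _kerberos.www.example.com
$ dig +short TXT _kerberos.example.com
"EXAMPLE.COM"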

It should be noted that this algorithm is only implemented in the MIT and Heimdal Kerberos libraries. The Active Directory implementation from Microsoft does not query the _kerberos.$hostname DNS TXT record to find out which realm a target application server belongs to. Instead, Windows environments delegate the discovery process to their domain controllers.

The DNS canonicalization feature (or lack of it) also affects DNS-based search, since without it we wouldn’t know which realm to map a non-fully-qualified hostname to. When the dns_canonicalize_hostname option is set to false, the Kerberos client sends the request to the KDC with the default realm attached to the non-fully-qualified hostname. Most likely such a service principal wouldn’t be understood by the KDC and would be reported as not found.

To help in these situations, the FreeIPA KDC supports Kerberos principal aliases. One can use the ipa command line tool to add aliases to hosts. Remember that a host principal is really host/<hostname>:

$ ipa help host-add-principal
Usage: ipa [global-options] host-add-principal HOSTNAME KRBPRINCIPALNAME... [options]

Add new principal alias to host entry
Options:
  -h, --help    show this help message and exit
  --all         Retrieve and print all attributes from the server. Affects
                command output.
  --raw         Print entries as stored on the server. Only affects output
                format.
  --no-members  Suppress processing of membership attributes.

$ ipa host-add-principal bastion.example.com host/bastion
-------------------------------------------
Added new aliases to host "bastion.example.com"
-------------------------------------------
  Host name: bastion.example.com
  Principal alias: host/bastion.example.com@EXAMPLE.COM, host/bastion@EXAMPLE.COM

and for other Kerberos service principals the corresponding command is ipa service-add-principal:

$ ipa help service-add-principal
Usage: ipa [global-options] service-add-principal CANONICAL-PRINCIPAL PRINCIPAL... [options]

Add new principal alias to a service
Options:
  -h, --help    show this help message and exit
  --all         Retrieve and print all attributes from the server. Affects
                command output.
  --raw         Print entries as stored on the server. Only affects output
                format.
  --no-members  Suppress processing of membership attributes.

$ ipa service-show HTTP/bastion.example.com
  Principal name: HTTP/bastion.example.com@EXAMPLE.COM
  Principal alias: HTTP/bastion.example.com@EXAMPLE.COM
  Keytab: False
  Managed by: bastion.example.com
  Groups allowed to create keytab: admins
[root@nyx ~]# ipa service-add-principal HTTP/bastion.example.com HTTP/bastion
---------------------------------------------------------------------------------
Added new aliases to the service principal "HTTP/bastion.example.com@EXAMPLE.COM"
---------------------------------------------------------------------------------
  Principal name: HTTP/bastion.example.com@EXAMPLE.COM
  Principal alias: HTTP/bastion.example.com@EXAMPLE.COM, HTTP/bastion@EXAMPLE.COM

Finally, domain-based search is activated when realm_try_domains = ... is specified. In this case the Kerberos library will apply heuristics based on the hostname of the target application server, using as many domain components of that hostname as the realm_try_domains option allows it to cut off. More about that later.

However, there is another option employed by the MIT Kerberos library. When an MIT Kerberos client is unable to find out a realm on its own, starting with MIT krb5 1.6 the client will issue a request without a known realm to its own KDC. A KDC (which must be MIT krb5 1.7 or later) can opt to recognize the hostname against its own [domain_realm] mapping table and choose to issue a referral to the appropriate service realm.

The latter approach only works if the KDC has been configured to issue such referrals and if the client is asking for a host-based service. The FreeIPA KDC allows this behavior by default. For trusted Active Directory realms there is also support from SSSD on IPA masters: SSSD automatically generates [domain_realm] and [capaths] sections for all known trusted realms so that the KDC is able to respond with the referrals.

However, care should be taken by the application itself on the client side when constructing such a Kerberos principal. For example, with the kvno utility, the request kvno -S service hostname will ask for a referral while kvno service/hostname will not. The former constructs a host-based principal while the latter does not.

When looking at the Kerberos trace, we can see the difference. Below host/client.example.com is asking for a service ticket to ldap/dc.ad.example.com as a host-based principal, without knowing which realm the application server’s principal belongs to:

[root@client ~]# kinit -k
[root@client ~]# KRB5_TRACE=/dev/stderr kvno -S ldap dc.ad.example.com
[30684] 1552841274.602324: Getting credentials host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@ using ccache KEYRING:persistent:0:0
[30684] 1552841274.602325: Retrieving host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@ from KEYRING:persistent:0:0 with result: -1765328243/Matching credential not found
[30684] 1552841274.602326: Retrying host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@EXAMPLE.COM with result: -1765328243/Matching credential not found
[30684] 1552841274.602327: Server has referral realm; starting with ldap/dc.ad.example.com@EXAMPLE.COM
[30684] 1552841274.602328: Retrieving host/client.example.com@EXAMPLE.COM -> krbtgt/EXAMPLE.COM@EXAMPLE.COM from KEYRING:persistent:0:0 with result: 0/Success
[30684] 1552841274.602329: Starting with TGT for client realm: host/client.example.com@EXAMPLE.COM -> krbtgt/EXAMPLE.COM@EXAMPLE.COM
[30684] 1552841274.602330: Requesting tickets for ldap/dc.ad.example.com@EXAMPLE.COM, referrals on
[30684] 1552841274.602331: Generated subkey for TGS request: aes256-cts/A93C
[30684] 1552841274.602332: etypes requested in TGS request: aes256-cts, aes128-cts, aes256-sha2, aes128-sha2, des3-cbc-sha1, rc4-hmac, camellia128-cts, camellia256-cts
[30684] 1552841274.602334: Encoding request body and padata into FAST request
[30684] 1552841274.602335: Sending request (965 bytes) to EXAMPLE.COM
[30684] 1552841274.602336: Initiating TCP connection to stream ip.ad.dr.ess:88
[30684] 1552841274.602337: Sending TCP request to stream ip.ad.dr.ess:88
[30684] 1552841274.602338: Received answer (856 bytes) from stream ip.ad.dr.ess:88
[30684] 1552841274.602339: Terminating TCP connection to stream ip.ad.dr.ess:88
[30684] 1552841274.602340: Response was from master KDC
[30684] 1552841274.602341: Decoding FAST response
[30684] 1552841274.602342: FAST reply key: aes256-cts/D1E2
[30684] 1552841274.602343: Reply server krbtgt/AD.EXAMPLE.COM@EXAMPLE.COM differs from requested ldap/dc.ad.example.com@EXAMPLE.COM
[30684] 1552841274.602344: TGS reply is for host/client.example.com@EXAMPLE.COM -> krbtgt/AD.EXAMPLE.COM@EXAMPLE.COM with session key aes256-cts/470F
[30684] 1552841274.602345: TGS request result: 0/Success
[30684] 1552841274.602346: Following referral TGT krbtgt/AD.EXAMPLE.COM@EXAMPLE.COM
[30684] 1552841274.602347: Requesting tickets for ldap/dc.ad.example.com@AD.EXAMPLE.COM, referrals on
[30684] 1552841274.602348: Generated subkey for TGS request: aes256-cts/F0C6
[30684] 1552841274.602349: etypes requested in TGS request: aes256-cts, aes128-cts, aes256-sha2, aes128-sha2, des3-cbc-sha1, rc4-hmac, camellia128-cts, camellia256-cts
[30684] 1552841274.602351: Encoding request body and padata into FAST request
[30684] 1552841274.602352: Sending request (921 bytes) to AD.EXAMPLE.COM
[30684] 1552841274.602353: Sending DNS URI query for _kerberos.AD.EXAMPLE.COM.
[30684] 1552841274.602354: No URI records found
[30684] 1552841274.602355: Sending DNS SRV query for _kerberos._udp.AD.EXAMPLE.COM.
[30684] 1552841274.602356: SRV answer: 0 0 88 "dc.ad.example.com."
[30684] 1552841274.602357: Sending DNS SRV query for _kerberos._tcp.AD.EXAMPLE.COM.
[30684] 1552841274.602358: SRV answer: 0 0 88 "dc.ad.example.com."
[30684] 1552841274.602359: Resolving hostname dc.ad.example.com.
[30684] 1552841274.602360: Resolving hostname dc.ad.example.com.
[30684] 1552841274.602361: Initiating TCP connection to stream ano.ther.add.ress:88
[30684] 1552841274.602362: Sending TCP request to stream ano.ther.add.ress:88
[30684] 1552841274.602363: Received answer (888 bytes) from stream ano.ther.add.ress:88
[30684] 1552841274.602364: Terminating TCP connection to stream ano.ther.add.ress:88
[30684] 1552841274.602365: Sending DNS URI query for _kerberos.AD.EXAMPLE.COM.
[30684] 1552841274.602366: No URI records found
[30684] 1552841274.602367: Sending DNS SRV query for _kerberos-master._tcp.AD.EXAMPLE.COM.
[30684] 1552841274.602368: No SRV records found
[30684] 1552841274.602369: Response was not from master KDC
[30684] 1552841274.602370: Decoding FAST response
[30684] 1552841274.602371: FAST reply key: aes256-cts/10DE
[30684] 1552841274.602372: TGS reply is for host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@AD.EXAMPLE.COM with session key aes256-cts/24D1
[30684] 1552841274.602373: TGS request result: 0/Success
[30684] 1552841274.602374: Received creds for desired service ldap/dc.ad.example.com@AD.EXAMPLE.COM
[30684] 1552841274.602375: Storing host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@ in KEYRING:persistent:0:0
[30684] 1552841274.602376: Also storing host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@AD.EXAMPLE.COM based on ticket
[30684] 1552841274.602377: Removing host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@AD.EXAMPLE.COM from KEYRING:persistent:0:0
ldap/dc.ad.example.com@: kvno = 28

However, when not using a host-based principal in the request, we fail:

[root@client ~]# kinit -k
[root@client ~]# KRB5_TRACE=/dev/stderr kvno ldap/dc.ad.example.com
[30695] 1552841932.100975: Getting credentials host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@EXAMPLE.COM using ccache KEYRING:persistent:0:0
[30695] 1552841932.100976: Retrieving host/client.example.com@EXAMPLE.COM -> ldap/dc.ad.example.com@EXAMPLE.COM from KEYRING:persistent:0:0 with result: -1765328243/Matching credential not found
[30695] 1552841932.100977: Retrieving host/client.example.com@EXAMPLE.COM -> krbtgt/EXAMPLE.COM@EXAMPLE.COM from KEYRING:persistent:0:0 with result: 0/Success
[30695] 1552841932.100978: Starting with TGT for client realm: host/client.example.com@EXAMPLE.COM -> krbtgt/EXAMPLE.COM@EXAMPLE.COM
[30695] 1552841932.100979: Requesting tickets for ldap/dc.ad.example.com@EXAMPLE.COM, referrals on
[30695] 1552841932.100980: Generated subkey for TGS request: aes256-cts/27DA
[30695] 1552841932.100981: etypes requested in TGS request: aes256-cts, aes128-cts, aes256-sha2, aes128-sha2, des3-cbc-sha1, rc4-hmac, camellia128-cts, camellia256-cts
[30695] 1552841932.100983: Encoding request body and padata into FAST request
[30695] 1552841932.100984: Sending request (965 bytes) to EXAMPLE.COM
[30695] 1552841932.100985: Initiating TCP connection to stream ip.ad.dr.ess:88
[30695] 1552841932.100986: Sending TCP request to stream ip.ad.dr.ess:88
[30695] 1552841932.100987: Received answer (461 bytes) from stream ip.ad.dr.ess:88
[30695] 1552841932.100988: Terminating TCP connection to stream ip.ad.dr.ess:88
[30695] 1552841932.100989: Response was from master KDC
[30695] 1552841932.100990: Decoding FAST response
[30695] 1552841932.100991: TGS request result: -1765328377/Server ldap/dc.ad.example.com@EXAMPLE.COM not found in Kerberos database
[30695] 1552841932.100992: Requesting tickets for ldap/dc.ad.example.com@EXAMPLE.COM, referrals off
[30695] 1552841932.100993: Generated subkey for TGS request: aes256-cts/C1BF
[30695] 1552841932.100994: etypes requested in TGS request: aes256-cts, aes128-cts, aes256-sha2, aes128-sha2, des3-cbc-sha1, rc4-hmac, camellia128-cts, camellia256-cts
[30695] 1552841932.100996: Encoding request body and padata into FAST request
[30695] 1552841932.100997: Sending request (965 bytes) to EXAMPLE.COM
[30695] 1552841932.100998: Initiating TCP connection to stream ip.ad.dr.ess:88
[30695] 1552841932.100999: Sending TCP request to stream ip.ad.dr.ess:88
[30695] 1552841932.101000: Received answer (461 bytes) from stream ip.ad.dr.ess:88
[30695] 1552841932.101001: Terminating TCP connection to stream ip.ad.dr.ess:88
[30695] 1552841932.101002: Response was from master KDC
[30695] 1552841932.101003: Decoding FAST response
[30695] 1552841932.101004: TGS request result: -1765328377/Server ldap/dc.ad.example.com@EXAMPLE.COM not found in Kerberos database
kvno: Server ldap/dc.ad.example.com@EXAMPLE.COM not found in Kerberos database while getting credentials for ldap/dc.ad.example.com@EXAMPLE.COM

As you can see, our client tried to ask for a service ticket for a non-host-based service principal from outside our realm; this was not accepted by the KDC, so the resolution failed.

Mixed realm deployments

The behavior above is predictable. However, client-side processing of the target realm behaves wrongly when a client needs to request a service ticket for a service principal located in a trusted realm but situated in a DNS zone belonging to our own realm. This might sound like a complication, but it is a typical situation for deployments with FreeIPA trusting Active Directory forests. In such cases customers often want to place Linux machines right in the DNS zones associated with Active Directory domains.

Since the Microsoft Active Directory implementation, unlike MIT Kerberos or Heimdal, does not support a per-host Kerberos realm hint, such a request from a Windows client will always fail. It will not be possible to obtain a service ticket in this situation from Windows machines.

However, when both realms in the trust relationship are MIT Kerberos, their KDCs and clients can be configured for selective realm discovery.

As explained at FOSDEM 2018 and devconf.cz 2019, Red Hat IT moved from an old plain Kerberos realm to a FreeIPA deployment. This is a situation where we have EXAMPLE.COM and IPA.EXAMPLE.COM both trusting each other, with systems being migrated to IPA.EXAMPLE.COM over a long period of time. We want to continue providing services in the example.com DNS zone but use the IPA.EXAMPLE.COM realm. Our clients are in both Kerberos realms, but over time they will all eventually migrate to IPA.EXAMPLE.COM.

Working with such situation can be tricky. Let’s start with a simple example.

Suppose our client’s krb5.conf has [domain_realm] section that looks like this:

[domain_realm]
   client.example.com = EXAMPLE.COM
   .example.com = EXAMPLE.COM

If we need to ask for an HTTP/app.example.com service ticket to the application server hosted on app.example.com, the Kerberos library on the client will map HTTP/app.example.com to EXAMPLE.COM and will not attempt to request a referral from a KDC. If our application server is enrolled in the IPA.EXAMPLE.COM realm, it means a client with such a configuration will never try to discover HTTP/app.example.com@IPA.EXAMPLE.COM and will never be able to authenticate to app.example.com with Kerberos.

There are two possible solutions here. We can either add an explicit mapping for the app.example.com host to IPA.EXAMPLE.COM in the client’s [domain_realm] section in krb5.conf, or remove the .example.com mapping entry from [domain_realm] on the client side completely and rely on KDC referrals or DNS-based search.

The first solution does not scale and is a management issue. Updating all clients when a new application server is migrated to the new realm sounds like a nightmare if the majority of your clients are laptops. You’d really want them to delegate to the KDC or do a DNS-based search instead.

Of course, there is a simple solution: add a _kerberos.app.example.com TXT record pointing to IPA.EXAMPLE.COM in the DNS and let clients use it. This assumes that no client has the .example.com = EXAMPLE.COM mapping rule.
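
In BIND zone file syntax such a record would look roughly like this (names taken from the example above):

_kerberos.app.example.com.    IN    TXT    "IPA.EXAMPLE.COM"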

Unfortunately, it is more complicated. As Robbie Harwood, the Fedora and RHEL maintainer of MIT Kerberos, explained to me, the problem is what happens if there is inadequate DNS information, e.g. the DNS-based search failed. A client falls back to heuristics (domain-based search), and these differ depending on which MIT Kerberos version is in use. Since MIT Kerberos 1.16 the heuristics try to prefer mapping HTTP/app.ipa.example.com into IPA.EXAMPLE.COM over EXAMPLE.COM, and prefer EXAMPLE.COM to failure. However, there is no way to map HTTP/app.example.com to IPA.EXAMPLE.COM with these heuristics.

Domain-based search gives us another heuristic based on the realm. It is tunable via the realm_try_domains option, but it also affects how the MIT Kerberos library chooses a credentials cache from a credentials cache collection (KEYRING:, DIR:, KCM: ccache types). This logic has been present since MIT Kerberos 1.12, but it also wouldn’t help us map HTTP/app.example.com to IPA.EXAMPLE.COM.

After some discussion, Robbie and I came to the conclusion that perhaps changing the order in which these methods are applied by the MIT Kerberos library could help. As I mentioned in the “Domain to realm mapping” section, the current order is hard-coded: for realm selection, profile-based search is done before DNS-based search, and domain-based search is done last. Ideally, the order of the searches could be left to administrators. However, there aren’t many reasonable orders out there. Perhaps allowing just two options would be enough:

  • prioritizing DNS search over a profile search
  • prioritizing a profile search over DNS search

Until it is done, we are left with the following recommendation for mixed-domain Kerberos principals from multiple realms:

  • make sure you don’t use [domain_realm] mapping for mixed realm domains
  • make sure you have a _kerberos.$hostname TXT record set per host/domain for the right realm name. Remember that a Kerberos realm name is case-sensitive and almost everywhere it is uppercase, so be sure the value of the TXT record is correct.

March 24, 2019 07:13 AM

March 18, 2019

Fraser Tweedale

cert-fix redux

cert-fix redux

A few weeks ago I analysed the Dogtag pki-server cert-fix tool, which is intended to assist with recovery in scenarios where expired certificates inhibit Dogtag’s normal operation. Unfortunately, there were some flawed assumptions and feature gaps that limited the usefulness of the tool, especially in FreeIPA contexts.

In this post, I provide an update on changes that are being made to the tool to address those shortcomings.

Recap

Recapping the shortcomings in brief:

  1. When TLS client certificate authentication is used to authenticate to Dogtag (the default for FreeIPA), an expired subsystem certificate causes authentication failure and Dogtag cannot start.
  2. When Dogtag is configured to use TLS or STARTTLS when connecting to the database, an expired LDAP service certificate causes connection failure.
  3. cert-fix uses an admin or agent certificate to perform authenticated operations against Dogtag. An expired certificate causes authentication failure, and certificate renewal fails.
  4. An expired CA certificate is not handled. Due to longer validity periods, and externally-signed CA certificates expiring at different times from Dogtag system certificates, this scenario is less common, but it still occurs.
  5. The need to renew non-system certificates. Apart from system certificates, in order for correct operation of Dogtag it may be necessary to renew some other certificates, such as an expired LDAP service certificate, or an expired agent certificate (e.g. IPA RA). cert-fix did not provide a way to do this.

Resolving LDAP connection issues (issues #1 and #2)

cert-fix now switches the deployment to use password authentication to LDAP, over an insecure connection on port 389. The original database configuration is restored when cert-fix finishes.

The subsystem certificate is used by Dogtag to authenticate to LDAP. Switching to password authentication works around the expired subsystem certificate. Furthermore if the subsystem certificate gets renewed, the new certificate gets imported into the pkidbuser LDAP entry so that authentication will work (389 DS requires an exact certificate match in the userCertificate attribute of the user).

If the LDAP service certificate is expired, this procedure works around that but does not renew it. Renewing it falls under issue #5, and is addressed separately below.

Switching Dogtag to password authentication to LDAP means resetting the pkidbuser account password. We use the ldappasswd program to do this. The LDAP password modify extended operation requires confidentiality (i.e. TLS or STARTTLS); an expired LDAP service certificate inhibits this. Therefore we use LDAPI and autobind. The LDAPI socket is specified via the --ldapi-socket option.

FreeIPA always configures LDAPI and root autobind to the cn=Directory Manager LDAP account. For standalone Dogtag installations these may need to be configured before running cert-fix.
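
For a standalone Dogtag instance, the relevant 389 DS settings live on the cn=config entry and look something like the following (the socket path is illustrative; FreeIPA configures the equivalent for you):

dn: cn=config
nsslapd-ldapilisten: on
nsslapd-ldapifilepath: /var/run/slapd-EXAMPLE-COM.socket
nsslapd-ldapiautobind: on
nsslapd-ldapimaprootdn: cn=Directory Manager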

Resolving expired agent certificate (issue #3)

Instead of using a certificate to authenticate as the agent, cert-fix now resets the password of the agent account and uses that password to authenticate. The password is randomly generated and forgotten after cert-fix terminates.

The agent account to use is now specified via the --agent-uid option. NSSDB-related options for specifying the agent certificate and NSSDB passphrase have been removed.

Renewing other certificates (issue #5)

cert-fix learned the --extra-cert option, which gives the serial number of an extra certificate to renew. The option can be given multiple times to specify multiple certificates. Each certificate gets renewed and output in /etc/pki/<instance-dir>/certs/<serial>-renewed.crt. If a non-existing serial number is specified, an error is printed but processing continues.

This facility allows operators (or wrapper tools) to renew other essential certificates alongside the Dogtag system certificates. Further actions are needed to put those new certificates in the right places. But it is fair, in order to keep the cert-fix tool simple, to put this burden back on the operator. In any case, we intend to write a supplementary tool for FreeIPA that wraps cert-fix, works out which extra certificates to renew, and puts them in the right places.

New or changed assumptions

The changes discussed above abolish some assumptions that were previously made by cert-fix, and establish some new assumptions.

Abolished:

  • A valid admin certificate is no longer needed
  • A valid LDAP service certificate is no longer needed
  • When Dogtag is configured to use certificate authentication to LDAP, a valid subsystem certificate is no longer needed

New:

  • cert-fix must be run as root.
  • LDAPI must be configured, with root autobinding to cn=Directory Manager or other account with privileges on o=ipaca subtree, including password reset privileges.
  • The password of the specified agent account will be reset. If needed, it can be changed back afterwards (manually; successful execution of cert-fix proves that the operator has privileges to do this).
  • If Dogtag was configured to use TLS certificate authentication to bind to LDAP, the password on the pkidbuser account will be reset. (If password authentication was already used, the password does not get reset).
  • ldappasswd is used over LDAPI, which is part of why cert-fix must be run as root.

Demo

Here I’ll put the full command and command output for an execution of the cert-fix tool, and break it up with commentary. I will renew the subsystem certificate, and additionally the certificate with serial number 29 (which happens to be the LDAP certificate):

[root@f27-1 ~]# pki-server cert-fix \
    --agent-uid admin \
    --ldapi-socket /var/run/slapd-IPA-LOCAL.socket \
    --cert subsystem \
    --extra-cert 29

There is no longer any need to set up an NSSDB with an agent certificate, a considerable UX improvement! A further improvement was to default the log verbosity to INFO, so we can see progress and observe (at a high level) what cert-fix is doing, without specifying -v / --verbose.

INFO: Loading password config: /etc/pki/pki-tomcat/password.conf
INFO: Fixing the following system certs: ['subsystem']
INFO: Renewing the following additional certs: ['29']
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0

Preliminaries. The tool loads information about the Dogtag instance, states its intentions and verifies that it can authenticate to LDAP.

INFO: Stopping the instance to proceed with system cert renewal
INFO: Configuring LDAP password authentication
INFO: Setting pkidbuser password via ldappasswd
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
INFO: Selftests disabled for subsystems: ca
INFO: Resetting password for uid=admin,ou=people,o=ipaca
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0

cert-fix stopped Dogtag, changed the database connection configuration, reset the agent password and suppressed the Dogtag self-tests.

INFO: Starting the instance
INFO: Sleeping for 10 seconds to allow server time to start...

cert-fix starts Dogtag then sleeps for a bit. The sleep was added to avoid races against Dogtag startup that sometimes caused the tool to fail. It’s a bit of a hack, but 10 seconds should hopefully be enough.

INFO: Requesting new cert for subsystem
INFO: Getting subsystem cert info for ca
INFO: Trying to setup a secure connection to CA subsystem.
INFO: Secure connection with CA is established.
INFO: Placing cert creation request for serial: 34
INFO: Request ID: 38
INFO: Request Status: complete
INFO: Serial Number: 0x26
INFO: Issuer: CN=Certificate Authority,O=IPA.LOCAL 201903151111
INFO: Subject: CN=CA Subsystem,O=IPA.LOCAL 201903151111
INFO: New cert is available at: /etc/pki/pki-tomcat/certs/subsystem.crt
INFO: Requesting new cert for 29; writing to /etc/pki/pki-tomcat/certs/29-renewed.crt
INFO: Trying to setup a secure connection to CA subsystem.
INFO: Secure connection with CA is established.
INFO: Placing cert creation request for serial: 29
INFO: Request ID: 39
INFO: Request Status: complete
INFO: Serial Number: 0x27
INFO: Issuer: CN=Certificate Authority,O=IPA.LOCAL 201903151111
INFO: Subject: CN=f27-1.ipa.local,O=IPA.LOCAL 201903151111
INFO: New cert is available at: /etc/pki/pki-tomcat/certs/29-renewed.crt

Certificate requests were issued and completed successfully.

INFO: Stopping the instance
INFO: Getting subsystem cert info for ca
INFO: Getting subsystem cert info for ca
INFO: Updating CS.cfg with the new certificate
INFO: Importing new subsystem cert into uid=pkidbuser,ou=people,o=ipaca
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "uid=pkidbuser,ou=people,o=ipaca"

Dogtag was stopped, and the new subsystem cert was updated in CS.cfg. It was also imported into the pkidbuser entry to ensure LDAP TLS client authentication continues to work. No further action is taken in relation to the extra cert(s).

INFO: Selftests enabled for subsystems: ca
INFO: Restoring previous LDAP configuration
INFO: Starting the instance with renewed certs

Self-tests are re-enabled and the previous LDAP configuration restored. Python context managers are used to ensure that these steps are performed even when a fatal error occurs.

The end.

Conclusion

The problem of an expired CA certificate (issue #4) has not yet been addressed. It is not the highest priority but it would be nice to have. It is still believed to be a low-effort change so it is likely to be implemented at some stage.

More extensive testing of the tool is needed for renewing system certificates for other Dogtag subsystems—in particular the KRA subsystem.

The enhancements discussed in this post make the cert-fix tool a viable MVP for expired certificate recovery without time-travel. The enhancements are still in review, yet to be merged. That will hopefully happen soon (within a day or so of this post). We are also making a significant effort to backport cert-fix to some earlier branches and make it available on older releases.

As mentioned earlier in the post, we intend to implement a FreeIPA-specific wrapper for cert-fix that can take care of the additional steps required to renew and deploy expired certificates that are part of the FreeIPA system, but are not Dogtag system certificates handled directly by cert-fix. These include LDAP and Apache HTTPD certificates, the IPA RA agent certificate and the Kerberos PKINIT certificate.

March 18, 2019 12:00 AM

March 04, 2019

Fraser Tweedale

Customising Dogtag system certificate lifetimes

Customising Dogtag system certificate lifetimes

Default certificate lifetimes in Dogtag are 20 years for the CA certificate (when self-signed) and about 2 years for other system certificates. These defaults also apply to FreeIPA. It can be desirable to have shorter certificate lifetimes. And although I wouldn’t recommend using longer lifetimes, people sometimes want that.

There is no supported mechanism for customising system certificate validity duration during Dogtag or FreeIPA installation. But it can be done. In this post I’ll explain how.

Profile configuration files

During installation, profile configurations are copied from the RPM install locations under /usr/share to the new Dogtag instance’s configuration directory. If the LDAP profile subsystem is used (FreeIPA uses it) they are further copied from the instance configuration directory into the LDAP database.

There is no facility or opportunity to modify the profiles during installation. So if you want to customise the certificate lifetimes, you have to modify the files under /usr/share.

The following directories contain profile configurations:

/usr/share/pki/ca/profiles/ca/*.cfg

These profile configurations are available during general operation.

/usr/share/pki/ca/conf/*.profile

These are overlay configurations used during installation when issuing system certificates. Each configuration references an underlying profile and can override or extend that configuration.

/usr/share/ipa/profiles/*.cfg

Profiles that are shipped by FreeIPA and imported into Dogtag are defined here. The configurations for the LDAP, Apache HTTPS and KDC certificates are found here.

I’ll explain which configuration file is used for which certificate later on in this post.

Specifying the validity period

The configuration fields for setting the validity period are:

<component>.default.params.range=720
<component>.constraint.params.range=720

where <component> is some key, usually a numeric index, that may be different for different profiles. The actual profile component classes are ValidityDefault and ValidityConstraint, or {CA,User}Validity{Default,Constraint} for some profiles.

The default component sets the default validity period for this profile, whereas the constraint sets the maximum duration in case the user overrides it. Note that if an override configuration overrides the default value such that it exceeds the constraint specified in the underlying configuration, issuance will fail due to constraint violation. It is usually best to specify both the default and constraint together, with the same value.

The default range unit is day, so the configuration above means 720 days. Use the rangeUnit parameter to specify a different unit. The supported units are year, month, day, hour and minute. For example:

<component>.default.params.range=3
<component>.default.params.rangeUnit=month
<component>.constraint.params.range=3
<component>.constraint.params.rangeUnit=month

Which configuration for which certificate?

CA certificate (when self-signed)

/usr/share/pki/ca/conf/caCert.profile

OCSP signing certificate

/usr/share/pki/ca/conf/caOCSPCert.profile

Subsystem certificate

/usr/share/pki/ca/conf/rsaSubsystemCert.profile when using RSA keys (the default)

Dogtag HTTPS certificate

/usr/share/pki/ca/conf/rsaServerCert.profile when using RSA keys (the default)

Audit signing

/usr/share/pki/ca/conf/caAuditSigningCert.profile

IPA RA agent (FreeIPA-specific)

/usr/share/pki/ca/profiles/ca/caServerCert.cfg

Apache and LDAP certificates (FreeIPA-specific)

/usr/share/ipa/profiles/caIPAserviceCert.cfg

KDC certificate (FreeIPA-specific)

/usr/share/ipa/profiles/KDCs_PKINIT_Certs.cfg
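
For example, to give the Apache and LDAP certificates the 4-month lifetime used in the test below, the validity component in /usr/share/ipa/profiles/caIPAserviceCert.cfg would be edited along these lines (the numeric component index may differ in your version):

policyset.serverCertSet.2.default.params.range=4
policyset.serverCertSet.2.default.params.rangeUnit=month
policyset.serverCertSet.2.constraint.params.range=4
policyset.serverCertSet.2.constraint.params.rangeUnit=month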

Testing

I made changes to the files mentioned above, so that certificates would be issued with the following validity periods:

CA 5 years
OCSP 1 year
Subsystem 6 months
HTTPS 3 months
Audit 1 year
IPA RA 15 months
Apache 4 months
LDAP 4 months
KDC 18 months

I installed FreeIPA (with a self-signed CA). After installation completed, I had a look at the certificates that were being tracked by Certmonger. For reference, the installation took place on March 4, 2019 (2019-03-04).

# getcert list |egrep '^Request|certificate:|expires:'
Request ID '20190304044028':
  certificate: type=FILE,location='/var/lib/ipa/ra-agent.pem'
  expires: 2020-06-04 15:40:30 AEST
Request ID '20190304044116':
  certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='auditSigningCert cert-pki-ca',token='NSS Certificate DB'
  expires: 2020-03-04 15:39:53 AEDT
Request ID '20190304044117':
  certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='ocspSigningCert cert-pki-ca',token='NSS Certificate DB'
  expires: 2020-03-04 15:39:53 AEDT
Request ID '20190304044118':
  certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='subsystemCert cert-pki-ca',token='NSS Certificate DB'
  expires: 2019-09-04 15:39:53 AEST
Request ID '20190304044119':
  certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca',token='NSS Certificate DB'
  expires: 2024-03-04 15:39:51 AEDT
Request ID '20190304044120':
  certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='Server-Cert cert-pki-ca',token='NSS Certificate DB'
  expires: 2019-06-04 15:39:53 AEST
Request ID '20190304044151':
  certificate: type=NSSDB,location='/etc/dirsrv/slapd-IPA-LOCAL',nickname='Server-Cert',token='NSS Certificate DB'
  expires: 2019-07-04 15:41:52 AEST
Request ID '20190304044225':
  certificate: type=FILE,location='/var/lib/ipa/certs/httpd.crt'
  expires: 2019-07-04 15:42:26 AEST
Request ID '20190304044234':
  certificate: type=FILE,location='/var/kerberos/krb5kdc/kdc.crt'
  expires: 2020-09-04 15:42:34 AEST

Observe that the certificates have the intended validity periods.

Discussion

The procedure outlined in this post is not officially supported, and not recommended. But the desire to choose different validity periods is sometimes justified, especially for the CA certificate. So should FreeIPA allow customisation of the system certificate validity periods? To what extent?

We need to reduce the default CA validity from 20 years, given the 2048-bit key size. (There is a separate issue to support generating a larger CA signing key, too). Whether the CA validity period should be configurable is another question. My personal opinion is that it makes sense to allow the customer to choose the CA lifetime.

For system certificates, I think that customers should just accept the defaults. PKI systems are trending to shorter lifetimes for end-entity certificates, which is a good thing. For FreeIPA, unfortunately we are still dealing with a lot of certificate renewal issues that arise from the complex architecture. Until we are confident in the robustness of the renewal system, and have observed a reduction in customer issues, it would be a mistake to substantially reduce the validity period for system certificates. Likewise, it is not yet a good idea to let customers choose the certificate validity periods.

On the other hand, the team is considering changing the default validity period of system certificates a little bit, so that different certificates are on different renewal cadences. This would simplify recovery in some scenarios: it is easier to recover when only some of the certificates have expired, instead of all of them at once.

March 04, 2019 12:00 AM

March 01, 2019

Fraser Tweedale

Specifying a CA Subject Key Identifier during Dogtag installation

Specifying a CA Subject Key Identifier during Dogtag installation

When installing Dogtag with an externally-signed CA certificate, it is sometimes necessary to include a specific Subject Key Identifier value in the CSR. In this post I will demonstrate how to do this.

What is a Subject Key Identifier?

The X.509 Subject Key Identifier (SKI) extension declares a unique identifier for the public key in the certificate. It is required on all CA certificates. CAs propagate their own SKI to the Authority Key Identifier (AKI) extension on issued certificates. Together, these facilitate efficient certification path construction; certificate databases can index certificates by SKI.

The SKI must be unique for a given key. Most often it is derived from the public key data using a cryptographic digest, usually SHA-1. But any method of generating a unique value is acceptable.

For example, let’s look at the CA certificate and one of the service certificates in a FreeIPA deployment. The CA is self-signed and therefore contains the same value in both the SKI and AKI extensions:

% openssl x509 -text < /etc/ipa/ca.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: O = IPA.LOCAL 201902271325, CN = Certificate Authority
        Validity
            Not Before: Feb 27 03:30:22 2019 GMT
            Not After : Feb 27 03:30:22 2034 GMT
        Subject: O = IPA.LOCAL 201902271325, CN = Certificate Authority
        Subject Public Key Info:
            < elided >
        X509v3 extensions:
            X509v3 Authority Key Identifier:
                keyid:C9:29:69:D0:14:A4:AB:11:D4:11:B1:35:31:81:08:B6:A9:30:D3:0A

            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Certificate Sign, CRL Sign
            X509v3 Subject Key Identifier:
                C9:29:69:D0:14:A4:AB:11:D4:11:B1:35:31:81:08:B6:A9:30:D3:0A
            Authority Information Access:
                OCSP - URI:http://ipa-ca.ipa.local/ca/ocsp
  ...

Whereas the end entity certificate has the CA’s SKI in its AKI, and its SKI is different:

% sudo cat /var/lib/ipa/certs/httpd.crt | openssl x509 -text
Certificate:
    Data:
      Version: 3 (0x2)
      Serial Number: 9 (0x9)
      Signature Algorithm: sha256WithRSAEncryption
      Issuer: O = IPA.LOCAL 201902271325, CN = Certificate Authority
      Validity
          Not Before: Feb 27 03:32:57 2019 GMT
          Not After : Feb 27 03:32:57 2021 GMT
      Subject: O = IPA.LOCAL 201902271325, CN = f29-0.ipa.local
      Subject Public Key Info:
          < elided >
      X509v3 extensions:
          X509v3 Authority Key Identifier:
              keyid:C9:29:69:D0:14:A4:AB:11:D4:11:B1:35:31:81:08:B6:A9:30:D3:0A

          Authority Information Access:
              OCSP - URI:http://ipa-ca.ipa.local/ca/ocsp

          X509v3 Key Usage: critical
              Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment
          X509v3 Extended Key Usage:
              TLS Web Server Authentication, TLS Web Client Authentication
          X509v3 CRL Distribution Points:

              Full Name:
                URI:http://ipa-ca.ipa.local/ipa/crl/MasterCRL.bin
              CRL Issuer:
                DirName:O = ipaca, CN = Certificate Authority

          X509v3 Subject Key Identifier:
              FE:D2:8A:72:C8:D5:78:79:C9:04:04:A8:39:37:7F:FD:36:E6:E9:D2
          X509v3 Subject Alternative Name:
              DNS:f29-0.ipa.local, othername:<unsupported>, othername:<unsupported>

Most CA programs, including Dogtag, automatically compute a SKI for every certificate being issued. Dogtag computes a SHA-1 hash over the subjectPublicKey value, which is the most common method. The value must be unique, but does not have to be derived from the public key.

It is not required for a self-signed CA certificate to contain an AKI extension. Neither is it necessary to include a SKI in an end entity certificate. But it does not hurt to include them. Indeed it is common (as we see above).

Use case for specifying a SKI

If CAs can automatically compute a SKI, why would you need to specify one?

The use case arises when you’re changing external CAs or switching from self-signed to externally-signed, or vice versa. The new CA might compute SKIs differently from the current CA, but it is important for the renewed CA certificate to keep the same SKI, so that the AKI values on previously issued certificates still match. So it is desirable to include the SKI in the CSR, to indicate to the CA the value that should be used.

Not every CA program will follow the suggestion. Or the behaviour may be configurable, system-wide or per-profile. If you’re using Dogtag / RHCS to sign CA certificates, it is straightforward to define a profile that uses an SKI supplied in the CSR (but that is beyond the scope of this article).

Including an SKI in a Dogtag CSR

At time of writing, this procedure is supported in Dogtag 10.6.9 and later, which is available in Fedora 28 and Fedora 29. It will be supported in a future version of RHEL. The behaviour depends on a recent enhancement to the certutil program, which is part of NSS. That enhancement is not in RHEL 7 yet, hence this Dogtag feature is not yet available on RHEL 7.

When installing Dogtag using the two-step external signing procedure, by default no SKI is included in the CSR. You can change this via the pki_req_ski option. The option is described in the pki_default.cfg(5) man page. There are two ways to use the option, and we will look at each in turn.

Default method

[CA]
pki_req_ski=DEFAULT

This special value will cause the CSR to contain a SKI value computed using the same method Dogtag itself uses (SHA-1 digest). Adding this value resulted in the following CSR data:

Certificate Request:
    Data:
        Version: 1 (0x0)
        Subject: O = IPA.LOCAL 201903011502, CN = Certificate Authority
        Subject Public Key Info:
            < elided >
        Attributes:
        Requested Extensions:
            X509v3 Subject Key Identifier: 
                76:49:AA:B2:08:60:18:C1:6D:AF:2C:28:A0:54:34:77:7E:8F:80:71
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Certificate Sign, CRL Sign

The SKI value is the SHA-1 digest of the public key. Of course, it will be different every time, because a different key will be generated.

Explicit SKI

[CA]
pki_req_ski=<hex data>

An exact SKI value can be specified as a hex-encoded byte string. The datum must not have a leading 0x. I used the following configuration:

[CA]
pki_req_ski=00D06F00D4D06746

With this configuration, the expected SKI value appears in the CSR:

Certificate Request:
    Data:
        Version: 1 (0x0)
        Subject: O = IPA.LOCAL 201903011518, CN = Certificate Authority
        Subject Public Key Info:
            < elided >
        Attributes:
        Requested Extensions:
            X509v3 Subject Key Identifier:
                00:D0:6F:00:D4:D0:67:46
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Certificate Sign, CRL Sign

Renewal

We don’t have direct support for including the SKI in the CSR generated for renewing an externally signed CA. But you can use certutil to create a CSR that includes the desired SKI.
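
For the adventurous, the manual route might look roughly like this (a sketch only: it assumes a certutil new enough to accept --extSKID on a certificate request, and that -k can reference the existing CA signing key; the database path, nickname and subject below are illustrative):

# Sketch: renewal CSR for the existing CA signing key with an explicit SKI.
# Assumes --extSKID is available on -R (recent NSS) and that -k can point
# at the existing key; database path, nickname and subject are examples.
certutil -d /etc/pki/pki-tomcat/alias -R -a -o ca-renewal.csr \
    -k "caSigningCert cert-pki-ca" \
    -s "CN=Certificate Authority,O=EXAMPLE 2019" \
    --extSKID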

It could be worthwhile to enhance Certmonger to automatically include the SKI of the current certificate when it creates a CSR for renewing a tracked certificate.

FreeIPA support

We don’t expose this feature in FreeIPA directly. It can be hacked in pretty easily by modifying the Python code that builds the pkispawn configuration during installation. Alternatively, set the option in the pkispawn default configuration file: /usr/share/pki/server/etc/default.cfg (this is what I did to test the feature).

Changes to be made as part of the upcoming HSM support will, as a pleasant side effect, make it easy to specify or override pkispawn configuration values including pki_req_ski.

March 01, 2019 12:00 AM

February 28, 2019

Fraser Tweedale

Offline expired certificate renewal for Dogtag

Offline expired certificate renewal for Dogtag

The worst has happened. Somehow, certificate renewal didn’t happen when it should have, and now you have expired certificates. Worse, these are Dogtag system certificates; you can’t even start Dogtag to issue new ones! Unfortunately, this situation arises fairly often. Sometimes due to administrator error or extended downtime; sometimes due to bugs. These cases are notoriously difficult (and expensive) to analyse and resolve. It often involves time travel:

  1. Set the system clock to a time setting just before certificates started expiring.
  2. Fix whatever caused renewal not to work in the first place.
  3. Renew expiring certificates.
  4. Reset system clock.

That is the simple case! I have seen much gnarlier scenarios. Ones where multiple times must be visited. Ones where there is no time at which all relevant certs are valid.

It would be nice to avoid these scenarios, and the FreeIPA team continues to work to improve the robustness of certificate renewal. We also have a monitoring / health check solution on the roadmap, so that failure of automated renewal sets off alarms before everything else falls over. But in the meantime, customers and support are still dealing with scenarios like this. Better recovery tools are needed.

And better tools are on the way! Dinesh, one of the Dogtag developers, has built a tool to simplify renewal when your Dogtag CA is offline due to expired system certificates. This post outlines what the tool is, what it does, and my first experiences using it in a FreeIPA deployment. Along the way and especially toward the end of the post, I will discuss the caveats and potential areas for improvement, and FreeIPA-specific considerations.

pki-server cert-fix

The tool is implemented as a subcommand of the pki-server utility, namely cert-fix (and I will use this short name throughout the post). It is implemented in Python, but in some places it calls out to certutil or the Java parts of Dogtag via the HTTP API. The user documentation is maintained in the source repository.

The insight at the core of cert-fix is that even if Dogtag is not running or cannot run, we still have access to the keys needed to issue certificates. We do need to use Dogtag to properly store issued certificates (for revocation purposes) and produce an audit trail. But if needed, we can use the CA signing key to temporarily fudge the important certificates to get Dogtag running again, then re-issue expired system certificates properly.

Assumptions

cert-fix makes the following assumptions about your environment. If these do not hold, then cert-fix, as currently implemented, cannot do its thing.

  • The CA signing certificate is valid.
  • You have a valid admin or agent certificate. In a FreeIPA environment the IPA RA certificate fulfils this role.
  • (indirect) The LDAP server (389 DS) is operational, its certificate is valid, and Dogtag can authenticate to it.

These assumptions have been made for good reasons, but there are several certificate expiry scenarios that breach them. I will discuss in detail later in the post. For now, we must accept them.

What cert-fix does

The cert-fix tool performs the following actions to renew an expired system certificate:

  1. Inspect the system and identify which system certificates need renewing. Or the certificates can be specified on the command line.
  2. If Dogtag’s HTTPS certificate is expired, use certutil commands to issue a new “temporary” certificate. The validity period is three months (from the current time). The serial number of the current (expired) HTTPS certificate is reused (a big X.509 no-no, but operationally no big deal in this scenario). There is no audit trail and the certificate will not appear in the LDAP database.
  3. Disable the startup self-test for affected subsystems, then start Dogtag.
  4. For each target certificate, renew the certificate via the API, using the given credential. Validity periods and other characteristics are determined by the relevant profiles. Serial numbers are chosen in the usual manner, the certificates appear in LDAP and there is an audit trail.
  5. Stop Dogtag.
  6. For each target certificate, import the new certificate into Dogtag’s NSSDB.
  7. Re-enable self-test for affected subsystems and start Dogtag.

Using cert-fix

There are a couple of ways to try out the tool—without waiting for certificates to expire, that is. One way is to roll your system clock forward, beyond the expiry date of one or more certificates. Another possibility is to modify a certificate profile used for a system certificate so that it will be issued with a very short validity period.

I opted for the latter option. I manually edited the default profile configuration, so that Dogtag’s OCSP and HTTPS certificates would be issued with a validity period of 15 minutes. By the time I installed FreeIPA, grabbed a coffee and read a few emails, the certificates had expired. Certmonger didn’t even attempt to renew them. Dogtag was still running and working properly, but ipactl restart put Dogtag, and the whole FreeIPA deployment, out of action.

I used pki-server cert-find to have a peek at Dogtag’s system certificates:

[root@f29-0 ca]# pki-server cert-find
  Cert ID: ca_signing
  Nickname: caSigningCert cert-pki-ca
  Serial Number: 0x1
  Subject DN: CN=Certificate Authority,O=IPA.LOCAL 201902271325
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201902271325
  Not Valid Before: Wed Feb 27 14:30:22 2019
  Not Valid After: Mon Feb 27 14:30:22 2034

  Cert ID: ca_ocsp_signing
  Nickname: ocspSigningCert cert-pki-ca
  Serial Number: 0x2
  Subject DN: CN=OCSP Subsystem,O=IPA.LOCAL 201902271325
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201902271325
  Not Valid Before: Wed Feb 27 14:30:24 2019
  Not Valid After: Wed Feb 27 14:45:24 2019

  Cert ID: sslserver
  Nickname: Server-Cert cert-pki-ca
  Serial Number: 0x3
  Subject DN: CN=f29-0.ipa.local,O=IPA.LOCAL 201902271325
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201902271325
  Not Valid Before: Wed Feb 27 14:30:24 2019
  Not Valid After: Wed Feb 27 14:45:24 2019

  Cert ID: subsystem
  Nickname: subsystemCert cert-pki-ca
  Serial Number: 0x4
  Subject DN: CN=CA Subsystem,O=IPA.LOCAL 201902271325
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201902271325
  Not Valid Before: Wed Feb 27 14:30:24 2019
  Not Valid After: Tue Feb 16 14:30:24 2021

  Cert ID: ca_audit_signing
  Nickname: auditSigningCert cert-pki-ca
  Serial Number: 0x5
  Subject DN: CN=CA Audit,O=IPA.LOCAL 201902271325
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201902271325
  Not Valid Before: Wed Feb 27 14:30:24 2019
  Not Valid After: Tue Feb 16 14:30:24 2021

Note the Not Valid After times for the ca_ocsp_signing and sslserver certificates. These are certificates we must renew.

Preparing the agent certificate

The cert-fix command requires an agent certificate. We will use the IPA RA certificate. The pki-server CLI tool needs an NSSDB with the agent key and certificate. So we have to set that up. First initialise the NSSDB:

[root@f29-0 ~]# mkdir ra-nssdb
[root@f29-0 ~]# cd ra-nssdb
[root@f29-0 ra-nssdb]# certutil -d . -N
Enter a password which will be used to encrypt your keys.
The password should be at least 8 characters long,
and should contain at least one non-alphabetic character.

Enter new password: XXXXXXXX
Re-enter password: XXXXXXXX

Then create a PKCS #12 file containing the required key and certificates:

[root@f29-0 ra-nssdb]# openssl pkcs12 -export \
  -inkey /var/lib/ipa/ra-agent.key \
  -in /var/lib/ipa/ra-agent.pem \
  -name "ra-agent" \
  -certfile /etc/ipa/ca.crt > ra-agent.p12
Enter Export Password:
Verifying - Enter Export Password:

Import it into the NSSDB, and fix up trust flags on the IPA CA certificate:

[root@f29-0 ra-nssdb]# pk12util -d . -i ra-agent.p12
Enter Password or Pin for "NSS Certificate DB":
Enter password for PKCS12 file:
pk12util: PKCS12 IMPORT SUCCESSFUL

[root@f29-0 ra-nssdb]# certutil -d . -L

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

ra-agent                                                     u,u,u
Certificate Authority - IPA.LOCAL 201902271325               ,,

[root@f29-0 ra-nssdb]# certutil -d . -M \
    -n 'Certificate Authority - IPA.LOCAL 201902271325' \
    -t CT,C,C
Enter Password or Pin for "NSS Certificate DB":

[root@f29-0 ra-nssdb]# certutil -d . -L

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

ra-agent                                                     u,u,u
Certificate Authority - IPA.LOCAL 201902271325               CT,C,C

Running cert-fix

Let’s look at the cert-fix command options:

[root@f29-0 ra-nssdb]# pki-server cert-fix --help
Usage: pki-server cert-fix [OPTIONS]

      --cert <Cert ID>            Fix specified system cert (default: all certs).
  -i, --instance <instance ID>    Instance ID (default: pki-tomcat).
  -d <NSS database>               NSS database location (default: ~/.dogtag/nssdb)
  -c <NSS DB password>            NSS database password
  -C <path>                       Input file containing the password for the NSS database.
  -n <nickname>                   Client certificate nickname
  -v, --verbose                   Run in verbose mode.
      --debug                     Run in debug mode.
      --help                      Show help message.

It’s not a good idea to put passphrases on the command line in the clear, so let’s write the NSSDB passphrase to a file:

[root@f29-0 ra-nssdb]# cat > pwdfile.txt
XXXXXXXX
^D

Finally, I was ready to execute cert-fix:

[root@f29-0 ra-nssdb]# pki-server cert-fix \
    -d . -C pwdfile.txt -n ra-agent \
    --cert sslserver --cert ca_ocsp_signing \
    --verbose

Running with --verbose causes INFO and higher-level log messages to be printed to the terminal. Running with --debug includes DEBUG messages. If neither of these is used, nothing is output (unless there’s an error). So I recommend running with --verbose.

So, what happened? Unfortunately I ran into several issues.

389 DS not running

The first issue was trivial, but likely to occur if you have to cert-fix a FreeIPA deployment. The ipactl [re]start command will shut down every component if any component failed to start. Dogtag didn’t start, therefore ipactl shut down 389 DS too. As a consequence, Dogtag failed to initialise after cert-fix started it, and the command failed.

So, before running cert-fix, make sure LDAP is working properly. To start it, use systemctl instead of ipactl:

# systemctl start dirsrv@YOUR-REALM

Connection refused

One issue I encountered was that a slow startup of Dogtag caused failure of the tool. cert-fix does not wait for Dogtag to start up properly. It just ploughs ahead—only to encounter ConnectionRefusedError.

I worked around this—temporarily—by adding a sleep after cert-fix starts Dogtag. A proper fix will require a change to the code. cert-fix should perform a server status check, retrying until it succeeds or times out.
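
The check itself is simple enough; from the shell it would look something like this (a sketch only, assuming Dogtag’s usual getStatus endpoint on port 8443):

# Poll the CA subsystem until it responds, giving up after about two minutes.
# -k is used because the temporary HTTPS certificate may not verify cleanly.
for attempt in $(seq 1 24); do
    curl -skf "https://$(hostname -f):8443/ca/admin/ca/getStatus" >/dev/null && break
    sleep 5
done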

TLS handshake failure

The next error I encountered was a TLS handshake failure:

urllib3.exceptions.MaxRetryError:
  HTTPSConnectionPool(host='f29-0.ipa.local', port=8443): Max retries
  exceeded with url: /ca/rest/certrequests/profiles/caManualRenewal
  (Caused by SSLError(SSLError(185073780, '[X509: KEY_VALUES_MISMATCH]
  key values mismatch (_ssl.c:3841)')))

I haven’t worked out yet what is causing this surprising error. But I wasn’t the first to encounter it. A comment in the Bugzilla ticket indicated that the workaround was to remove the IPA CA certificate from the client NSSDB. This I did:

[root@f29-0 ra-nssdb]# certutil -d . -D \
    -n "Certificate Authority - IPA.LOCAL 201902271325"

After this, my next attempt at running cert-fix succeeded.

Results

Looking at the previously expired target certificates, observe that the certificates have been updated. They have new serial numbers, and expire in 15 months:

[root@f29-0 ra-nssdb]# certutil -d /etc/pki/pki-tomcat/alias \
    -L -n 'Server-Cert cert-pki-ca' | egrep "Serial|Not After"
      Serial Number: 12 (0xc)
          Not After : Wed May 27 12:45:25 2020

[root@f29-0 ra-nssdb]# certutil -d /etc/pki/pki-tomcat/alias \
    -L -n 'ocspSigningCert cert-pki-ca' | egrep "Serial|Not After"
      Serial Number: 13 (0xd)
          Not After : Wed May 27 12:45:28 2020

Looking at the output of getcert list for the target certificates, we see that Certmonger has not picked these up (some lines removed):

[root@f29-0 ra-nssdb]# getcert list -i 20190227033149
Number of certificates and requests being tracked: 9.
Request ID '20190227033149':
   status: CA_UNREACHABLE
   ca-error: Internal error
   stuck: no
   CA: dogtag-ipa-ca-renew-agent
   issuer: CN=Certificate Authority,O=IPA.LOCAL 201902271325
   subject: CN=OCSP Subsystem,O=IPA.LOCAL 201902271325
   expires: 2019-02-27 14:45:24 AEDT
   eku: id-kp-OCSPSigning

[root@f29-0 ra-nssdb]# getcert list -i 20190227033152
Number of certificates and requests being tracked: 9.
Request ID '20190227033152':
   status: CA_UNREACHABLE
   ca-error: Internal error
   stuck: no
   CA: dogtag-ipa-ca-renew-agent
   issuer: CN=Certificate Authority,O=IPA.LOCAL 201902271325
   subject: CN=f29-0.ipa.local,O=IPA.LOCAL 201902271325
   expires: 2019-02-27 14:45:24 AEDT
   dns: f29-0.ipa.local
   key usage: digitalSignature,keyEncipherment,dataEncipherment
   eku: id-kp-serverAuth

Restarting Certmonger (systemctl restart certmonger) resolved the discrepancy.

Finally, ipactl restart puts everything back online. cert-fix has saved the day!

[root@f29-0 ra-nssdb]# ipactl restart
Restarting Directory Service
Starting krb5kdc Service
Starting kadmin Service
Starting httpd Service
Starting ipa-custodia Service
Starting pki-tomcatd Service
Starting ipa-otpd Service
ipa: INFO: The ipactl command was successful

Issues and caveats

Besides the issues already covered, there are several scenarios that cert-fix cannot handle.

Expired CA certificate

Due to the long validity period of a typical CA certificate, the assumption that the CA certificate is valid is the least risky of cert-fix’s assumptions. But it is still not a safe assumption.

The most common way this assumption is violated is with externally-signed CA certificates. For example, the FreeIPA CA in your organisation is signed by an Active Directory CA, with a validity period of two years. Things get overlooked and suddenly your FreeIPA CA has expired. It may take some time for the upstream CA administrators to issue a new certificate. In the meantime, you want to get your FreeIPA/Dogtag CA back up.

Right now cert-fix doesn’t handle this scenario. I think it should. As far as I can tell, this should be straightforward to support. Unlike the next few issues…

Agent certificate expiry

This concerns the assumption that you have a valid agent certificate. Dogtag requires authentication to perform privileged operations like certificate issuance. Also, the authenticated user must be included in audit events. cert-fix must issue certificates properly (with limited temporary fudging tolerated for operational efficacy), therefore there must be an agent credential. And if your agent credential is a certificate, it must be valid. So if your agent certificate is expired, it’s Catch-22. That is why the tool, as currently implemented, must assume you have a valid, non-expired agent certificate.

In some deployments the agent certificate is renewed on a different cadence from subsystem certificates. In that case, this scenario is less likely to occur—but still entirely possible! The assumption is bad.

In my judgement it is fairly important to find a workaround for this. One idea could be to talk directly to LDAP and set a randomly-generated password on an agent account, and use that to authenticate. After the tool exits, the passphrase is forgotten. This approach means cert-fix needs a credential and privileges to perform those operations in LDAP.
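
To make the idea concrete, the LDAP half might look something like this (purely a sketch: the ldapi socket path and the uid=ipara agent entry are what a typical FreeIPA deployment uses, but check them before attempting anything like this):

# Reset the agent entry's password directly in LDAP, bypassing Dogtag.
# Socket path, DN and passphrase are illustrative only.
ldapmodify -H ldapi://%2Fvar%2Frun%2Fslapd-IPA-LOCAL.socket -Y EXTERNAL <<EOF
dn: uid=ipara,ou=people,o=ipaca
changetype: modify
replace: userPassword
userPassword: <randomly-generated-passphrase>
EOF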

Speaking of LDAP…

389 DS certificate authentication

In FreeIPA deployments, Dogtag is configured to use the subsystem certificate to bind (authenticate) to the LDAP server. If the subsystem certificate is expired, 389 DS will reject the certificate; the connection fails and Dogtag cannot start.

A workaround for this may be to temporarily reconfigure Dogtag to use a password to authenticate to LDAP. Then, after the new subsystem certificate has been issued, it must be added to the pkidbuser entry in LDAP and certificate authentication reinstated.
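
For the curious, the relevant settings live in /etc/pki/pki-tomcat/ca/CS.cfg and the change would look roughly like this (a sketch from memory; confirm the parameter names against your own CS.cfg and keep a backup before touching anything):

# Switch the internal database connection from certificate to password auth.
internaldb.ldapauth.authtype=BasicAuth
internaldb.ldapauth.bindDN=cn=Directory Manager
internaldb.ldapauth.bindPWPrompt=internaldb
# ...with a matching line in /etc/pki/pki-tomcat/password.conf:
#   internaldb=<password>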

This is not a FreeIPA-specific consideration. Using TLS client authentication to bind to LDAP is a supported configuration in Dogtag / RHCS. So we should probably support it in cert-fix too, somehow, since the point of the tool is to avoid complex manual procedures in recovering from expired system certificates.

389 DS service certificate expiry

You know the tune by now… if this certificate is expired, Dogtag can’t talk to LDAP and can’t start, therefore a new LDAP certificate can’t be issued.

Issuing a temporary certificate with the same serial number may be the best way forward here, like what we do for the Dogtag HTTPS certificate.

Re-keying

…is not supported, but it is a possible future enhancement.

Serial number reuse

Re-using a serial number is prohibited by the X.509 standard. Although the re-issued HTTPS certificate is supposed to be temporary, what if it did leak out? For example, another client that contacted Dogtag while that certificate is in use could log it to a Certificate Transparency log (not a public one, unless your Dogtag CA is chained to a publicly trusted CA). If this occurred, there would be a record that the CA had misbehaved.

What are the ramifications? If this happened in the public PKI, the offending CA would at best get a harsh and very public admonishment, and be put on notice. But trust store vendors might just straight up wash their hands of you and yank trust.

In a private PKI is it such a big deal? Given our use case—the same subject names are used—probably not. But I leave it as an open topic to ponder how this might backfire.

Conclusion

In this post I introduced the pki-server cert-fix subcommand. The purpose of this tool is to simplify and speed up recovery when Dogtag system certificates have expired.

It does what it says on the tin, with a few rough edges and, right now, a lot of caveats. The fundamentals are very good, but I think we need to address a number of these caveats for cert-fix to be generally useful, especially in a FreeIPA context. Based on my early experiences and investigation, my suggested priorities are:

  1. Workaround for when the agent certificate is expired. This can affect every kind of deployment and the reliance on a valid agent certificate is a significant limitation.
  2. Workaround for expired subsystem certificate when TLS client authentication is used to bind to LDAP. This affects all FreeIPA deployments (standalone Dogtag deployments less commonly).
  3. Support renewing the CA certificate in cert-fix. A degree of sanity checking or confirmation may be reasonable (e.g. it must be explicitly listed on the CLI as a --cert option).
  4. Investigate ways to handle expired LDAP certificate, if issued by Dogtag. In some deployments, including some FreeIPA deployments, the LDAP certificate is not issued by Dogtag, so the risk is not universal.

In writing this post I by no means wish to diminish Dinesh’s work. On the contrary, I’m impressed with what the tool already can do! And, mea culpa, I have taken far too long to test this tool and evaluate it in a FreeIPA setting. Now that I have a clearer picture, I see that I will be very busy making the tool more capable and ready for action in FreeIPA scenarios.

February 28, 2019 12:00 AM

February 25, 2019

William Brown

Programming Lessons and Methods

Programming Lessons and Methods

Everyone has their own lessons and methods that they use when approaching programming. These are the lessons that I have learnt, which I think are the most important when it comes to design, testing and communication.

Comments and Design

Programming is the art of writing human-readable code that a machine will eventually run. Your program needs to be reviewed, discussed and parsed by another human. That means you need to write your program in a way they can understand first.

Rather than rushing into code, and hacking until it works, I find it’s great to start with comments such as:

fn data_access(search: Search) -> Type {
    // First check the search is valid
    //  * No double terms
    //  * All schema is valid

    // Retrieve our data based on the search

    // if debug, do an un-indexed assert the search matches

    // Do any needed transforms

    // Return the data
}

After that, I walk away, think about the issue, come back, maybe tweak these comments. When I eventually fill in the code in between, I leave all the comments in place. This really helps my future self understand what I was thinking, but it also helps other people understand too.

State Machines

State machines are a way to design and reason about the states a program can be in. They allow exhaustive representations of all possible outcomes of a function. A simple example is a microwave door.

  /----\            /----- close ----\          /-----\
  |     \          /                 v         v      |
  |    -------------                ---------------   |
open   | Door Open |                | Door Closed |  close
  |    -------------                ---------------   |
  |    ^          ^                  /          \     |
  \---/            \------ open ----/            \----/

When the door is open, opening it again does nothing. Only when the door is open, and we close the door (an event), does the door close (a transition). Once closed, the door cannot be closed any more (the event does nothing). It’s only when we open the door again that a state change can occur.

There is much more to state machines than this, but they allow us as humans to reason about our designs and model our programs to have all possible outcomes considered.

Zero, One and Infinite

In mathematics there are only three numbers that matter. Zero, One and Infinite. It turns out the same is true in a computer too.

When we are making a function, we can define limits in these terms. For example:

fn thing(argument: Type)

In this case, argument is “One” thing, and must be one thing.

fn thing(argument: Option<Type>)

Now we have argument as an option, so it’s “Zero” or “One”.

fn thing(argument: Vec<Type>)

Now we have argument as vec (array), so it’s “Zero” to “Infinite”.

When we think about this, our functions have to handle these cases properly. We don’t write functions that take a vec with only two items, we write a function with two arguments where each one must exist. It’s hard to handle “two” - it’s easy to handle two cases of “one”.

It also is a good guide for how to handle data sets, assuming they could always be infinite in size (or at least any arbitrary size).

You can then apply this to tests. In a test given a function of:

fn test_me(a: Option<Type>, b: Vec<Type>)

We know we need to test permutations of:

  • a is “Zero” or “One” (Some, None)
  • b is “Zero”, “One” or “Infinite” (.len() == 0, .len() == 1, .len() > 1)

Note: Most languages don’t have an array type that is “One to Infinite”, IE non-empty. If you want this condition (at least one item), you have to assert it yourself on top of the type system.

Correct, Simple, Fast

Finally, we can put all these above tools together and apply a general philosophy. When writing a program, first make it correct, then simplify the program, then make it fast.

If you don’t do it in this order you will hit barriers - social and technical. For example, if you make something fast, simple, correct, you will likely have correctness issues that can not be fixed without a decrease in performance. People don’t like it when you introduce a patch that drops performance, so as a result correctness is now sacrificed. (Spectre anyone?)

If you make something too simple, you may never be able to make it correctly handle all cases that exist in your application - likely facilitating a future rewrite to make it correct.

If you do correct, fast, simple, then your program will be correct, and fast, but hard for a human to understand. Because programming is the art of communicating intent to a person, sacrificing simplicity in favour of speed will make it hard to involve new people, and to educate and mentor them in the development of your project.

  • Correct: Does it behave correctly, handle all states and inputs correctly?
  • Simple: Is it easy to comprehend and follow for a human reader?
  • Fast: Is it performant?

February 25, 2019 01:00 PM

February 18, 2019

Fraser Tweedale

IP address SAN support in FreeIPA

IP address SAN support in FreeIPA

The X.509 Subject Alternative Name (SAN) certificate extension carries subject names that cannot (or cannot easily) be expressed in the Subject Distinguished Name field. The extension supports various name types, including DNS names (the most common), IP addresses, email addresses (for users) and Kerberos principal names, among others.

When issuing a certificate, FreeIPA has to validate that requested SAN name values match the principal to whom the certificate is being issued. There has long been support for DNS names, Kerberos and Microsoft principal names, and email addresses. Over the years we have received many requests to support IP address SAN names. And now we are finally adding support!

In this post I will explain the context and history of this feature, and demonstrate how to use it. At time of writing the work is not yet merged, but substantive changes are not expected.

Acknowledgement

First and foremost, I must thank Ian Pilcher who drove this work. DNS name validation is tricky, but Ian proposed a regime that was acceptable to the FreeIPA team from a philosophical and security standpoint. Then he cut the initial patch for the feature. The work was of a high quality; my subsequent changes and enhancements were minor. Above all, Ian and others had great patience as the pull request sat in limbo for nearly a year! Thank you Ian.

IP address validation

There is a reason we kicked the SAN IP address support can down the road for so long. Unlike some name types, validating IP addresses is far from straightforward.

Let’s first consider the already-supported name types. FreeIPA is an identity management system. It knows the various identities (principal name, email address, hostname) of the subjects/principals it knows about. Validation of these name types reduces to the question “does this name belong to the subject principal object?”

For IP addresses it is not so simple. There are several complicating factors:

  • FreeIPA can manage DNS entries, but it doesn’t have to. If FreeIPA is not a source of authoritative DNS information, should it trust information from external resolvers? Only with DNSSEC?
  • There may be multiple, conflicting sources of DNS records. The DNS view presented to FreeIPA clients may differ from that seen by other clients. The FreeIPA DNS may “shadow” public (or other) DNS records.
  • For validation, what should be the treatment of forward (A / AAAA) and reverse (PTR) records pertaining to the names involved?
  • Should CNAME records be followed? How many times?
  • The issued certificate may be used in or presented to clients in environments with a different DNS view from the environment in which validation was performed.
  • Does the request have to come from, or does the requesting entity have to prove control of, the IP address(es) requested for inclusion in the certificate?
  • IP addresses often change, and are reassigned much more frequently than the typical lifetime of a certificate.
  • If you query external DNS systems, how do you handle failures or slowness?
  • The need to mitigate DNS or BGP poisoning attacks

Taking these factors into account, it is plain to see why we put this feature off for so long. It is just hard to determine what the correct behaviour should be. Nevertheless use cases exist so the feature request is legitimate. The difference with Ian's RFE was that he proposed a strict validation regime that only uses data defined in FreeIPA. It is a fair assumption that the data managed by a FreeIPA instance is trustworthy. That assumption, combined with some sanity checks, gives the validation requirements:

  1. Only FreeIPA-managed DNS records are considered. There is no communication with external DNS resolvers.
  2. For each IP address in the SAN, there is a DNS name in the SAN that resolves to it. (As an implementation decision, we permit one level of CNAME indirection).
  3. For each IP address in the SAN, there is a valid PTR (reverse DNS) record.
  4. SAN IP addresses are only supported for host and service principals.

Requirement 1 avoids dealing with any conflicts or communication issues with external resolvers. Requirements 2 and 3 together enforce a tight association between the subject principal (every DNS name is verified to belong to it) and the IP address (through forward and reverse resolution to the DNS name(s)).

Caveats and limitations

FreeIPA’s SAN IP address validation regime leads to the following caveats and limitations:

  • The FreeIPA DNS component must be used. (It can be enabled during installation, or at any time after installation.)
  • Forward and reverse records of addresses to be included in certificates must be added and maintained.
  • SAN IP addresses must be accompanied by at least one DNS name. Requests with only IP addresses will be rejected.

SAN IP address names in general have some limitations, too:

  • The addresses in the certificate were correct at validation time, but might have changed. The only mitigations are to use short-lived certificates, or revoke certificates if DNS changes render them invalid. There is no detection or automation to assist with that.
  • The certificate could be misused by services in other networks with the same IP address. A well-behaved client would still have to trust the FreeIPA CA in order for this impersonation attack to work.

Comparison with the public PKI

SAN IP address names are supported by browsers. The CA/Browser Forum’s Baseline Requirements permit publicly-trusted CAs to issue end-entity certificates with SAN IP address values. CAs have to verify that the applicant controls (or has been granted the right to use) the IP address. There are several acceptable verification methods:

  1. The applicant makes some agreed-upon change to a network resource at the IP address in question;
  2. Consulting IANA or regional NIC assignment information;
  3. Performing reverse lookup then verifying control over the DNS name.

The IETF Automated Certificate Management Environment (ACME) working group has an Internet-Draft for automated IP address validation in the ACME protocol. It defines an automated approach to method 1 above. SAN IP addresses are not yet supported by the most popular ACME CA, Let’s Encrypt (and might never be).

Depending on an organisation’s security goals, the verification methods mentioned above may or may not be appropriate for enterprise use (i.e. behind the firewall). Likewise, the decision about whether a particular kind of validation could or should be automated might have different answers for different organisations. It is not really a question of technical constraints; rather, one of philosophy and security doctrine. When it comes to certificate request validation, the public PKI and FreeIPA are asking different questions:

  • FreeIPA asks: does the indicated subject principal own the requested names?
  • The public PKI asks: does the (potentially anonymous) applicant control the names they’re requesting?

In a few words, it’s ownership versus control. In the future it might be possible for a FreeIPA CA to ask the latter question and issue certificates (or not) accordingly. But that isn’t the focus right now.

Demonstration

Preliminaries

The scene is set. Let’s see this feature in action! The domain of my FreeIPA deployment is ipa.local. I will add a host called iptest.ipa.local, with the IP address 192.168.2.1. The first step is to add the reverse zone for this IP address:

% ipa dnszone-add --name-from-ip 192.168.2.1
Zone name [2.168.192.in-addr.arpa.]:
  Zone name: 2.168.192.in-addr.arpa.
  Active zone: TRUE
  Authoritative nameserver: f29-0.ipa.local.
  Administrator e-mail address: hostmaster
  SOA serial: 1550454790
  SOA refresh: 3600
  SOA retry: 900
  SOA expire: 1209600
  SOA minimum: 3600
  BIND update policy: grant IPA.LOCAL krb5-subdomain 2.168.192.in-addr.arpa. PTR;
  Dynamic update: FALSE
  Allow query: any;
  Allow transfer: none;

If the reverse zone for the IP address already exists, there would be no need to do this first step.

Next I add the host entry. Supplying --ip-address causes forward and reverse records to be added for the supplied address (assuming the relevant zones are managed by FreeIPA):

% ipa host-add iptest.ipa.local \
      --ip-address 192.168.2.1
-----------------------------
Added host "iptest.ipa.local"
-----------------------------
  Host name: iptest.ipa.local
  Principal name: host/iptest.ipa.local@IPA.LOCAL
  Principal alias: host/iptest.ipa.local@IPA.LOCAL
  Password: False
  Keytab: False
  Managed by: iptest.ipa.local

CSR generation

There are several options for creating a certificate signing request (CSR) with IP addresses in the SAN extension.

  • Lots of devices (routers, middleboxes, etc) generate CSRs containing their IP address. This is the significant driving use case for this feature, but there’s no point going into details because every device is different.
  • The Certmonger utility makes it easy to add DNS names and IP addresses to a CSR, via command line arguments. Several other name types are also supported. See getcert-request(1) for details.
  • OpenSSL requires a config file to specify SAN values for inclusion in CSRs and certificates. See req(1) and x509v3_config(5) for details (a minimal sketch follows after this list).
  • The NSS certutil(1) command provides the --extSAN option for specifying SAN names, including DNS names and IP addresses.
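
For the OpenSSL route just mentioned, a minimal config might look like the following (a sketch; the file and section names are arbitrary):

# san.cnf -- request-only config adding SAN values to a CSR
[ req ]
distinguished_name = dn
req_extensions = san_ext
[ dn ]
[ san_ext ]
subjectAltName = DNS:iptest.ipa.local, IP:192.168.2.1

# usage (key generation elided):
#   openssl req -new -key key.pem -subj "/CN=iptest.ipa.local" \
#       -config san.cnf -out ip.csr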

For this demonstration I use NSS and certutil. First I initialise a new certificate database:

% mkdir nssdb ; cd nssdb ; certutil -d . -N
Enter a password which will be used to encrypt your keys.
The password should be at least 8 characters long,
and should contain at least one non-alphabetic character.

Enter new password:
Re-enter password:

Next, I generate a key and create a CSR with the desired names in the SAN extension. We do not specify a key type or size, so we get the default (2048-bit RSA).

% certutil -d . -R -a -o ip.csr \
      -s CN=iptest.ipa.local \
      --extSAN dns:iptest.ipa.local,ip:192.168.2.1
Enter Password or Pin for "NSS Certificate DB":

A random seed must be generated that will be used in the
creation of your key.  One of the easiest ways to create a
random seed is to use the timing of keystrokes on a keyboard.

To begin, type keys on the keyboard until this progress meter
is full.  DO NOT USE THE AUTOREPEAT FUNCTION ON YOUR KEYBOARD!


Continue typing until the progress meter is full:

|************************************************************|

Finished.  Press enter to continue:


Generating key.  This may take a few moments...

The output file ip.csr contains the generated CSR. Let’s use OpenSSL to pretty-print it:

% openssl req -text < ip.csr
Certificate Request:
    Data:
        Version: 1 (0x0)
        Subject: CN = iptest.ipa.local
        Subject Public Key Info:
            < elided >
        Attributes:
        Requested Extensions:
            X509v3 Subject Alternative Name:
                DNS:iptest.ipa.local, IP Address:192.168.2.1
    Signature Algorithm: sha256WithRSAEncryption
         < elided >

It all looks correct.

Issuing the certificate

I use the ipa cert-request command to request a certificate. The host iptest.ipa.local is the subject principal. The default profile is appropriate.

% ipa cert-request ip.csr \
      --principal host/iptest.ipa.local \
      --certificate-out ip.pem
  Issuing CA: ipa
  Certificate: < elided >
  Subject: CN=iptest.ipa.local,O=IPA.LOCAL 201902181108
  Subject DNS name: iptest.ipa.local
  Issuer: CN=Certificate Authority,O=IPA.LOCAL 201902181108
  Not Before: Mon Feb 18 03:24:48 2019 UTC
  Not After: Thu Feb 18 03:24:48 2021 UTC
  Serial number: 10
  Serial number (hex): 0xA

The command succeeded. As requested, the issued certificate has been written to ip.pem. Again we’ll use OpenSSL to inspect it:

% openssl x509 -text < ip.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 10 (0xa)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: O = IPA.LOCAL 201902181108, CN = Certificate Authority
        Validity
            Not Before: Feb 18 03:24:48 2019 GMT
            Not After : Feb 18 03:24:48 2021 GMT
        Subject: O = IPA.LOCAL 201902181108, CN = iptest.ipa.local
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    < elided >
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:
                keyid:70:C0:D3:02:EA:88:4A:4D:34:4C:84:CD:45:5F:64:8A:0B:59:54:71

            Authority Information Access:
                OCSP - URI:http://ipa-ca.ipa.local/ca/ocsp

            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 CRL Distribution Points:

                Full Name:
                  URI:http://ipa-ca.ipa.local/ipa/crl/MasterCRL.bin
                CRL Issuer:
                  DirName:O = ipaca, CN = Certificate Authority

            X509v3 Subject Key Identifier:
                3D:A9:7E:E3:05:D6:03:6A:9E:85:BB:72:69:E1:E7:11:92:6F:29:08
            X509v3 Subject Alternative Name:
                DNS:iptest.ipa.local, IP Address:192.168.2.1
    Signature Algorithm: sha256WithRSAEncryption
         < elided >

We can see that the Subject Alternative Name extension is present and includes the expected values.

Error scenarios

It’s nice to see that we can get a certificate with IP address names. But it’s more important to know that we cannot get an IP address certificate when the validation requirements are not satisfied. I’ll run through a number of scenarios and show the results (without showing the whole procedure, which would repeat a lot of information).

If we omit the DNS name from the SAN extension, there is nothing linking the IP address to the subject principal and the request will be rejected. Note that the Subject DN Common Name (CN) attribute is ignored for the purposes of SAN IP address validation. The CSR was generated using --extSAN ip:192.168.2.1.

% ipa cert-request ip-bad.csr --principal host/iptest.ipa.local
ipa: ERROR: invalid 'csr': IP address in
  subjectAltName (192.168.2.1) unreachable from DNS names

If we reinstate the DNS name but add an extra IP address that does not relate to the hostname, the request gets rejected. The CSR was generated using --extSAN dns:iptest.ipa.local,ip:192.168.2.1,ip:192.168.2.2.

% ipa cert-request ip-bad.csr --principal host/iptest.ipa.local
ipa: ERROR: invalid 'csr': IP address in
  subjectAltName (192.168.2.2) unreachable from DNS names

Requesting a certificate for a user principal fails. The CSR has Subject DN CN=alice and the SAN extension contains an IP address. The user principal alice does exist.

% ipa cert-request ip-bad.csr --principal alice
ipa: ERROR: invalid 'csr': subject alt name type
  IPAddress is forbidden for user principals

Let’s return to our original, working CSR. If we alter the relevant PTR record so that it no longer points to a DNS name in the SAN (or the canonical name thereof), the request will fail:

% ipa dnsrecord-mod 2.168.192.in-addr.arpa. 1 \
      --ptr-rec f29-0.ipa.local.
  Record name: 1
  PTR record: f29-0.ipa.local.

% ipa cert-request ip.csr --principal host/iptest.ipa.local
ipa: ERROR: invalid 'csr': IP address in
  subjectAltName (192.168.2.1) does not match A/AAAA records

Similarly if we delete the PTR record, the request fails (with a different message):

% ipa dnsrecord-del 2.168.192.in-addr.arpa. 1 \
      --ptr-rec f29-0.ipa.local.
------------------
Deleted record "1"
------------------

% ipa cert-request ip.csr --principal host/iptest.ipa.local
ipa: ERROR: invalid 'csr': IP address in
  subjectAltName (192.168.2.1) does not have PTR record

IPv6

Assuming the relevant reverse zone is managed by FreeIPA and contains the correct records, FreeIPA can issue certificates with IPv6 names. First I have to add the relevant zones and records. I’m using the machine’s link-local address but the commands will be similar for other IPv6 addresses.

% ipa dnsrecord-mod ipa.local. iptest \
      --a-rec=192.168.2.1 \
      --aaaa-rec=fe80::8f18:bdab:4299:95fa
  Record name: iptest
  A record: 192.168.2.1
  AAAA record: fe80::8f18:bdab:4299:95fa

% ipa dnszone-add \
      --name-from-ip fe80::8f18:bdab:4299:95fa
Zone name [0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa.]:
  Zone name: 0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa.
  Active zone: TRUE
  Authoritative nameserver: f29-0.ipa.local.
  Administrator e-mail address: hostmaster
  SOA serial: 1550468242
  SOA refresh: 3600
  SOA retry: 900
  SOA expire: 1209600
  SOA minimum: 3600
  BIND update policy: grant IPA.LOCAL krb5-subdomain 0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa. PTR;
  Dynamic update: FALSE
  Allow query: any;
  Allow transfer: none;

% ipa dnsrecord-add \
      0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa. \
      a.f.5.9.9.9.2.4.b.a.d.b.8.1.f.8 \
      --ptr-rec iptest.ipa.local.
  Record name: a.f.5.9.9.9.2.4.b.a.d.b.8.1.f.8
  PTR record: iptest.ipa.local.

With these in place I’ll generate the CSR and issue the certificate. (This time I’ve used the -f and -z options to reduce user interaction.)

% certutil -d . -f pwdfile.txt \
    -z <(dd if=/dev/random bs=2048 count=1 status=none) \
    -R -a -o ip.csr -s CN=iptest.ipa.local \
    --extSAN dns:iptest.ipa.local,ip:fe80::8f18:bdab:4299:95fa


Generating key.  This may take a few moments...

% ipa cert-request ip.csr \
      --principal host/iptest.ipa.local \
      --certificate-out ip.pem
  Issuing CA: ipa
  Certificate: < elided >
  Subject: CN=iptest.ipa.local,O=IPA.LOCAL 201902181108
  Subject DNS name: iptest.ipa.local
  Issuer: CN=Certificate Authority,O=IPA.LOCAL 201902181108
  Not Before: Mon Feb 18 05:49:01 2019 UTC
  Not After: Thu Feb 18 05:49:01 2021 UTC
  Serial number: 12
  Serial number (hex): 0xC

The issuance succeeded. Observe that the IPv6 address is present in the certificate:

% openssl x509 -text < ip.pem | grep -A 1 "Subject Alt"
    X509v3 Subject Alternative Name:
      DNS:iptest.ipa.local, IP Address:FE80:0:0:0:8F18:BDAB:4299:95FA

Of course, it is possible to issue certificates with multiple IP addresses, including a mix of IPv4 and IPv6. Assuming all the necessary DNS records exist, with

--extSAN ip:fe80::8f18:bdab:4299:95fa,ip:192.168.2.1,dns:iptest.ipa.local

The resulting certificate will have the SAN:

IP Address:FE80:0:0:0:8F18:BDAB:4299:95FA, IP Address:192.168.2.1, DNS:iptest.ipa.local

Conclusion

In this post I discussed the challenges of verifying IP addresses for inclusion in X.509 certificates. I discussed the approach we are taking in FreeIPA to finally support this, including its caveats and limitations. For comparison, I outlined how IP address verification is done by CAs on the open internet.

I then demonstrated how the feature will work in FreeIPA. Importantly, I showed (though not exhaustively), that FreeIPA refuses to issue the certificate if the verification requirements are not met. It is a bit hard to demonstrate, from a user perspective, that we only consult FreeIPA’s own DNS records and never consult another DNS server. But hey, the code is open source so you can satisfy yourself that the behaviour fulfils the requirements (or leave a review / file an issue if you find that it does not!)

When will the feature land in master? Before the feature can be merged, I still need to write acceptance tests and have the feature reviewed by another FreeIPA developer. I am hoping to finish the work this week.

As a final remark, I must again acknowledge Ian Pilcher’s significant contribution. Were it not for him, it is likely that this longstanding RFE would still be in our “too hard” basket. Ian, thank you for your patience and I hope that your efforts are rewarded very soon with the feature finally being merged.

February 18, 2019 12:00 AM

February 11, 2019

William Brown

Meaningful 2fa on modern linux

Meaningful 2fa on modern linux

Recently I heard of someone asking the question:

“I have an AD environment connected with <product> IDM. I want to have 2fa/mfa to my linux machines for ssh, that works when the central servers are offline. What’s the best way to achieve this?”

Today I’m going to break this down - but the conclusion for the lazy is:

This is not realistically possible today: use ssh keys with ldap distribution, and mfa on the workstations, with full disk encryption.

Background

So there are a few parts here. AD is, for all intents and purposes, an LDAP server. The <product> is also an LDAP server that syncs to AD. We don’t care if that’s 389-ds, freeipa or a vendor solution. The results are basically the same.

Now the linux auth stack uses, and will always use, pam for authentication and nsswitch for user id lookups. Today, we assume that most people run sssd, but pam modules for different options are possible.

There are a stack of possible options, and they all have various flaws.

  • FreeIPA + 2fa
  • PAM TOTP modules
  • PAM radius to a TOTP server
  • Smartcards

FreeIPA + 2fa

Now this is the one most IDM people would throw out. The issue here is the person already has AD and a vendor product. They don’t need a third solution.

Next is the fact that FreeIPA stores the TOTP in the LDAP, which means FreeIPA has to be online for it to work. So this is eliminated by the “central servers offline” requirement.

PAM radius to TOTP server

Same as above: An extra product, and you have a source of truth that can go down.

PAM TOTP module on hosts

Okay, even if you can get this to scale, you need to send the private seed material of every TOTP device that could login to the machine, to every machine. That means any compromise, compromises every TOTP token on your network. Bad place to be in.

Smartcards

Are notoriously difficult to get working, let alone with SSH. Don’t bother. (That is, where the smartcard does TLS auth to the SSH server.)

Come on William, why are you so doom and gloom!

Let’s back up for a second and think about what we are trying to prevent by having mfa at all. We want to prevent single factor compromise from having a large impact and we want to prevent brute force attacks. (There are probably more reasons, but these are the ones I’ll focus on).

So the best answer: Use mfa on the workstation (password + totp), then use ssh keys to the hosts.

This means the target of the attack is small, and the workstation can be protected by things like full disk encryption and group policy. To sudo on the host you still need the password. This makes sudo MFA to root, as you need something you know and something you have.

If you are extra conscious you can put your ssh keys on smartcards. This works on linux and osx workstations with yubikeys, as far as I am aware. Apparently you can have ssh keys in TPM, which would give you tighter hardware binding, but I don’t know how to achieve this (yet).

To make all this better, you can distribute your ssh public keys in ldap, which means you gain the benefits of LDAP account locking/revocation, you can remove the keys instantly if they are breached, and you have very little admin overhead to configure this service on the linux server side. Think about how easy onboarding is if you only need to put your ssh key in one place and it works on every server! Let alone shutting down a compromised account: lock it in one place, and they are denied access to every server.

SSSD as the LDAP client on the server can also cache the passwords (hashed) and the ssh public keys, which means a disconnected client will still be able to authenticate users.

At this point, because you have ssh key auth working, you could even deny password auth as an option in ssh altogether, eliminating an entire class of bruteforce vectors.
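A rough sketch of what this looks like on the server side (the attribute name and config fragments are assumptions that depend on your schema and distribution):

# /etc/sssd/sssd.conf (fragment)
[sssd]
services = nss, pam, ssh

[domain/example.com]
ldap_user_ssh_public_key = sshPublicKey

# /etc/ssh/sshd_config (fragment)
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody
PasswordAuthentication no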

For bonus marks: You can use AD as the generic LDAP server that stores your SSH keys. No additional vendor products needed, you already have everything required today, for free. Everyone loves free.

Conclusion

If you want strong, offline capable, distributed mfa on linux servers, the only choice today is LDAP with SSH key distribution.

Want to know more? This blog contains how-tos on SSH key distribution for AD, SSH keys on smartcards, and how to configure SSSD to use SSH keys from LDAP.

February 11, 2019 01:00 PM

February 08, 2019

Adam Young

Ansible and FreeIPA Part 2

After some discussion with Bill Nottingham I got a little further along with what it would take to integrate Ansible Tower and FreeIPA. Here are the notes from that talk.

FreeIPA works best when you can use SSSD to manage the users and groups of the application. Since Ansible Tower is a Django application running behind Nginx, this means using the REMOTE_USER configuration. However, Ansible Tower already provides integration with SAML and OpenIDC using Python Social Auth. If an administrator wants to enable SAML, they do so in the database layer, and that provides replication to all of the Ansible Tower instances in a cluster.

The Social integration provides the means to map from the SAML/OpenIDC assertion to the local user and groups. An alternative based on REMOTE_USER would have the same set of mappings, but sourced from variables exposed by the SSSD layer. The variables available would be any exposed by an Nginx module, such as those documented here.

Some configuration of the Base OS would be required beyond enrolling the system as an IPA client. Specifically, any variables that the user wishes to expose would be specified in /etc/sssd/sssd.conf.
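As a sketch (the attribute list is illustrative; check sssd.conf(5) and sssd-ifp(5) for the exact options in your version), exposing extra attributes looks something like:

# /etc/sssd/sssd.conf (fragment)
[domain/example.com]
ldap_user_extra_attrs = mail, givenname, sn

[ifp]
user_attributes = +mail, +givenname, +sn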

This mirrors how I set up SSSD Federation in OpenStack Keystone. The configuration of SSSD is the same.

by Adam Young at February 08, 2019 01:09 AM

February 07, 2019

Adam Young

Ansible and FreeIPA Part-1

Ansible is a workflow engine. I use it to do work on my behalf.

FreeIPA is an identity management system. It allows me to manage the identities of users in my organization.

How do I get the two things to work together? The short answer is that it is trivial to do using Ansible Engine. It is harder to do using Ansible Tower.

Edit: Second part is here. Third part is coming.

Engine


Let’s start with Engine. Let’s say that I want to execute a playbook on a remote system. Both my local and remote systems are FreeIPA clients. Thus, I can use Kerberos to authenticate when I ssh in to the remote system. This same mechanism is reused by Ansible when I connect to the system. The following two commands are roughly comparable:

scp myfile.txt ayoung@hostname:
ansible --user ayoung hostname -m copy -a \
"src=myfile.txt dest=/home/ayoung"

This ignores all the extra work that the copy module does, such as checking hashes.

Under the covers, the ssh layer checks the various authentication mechanisms available to communicate with the remote machine. If I have run kinit (successfully) prior to executing the scp command, it will try the Kerberos credentials (via GSSAPI, don’t get me started on the acronym soup) to authenticate to the remote system.

This is all well and good if I am running the playbook interactively. But, what if I want to kick off the playbook from an automated system, like cron?

Keys

The most common way that people use ssh is using asymmetric keys with no certificates. On a Linux system, these keys are kept in ~/.ssh. If I am using rsa, then the private key is kept in ~/.ssh/id_rsa. I can use a passphrase to protect this file. If I want to script using that key, I need to remove the passphrase, or I need to store the passphrase in a file that automates submitting it. While there are numerous ways to handle this, a very common pattern is to have a second set of credentials, stored in a second file, and a configuration option that says to use them. For example, I have a directory ~/keys that contains an id_rsa file. I can use it with ssh like this:

ssh cloud-user@128.31.24.146 -i ~/keys/id_rsa

And with Ansible:

 ansible -i inventory.py ayoung_resources --key-file ~/keys/id_rsa  -u cloud-user   -m ping

Ansible lacks knowledge of Kerberos. There is no way to say “kinit blah” prior to the playbook. While you can add this to a script, you are now providing a wrapper around Ansible.

Automating via Kerberos

Kerberos has a different way to automate credentials: You can use a keytab (a file with symmetric keys stored in it) to get a Ticket Granting Ticket (TGT) and you can place that TGT in a special directory: /var/kerberos/krb5/user/<uid>

I wrote this up a few years back: https://adam.younglogic.com/2015/05/auto-kerberos-authn/
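For example (the paths are made up for illustration), an automated job can obtain a TGT from a keytab before running the playbook:

kinit -kt ~/keys/ayoung.keytab ayoung@DEMO1.FREEIPA.ORG
ansible-playbook -i inventory site.yml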

Let’s take this a little bit further. Let’s say that I don’t want to perform the operation as me. Specifically, I don’t want to create a TGT for my user that has all of my authority in an automated fashion. I want to create some other, limited scope principal (the Kerberos term for users and things that are like users that can do things) and use that.

Service Principals

I’d prefer to create a service principal from my machine. If my machine is testing.demo1.freeipa.org and I create on it a service called ansible, I’ll end up with a principal of:

ansible/testing.demo1.freeipa.org@DEMO1.FREEIPA.ORG

A user can allocate to this principal a Keytab, an X509 Certificate, or both. These credentials can be used to authenticate with a remote machine.
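As a sketch (the IPA server hostname and keytab path here are assumptions), creating that principal and retrieving a keytab for it might look like:

ipa service-add ansible/testing.demo1.freeipa.org
ipa-getkeytab -s ipa.demo1.freeipa.org -p ansible/testing.demo1.freeipa.org -k /etc/ansible/ansible.keytab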

If I want to allow this service credential to get access to a host that I set up as some specified user, I can put an entry in the file ~/.k5login that will specify what principals are allowed to login. So I add the above principal line and now that principal can log in.

Let’s assume, however, that we want to limit what that user can do. Say we want to restrict it only to be able to perform git operations. Instead of ~/.k5login, we would use ~/.k5users. This allows us to put a list of commands on the line. It would look like this:

ansible/testing.demo1.freeipa.org@DEMO1.FREEIPA.ORG /usr/bin/git

Ansible Tower

Now that we can set up delegations for the playbooks to use, we can turn our eyes to Ansible Tower. Today, when a user kicks off a playbook from Tower, they have to reuse a set of credentials stored in Ansible Tower. However, that means that any external identity management must be duplicated inside Tower.

What if we need to pass through the user that logs in to Tower in order to use that initial user’s identity for operations? We have a few tools available.

Let’s start with the case where the user logs in to the Tower instance using Kerberos. We can make use of a mechanism that goes by the unwieldy name of Service-for-User-to-Proxy, usually reduced to S4U2Proxy. This provides a constrained delegation.

What if a user is capable of logging in via some mechanism that is not Kerberos? There is a second mechanism called Service-for-User-to-Self. This allows a system to convert from, say, a password based mechanism, to a Kerberos ticket.

Simo Sorce wrote these up a few years back.

https://ssimo.org/blog/id_011.html

And the Microsoft specification that describes the mechanisms in detail

https://msdn.microsoft.com/en-us/library/cc246071.aspx

In the case of Ansible Tower, we’d have to specify at the playbook level what user to use when executing the template: The AWX account that runs tower, or the TGT fetched via the S4U* mechanism.

What would it take to extend Tower to use S4U? Tower can already use Kerberos from the original user:

https://docs.ansible.com/ansible-tower/latest/html/administration/kerberos_auth.html.

The Tower web application would then need to be able to perform the S4U transforms. Fortunately, it is Python code. The FreeIPA server has to perform these transforms itself, and the transforms would be comparable.

Configuring the S4U mechanisms in FreeIPA is a fairly manual process, as documented at https://vda.li/en/posts/2013/07/29/Setting-up-S4U2Proxy-with-FreeIPA/. I would suggest using Ansible to automate it.

Wrap Up

Kerberos provides a distributed authentication scheme with validation that the user is still active. This is a powerful combination. Ansible should be able to take advantage of the Kerberos support in ssh to greatly streamline the authorization decisions in provisioning and orchestration.

by Adam Young at February 07, 2019 08:25 PM

Fraser Tweedale

staticmethod considered beneficial

staticmethod considered beneficial

Some Python programmers hold that the staticmethod decorator, and to a lesser extent classmethod, are to be avoided where possible. This view is not correct, and in this post I will explain why.

This post will be useful to programmers in any language, but especially Python.

The constructions

I must begin with a brief overview of the classmethod and staticmethod constructions and their uses.

classmethod is a function that transforms a method into a class method. The class method receives the class object as its first argument, rather than an instance of the class. It is typically used as a method decorator:
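A minimal sketch (the class and method names match the invocation examples below):

class C:
    @classmethod
    def f(cls):
        # cls is the class object (C itself), even when called via an instance
        return cls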

By idiom, the class object argument is bound to the name cls. You can invoke a class method via an instance (C().f()) or via the class object itself (C.f()). In return for this flexibility you give up the ability to access instance methods or attributes from the method body, even when it was called via an instance.

staticmethod is nearly identical to classmethod. The only difference is that instead of receiving the class object as the first argument, it does not receive any implicit argument:
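Again, a minimal sketch:

class C:
    @staticmethod
    def f():
        # no implicit first argument at all
        return 42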

How are the classmethod and staticmethod constructions used? Consider the following (contrived) class:
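Something along these lines (a sketch; the method bodies are illustrative):

class Foo:
    def __init__(self, delta):
        self.delta = delta

    def forty_two(self):
        return 42

    def answer(self):
        return self.forty_two()

    def modified_answer(self):
        # uses an instance attribute, so it needs an instance
        return self.answer() + self.delta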

There are some places we could use staticmethod and classmethod. Should we? Let’s just do it and discuss the impact of the changes:
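The corresponding sketch after the change:

class Foo:
    def __init__(self, delta):
        self.delta = delta

    @staticmethod
    def forty_two():
        return 42

    @classmethod
    def answer(cls):
        return cls.forty_two()

    def modified_answer(self):
        return self.answer() + self.delta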

forty_two became a static method, and it no longer takes any argument. answer became a class method, and its self argument became cls. It cannot become a static method, because it references cls.forty_two. modified_answer can’t change at all, because it references an instance attribute (self.delta). forty_two could have been made a class method, but just as it had no need of self, it has no need of cls either.

There is an alternative refactoring for forty_two. Because it doesn’t reference anything in the class, we could have extracted it as a top-level function (i.e. defined not in the class but directly in a module). Conceptually, staticmethod and top-level functions are equivalent modulo namespacing.
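That alternative would be something like:

def forty_two():
    return 42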

Was the change I made a good one? Well, you already know my answer will be yes. Before I justify my position, let’s discuss some counter-arguments.

Why not staticmethod or classmethod?

Most Python programmers accept that alternative constructors, factories and the like are legitimate applications of staticmethod and classmethod. Apart from these applications, opinions vary.

  • For some folks, the above are the only acceptable uses.
  • Some accept staticmethod for grouping utility functions closely related to some class, into that class; others regard this kind of staticmethod proliferation as a code smell.
  • Some feel that anything likely to only ever be called on an instance should use instance methods, i.e. having self as the first argument, even when not needed.
  • The decorator syntax “noise” seems to bother some people.

Guido van Rossum, author and BDFL of Python, wrote that static methods were an accident. History is interesting, sure, but not all accidents are automatically bad.

I am sympathetic to some of these arguments. A class with a lot of static methods might just be better off as a module with top-level functions. It is true that staticmethod is not required for anything whatsoever and could be dispensed with (this is not true of classmethod). And clean code is better than noisy code. Surely if you’re going to clutter your class with decorators, you want something in return right? Well, you do get something in return.

Deny thy self

Let us put to the side the side-argument of staticmethod versus top-level functions. The real debate is instance methods versus not instance methods. This is the crux. Why avoid instance methods (where possible)? Because doing so is a win for readability.

Forget the contrived Foo class from above and imagine you are in a non-trivial codebase. You are hunting a bug, or maybe trying to understand what some function does. You come across an interesting function. It is 50 lines long. What does it do?

If you are reading an instance method, in addition to its arguments, the module namespace, imports and builtins, it has access to self, the instance object. If you want to know what the function does or doesn’t do, you’ll have to read it.

But if that function is a classmethod, you now have more information about this function—namely that it cannot access any instance methods, even if it was invoked on an instance (including from within a sibling instance method). staticmethod (or a top-level function) gives you a bit more than this: not even class methods can be accessed (unless directly referencing the class, which is easily detected and definitely a code smell). By using these constructions when possible, the programmer has less to think about as they read or modify the function.

You can flip this scenario around, too. Say you know a program is failing in some instance method, but you’re not sure how the problematic code is reached. Well, you can rule out the class methods and static methods straight away.

These results are similar to the result of parametricity in programming language theory. The profound and actionable observation in both settings is this: knowing less about something gives the programmer more information about its behaviour.

These might not seem like big wins, because most of the time it’s only a small win. But it’s never a lose, and over the life of a codebase or the career of a programmer, the small readability wins add up. To me, this is a far more important goal than avoiding extra lines of code (decorator syntax), or spurning a feature because its author considers it an accident or it transgresses the Zen of Python or whatever.

But speaking of the Zen of Python…

Readability counts.

So use classmethod or staticmethod wherever you can.

February 07, 2019 12:00 AM

February 04, 2019

Fraser Tweedale

How does Dogtag PKI spawn?

How does Dogtag PKI spawn?

Dogtag PKI is a complex program. Anyone who has performed a standalone installation of Dogtag can attest to this (to say nothing of actually using it). The program you invoke to install Dogtag is called pkispawn(8). When installing standalone, you invoke pkispawn directly. When FreeIPA installs a Dogtag instance, it invokes pkispawn behind the scenes.

So what does pkispawn actually do? In this post I’ll explain how pkispawn actually spawns a Dogtag instance. This post is not intended to be a guide to the many configuration options pkispawn knows about (although we’ll cover several). Rather, I’ll explain the actions pkispawn performs (or causes to be performed) to go from a fresh system to a working Dogtag CA instance.

This post is aimed at developers and support associates, and to a lesser extent, people who are trying to diagnose issues themselves or understand how to accomplish something fancy in their Dogtag installation. By explaining the steps involved in spawning a Dogtag instance, I hope to make it easier for readers to diagnose issues or implement fixes or enhancements.

pkispawn overview

pkispawn(8) is provided by the pki-server RPM (which is required by the pki-ca RPM that provides the CA subsystem).

You can invoke pkispawn without arguments, and it will prompt for the minimal data it needs to continue. These data include the subsystem to install (e.g. CA or KRA), and LDAP database connection details. For a fresh installation, most defaults are acceptable.

There are many ways to configure or customise an installation. A few important scenarios are:

  • installing a KRA, OCSP, TKS or TPS subsystem associated with the existing CA subsystem (typically on the same machine as the CA subsystem).
  • installing a clone of a subsystem (typically on a different machine)
  • installing a CA subsystem with an externally-signed CA certificate
  • non-interactive installation

For the above scenarios, and for many other possible variations, it is necessary to give pkispawn a configuration file. The pki_default.cfg(5) man page describes the format and available options. Some options are relevant to all subsystems, and others are subsystem-specific (i.e. only for CA, or KRA, etc.) Here is a basic configuration:

[DEFAULT]
pki_server_database_password=Secret.123

[CA]
pki_admin_email=caadmin@example.com
pki_admin_name=caadmin
pki_admin_nickname=caadmin
pki_admin_password=Secret.123
pki_admin_uid=caadmin

pki_client_database_password=Secret.123
pki_client_database_purge=False
pki_client_pkcs12_password=Secret.123

pki_ds_base_dn=dc=ca,dc=pki,dc=example,dc=com
pki_ds_database=ca
pki_ds_password=Secret.123

pki_security_domain_name=EXAMPLE

pki_ca_signing_nickname=ca_signing
pki_ocsp_signing_nickname=ca_ocsp_signing
pki_audit_signing_nickname=ca_audit_signing
pki_sslserver_nickname=sslserver
pki_subsystem_nickname=subsystem

The -f option tells pkispawn the configuration file to use. -s CA tells it to install the CA subsystem.

$ pkispawn -f ca.cfg -s CA

For many more examples of how to install Dogtag subsystems for particular scenarios, see the PKI 10 Installation guide on the Dogtag wiki.

Terminology

It is worthwhile to clarify the meaning of some terms:

instance or installation

An installation of Dogtag on a particular machine. An instance may contain one or more subsystems. There may be more than one Dogtag instance on a single machine, although this is uncommon (and each instance must use a disjoint set of network ports). The default instance name is pki-tomcat.

subsystem

Each main function in Dogtag is provided by a subsystem. The subsystems are: CA, KRA, OCSP, TKS and TPS. Every Dogtag instance must have a CA subsystem (hence, the first subsystem installed must be the CA subsystem).

clone

For redundancy, a subsystem may be cloned to a different instance (usually on a different machine; this is not a technical requirement but it does not make sense to do otherwise). Different subsystems may have different numbers of clones in a topology.

topology or deployment

All of the clones of all subsystems derived from some original CA subsystem form a deployment or topology. Typically, each instance in the topology would have a replicated copy of the LDAP database.

pkispawn implementation

Two main phases

pkispawn has two main phases:

  1. set up the Tomcat server and Dogtag application
  2. send configuration requests to the Dogtag application, which performs further configuration steps.

(This is not to be confused with a two step externally-signed CA installation.)

Of course there are many more steps than this. But there is an important reason I am making such a high-level distinction: debugging. In the first phase pkispawn does everything. Any errors will show up in the pkispawn log file (/var/log/pki/pki-<subsystem>-<timestamp>.log). It is usually straightforward to work out what failed. Why it failed is sometimes easy to work out, and sometimes not so easy.

But in the second phase, pkispawn is handing over control to Dogtag to finish configuring itself. pkispawn sends a series of requests to the pki-tomcatd web application. These requests tell Dogtag to configure things like the database, security domain, and so on. If something goes wrong during these steps, you might see something useful in the pkispawn log, but you will probably also need to look at the Dogtag debug log, or even the Tomcat or Dogtag logs of another subsystem or clone. I detailed this (in the context of debugging clone installation failures) in a previous post.

Scriptlets

pkispawn is implemented in Python. The various steps of installation are implemented as scriptlets: small subroutines that take care of one part of the installation. These are:

  1. initialization: sanity check and normalise installer configuration, and sanity check the system environment.
  2. infrastructure_layout: create PKI instance directories and configuration files.
  3. instance_layout: lay out the Tomcat instance and configuration files (skipped when spawning a second subsystem on an existing instance).
  4. subsystem_layout: lay out subsystem-specific files and directories.
  5. webapp_deployment: deploy the Tomcat web application.
  6. security_databases: set up the main Dogtag NSS database, and a client database where the administrator key and certificate will be created.
  7. selinux_setup: establish correct SELinux contexts on instance and subsystem files.
  8. keygen: generate keys and CSRs for the subsystem (for the CA subsystem, this includes the CA signing key and CSR for external signing).
  9. configuration: For external CA installation, import the externally-signed CA certificate and chain. (Re)start the pki-tomcatd instance and send configuration requests to the Java application. The whole second phase discussed in the previous section occurs here. It will be discussed in more detail in the next section.
  10. finalization: enable PKI to start on boot (by default) and optionally purge client NSS databases that were set up during installation.

For a two-step externally-signed CA installation, the configuration and finalization scriptlets are skipped during step 1, and in step 2 the scriptlets up to and including keygen are skipped. (A bit of hand-waving here; they are not really skipped, but return early.)

In the codebase, scriptlets are located under base/server/python/pki/server/deployment/scriptlets/<name>.py. The list of scriptlets and the order in which they’re run is given by the spawn_scriplets variable in base/server/etc/default.cfg. Note that scriplet there is not a typo. Or maybe it is, but it’s not my typo. In some parts of the codebase, we say scriplet, and in others it’s scriptlet. This is mildly annoying, but you just have to be careful to use the correct class or variable name.

Some other Python files contain a lot of code used during deployment. It’s not reasonable to make an exhaustive list, but pki.server.deployment.pkihelper and pki.server.deployment.pkiparser in particular include a lot of configuration processing code. If you are implementing or changing pkispawn configuration options, you’ll be defining them and following changes around in these files (and possibly others), as well as in base/server/etc/default.cfg.

Scriptlets and uninstallation

The installation scriptlets also implement corresponding uninstallation behaviours. When uninstalling a Dogtag instance or subsystem via the pkidestroy command, each scriptlet’s uninstallation behaviour is invoked. The order in which they’re invoked is different from installation, and is given by the destroy_scriplets variable in base/server/etc/default.cfg.

Configuration requests

The configuration scriptlet sends a series of configuration requests to the Dogtag web API. Each request causes Dogtag to perform specific configuration behaviour(s). Depending on the subsystem being installed and whether it is a clone, these steps may include communication with other subsystems or instances, and/or the LDAP database.

The requests performed, in order, are:

  1. /rest/installer/configure: configure (but don’t yet create) the security domain. Import and verify certificates. If creating a clone, request number range allocations from the master.
  2. /rest/installer/setupDatabase: add database connection configuration to CS.cfg. Enable required DS plugins. Populate the database. If creating a clone, initialise replication (this can be suppressed if replication is managed externally, as is the case for FreeIPA in Domain Level 1). Populate VLV indices.
  3. /rest/installer/configureCerts: configure system certificates, generating keys and issuing certificates where necessary.
  4. /rest/installer/setupAdmin (skipped for clones): create admin user and issue certificate.
  5. /rest/installer/backupKeys (optional): back up system certificates and keys to a PKCS #12 file.
  6. /rest/installer/setupSecurityDomain: create the security domain data in LDAP (non-clone) or add the new clone to the security domain.
  7. /rest/installer/setupDatabaseUser: set up the LDAP database user, including certificate (if configured). This is the user that Dogtag uses to bind to LDAP.
  8. /rest/installer/finalizeConfiguration: remove preop configuration entries (which are only used during installation) and perform other finalisation in CS.cfg.

For all of these requests, the configuration scriptlet builds the request data according to the pkispawn configuration. Then it sends the request to the current hostname. Communications between pkispawn and Tomcat are unlikely to fail (connection failure would suggest a major network configuration problem).

If something goes wrong during processing of the request, errors should appear in the subsystem debug log (/etc/pki/pki-tomcat/ca/debug.YYYY-MM-DD.log; /etc/pki/pki-tomcat/ca/debug on older versions), or the system journal. If the local system had to contact other subsystems or instances on other hosts, it may be necessary to look at the debug logs, system journal or Tomcat / Apache httpd logs of the relevant host / subsystem. I wrote about this at length in a previous post so I won’t say more about it here.

In terms of the code, the resource paths and servlet interface are defined in com.netscape.certsrv.system.SystemConfigResource. The implementation is in com.netscape.certsrv.system.SystemConfigService, with a considerable amount of behaviour residing as helper methods in com.netscape.cms.servlet.csadmin.ConfigurationUtils. If you are investigating or fixing configuration request failures, you will spend a fair bit of time grubbing around in these classes.

Conclusion

As I have shown in this post, spawning a Dogtag PKI instance involves a lot of steps. There are many, many ways to customise the installation and I have glossed over many details. But my aim in this post was not to be a comprehensive reference guide or how-to. Rather the intent was to give a high-level view of what happens during installation, and how those behaviours are implemented. Hopefully I have achieved that, and as a result you are now able to more easily diagnose issues or implement changes or features in the Dogtag installer.

February 04, 2019 12:00 AM

January 29, 2019

William Brown

Using the latest 389-ds on OpenSUSE

Using the latest 389-ds on OpenSUSE

Thanks to some help from my friend who works on OBS, I’ve finally got a good package in review for submission to tumbleweed. However, if you are impatient and want to use the “latest” and greatest 389-ds version on OpenSUSE (docker anyone?), here is how:

docker run -i -t opensuse/tumbleweed:latest
zypper ar obs://network:ldap network:ldap
zypper in 389-ds

Now, we still have an issue with “starting” from dsctl (we don’t really expect you to do it like this ….) so you have to make a tweak to defaults.inf:

vim /usr/share/dirsrv/inf/defaults.inf
# change the following to match:
with_systemd = 0

After this, you should now be able to follow our new quickstart guide on the 389-ds website.

I’ll try to keep this repo up to date as much as possible, which is great for testing and early feedback to changes!

EDIT: Updated 2019-04-03 to change repo as changes have progressed forward.

January 29, 2019 01:00 PM

Fraser Tweedale

X.509 Name Constraints and FreeIPA

X.509 Name Constraints and FreeIPA

The X.509 Name Constraints extension is a mechanism for constraining the name space(s) in which a certificate authority (CA) may (or may not) issue end-entity certificates. For example, a CA could issue to Bob’s Widgets, Inc a constrained CA certificate that only allows the CA to issue server certificates for bobswidgets.com, or subdomains thereof. In a similar way, an enterprise root CA could issue constrained certificates to different departments in a company.

What is the advantage? Efficiency can be improved without sacrificing security by enabling scoped delegation of certificate issuance capability to subordinate CAs controlled by different organisations. The name constraints extension is essential for the security of such a mechanism. The Bob’s Widgets, Inc CA must not be allowed to issue valid certificates for google.com (and vice versa!)

FreeIPA supports installation with an externally signed CA. It is possible that such a CA certificate could have a name constraints extension, defined and imposed by the external issuer. Does FreeIPA support this? What are the caveats? In this blog post I will describe in detail how Name Constraints work and the state of FreeIPA support. Along the way I will dive into the state of Name Constraints verification in the NSS security library. And I will conclude with a discussion of limitations, alternatives and complementary controls.

Name Constraints

The Name Constraints extension is defined in RFC 5280. Just as the Subject Alternative Name (SAN) is a list of GeneralName values with various possible types (DNS name, IP address, DN, etc), the Name Constraints extension also contains a list of GeneralName values. The difference is in interpretation. In the Name Constraints extension:

  • A DNS name means that the CA may issue certificates with DNS names in the given domain, or a subdomain of arbitrary depth.
  • An IP address is interpreted as a CIDR address range.
  • A directory name is interpreted as a base DN.
  • An RFC822 name can be a single mailbox, all mailboxes at a particular host, or all mailboxes at a particular domain (including subdomains).
  • The SRVName name type, and corresponding Name Constraints matching rules, are defined in RFC 4985.

There are other rules for other name types, but I won’t elaborate them here.

In X.509 terminology, these name spaces are called subtrees. The Name Constraints extension can define permitted subtrees and/or excluded subtrees. Permitted subtrees is more often used because it defines what is allowed, and anything not explicitly allowed is prohibited. It is possible for a single Name Constraints extension to define both permitted and excluded subtrees. But I have never seen this in the wild, and I will not bother explaining the rules.
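For example, a permitted-subtrees-only constraint covering a DNS name space could be requested via an OpenSSL extension configuration along these lines (a sketch; see the x509v3_config(5) man page for the exact syntax):

nameConstraints = critical, permitted;DNS:bobswidgets.com

When pretty-printed, the resulting extension looks something like:

X509v3 Name Constraints: critical
    Permitted:
      DNS:bobswidgets.com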

When validating a certificate, the Name Constraints subtrees of all CA certificates in the certification path are merged, and the certificate is checked against the merged results. Name values in the SAN extension are compared to Name Constraint subtrees of the same type (the comparison rules differ for each name type.)

In addition to comparing SAN names against Name Constraints, there are a couple of additional requirements:

  • directoryName constraints are checked against the whole Subject DN, in addition to directoryName SAN values.
  • rfc822Name constraints are checked against the emailAddress Subject DN attribute (if present) in addition to rfc822Name SAN values. (Use of the emailAddress attribute is deprecated in favour of rfc822Name SAN values.)

Beyond this, because of the legacy de facto use of the Subject DN CN attribute to carry DNS names, several implementations check the CN attribute against dnsName constraints. This behaviour is not defined (let alone required) by RFC 5280. It is reasonable behaviour when dealing with server certificates. But we will see that this behaviour can lead to problems in other scenarios.

It is important to mention that nothing prevents a constrained CA from issuing a certificate that violates its Name Constraints (either direct or transitive). Validation must be performed by a client. If a client does not validate Name Constraints, then even a (trusted) issuing CA with a permittedSubtrees dnsName constraint of bobswidgets.com could issue a certificate for google.com and the client will accept it. Fortunately, modern web browsers strictly enforce DNS name constraints. For other clients, or other name types, Name Constraint enforcement support is less consistent. I haven’t done a thorough survey yet but you should make your own investigations into the state of Name Constraint validation support in libraries or programs relevant to your use case.

FreeIPA support for constrained CA certificates

It is common to deploy FreeIPA with a subordinate CA certificate signed by an external CA (e.g. the organisation’s Active Directory CA). If the FreeIPA deployment controls the ipa.bobswidgets.com subdomain, then it is reasonable for the CA administrator to issue the FreeIPA CA certificate with a Name Constraints permittedSubtree of ipa.bobswidgets.com. Will this work?

The most important thing to consider is that all names in all certificates issued by the FreeIPA CA must conform to whatever Name Constraints are imposed by the external CA. Above all else, the constraints must permit all DNS names used by the IPA servers across the whole topology. Support for DNS name constraint enforcement is widespread, so if this condition is not met, nothing will work. Most likely not even installation will succeed. So if the permitted dnsName constraint is ipa.bobswidgets.com, then every server hostname must be in that subtree. Likewise for SRV names, RFC822 names and so on.

In a typical deployment scenario this is not a burdensome requirement. And if the requirements change (e.g. needing to add a FreeIPA replica with a hostname excluded by Name Constraints) then the CA certificate could be re-issued with an updated Name Constraints extension to allow it. In some use cases (e.g. FreeIPA issuing certificates for cloud services), Name Constraints in the CA certificate may be untenable.

If the external issuer imposes a directoryName constraint, more care must be taken, because as mentioned above, these constraints apply to the Subject DN of issued certificates. The deployment’s subject base (an installation parameter that defines the base subject DN used in all default certificate profiles) must correspond to the directoryName constraint. Also, the Subject DN configuration for custom certificate profiles must correspond to the constraint.

If all of these conditions are met, then there should be no problem having a constrained FreeIPA CA.

A wild Name Constraint validation bug appears!

You didn’t think the story would end there, did you? As is often the case, my study of some less commonly used feature of X.509 was inspired by a customer issue. The customer’s external CA issued a CA certificate with dnsName and directoryName constraints. The permittedSubtree values were reasonable. Everything looked fine, but nothing worked (not even installation). Dogtag would not start up, and the debug log showed that the startup self-test was complaining about the OCSP signing certificate:

The Certifying Authority for this certificate is not
permitted to issue a certificate with this name.

Adding to the mystery, when the certutil(1) program was used to validate the certificate, the result was success:

# certutil -V -e -u O \
  -d /etc/pki/pki-tomcat/alias \
  -f /etc/pki/pki-tomcat/alias/pwdfile.txt \
  -n "ocspSigningCert cert-pki-ca"
certutil: certificate is valid

Furthermore, the customer was experiencing (and I was also able to reproduce) the issue on RHEL 7, but I could not reproduce the issue on recent versions of Fedora or the RHEL 8 beta.

directoryName constraints are uncommon (relative to dnsName constraints). And having in my past encountered many issues caused by DN string encoding mismatches (a valid scenario, but some libraries do not handle it correctly), my initial theory was that this was the cause. Dogtag uses the NSS security library (via the JSS binding for Java), and a search of the NSS commit log uncovered an interesting change that supported my theory:

Author: David Keeler <dkeeler@mozilla.com>
Date:   Wed Apr 8 16:17:39 2015 -0700

  bug 1150114 - allow PrintableString to match UTF8String
                in name constraints checking r=briansmith

On closer examination however, this change affected code in the mozpkix library (part of NSS), which is not invoked by the certificate validation routines used by Dogtag and the certutil program. But if the mozpkix Name Constraint validation code was not being used, where was the relevant code?

Finding the source of the problem

Some more reading of NSS code showed that the error originated in libpkix (also part of NSS).

To work out why certutil was succeeding where Dogtag was failing, I launched certutil in a debugger to see what was going on. Eventually I reached the following routine:

SECStatus
cert_VerifyCertChain(CERTCertDBHandle *handle, CERTCertificate *cert,
                     PRBool checkSig, PRBool *sigerror,
                     SECCertUsage certUsage, PRTime t, void *wincx,
                     CERTVerifyLog *log, PRBool *revoked)
{
  if (CERT_GetUsePKIXForValidation()) {
    return cert_VerifyCertChainPkix(cert, checkSig, certUsage, t,
                                    wincx, log, sigerror, revoked);
  }
  return cert_VerifyCertChainOld(handle, cert, checkSig, sigerror,
                                 certUsage, t, wincx, log, revoked);
}

OK, now I was getting somewhere. It turns out that during library initialisation, NSS reads the NSS_ENABLE_PKIX_VERIFY environment variable and sets a global variable, the value of which determines the return value of CERT_GetUsePKIXForValidation(). The behaviour can also be controlled explicitly via CERT_SetUsePKIXForValidation(PRBool enable).

When invoking certutil ourselves, this environment variable was not set so the “old” validation subroutine was invoked. Both routines perform cryptographic validation of a certification path to a trusted CA, and several other important checks. But it seems that the libpkix routine is more thorough, performing Name Constraints checks, as well as OCSP and perhaps other checks that are not also performed by the “old” subroutine.

If an environment variable or explicit library call is required to enable libpkix validation, why was the error occurring in Dogtag? The answer is simple: as part of ipa-server-install, we update /etc/sysconfig/pki-tomcat to set NSS_ENABLE_PKIX_VERIFY=1 in Dogtag’s process environment. This was implemented a few years ago to support OCSP validation of server certificates in connections made by Dogtag (e.g. to the LDAP server).

The bug

Stepping through the code revealed the true nature of the bug. libpkix Name Constraints validation treats the Common Name (CN) attribute of the Subject DN as a DNS name for the purposes of name constraints validation. I already mentioned that this is reasonable behaviour for server certificates. But libpkix has this behaviour for all end-entity certificates. For an OCSP signing certificate, whose CN attribute carries no special meaning (formally or conventionally), this behaviour is wrong. And it is the bug at the root of this problem. I filed a bug in the Mozilla tracker along with a patch—my attempt at fixing the issue. Hopefully a fix can be merged soon.

Why no failure on newer releases?

The issue does not occur on Fedora >= 28 (or maybe earlier, but I haven’t tested), nor the RHEL 8 beta. So was there already a fix for the issue in NSS, or did something change in Dogtag, FreeIPA or elsewhere?

In fact, the change was in Dogtag. In recent versions we switched to a less comprehensive certificate validation routine—one that does not use libpkix. This is just the default behaviour; the old behaviour can still be enabled. We made this change because in some scenarios the OCSP checking performed by libpkix causes Dogtag startup to hang. Because the OCSP server it is trying to reach to validate certificates during start self-test is the same Dogtag instance that is starting up! Because of the change to the self-test validation behaviour, FreeIPA deployments on Fedora >= 28 and RHEL 8 beta do not experience this issue.

Workaround?

If you were experiencing this issue in an existing release (e.g. because you renewed the CA certificate on your existing FreeIPA deployment, and the Name Constraints appeared on the new certificate), an obvious workaround would be to remove the environment variable from /etc/sysconfig/pki-tomcat. That would work, and the change will persist even after an ipa-server-upgrade. But that assumes you already had a working installation. Which the customer doesn’t have, because installation itself is failing. So apart from modifying the FreeIPA code to avoid setting this environment variable in the first place, I don’t yet know of a reliable workaround.

This concludes the discussion of constrained CA certificate support in FreeIPA.

Name Constraints only constrains names. There are other ways you might want to constrain a CA. For example: can only issue certificates with validity period <= δ, or can only issue certificates with Extended Key Usages ∈ S. But there exists no mechanism for constraining CAs in such ways.

Not all defined GeneralName types have Name Constraints syntax and semantics defined for them. Documents that define otherName types may define corresponding Name Constraints matching rules, but are not required to. For example RFC 4985, which defines the SRVName type, also defines Name Constraints rules for it. But RFC 4556, which specifies the Kerberos PKINIT protocol, defines the KRB5PrincipalName otherName type but no Name Constraints semantics.

For applications where the set of domains (or other names) is volatile, a constrained CA certificate is likely to be more of a problem than a solution. An example might be a cloud or Platform-as-a-Service provider wanting to issue certificates on behalf of customers, who bring their own domains. For this use case it would be better to use an existing CA that supports automated domain validation and issuance, such as Let’s Encrypt.

Name Constraints say which names a CA is or is not allowed to issue certificates for. But this restriction is controlled by the superior CA(s), not the end-entity. Interestingly there is a way for a domain owner to indicate which CAs are authorised to issue certificates for names in the domain. The DNS CAA record (RFC 6844) can anoint one or more CAs, implicitly prohibiting other CAs from issuing certificates for that domain. The CA itself can check for these records, as a control against mis-issuance. For publicly-trusted CAs, the CA/Browser Forum Baseline Requirements require CAs to check and obey CAA records. DNSSEC is recommended but not required.
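For example (a sketch; the CA identifier is illustrative), a zone might publish:

bobswidgets.com.  IN  CAA  0 issue "corp-ca.bobswidgets.com"

which tells conforming CAs that only the named CA may issue certificates for bobswidgets.com.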

CAA is an authorisation control—relying parties do not consult or care about CAA records when verifying certificates. The verification counterpart of CAA is DANE—DNS-based Authentication of Named Entities, defined in RFC 6698. Like CAA, DANE uses DNS (the TLSA record type), but DNSSEC is required. TLSA records can be used to indicate the authorised CA(s) for a certificate. Or they can specify the exact certificate(s) for the domain, a kind of certificate pinning. So DANE can work hand-in-hand with the existing public PKI infrastructure, or it can do an end-run around it. Depending on who you talk to, the reliance on DNSSEC makes it a non-starter, or humanity’s last hope! In any case, support is not yet widespread. Today DANE can be used in some browsers via add-ons, and the OpenSSL and GnuTLS libraries have some support.

Nowadays all publicly-trusted CAs, and some private PKIs, log all issued certificates to Certificate Transparency (CT) logs. These logs are auditable (publicly if the log is public), cryptographically verifiable logs of CA activity. CT was imposed after the detection of many serious misissuances by several publicly-trusted CAs (most of whom are no longer trusted by anyone). Now, even failure to log a certificate to a CT log is reason enough to revoke trust (because what else might they have failed to log? Certificates for google.com or yourbank.ch?) What does CT have to do with Name Constraints? When you consider that client Name Constraints validation support is patchy at best, a CT-based logging and audit solution is a credible alternative to Name Constraints, or at least a valuable complementary control.

Conclusion

So, we have looked at what the Name Constraints extension does, and why it can be useful. We have discussed its limitations and some alternative or related mechanisms. We looked at the state of FreeIPA support, and did a deep dive into NSS to investigate the one bug that seems to be getting in the way.

Name Constraints is one of the many complex features that makes X.509 both so versatile yet so painful to work with. It’s a necessary feature, but support is not consistent and where it exists, there are usually bugs. Although I did discuss some “alternatives”, a big reason you might look for an alternative is because the support is not great in the first place. In my opinion, the best way forward is to ensure Name Constraints validation is performed more often, and more correctly, while (separately) preparing the way for comprehensive CT logging in enterprise CAs. A combination of monitoring (CT) and validation controls (browsers correctly validating names, Name Constraints and requiring evidence of CT logging) seems to be improving security in the public PKI. If we fix the client libraries and make CT logging and monitoring easy, it could work well for enterprise PKIs too.

January 29, 2019 12:00 AM

January 18, 2019

William Brown

Structuring Rust Transactions

Structuring Rust Transactions

I’ve been working on a database-related project in Rust recently, which takes advantage of my concurrently readable datastructures. However I ran into a problem of how to structure Read/Write transaction structures that shared the reader code, and contained multiple inner read/write types.

Some Constraints

To be clear, there are some constraints. A “parent” write will only ever contain write transaction guards, and a read will only ever contain read transaction guards. This means we aren’t going to hit any deadlocks in the code. Rust can’t protect us from mis-ordering locks. An additional requirement is that readers and a single write must be able to proceed simultaneously - but having a rwlock style writer or readers behaviour would still work here.

Some Background

To simplify this, imagine we have two concurrently readable datastructures. We’ll call them db_a and db_b.

struct db_a { ... }

struct db_b { ... }

Now, each of db_a and db_b has their own way to protect their inner content, but they’ll return a DBReadGuard or DBWriteGuard when we call db_a.read() or db_a.write() respectively.

impl db_a {
    pub fn read(&self) -> DBReadGuard {
        ...
    }

    pub fn write(&self) -> DBWriteGuard {
        ...
    }
}

Now we make a “parent” wrapper transaction such as:

struct server {
    a: db_a,
    b: db_b,
}

struct server_read {
    a: DBReadGuard,
    b: DBReadGuard,
}

struct server_write {
    a: DBWriteGuard,
    b: DBWriteGuard,
}

impl server {
    pub fn read(&self) -> server_read {
        server_read {
            a: self.a.read(),
            b: self.b.read(),
        }
    }

    pub fn write(&self) -> server_write {
        server_write {
            a: self.a.write(),
            b: self.b.write(),
        }
    }
}

The Problem

Now the problem is that on my server_read and server_write I want to implement a function for “search” that uses the same code. Search on a read or a write should behave identically! I also wanted to avoid the use of macros, as they can hide issues while stepping in a debugger like LLDB/GDB.

Often the answer with rust is “traits”, to create an interface that types adhere to. Rust also allows default trait implementations, which sounds like it could be a solution here.

pub trait server_read_trait {
    fn search(&self) -> SomeResult {
        let result_a = self.a.search(...);
        let result_b = self.b.search(...);
        SomeResult(result_a, result_b)
    }
}

In this case, the issue is that &self in a trait is not aware of the fields in the struct - traits don’t define that fields must exist, so the compiler can’t assume they exist at all.

Second, the type of self.a/b is unknown to the trait - because in a read it’s a “a: DBReadGuard”, and for a write it’s “a: DBWriteGuard”.

The first problem can be solved by adding get_field style accessor functions to the trait. Rust will also compile this out as an inline, so the correct thing for the type system is also the optimal thing at run time. So we’ll update this to:

pub trait server_read_trait {
    fn get_a(&self) -> ???;

    fn get_b(&self) -> ???;

    fn search(&self) -> SomeResult {
        let result_a = self.get_a().search(...); // note the change from self.a to self.get_a()
        let result_b = self.get_b().search(...);
        SomeResult(result_a, result_b)
    }
}

impl server_read_trait for server_read {
    fn get_a(&self) -> &DBReadGuard {
        &self.a
    }
    // get_b is similar, so omitted
}

impl server_read_trait for server_write {
    fn get_a(&self) -> &DBWriteGuard {
        &self.a
    }
    // get_b is similar, so omitted
}

So now we have the second problem remaining: for the server_write we have a DBWriteGuard, and for the server_read we have a DBReadGuard. There was a much longer experimentation process, but eventually the answer was simpler than I was expecting. Rust allows traits to have associated types whose bound is a trait, rather than a concrete type.

So provided that DBReadGuard and DBWriteGuard both implement “DBReadTrait”, then we can have server_read_trait declare an associated type that enforces this. It looks something like:

pub trait DBReadTrait {
    fn search(&self) -> ...;
}

impl DBReadTrait for DBReadGuard {
    fn search(&self) -> ... { ... }
}

impl DBReadTrait for DBWriteGuard {
    fn search(&self) -> ... { ... }
}

pub trait server_read_trait {
    type GuardType: DBReadTrait; // Say that GuardType must implement DBReadTrait

    fn get_a(&self) -> &Self::GuardType; // implementors must return that type implementing the trait.

    fn get_b(&self) -> &Self::GuardType;

    fn search(&self) -> SomeResult {
        let result_a = self.get_a().search(...);
        let result_b = self.get_b().search(...);
        SomeResult(result_a, result_b)
    }
}

impl server_read_trait for server_read {
    type GuardType = DBReadGuard;

    fn get_a(&self) -> &DBReadGuard {
        &self.a
    }
    // get_b is similar, so omitted
}

impl server_read_trait for server_write {
    type GuardType = DBWriteGuard;

    fn get_a(&self) -> &DBWriteGuard {
        &self.a
    }
    // get_b is similar, so omitted
}

This works! We now have a way to write a single “search” type for our server read and write types. In my case, the DBReadTrait also uses a similar technique to define a search type shared between the DBReadGuard and DBWriteGuard.
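A nice side effect is that any helper which only needs search can now be written once, generically over the trait. A small sketch using the placeholder types from above:

fn do_search<T: server_read_trait>(txn: &T) -> SomeResult {
    // Accepts either a server_read or a server_write transaction.
    txn.search()
}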

January 18, 2019 01:00 PM

SUSE Open Build Service cheat sheet

SUSE Open Build Service cheat sheet

Part of starting at SUSE has meant that I get to learn about Open Build Service. I’ve known that the project existed for a long time but I have never had a chance to use it. So far I’m thoroughly impressed by how it works and the features it offers.

As A Consumer

The best part of OBS is that it’s trivial on OpenSUSE to consume content from it. Zypper can add projects with the command:

zypper ar obs://<project name> <repo nickname>
zypper ar obs://network:ldap network:ldap

I like to give the repo nickname (your choice) to be the same as the project name so I know what I have enabled. Once you run this you can easily consume content from OBS.

Package Management

As someone who has started to contribute to the SUSE 389-ds package, I’ve been slowly learning how this workflow works. OBS, similar to GitHub/GitLab, allows a branching and request model.

On OpenSUSE you will want to use the osc tool for your workflow:

zypper in osc
# If you plan to use the "service" command
zypper in obs-service-tar obs-service-obs_scm obs-service-recompress obs-service-set_version obs-service-download_files

You can branch from an existing project to make changes with:

osc branch <project> <package>
osc branch network:ldap 389-ds

This will branch the project to my home namespace. For me this will land in “home:firstyear:branches:network:ldap”. Now I can checkout the content on to my machine to work on it.

osc co <project>
osc co home:firstyear:branches:network:ldap

This will create the folder “home:…:ldap” in the current working directory.

From here you can now work on the project. Some useful commands are:

Add new files to the project (patches, new source tarballs etc).

osc add <path to file>
osc add feature.patch
osc add new-source.tar.xz

Edit the change log of the project (I think this is used in release notes?)

osc vc

To amend your changes, use:

osc vc -e

Build your changes locally matching the system you are on. Packages normally build on all/most OpenSUSE versions and architectures, this will build just for your local system and arch.

osc build

Make sure you clean up files you aren’t using any more with:

osc rm <filename>
# This commands removes anything untracked by osc.
osc clean

Commit your changes to the OBS server, where a complete build will be triggered:

osc commit

View the results of the last commit:

osc results

Enable people to use your branch/project as a repository. You edit the project metadata and enable repo publishing:

osc meta prj -e <name of project>
osc meta prj -e home:firstyear:branches:network:ldap

# When your editor opens, change this section to enabled (disabled by default):
<publish>
  <enabled />
</publish>

NOTE: In some cases if you have the package already installed, and you add the repo/update it won’t install from your repo. This is because in SUSE packages have a notion of “vendoring”. They continue to update from the same repo as they were originally installed from. So if you want to change this you use:

zypper [d]up --from <repo name>

You can then create a “request” to merge your branch changes back to the project origin. This is:

osc sr

A helpful maintainer will then review your changes. You can see this with:

osc rq show <your request id>

If you change your request, to submit again, use:

osc sr

And it will ask if you want to replace (supercede) the previous request.

I was also helped by a friend to provide a “service” configuration that allows generation of tarballs from git. It’s not always appropriate to use this, but if the repo has a “_service” file, you can regenerate the tar with:

osc service ra

So far this is as far as I have gotten with OBS, but I already appreciate how great this work flow is for package maintainers, reviewers and consumers. It’s a pleasure to work with software this well built.

As an additional piece of information, it’s a good idea to read the OBS Packaging Guidelines to be sure that you are doing the right thing!

January 18, 2019 01:00 PM

Powered by Planet