FreeIPA Identity Management planet - technical blogs

November 02, 2018

Rob Crittenden

Doing bulk IPA operations from the command-line can be inefficient because each command requires a round trip. So a loop like this can be rather slow:

while IFS= read -r line; do
        username=$(echo $line|cut -f1 -d:)
        password=$(echo $line|cut -f2 -d:)
        uid=$(echo $line|cut -f3 -d:)
        gid=$(echo $line|cut -f4 -d:)
        ...
        ipa user-add $username --first=NIS --last=USER --password --gidnumber=$gid --uid=$uid --gecos=$gecos --homedir=$homedir --shell=$shell --setattr userpassword={crypt}$password
done < /etc/passwd

There is a round trip for every user.

The obvious way to improve this is to reduce the number of round trips by using the IPA batch command. Here is the skeleton of a program to read /etc/passwd. It lacks a whole ton of error checking and may be filled with errors but it should illustrate how the batch command works.

This will batch the creation of 50 users at a time.
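
For orientation, this is roughly what a single batch call looks like. It is a minimal sketch only: it assumes the ipalib api object has already been bootstrapped and connected (as done in the skeleton below), and the user names and attributes are made up.

# Two user_add operations submitted in a single round trip.  Each batch
# entry names a command plus its positional and keyword parameters.
ops = [
    {"method": "user_add",
     "params": [["alice"], {"givenname": "Alice", "sn": "Example"}]},
    {"method": "user_add",
     "params": [["bob"], {"givenname": "Bob", "sn": "Example"}]},
]
result = api.Command['batch'](*ops)
for res in result.get('results'):
    print(res.get('error') or res.get('summary'))

The full skeleton follows.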

from ipalib import api
from ipalib import errors
import sys


def add_batch_operation(command, *args, **kw):
    batch_args.append({
        "method": command,
        "params": [args, kw],
    })


def flush_batch_operation():
    if not batch_args:
        return None

    kw = {}

    try:
        return api.Command['batch'](*batch_args, **kw)
    except errors.CCacheError as e:
        print(e)
        sys.exit(1)


api.bootstrap(context='batch')
api.finalize()
api.Backend.rpcclient.connect()

lineno = 0
count = 0
batch_args = list()
with open("/etc/passwd", "r") as passwd:
    for line in passwd:
        lineno += 1
        try:
            (login, password, uid, gid, gecos, homedir, shell) = \
                line.strip().split(':')
        except ValueError as ve:
            print("Malformed line %d: %s" % (lineno, ve))
            continue

        if gecos:
            try:
                first, last = gecos.split(' ', 1)
            except ValueError:
                print("Unable to parse gecos line %d" % lineno)
                continue
        else:
            print("Missing gecos line %d" % lineno)
            continue

        params = [login]
        kw = {
            'givenname': first,
            'sn': last,
            'cn': gecos,
            'userpassword': '{crypt}' + password,
            'gecos': gecos,
            'homedirectory': homedir,
            'loginshell': shell,
        }

        add_batch_operation('user_add', *params, **kw)
        count += 1

        if count % 50 == 0:
            print("%d entries" % count)
            results = flush_batch_operation()
            for result in results.get('results'):
                if result.get('error') is not None:
                    print(result.get('error'))
            batch_args = list()

results = flush_batch_operation()
if results:
    for result in results.get('results'):
        if result.get('error') is not None:
            print(result.get('error'))

by rcritten at November 02, 2018 08:07 PM

October 31, 2018

William Brown

High Available RADVD on Linux

Recently I was experimenting again with high-availability router configurations, so that in the case of an outage or a failover the other router takes over and traffic is still served.

This is usually done through protocols like VRRP, which allow virtual IPs to exist that can be failed over between routers. However, with IPv6 clients still need to be able to find the router, and in the case of a failure the router advertisements must continue so that clients can renew their addresses.

To achieve this we need two parts: a shared link-local address, and a special RADVD configuration.

Because of how IPv6 routing works, all traffic (even traffic to global destinations) is still sent to your router via its link-local address. We can use an address like:

fe80::1:1

This doesn't clash with any reserved or special IPv6 addresses, and it's easy to remember. Because of how link-local addressing works, we can put this address on many interfaces of the router (many VLANs) with no conflict.

So now to the two components.

Keepalived

Keepalived is a VRRP implementation for Linux. It has extensive documentation and sometimes uses implementation-specific language, but it works well for what it does.

Our configuration looks like:

#  /etc/keepalived/keepalived.conf
global_defs {
  vrrp_version 3
}

vrrp_sync_group G1 {
 group {
   ipv6_ens256
 }
}

vrrp_instance ipv6_ens256 {
   interface ens256
   virtual_router_id 62
   priority 50
   advert_int 1.0
   virtual_ipaddress {
    fe80::1:1
    2001:db8::1
   }
   nopreempt
   garp_master_delay 1
}

Note that we provide both a global address and an LL address for the failover. The global address is useful so that services and DNS on the router have a stable address, but you could omit it. The LL address, however, is critical to this configuration and must be present.

Now you can start up keepalived, and you should see one of your two Linux machines pick up the virtual addresses.

RADVD

For RADVD to work, a feature of the 2.x series is required. Packaging this for el7 is out of scope for this post, but Fedora ships the required version.

The feature is that RADVD can be configured to specify which address it advertises for the router, rather than assuming the interface LL autoconf address is the address to advertise. The configuration appears as:

# /etc/radvd.conf
interface ens256
{
    AdvSendAdvert on;
    MinRtrAdvInterval 30;
    MaxRtrAdvInterval 100;
    AdvRASrcAddress {
        fe80::1:1;
    };
    prefix 2001:db8::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr off;
    };
};

Note the AdvRASrcAddress parameter? This defines a priority-ordered list of addresses to advertise, provided they are available on the interface.

Now start up radvd on your two routers, and try failing over between them while you ping from your client. Remember that to ping a link-local address from a client you need something like:

ping6 fe80::1:1%en1

Where the outgoing interface of your client traffic is denoted after the ‘%’.

Happy failover routing!

October 31, 2018 02:00 PM

October 22, 2018

Rob Crittenden

certmonger CA subsystem renewal

The CA subsystem certificates (OCSP, audit, etc.) are renewed directly against Dogtag rather than being processed through IPA like the Apache and 389-ds server certificates are.

certmonger does the renewal by issuing a request like this:

GET /ca/ee/ca/profileSubmit?profileId=caServerCert&serial_num=5&renewal=true&xml=true&requestor_name=IPA

The serial number value comes from the current certificate being tracked by certmonger. Dogtag will generate its own CSR based on the template values currently stored in LDAP (cn=5,ou=ca,ou=requests,o=ipaca).
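
For illustration, the same renewal request could be reproduced from a short script, roughly like the sketch below. The host, port and credential paths are assumptions (in a real deployment certmonger performs this request itself via its renewal helpers, and the authentication details may differ); the query parameters are the ones shown above.

# Rough sketch of the renewal request; host, port and credential paths are
# hypothetical, and authentication against Dogtag may differ in practice.
import requests

params = {
    "profileId": "caServerCert",
    "serial_num": "5",          # serial of the certificate being renewed
    "renewal": "true",
    "xml": "true",
    "requestor_name": "IPA",
}

resp = requests.get(
    "https://ipa.example.test:8443/ca/ee/ca/profileSubmit",
    params=params,
    cert=("/path/to/agent.pem", "/path/to/agent.key"),  # hypothetical credentials
    verify="/etc/ipa/ca.crt",
)
print(resp.status_code)
print(resp.text)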

by rcritten at October 22, 2018 07:27 PM

October 19, 2018

Fraser Tweedale

Should FreeIPA ship a subordinate CA profile?

In my previous post I discussed how to issue subordinate CA (sub-CA) certificates from FreeIPA. In brief, the administrator must create and import a profile configuration for issuing certificates with the needed characteristics. The profile must add a Basic Constraints extension asserting that the subject is a CA.

After I published that post, it formed the basis of an official Red Hat solution (Red Hat subscription required to view). Subsequently, an RFE was filed requesting that a sub-CA profile be included by default in FreeIPA. In this short post I'll outline the reasons why this might not be a good idea, and what the profile might look like if we did ship one.

The case against

The most important reason not to include a sub-CA profile is that it will not be appropriate for many use cases. Important attributes of a sub-CA certificate include:

  • validity period (how long will the certificate be valid for?)
  • key usage and extended key usage (what can the certificate be used for?)
  • path length constraint (how many further subordinate CAs may be issued below this CA?)
  • name constraints (what namespaces can this CA issue certificates for?)

If we ship a default sub-CA profile in FreeIPA, all of these attributes will be determined ahead of time and fixed. There is a good chance the values will not be appropriate, and the administrator must create a custom profile configuration anyway. Worse, there is a risk that the profile will be used without due consideration of its appropriateness.

If we do nothing, we still have the blog post and official solution to guide administrators through the process. The administrator has the opportunity to alter the profile configuration according to their security or operational requirements.

The case for

The RFE description states:

Signing a subordinate CA’s CSR in IdM is difficult and requires tinkering. This functionality should be built in and present with the product. Please bundle a subordinate CA profile like the one described in the [blog post].

I agree that Dogtag profile configuration is difficult, even obtuse. It is not well documented and there is limited sanity checking. There is no “one size fits all” when it comes to sub-CA profiles, but can there be a “one size fits most”? Such a profile might have:

  • path length constraint of zero (the CA can only issue leaf certificates)
  • name constraints limiting DNS names to the FreeIPA domain (and subdomains)
  • a validity period of two years

In terms of security these are conservative attributes but they still admit the most common use case. Two years may or may not be a reasonable lifetime for the subordinate CA, but we have to choose some fixed value. The downside is that customers could use this profile without being aware of its limitations (path length, name constraints). The resulting issues will frustrate the customer and probably result in some support cases too.

Alternatives and conclusion

There is a middle road: instead of shipping the profile, we ship a “profile assistant” tool that asks some questions and builds the profile configuration. Questions would include the desired validity period, whether it’s for a CA (and if so the path length constraint), name constraints (if any), and so on. Then it imports the configuration.
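
As a toy illustration, the fragment of such a tool that renders the Basic Constraints policy might look something like the sketch below. The component and parameter names are taken from the SubCA profile configuration in my previous post; everything else (validity, key usage, name constraints, the import step) is omitted.

# Toy sketch: render the Basic Constraints policy lines of a Dogtag profile
# from the administrator's answers.  Names follow the SubCA profile from the
# earlier sub-CA how-to; this is not an existing FreeIPA tool.
def basic_constraints_policy(index, path_len=0, critical=True):
    prefix = "policyset.serverCertSet.%d" % index
    crit = "true" if critical else "false"
    lines = [
        "%s.constraint.class_id=basicConstraintsExtConstraintImpl" % prefix,
        "%s.constraint.name=Basic Constraint Extension Constraint" % prefix,
        "%s.constraint.params.basicConstraintsCritical=%s" % (prefix, crit),
        "%s.constraint.params.basicConstraintsIsCA=true" % prefix,
        "%s.constraint.params.basicConstraintsMinPathLen=0" % prefix,
        "%s.constraint.params.basicConstraintsMaxPathLen=%d" % (prefix, path_len),
        "%s.default.class_id=basicConstraintsExtDefaultImpl" % prefix,
        "%s.default.name=Basic Constraints Extension Default" % prefix,
        "%s.default.params.basicConstraintsCritical=%s" % (prefix, crit),
        "%s.default.params.basicConstraintsIsCA=true" % prefix,
        "%s.default.params.basicConstraintsPathLen=%d" % (prefix, path_len),
    ]
    return "\n".join(lines)

print(basic_constraints_policy(15, path_len=0))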

There may be merit to this option, but none of the machinery exists, and the effort and lead time are high. The other options, doing nothing (really, improving and maintaining the documentation) or shipping a default sub-CA profile, are low in effort and lead time.

In conclusion, I am open to either leaving sub-CA profiles as a documentation concern, or including a conservative default profile. But because there is no one size fits all, I prefer to leave sub-CA profile creation as a documented process that administrators can perform themselves—and tweak as they see fit.

October 19, 2018 12:00 AM

October 18, 2018

William Brown

Rust RwLock and Mutex Performance Oddities

Recently I have been working on Rust data structures once again. In the process I wanted to test how my work performed compared to the standard library RwLock and Mutex. On my home laptop the RwLock was 5 times faster and the Mutex 2 times faster than my work.

So checking out my code on my workplace workstation and running my benchmarks, I noticed the Mutex was the same - 2 times faster. However, the RwLock was 4000 times slower.

What’s a RwLock and Mutex anyway?

In a multithreaded application, it's important that data shared between threads is consistent when accessed. This is not just logical consistency of the data; it also involves hardware consistency of the memory in caches. As a simple example, let's examine an update to a bank account done by two threads:

acc = 10
deposit = 3
withdrawal = 5

[ Thread A ]            [ Thread B ]
acc = load_balance()    acc = load_balance()
acc = acc + deposit     acc = acc - withdrawal
store_balance(acc)      store_balance(acc)

What will the account balance be at the end? The answer is “it depends”. Because threads are working in parallel these operations could happen:

  • At the same time
  • Interleaved (various possibilities)
  • Sequentially

This isn't very healthy for our bank account. We could lose our deposit, or have invalid data. Possible outcomes are that acc could be 13, 5, or 8, and only one of these (8 = 10 + 3 - 5) is correct.

A mutex protects our data in multiple ways. It provides hardware consistency operations so that our CPUs' cache state is valid. It also allows only a single thread inside the mutex at a time, so we can linearise operations. Mutex comes from "mutual exclusion", after all.

So our example with a mutex now becomes:

acc = 10
deposit = 3
withdrawal = 5

[ Thread A ]            [ Thread B ]
mutex.lock()            mutex.lock()
acc = load_balance()    acc = load_balance()
acc = acc + deposit     acc = acc - withdrawal
store_balance(acc)      store_balance(acc)
mutex.unlock()          mutex.unlock()

Now only one thread will access our account at a time; the other thread will block until the mutex is released.

A RwLock is a special extension of this pattern. Where a mutex guarantees exclusive access to the data for both reads and writes, a RwLock (read-write lock) allows either multiple concurrent read-only views OR a single writer. Importantly, when a writer wants to take the lock, all readers must complete their work and "drain". Once the write is complete, readers can begin again. So you can imagine it as:

Time ->

T1: -- read --> x
T2:     -- read --> x                x -- read -->
T3:     -- read --> x                x -- read -->
T4:                   | -- write -- |
T5:                                  x -- read -->

Test Case for the RwLock

My test case is simple. Given a set of 12 threads, we spawn:

  • 8 readers. Take a read lock, read the value, release the read lock. If the value == target then stop the thread.
  • 4 writers. Take a write lock, read the value. Add one and write. Continue until value == target then stop.

Other conditions:

  • The test code is identical between Mutex/RwLock (besides the locking construct)
  • --release is used for compiler optimisations
  • The test hardware is as close as possible (i7 quad core)
  • The tests are run multiple times to construct averages of the performance

The idea is that a target number of writes (X) must occur, while many readers contend as fast as possible on the read. We are pressuring the system to choose between "many readers getting to read fast" and "writers getting priority to drain/block readers".

On OSX, given a target of 500 writes, the RwLock version was able to complete in 0.01 seconds (MBP 2011, 2.8GHz).

On Linux, given the same target of 500 writes, it completed in 42 seconds - a 4000 times difference (i7-7700 CPU @ 3.60GHz).

All things considered, the Linux machine should have the advantage - it's a desktop processor of a newer generation with a much faster clock speed. So why is the RwLock performance so different on Linux?

To the source code!

Examining the Rust source code, many OS primitives come from libc, because they require OS support to function. RwLock is an example of this, as is Mutex, among many others. The Unix implementation for Rust consumes the pthread_rwlock primitive, which means we need to read man pages to understand the details of each.

OSX uses FreeBSD userland components, so we can assume they follow the BSD man pages. In the FreeBSD man page for pthread_rwlock_rdlock we see:

IMPLEMENTATION NOTES

 To prevent writer starvation, writers are favored over readers.

Linux, however, uses different constructs. Looking at the Linux man page:

PTHREAD_RWLOCK_PREFER_READER_NP
  This is the default.  A thread may hold multiple read locks;
  that is, read locks are recursive.  According to The Single
  Unix Specification, the behavior is unspecified when a reader
  tries to place a lock, and there is no write lock but writers
  are waiting.  Giving preference to the reader, as is set by
  PTHREAD_RWLOCK_PREFER_READER_NP, implies that the reader will
  receive the requested lock, even if a writer is waiting.  As
  long as there are readers, the writer will be starved.

Reader vs Writer Preferences?

Due to the policy of a RwLock having multiple readers OR a single writer, a preference is given to one or the other. The preference basically boils down to the choice of:

  • Do you respond to write requests and have new readers block?
  • Do you favour readers but let writers block until reads are complete?

The difference is that with a reader preference, on a read-heavy workload a write will keep being delayed so that readers can begin and complete (up to some threshold of time). With a writer preference, you allow readers to stall so that writes can complete sooner.

On Linux, they choose a reader preference. On OSX/BSD they choose a writer preference.

Because our test is about how fast a target number of write operations can complete, the writer preference of BSD/OSX makes this test much faster. Our readers still "read", but they give way to writers, which completes our test sooner.

However, the Linux "reader favour" policy means that our readers (designed to create contention) are allowed to skip the queue and block writers. This causes our writers to starve. Because the test is only concerned with writer completion, the result (correctly) shows that our writers are heavily delayed - even though many more readers are completing.

If we were to track the number of reads that completed, I am sure we would see a large difference, with Linux having allowed many more readers to complete than the OSX version.

Linux pthread_rwlock does allow you to change this policy (PTHREAD_RWLOCK_PREFER_WRITER_NP), but this isn't exposed via Rust. This means that today you accept (and trust) the OS default; Rust is simply unaware, at compile time and run time, that such a different policy exists.

Conclusion

Rust, like any language, consumes operating system primitives. Every OS implements these differently, and these differences in OS policy can cause real performance differences in applications between development and production.

It's well worth understanding the constructs used in programming languages, how they affect the performance of your application, and the decisions behind those tradeoffs.

This isn't meant to say "don't use RwLock in Rust on Linux". It is meant to say "choose it when it makes sense - on read-heavy loads, understanding that writers will be delayed". For my project (a copy-on-write cell) I will likely conditionally compile RwLock on OSX but Mutex on Linux, as I require writer-favoured behaviour. There are certainly applications that will benefit from the reader priority on Linux (especially if there is low writer volume and a low penalty for delayed writes).

October 18, 2018 02:00 PM

August 21, 2018

Fraser Tweedale

Issuing subordinate CA certificates from FreeIPA

FreeIPA, since version 4.4, has supported creating subordinate CAs within the deployment’s Dogtag CA instance. This feature is called lightweight sub-CAs. But what about when you need to issue a subordinate CA certificate to an external entity? One use case would be chaining a FreeIPA deployment up to some existing FreeIPA deployment. This is similar to what many customers do with Active Directory. In this post I’ll show how you can issue subordinate CA certificates from FreeIPA.

Scenario description

The existing FreeIPA deployment has the realm IPA.LOCAL and domain ipa.local. Its CA’s Subject Distinguished Name (Subject DN) is CN=Certificate Authority,O=IPA.LOCAL 201808022359. The master’s hostname is f28-0.ipa.local. I will refer to this deployment as the existing or primary deployment.

I will install a new FreeIPA deployment on the host f28-1.ipa.local, with realm SUB.IPA.LOCAL and domain sub.ipa.local. This will be called the secondary deployment. Its CA will be signed by the CA of the primary deployment.

Choice of subject principal and Subject DN

All certificate issuance via FreeIPA (with some limited exceptions) requires a nominated subject principal. Subject names in the CSR (Subject DN and Subject Alternative Names) are validated against the subject principal. We must create a subject principal in the primary deployment to represent the CA of the secondary deployment.

When validating CSRs, the Common Name (CN) of the Subject DN is checked against the subject principal, in the following ways:

  • for user principals, the CN must match the UID
  • for host principals, the CN must match the hostname (case-insensitive)
  • for service principals, the CN must match the hostname (case-insensitive); only principal aliases with the same service type as the canonical principal are checked

This validation regime imposes a restriction on what the CN of the subordinate CA can be. In particular:

  • the Subject DN must contain a CN attribute
  • the CN value can be a hostname (host or service principal), or a UID (user principal)

For this scenario, I chose to create a host principal for the domain of the secondary deployment:

[f28-0]% ipa host-add --force sub.ipa.local
--------------------------
Added host "sub.ipa.local"
--------------------------
  Host name: sub.ipa.local
  Principal name: host/sub.ipa.local@IPA.LOCAL
  Principal alias: host/sub.ipa.local@IPA.LOCAL
  Password: False
  Keytab: False
  Managed by: sub.ipa.local

Creating a certificate profile for sub-CAs

We will tweak the caIPAserviceCert profile configuration to create a new profile for subordinate CAs. Export the profile configuration:

[f28-0]% ipa certprofile-show caIPAserviceCert --out SubCA.cfg
------------------------------------------------
Profile configuration stored in file 'SubCA.cfg'
------------------------------------------------
  Profile ID: caIPAserviceCert
  Profile description: Standard profile for network services
  Store issued certificates: TRUE

Perform the following edits to SubCA.cfg:

  1. Replace profileId=caIPAserviceCert with profileId=SubCA.
  2. Replace the subjectNameDefaultImpl component with the userSubjectNameDefaultImpl component. This will use the Subject DN from the CSR as is, without restriction:

    policyset.serverCertSet.1.constraint.class_id=noConstraintImpl
    policyset.serverCertSet.1.constraint.name=No Constraint
    policyset.serverCertSet.1.default.class_id=userSubjectNameDefaultImpl
    policyset.serverCertSet.1.default.name=Subject Name Default
  3. Edit the keyUsageExtDefaultImpl and keyUsageExtConstraintImpl configurations. They should have the following settings:
    • keyUsageCrlSign=true
    • keyUsageDataEncipherment=false
    • keyUsageDecipherOnly=false
    • keyUsageDigitalSignature=true
    • keyUsageEncipherOnly=false
    • keyUsageKeyAgreement=false
    • keyUsageKeyCertSign=true
    • keyUsageKeyEncipherment=false
    • keyUsageNonRepudiation=true
  4. Add the Basic Constraints extension configuration:

    policyset.serverCertSet.15.constraint.class_id=basicConstraintsExtConstraintImpl
    policyset.serverCertSet.15.constraint.name=Basic Constraint Extension Constraint
    policyset.serverCertSet.15.constraint.params.basicConstraintsCritical=true
    policyset.serverCertSet.15.constraint.params.basicConstraintsIsCA=true
    policyset.serverCertSet.15.constraint.params.basicConstraintsMinPathLen=0
    policyset.serverCertSet.15.constraint.params.basicConstraintsMaxPathLen=0
    policyset.serverCertSet.15.default.class_id=basicConstraintsExtDefaultImpl
    policyset.serverCertSet.15.default.name=Basic Constraints Extension Default
    policyset.serverCertSet.15.default.params.basicConstraintsCritical=true
    policyset.serverCertSet.15.default.params.basicConstraintsIsCA=true
    policyset.serverCertSet.15.default.params.basicConstraintsPathLen=0

    Add the new components’ index to the component list, to ensure they get processed:

    policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12,15
  5. Remove the commonNameToSANDefaultImpl and Extended Key Usage related components. This can be accomplished by removing the relevant indices (in my case, 7 and 12) from the component list:

    policyset.serverCertSet.list=1,2,3,4,5,6,8,9,10,11,15
  6. (Optional) edit the validity period in the validityDefaultImpl and validityConstraintImpl components. The default is 731 days. I did not change it.

For the avoidance of doubt, the diff between the caIPAserviceCert profile configuration and SubCA is:

--- caIPAserviceCert.cfg        2018-08-21 12:44:01.748884778 +1000
+++ SubCA.cfg   2018-08-21 14:05:53.484698688 +1000
@@ -13,5 +13,3 @@
-policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl
-policyset.serverCertSet.1.constraint.name=Subject Name Constraint
-policyset.serverCertSet.1.constraint.params.accept=true
-policyset.serverCertSet.1.constraint.params.pattern=CN=[^,]+,.+
-policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl
+policyset.serverCertSet.1.constraint.class_id=noConstraintImpl
+policyset.serverCertSet.1.constraint.name=No Constraint
+policyset.serverCertSet.1.default.class_id=userSubjectNameDefaultImpl
@@ -19 +16,0 @@
-policyset.serverCertSet.1.default.params.name=CN=$request.req_subject_name.cn$, o=IPA.LOCAL 201808022359
@@ -66,2 +63,2 @@
-policyset.serverCertSet.6.constraint.params.keyUsageCrlSign=false
-policyset.serverCertSet.6.constraint.params.keyUsageDataEncipherment=true
+policyset.serverCertSet.6.constraint.params.keyUsageCrlSign=true
+policyset.serverCertSet.6.constraint.params.keyUsageDataEncipherment=false
@@ -72,2 +69,2 @@
-policyset.serverCertSet.6.constraint.params.keyUsageKeyCertSign=false
-policyset.serverCertSet.6.constraint.params.keyUsageKeyEncipherment=true
+policyset.serverCertSet.6.constraint.params.keyUsageKeyCertSign=true
+policyset.serverCertSet.6.constraint.params.keyUsageKeyEncipherment=false
@@ -78,2 +75,2 @@
-policyset.serverCertSet.6.default.params.keyUsageCrlSign=false
-policyset.serverCertSet.6.default.params.keyUsageDataEncipherment=true
+policyset.serverCertSet.6.default.params.keyUsageCrlSign=true
+policyset.serverCertSet.6.default.params.keyUsageDataEncipherment=false
@@ -84,2 +81,2 @@
-policyset.serverCertSet.6.default.params.keyUsageKeyCertSign=false
-policyset.serverCertSet.6.default.params.keyUsageKeyEncipherment=true
+policyset.serverCertSet.6.default.params.keyUsageKeyCertSign=true
+policyset.serverCertSet.6.default.params.keyUsageKeyEncipherment=false
@@ -111,2 +108,13 @@
-policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12
-profileId=caIPAserviceCert
+policyset.serverCertSet.15.constraint.class_id=basicConstraintsExtConstraintImpl
+policyset.serverCertSet.15.constraint.name=Basic Constraint Extension Constraint
+policyset.serverCertSet.15.constraint.params.basicConstraintsCritical=true
+policyset.serverCertSet.15.constraint.params.basicConstraintsIsCA=true
+policyset.serverCertSet.15.constraint.params.basicConstraintsMinPathLen=0
+policyset.serverCertSet.15.constraint.params.basicConstraintsMaxPathLen=0
+policyset.serverCertSet.15.default.class_id=basicConstraintsExtDefaultImpl
+policyset.serverCertSet.15.default.name=Basic Constraints Extension Default
+policyset.serverCertSet.15.default.params.basicConstraintsCritical=true
+policyset.serverCertSet.15.default.params.basicConstraintsIsCA=true
+policyset.serverCertSet.15.default.params.basicConstraintsPathLen=0
+policyset.serverCertSet.list=1,2,3,4,5,6,8,9,10,11,15
+profileId=SubCA

Now import the profile:

[f28-0]% ipa certprofile-import SubCA \
            --desc "Subordinate CA" \
            --file SubCA.cfg \
            --store=1
------------------------
Imported profile "SubCA"
------------------------
  Profile ID: SubCA
  Profile description: Subordinate CA
  Store issued certificates: TRUE

Creating the CA ACL

Before issuing a certificate, CA ACLs are checked to determine if the combination of CA, profile and subject principal is acceptable. We must create a CA ACL that permits use of the SubCA profile to issue certificates to our subject principal:

[f28-0]% ipa caacl-add SubCA
--------------------
Added CA ACL "SubCA"
--------------------
  ACL name: SubCA
  Enabled: TRUE

[f28-0]% ipa caacl-add-profile SubCA --certprofile SubCA
  ACL name: SubCA
  Enabled: TRUE
  Profiles: SubCA
-------------------------
Number of members added 1
-------------------------

[f28-0]% ipa caacl-add-ca SubCA --ca ipa
  ACL name: SubCA
  Enabled: TRUE
  CAs: ipa
  Profiles: SubCA
-------------------------
Number of members added 1
-------------------------

[f28-0]% ipa caacl-add-host SubCA --hosts sub.ipa.local
  ACL name: SubCA
  Enabled: TRUE
  CAs: ipa
  Profiles: SubCA
  Hosts: sub.ipa.local
-------------------------
Number of members added 1
-------------------------

Installing the secondary FreeIPA deployment

We are finally ready to run ipa-server-install to set up the secondary deployment. We need to use the --ca-subject option to override the default Subject DN that will be included in the CSR, providing a valid DN according to the rules discussed above.

[root@f28-1]# ipa-server-install \
    --realm SUB.IPA.LOCAL \
    --domain sub.ipa.local \
    --external-ca \
    --ca-subject 'CN=SUB.IPA.LOCAL,O=Red Hat'

...

The IPA Master Server will be configured with:
Hostname:       f28-1.ipa.local
IP address(es): 192.168.124.142
Domain name:    sub.ipa.local
Realm name:     SUB.IPA.LOCAL

The CA will be configured with:
Subject DN:   CN=SUB.IPA.LOCAL,O=Red Hat
Subject base: O=SUB.IPA.LOCAL
Chaining:     externally signed (two-step installation)

Continue to configure the system with these values? [no]: yes

...

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/8]: configuring certificate server instance

The next step is to get /root/ipa.csr signed by your CA and re-run
/usr/sbin/ipa-server-install as:
/usr/sbin/ipa-server-install
  --external-cert-file=/path/to/signed_certificate
  --external-cert-file=/path/to/external_ca_certificate
The ipa-server-install command was successful

Let’s inspect /root/ipa.csr:

[root@f28-1]# openssl req -text < /root/ipa.csr |grep Subject:
        Subject: O = Red Hat, CN = SUB.IPA.LOCAL

The desired Subject DN appears in the CSR (note that openssl shows DN components in the opposite order from FreeIPA). After copying the CSR to f28-0.ipa.local we can request the certificate:

[f28-0]% ipa cert-request ~/ipa.csr \
            --principal host/sub.ipa.local \
            --profile SubCA \
            --certificate-out ipa.pem
  Issuing CA: ipa
  Certificate: MIIEAzCCAuugAwIBAgIBFTANBgkqhkiG9w0BAQsF...
  Subject: CN=SUB.IPA.LOCAL,O=Red Hat
  Issuer: CN=Certificate Authority,O=IPA.LOCAL 201808022359
  Not Before: Tue Aug 21 04:16:24 2018 UTC
  Not After: Fri Aug 21 04:16:24 2020 UTC
  Serial number: 21
  Serial number (hex): 0x15

The certificate was saved in the file ipa.pem. We can see from the command output that the Subject DN in the certificate is exactly what was in the CSR. Further inspecting the certificate, observe that the Basic Constraints extension is present and the Key Usage extension contains the appropriate assertions:

[f28-0]% openssl x509 -text < ipa.pem
...
      X509v3 extensions:
          ...
          X509v3 Key Usage: critical
              Digital Signature, Non Repudiation, Certificate Sign, CRL Sign
          ...
          X509v3 Basic Constraints: critical
              CA:TRUE, pathlen:0
          ...

Now, after copying the just-issued subordinate CA certificate and the primary CA certificate (/etc/ipa/ca.crt) over to f28-1.ipa.local, we can continue the installation:

[root@f28-1]# ipa-server-install \
                --external-cert-file ca.crt \
                --external-cert-file ipa.pem

The log file for this installation can be found in /var/log/ipaserver-install.log
Directory Manager password: XXXXXXXX

...

Adding [192.168.124.142 f28-1.ipa.local] to your /etc/hosts file
Configuring ipa-custodia
  [1/5]: Making sure custodia container exists
...
The ipa-server-install command was successful

And we’re done.

Discussion

I’ve shown how to create a profile for issuing subordinate CA certificates in FreeIPA. Because of the way FreeIPA validates certificate requests—always against a subject principal—there are restrictions on what the Subject DN of the subordinate CA can be. The Subject DN must contain a CN attribute matching either the hostname of a host or service principal, or the UID of a user principal.

If you want to avoid these Subject DN restrictions, right now there is no choice but to use the Dogtag CA directly, instead of via the FreeIPA commands. If such a requirement emerges it might make sense to implement some “special handling” for issuing sub-CA certificates (similar to what we currently do for the KDC certificate). But the certificate request logic is already complicated; I am hesitant to complicate it even more.

Currently there is no sub-CA profile included in FreeIPA by default. It might make sense to include it, or at least to produce an official solution document describing the procedure outlined in this post.

August 21, 2018 12:00 AM

Powered by Planet