Comments (31)

3c2b2ff5 commented on August 19, 2024

I just took a look at ldapsource.py here, and it seems nsscache doesn't support Samba 4 Active Directory.

An object in a Samba 4 Active Directory doesn't have these attributes. The uid attribute can be added, but neither uidNumber nor gidNumber are allowed according to the schema.

jaqx0r commented on August 19, 2024

3c2b2ff5 commented on August 19, 2024

Well, I can think of two ways: either hard-code the AD attributes and map them to the Unix/NSS attributes, or let users decide which attributes they want to map.

For passwd I would suggest:

  • map uid to sAMAccountName.
  • uidNumber and gidNumber can both be mapped to the objectSid, which can be broken up into two components: the AD domain identity and the relative identifier (RID) of the user or group object, which is unique.
    The objectSid looks like this: S-1-5-21-1584226190-4227463277-35352144893-1104.
    The first part, S-1-5-21-1584226190-4227463277-35352144893, is the AD domain identity, and the second part, 1104, is the RID, which can be mapped as uidNumber and gidNumber (see the sketch after these lists).
  • fullname is represented by displayName in AD; it can be mapped to gecos as well.
  • homeDirectory can be mapped to unixHomeDirectory or to something like /home/$sAMAccountName.
  • loginShell is the same in AD.

For shadow, two entries should be sufficient:

  • map uid to sAMAccountName
  • map shadowLastChange to pwdLastSet

And for groups:

  • cn can be mapped to sAMAccountName
  • gidNumber to objectSid
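
A minimal sketch of that mapping in Python, assuming the objectSid has already been converted to its string form (converting the binary value is a separate step) and using the attribute names proposed above; the helper names here are illustrative only:

def rid_from_sid(sid_string):
  """Return the RID, i.e. the last dash-separated component of a string SID."""
  return int(sid_string.split('-')[-1])

def ad_entry_to_passwd(entry):
  """Map an AD entry (dict of attribute -> list of values) to passwd-style fields."""
  rid = rid_from_sid(entry['objectSid'][0])
  name = entry['sAMAccountName'][0]
  return {
      'name': name,
      'uid': rid,
      'gid': rid,
      'gecos': entry.get('displayName', [''])[0],
      'dir': entry.get('unixHomeDirectory', ['/home/%s' % name])[0],
      'shell': entry.get('loginShell', [''])[0],
  }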

jaqx0r commented on August 19, 2024

3c2b2ff5 commented on August 19, 2024

The default behavior is:

  • The local system users and groups are from 0-999
  • The local Unix users and groups start at 1000
  • The AD Domain Controller master holds the variable riDNextRID for the next non-builtin account
  • The range can go up to 999999 as far as I know (I never looked that up) and might go higher.

Furthermore, the objectClass posixAccount provides Unix attributes for users, which are supported by nsscache, like uidNumber, gecos, gidNumber, etc. The same applies to the objectClass posixGroup regarding the group attributes supported by nsscache. The Unix attributes cannot easily be added afterwards without some tweaks, though new users/groups can be created with these attributes even if they don't have the objectClass assigned.

So in a Samba 4 AD, if all users and groups have the needed Unix attributes set, the mapping would work without any problems, but this is not the default behavior of Samba 4 AD. I remember we had a lot of pain when we migrated from slapd to Samba 4 AD: since we didn't want to create everything from scratch, we had to modify and tweak things to keep the slapd structure in Samba 4 as well. Otherwise, with the default behavior, the attributes could be mapped as suggested above.

The only issue that needs to be considered is the value range of the uidNumber/gidNumber/objectSid of users and groups, which could very well conflict with the Unix uid and gid ranges of local users. I have no clue how exactly to handle this, since administrators handle it in different ways:

  • setting the first RID to 50000 or higher
  • adding the posixAccount objectClass and setting uidNumber to 50000 or higher for users
  • adding the posixGroup objectClass and setting gidNumber to 50000 or higher for groups

We did all three steps above.

An approach to avoid this kind of conflict would be to map these attribute values (uidNumber, gidNumber, objectSid) to a higher value range by default in nsscache, the same way winbind handles it. Winbind uses idmap with multiple non-overlapping value ranges for multiple domains to avoid these conflicts.

In the end it should be doable: just check if the Unix attributes are present in the object and map them accordingly (this is what nsscache does now), otherwise check if the other attributes are present in the object and map them to the respective Unix attributes, as in the sketch below.
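
A minimal sketch of that fallback-plus-offset idea; the rid_offset value is hypothetical (not an existing nsscache option) and the objectSid is assumed to already be in its string form:

RID_OFFSET = 50000  # hypothetical offset, would come from configuration

def map_id(entry, unix_attr, rid_offset=RID_OFFSET):
  """Prefer the Unix attribute when present, else offset the RID from the SID."""
  if unix_attr in entry:  # e.g. 'uidNumber' or 'gidNumber'
    return int(entry[unix_attr][0])
  sid = entry['objectSid'][0]  # assumed to be in 'S-1-5-21-...-1104' string form
  return int(sid.split('-')[-1]) + rid_offset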

jaqx0r commented on August 19, 2024

3c2b2ff5 commented on August 19, 2024

Well, I must confess I am not the most gifted Python programmer, so I will try to implement it locally, see how far I get, and let you know.

jaqx0r commented on August 19, 2024

3c2b2ff5 commented on August 19, 2024

Hi, I ended up with the following modified ldapsource.py; I'll just paste the whole modified file.
I modified PasswdUpdateGetter, GroupUpdateGetter and ShadowUpdateGetter, and furthermore I added a function sidToStr() at the beginning of the file to convert the binary objectSid to a string. This is the source of the function.
For these modifications I needed to set new options in nsscache.conf: ldap_ad to map AD attributes, and ldap_homedir to be able to set a home directory for all mapped users if needed.
ldapsource.py

# Copyright 2007 Google Inc.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.

"""An implementation of an ldap data source for nsscache."""

__author__ = ('[email protected] (Jamie Wilkinson)',
              '[email protected] (Vasilios Hoffman)')

import calendar
import logging
import time
import ldap
import ldap.sasl
import urllib
import re
import sys
import struct
from distutils.version import StrictVersion

from nss_cache import error
from nss_cache.maps import automount
from nss_cache.maps import group
from nss_cache.maps import netgroup
from nss_cache.maps import passwd
from nss_cache.maps import shadow
from nss_cache.maps import sshkey
from nss_cache.sources import source


IS_LDAP24_OR_NEWER = StrictVersion(ldap.__version__) >= StrictVersion('2.4')

# ldap.LDAP_CONTROL_PAGE_OID is unavailable on some systems, so we define it here
LDAP_CONTROL_PAGE_OID = '1.2.840.113556.1.4.319'

def RegisterImplementation(registration_callback):
  registration_callback(LdapSource)

def makeSimplePagedResultsControl(page_size):
  # The API for this is different on older versions of python-ldap, so we need
  # to handle this case.
  if IS_LDAP24_OR_NEWER:
    return ldap.controls.SimplePagedResultsControl(True, size=page_size, cookie='')
  else:
    return ldap.controls.SimplePagedResultsControl(LDAP_CONTROL_PAGE_OID, True, (page_size, ''))

def getCookieFromControl(pctrl):
  if IS_LDAP24_OR_NEWER:
    return pctrl.cookie
  else:
    return pctrl.controlValue[1]

def setCookieOnControl(control, cookie, page_size):
  if IS_LDAP24_OR_NEWER:
    control.cookie = cookie
  else:
    control.controlValue = (page_size, cookie)

  return cookie

def sidToStr(sid):
  """ Converts a hexadecimal string returned from the LDAP query to a
  string version of the SID in format of S-1-5-21-1270288957-3800934213-3019856503-500
  This function was based from: http://www.gossamer-threads.com/lists/apache/bugs/386930
  """
  # The revision level (typically 1)
  if sys.version_info.major < 3:
      revision = ord(sid[0])
  else:
      revision = sid[0]
  # The number of dashes minus 2
  if sys.version_info.major < 3:
      number_of_sub_ids = ord(sid[1])
  else:
      number_of_sub_ids = sid[1]
  # Identifier Authority Value (typically a value of 5 representing "NT Authority")
  # ">Q" is the format string. ">" specifies that the bytes are big-endian.
  # The "Q" specifies "unsigned long long" because 8 bytes are being decoded.
  # Since the actual SID section being decoded is only 6 bytes, we must precede it with 2 empty bytes.
  iav = struct.unpack('>Q', b'\x00\x00' + sid[2:8])[0]
  # The sub-ids include the Domain SID and the RID representing the object
  # '<I' is the format string. "<" specifies that the bytes are little-endian. "I" specifies "unsigned int".
  # This decodes in 4 byte chunks starting from the 8th byte until the last byte
  sub_ids = [struct.unpack('<I', sid[8 + 4 * i:12 + 4 * i])[0]
             for i in range(number_of_sub_ids)]

  return 'S-{0}-{1}-{2}'.format(revision, iav, '-'.join([str(sub_id) for sub_id in sub_ids]))


class LdapSource(source.Source):
  """Source for data in LDAP.

  After initialisation, one can search the data source for 'objects'
  under a particular part of the LDAP tree, with some filter, and have it
  return only some set of attributes.

  'objects' in this sense means some structured blob of data, not a Python
  object.
  """
  # ldap defaults
  BIND_DN = ''
  BIND_PASSWORD = ''
  RETRY_DELAY = 5
  RETRY_MAX = 3
  SCOPE = 'one'
  TIMELIMIT = -1
  TLS_REQUIRE_CERT = 'demand'  # one of never, hard, demand, allow, try

  # for registration
  name = 'ldap'

  # Page size for paged LDAP requests
  # Value chosen based on default Active Directory MaxPageSize
  PAGE_SIZE = 1000

  def __init__(self, conf, conn=None):
    """Initialise the LDAP Data Source.

    Args:
      conf: config.Config instance
      conn: An instance of ldap.LDAPObject that'll be used as the connection.
    """
    super(LdapSource, self).__init__(conf)
    self._dn_requested = False  # dn is a special-cased attribute

    self._SetDefaults(conf)
    self._conf = conf
    self.ldap_controls = makeSimplePagedResultsControl(self.PAGE_SIZE)

    # Used by _ReSearch:
    self._last_search_params = None

    if conn is None:
      # ReconnectLDAPObject should handle interrupted ldap transactions.
      # also, ugh
      rlo = ldap.ldapobject.ReconnectLDAPObject
      self.conn = rlo(uri=conf['uri'],
                      retry_max=conf['retry_max'],
                      retry_delay=conf['retry_delay'])
      if conf['tls_starttls'] == 1:
          self.conn.start_tls_s()
      if 'ldap_debug' in conf:
        self.conn.set_option(ldap.OPT_DEBUG_LEVEL, conf['ldap_debug'])
    else:
      self.conn = conn

    # TODO(v): We should bind on-demand instead.
    # (although binding here makes it easier to simulate a dropped network)
    self.Bind(conf)

  def _SetDefaults(self, configuration):
    """Set defaults if necessary."""
    # LDAPI URLs must be url escaped socket filenames; rewrite if necessary.
    if 'uri' in configuration:
      if configuration['uri'].startswith('ldapi://'):
        configuration['uri'] = 'ldapi://' + urllib.quote(configuration['uri'][8:], '')
    if not 'bind_dn' in configuration:
      configuration['bind_dn'] = self.BIND_DN
    if not 'bind_password' in configuration:
      configuration['bind_password'] = self.BIND_PASSWORD
    if not 'retry_delay' in configuration:
      configuration['retry_delay'] = self.RETRY_DELAY
    if not 'retry_max' in configuration:
      configuration['retry_max'] = self.RETRY_MAX
    if not 'scope' in configuration:
      configuration['scope'] = self.SCOPE
    if not 'timelimit' in configuration:
      configuration['timelimit'] = self.TIMELIMIT
    # TODO(jaq): XXX EVIL.  ldap client libraries change behaviour if we use
    # polling, and it's nasty.  So don't let the user poll.
    if configuration['timelimit'] == 0:
      configuration['timelimit'] = -1
    if not 'tls_require_cert' in configuration:
      configuration['tls_require_cert'] = self.TLS_REQUIRE_CERT
    if not 'tls_starttls' in configuration:
      configuration['tls_starttls'] = 0

    # Translate tls_require into appropriate constant, if necessary.
    if configuration['tls_require_cert'] == 'never':
      configuration['tls_require_cert'] = ldap.OPT_X_TLS_NEVER
    elif configuration['tls_require_cert'] == 'hard':
      configuration['tls_require_cert'] = ldap.OPT_X_TLS_HARD
    elif configuration['tls_require_cert'] == 'demand':
      configuration['tls_require_cert'] = ldap.OPT_X_TLS_DEMAND
    elif configuration['tls_require_cert'] == 'allow':
      configuration['tls_require_cert'] = ldap.OPT_X_TLS_ALLOW
    elif configuration['tls_require_cert'] == 'try':
      configuration['tls_require_cert'] = ldap.OPT_X_TLS_TRY

    if not 'sasl_authzid' in configuration:
      configuration['sasl_authzid'] = ''

    # Should we issue STARTTLS?
    if configuration['tls_starttls'] in (1, '1', 'on', 'yes', 'true'):
        configuration['tls_starttls'] = 1
    #if not configuration['tls_starttls']:
    else:
      configuration['tls_starttls'] = 0

    # Setting global ldap defaults.
    ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT,
                    configuration['tls_require_cert'])
    if 'tls_cacertdir' in configuration:
        ldap.set_option(ldap.OPT_X_TLS_CACERTDIR, configuration['tls_cacertdir'])
    if 'tls_cacertfile' in configuration:
        ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, configuration['tls_cacertfile'])
    if 'tls_certfile' in configuration:
        ldap.set_option(ldap.OPT_X_TLS_CERTFILE, configuration['tls_certfile'])
    if 'tls_keyfile' in configuration:
        ldap.set_option(ldap.OPT_X_TLS_KEYFILE, configuration['tls_keyfile'])
    ldap.version = ldap.VERSION3  # this is hard-coded, we only support V3

  def _SetCookie(self, cookie):
    return setCookieOnControl(self.ldap_controls, cookie, self.PAGE_SIZE)

  def Bind(self, configuration):
    """Bind to LDAP, retrying if necessary."""
    # If the server is unavailable, we are going to find out now, as this
    # actually initiates the network connection.
    retry_count = 0
    while retry_count < configuration['retry_max']:
      self.log.debug('opening ldap connection and binding to %s',
                     configuration['uri'])
      try:
        if 'use_sasl' in configuration and configuration['use_sasl']:
          if ('sasl_mech' in configuration and
              configuration['sasl_mech'] and
              configuration['sasl_mech'].lower() == 'gssapi'):
            sasl = ldap.sasl.gssapi(configuration['sasl_authzid'])
          # TODO: Add other sasl mechs
          else:
            raise error.ConfigurationError('SASL mechanism not supported')

          self.conn.sasl_interactive_bind_s('', sasl)
        else:
          self.conn.simple_bind_s(who=configuration['bind_dn'],
                                cred=str(configuration['bind_password']))
        break
      except ldap.SERVER_DOWN, e:
        retry_count += 1
        self.log.warning('Failed LDAP connection: attempt #%s.', retry_count)
        self.log.debug('ldap error is %r', e)
        if retry_count == configuration['retry_max']:
          self.log.debug('max retries hit')
          raise error.SourceUnavailable(e)
        self.log.debug('sleeping %d seconds', configuration['retry_delay'])
        time.sleep(configuration['retry_delay'])

  def _ReSearch(self):
    """
    Performs self.Search again with the previously used parameters.

    Returns:
     self.Search result.
    """
    self.Search(*self._last_search_params)

  def Search(self, search_base, search_filter, search_scope, attrs):
    """Search the data source.

    The search is asynchronous; data should be retrieved by iterating over
    the source object itself (see __iter__() below).

    Args:
     search_base: the base of the tree being searched
     search_filter: a filter on the objects to be returned
     search_scope: the scope of the search from ldap.SCOPE_*
     attrs: a list of attributes to be returned

    Returns:
     nothing.
    """
    self._last_search_params = (search_base, search_filter, search_scope, attrs)

    self.log.debug('searching for base=%r, filter=%r, scope=%r, attrs=%r',
                   search_base, search_filter, search_scope, attrs)
    if 'dn' in attrs: self._dn_requested = True  # special cased attribute
    self.message_id = self.conn.search_ext(base=search_base,
                                           filterstr=search_filter,
                                           scope=search_scope,
                                           attrlist=attrs,
                                           serverctrls=[self.ldap_controls])

  def __iter__(self):
    """Iterate over the data from the last search.

    Probably not threadsafe.

    Yields:
      Search results from the prior call to self.Search()
    """
    # Acquire data to yield:
    while True:
      result_type, data = None, None

      timeout_retries = 0
      while timeout_retries < self._conf['retry_max']:
        try:
          result_type, data, _, serverctrls = self.conn.result3(
            self.message_id, all=0, timeout=self.conf['timelimit'])

          # Paged requests return a new cookie in serverctrls at the end of a page,
          # so we search for the cookie and perform another search if needed.
          if len(serverctrls) > 0:
            # Search for appropriate control
            simple_paged_results_controls = [
              control
              for control in serverctrls
              if control.controlType == LDAP_CONTROL_PAGE_OID
            ]
            if simple_paged_results_controls:
              # We only expect one control; just take the first in the list.
              cookie = getCookieFromControl(simple_paged_results_controls[0])

              if len(cookie) > 0:
                # If cookie is non-empty, call search_ext and result3 again
                self._SetCookie(cookie)
                self._ReSearch()
                result_type, data, _, serverctrls = self.conn.result3(
                  self.message_id, all=0, timeout=self.conf['timelimit'])
              # else: An empty cookie means we are done.

          # break loop once result3 doesn't time out and reset cookie
          setCookieOnControl(self.ldap_controls, '', self.PAGE_SIZE)
          break
        except ldap.SIZELIMIT_EXCEEDED:
          self.log.warning('LDAP server size limit exceeded; using page size {0}.'.format(self.PAGE_SIZE))
          return
        except ldap.NO_SUCH_OBJECT:
          self.log.debug('Returning due to ldap.NO_SUCH_OBJECT')
          return
        except ldap.TIMELIMIT_EXCEEDED:
          timeout_retries += 1
          self.log.warning('Timeout on LDAP results, attempt #%s.', timeout_retries)
          if timeout_retries >= self._conf['retry_max']:
            self.log.debug('max retries hit, returning')
            return
          self.log.debug('sleeping %d seconds', self._conf['retry_delay'])
          time.sleep(self.conf['retry_delay'])

      if result_type == ldap.RES_SEARCH_RESULT:
        self.log.debug('Returning due to RES_SEARCH_RESULT')
        return

      if result_type != ldap.RES_SEARCH_ENTRY:
        self.log.info('Unknown result type %r, ignoring.', result_type)

      if not data:
        self.log.debug('Returning due to len(data) == 0')
        return

      for record in data:
        # If the dn is requested, return it along with the payload,
        # otherwise ignore it.
        if self._dn_requested:
          merged_records = {'dn': record[0]}
          merged_records.update(record[1])
          yield merged_records
        else:
          yield record[1]

  def GetSshkeyMap(self, since=None):
    """Return the sshkey map from this source.

    Args:
      since: Get data only changed since this timestamp (inclusive) or None
      for all data.

    Returns:
      instance of maps.SshkeyMap
    """
    return SshkeyUpdateGetter(self.conf).GetUpdates(source=self,
                                           search_base=self.conf['base'],
                                           search_filter=self.conf['filter'],
                                           search_scope=self.conf['scope'],
                                           since=since)
  def GetPasswdMap(self, since=None):
    """Return the passwd map from this source.

    Args:
      since: Get data only changed since this timestamp (inclusive) or None
      for all data.

    Returns:
      instance of maps.PasswdMap
    """
    return PasswdUpdateGetter(self.conf).GetUpdates(source=self,
                                           search_base=self.conf['base'],
                                           search_filter=self.conf['filter'],
                                           search_scope=self.conf['scope'],
                                           since=since)

  def GetGroupMap(self, since=None):
    """Return the group map from this source.

    Args:
      since: Get data only changed since this timestamp (inclusive) or None
      for all data.

    Returns:
      instance of maps.GroupMap
    """
    return GroupUpdateGetter(self.conf).GetUpdates(source=self,
                                          search_base=self.conf['base'],
                                          search_filter=self.conf['filter'],
                                          search_scope=self.conf['scope'],
                                          since=since)

  def GetShadowMap(self, since=None):
    """Return the shadow map from this source.

    Args:
      since: Get data only changed since this timestamp (inclusive) or None
      for all data.

    Returns:
      instance of ShadowMap
    """
    return ShadowUpdateGetter(self.conf).GetUpdates(source=self,
                                           search_base=self.conf['base'],
                                           search_filter=self.conf['filter'],
                                           search_scope=self.conf['scope'],
                                           since=since)

  def GetNetgroupMap(self, since=None):
    """Return the netgroup map from this source.

    Args:
      since: Get data only changed since this timestamp (inclusive) or None
      for all data.

    Returns:
      instance of NetgroupMap
    """
    return NetgroupUpdateGetter(self.conf).GetUpdates(source=self,
                                             search_base=self.conf['base'],
                                             search_filter=self.conf['filter'],
                                             search_scope=self.conf['scope'],
                                             since=since)

  def GetAutomountMap(self, since=None, location=None):
    """Return an automount map from this source.

    Note that autmount maps are stored in multiple locations, thus we expect
    a caller to provide a location.  We also follow the automount spec and
    set our search scope to be 'one'.

    Args:
      since: Get data only changed since this timestamp (inclusive) or None
        for all data.
      location: Currently a string containing our search base, later we
        may support hostname and additional parameters.

    Returns:
      instance of AutomountMap
    """
    if location is None:
      self.log.error('A location is required to retrieve an automount map!')
      raise error.EmptyMap

    autofs_filter = '(objectclass=automount)'
    return AutomountUpdateGetter(self.conf).GetUpdates(source=self,
                                              search_base=location,
                                              search_filter=autofs_filter,
                                              search_scope='one',
                                              since=since)

  def GetAutomountMasterMap(self):
    """Return the autmount master map from this source.

    The automount master map is a special-case map which points to a dynamic
    list of additional maps. We currently support only the schema outlined at
    http://docs.sun.com/source/806-4251-10/mapping.htm commonly used by linux
    automount clients, namely ou=auto.master and objectclass=automount entries.

    Returns:
      an instance of maps.AutomountMap
    """
    search_base = self.conf['base']
    search_scope = ldap.SCOPE_SUBTREE

    # auto.master is stored under ou=auto.master with objectclass=automountMap
    search_filter = '(&(objectclass=automountMap)(ou=auto.master))'
    self.log.debug('retrieving automount master map.')
    self.Search(search_base=search_base, search_filter=search_filter,
                search_scope=search_scope, attrs=['dn'])

    search_base = None
    for obj in self:
      # the dn of the matched object is our search base
      search_base = obj['dn']

    if search_base is None:
      self.log.critical('Could not find automount master map!')
      raise error.EmptyMap

    self.log.debug('found ou=auto.master at %s', search_base)
    master_map = self.GetAutomountMap(location=search_base)

    # fix our location attribute to contain the data we
    # expect returned to us later, namely the new search base(s)
    for map_entry in master_map:
      # we currently ignore hostname and just look for the dn which will
      # be the search_base for this map.  third field, colon delimited.
      map_entry.location = map_entry.location.split(':')[2]
      # and strip the space seperated options
      map_entry.location = map_entry.location.split(' ')[0]
      self.log.debug('master map has: %s' % map_entry.location)

    return master_map

  def Verify(self, since=None):
    """Verify that this source is contactable and can be queried for data."""
    if since is None:
      # one minute in the future
      since = int(time.time() + 60)
    results = self.GetPasswdMap(since=since)
    return len(results)


class UpdateGetter(object):
  """Base class that gets updates from LDAP."""
  def __init__(self, conf):
    super(UpdateGetter, self).__init__()
    self.conf = conf

  def FromLdapToTimestamp(self, ldap_ts_string):
    """Transforms a LDAP timestamp into the nss_cache internal timestamp.

    Args:
      ldap_ts_string: An LDAP timestamp string in the format %Y%m%d%H%M%SZ

    Returns:
      number of seconds since epoch.
    """
    try:
      t = time.strptime(ldap_ts_string, '%Y%m%d%H%M%SZ')
    except ValueError:
      # Some systems add a decimal component; try to filter it:
      m = re.match('([0-9]*)(\.[0-9]*)?(Z)', ldap_ts_string)
      if m:
        ldap_ts_string = m.group(1) + m.group(3)
      t = time.strptime(ldap_ts_string, '%Y%m%d%H%M%SZ')
    return int(calendar.timegm(t))

  def FromTimestampToLdap(self, ts):
    """Transforms nss_cache internal timestamp into a LDAP timestamp.

    Args:
      ts: number of seconds since epoch

    Returns:
      LDAP format timestamp string.
    """
    t = time.strftime('%Y%m%d%H%M%SZ', time.gmtime(ts))
    return t

  def GetUpdates(self, source, search_base, search_filter,
                 search_scope, since):
    """Get updates from a source.

    Args:
      source: a data source
      search_base: the LDAP base of the tree
      search_filter: the LDAP object filter
      search_scope:  the LDAP scope filter, one of 'base', 'one', or 'sub'.
      since: a timestamp to get updates since (None for 'get everything')

    Returns:
      a tuple containing the map of updates and a maximum timestamp

    Raises:
      error.ConfigurationError: scope is invalid
      ValueError: an object in the source map is malformed
    """
    self.attrs.append('modifyTimestamp')

    if since is not None:
      ts = self.FromTimestampToLdap(since)
      # since openldap disallows modifyTimestamp "greater than" we have to
      # increment by one second.
      ts = int(ts.rstrip('Z')) + 1
      ts = '%sZ' % ts
      search_filter = ('(&%s(modifyTimestamp>=%s))' % (search_filter, ts))

    if search_scope == 'base':
      search_scope = ldap.SCOPE_BASE
    elif search_scope in ['one', 'onelevel']:
      search_scope = ldap.SCOPE_ONELEVEL
    elif search_scope in ['sub', 'subtree']:
      search_scope = ldap.SCOPE_SUBTREE
    else:
      raise error.ConfigurationError('Invalid scope: %s' % search_scope)

    source.Search(search_base=search_base, search_filter=search_filter,
                  search_scope=search_scope, attrs=self.attrs)

    # Don't initialize with since, because we really want to get the
    # latest timestamp read, and if somehow a larger 'since' slips through
    # the checks in main(), we'd better catch it here.
    max_ts = None

    data_map = self.CreateMap()

    for obj in source:
      for field in self.essential_fields:
        if field not in obj:
          logging.warn('invalid object passed: %r not in %r', field, obj)
          raise ValueError('Invalid object passed: %r', obj)

      try:
        obj_ts = self.FromLdapToTimestamp(obj['modifyTimestamp'][0])
      except KeyError:
        obj_ts = self.FromLdapToTimestamp(obj['modifyTimeStamp'][0])

      if max_ts is None or obj_ts > max_ts:
        max_ts = obj_ts

      try:
        if not data_map.Add(self.Transform(obj)):
          logging.info('could not add obj: %r', obj)
      except AttributeError, e:
        logging.warning('error %r, discarding malformed obj: %r',
                        str(e), obj)
    # Perform some post processing on the data_map.
    self.PostProcess(data_map, source, search_filter, search_scope)

    data_map.SetModifyTimestamp(max_ts)

    return data_map

  def PostProcess(self, data_map, source, search_filter, search_scope):
    """Perform some post-process of the data."""
    pass


class PasswdUpdateGetter(UpdateGetter):
  """Get passwd updates."""

  def __init__(self, conf):
    super(PasswdUpdateGetter, self).__init__(conf)
    self.attrs = ['uid', 'uidNumber', 'gidNumber', 'gecos', 'cn',
                  'homeDirectory', 'loginShell', 'fullName']
    if self.conf.get('ad'):
      self.attrs.extend(('sAMAccountName', 'objectSid', 'displayName', 'unixHomeDirectory'))
      self.essential_fields = ['sAMAccountName', 'objectSid']
    else:
      if 'uidattr' in self.conf:
        self.attrs.append(self.conf['uidattr'])
      if 'uidregex' in self.conf:
        self.uidregex = re.compile(self.conf['uidregex'])
      self.essential_fields = ['uid', 'uidNumber', 'gidNumber']
    self.log = logging.getLogger(self.__class__.__name__)

  def CreateMap(self):
    """Returns a new PasswdMap instance to have PasswdMapEntries added to it."""
    return passwd.PasswdMap()

  def Transform(self, obj):
    """Transforms a LDAP posixAccount data structure into a PasswdMapEntry."""

    pw = passwd.PasswdMapEntry()

    if 'displayName' in obj:
      pw.gecos = obj['displayName'][0]
    elif 'gecos' in obj:
      pw.gecos = obj['gecos'][0]
    elif 'cn' in obj:
      pw.gecos = obj['cn'][0]
    elif 'fullName' in obj:
      pw.gecos = obj['fullName'][0]
    else:
      raise ValueError('Neither gecos nor cn found')

    pw.gecos = pw.gecos.replace('\n','')

    if 'sAMAccountName' in obj:
      pw.name = obj['sAMAccountName'][0]
    elif 'uidattr' in self.conf:
      pw.name = obj[self.conf['uidattr']][0]
    else:
      pw.name = obj['uid'][0]

    if hasattr(self, 'uidregex'):
      pw.name = ''.join([x for x in self.uidregex.findall(pw.name)])

    if 'override_shell' in self.conf:
      pw.shell = self.conf['override_shell']
    elif 'loginShell' in obj:
      pw.shell = obj['loginShell'][0]
    else:
      pw.shell = ''

    if self.conf.get('ad'):
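      # In AD mode, use the RID, the last dash-separated component of the SID, as both uid and gid.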
      pw.uid = int(sidToStr(obj['objectSid'][0]).split('-')[-1])
      pw.gid = int(sidToStr(obj['objectSid'][0]).split('-')[-1])
    else:
      pw.uid = int(obj['uidNumber'][0])
      pw.gid = int(obj['gidNumber'][0])

    if self.conf.get('homedir'):
      pw.dir = '/home/%s' % pw.name
    elif 'homeDirectory' in obj:
      pw.dir = obj['homeDirectory'][0]
    elif 'unixHomeDirectory' in obj:
      pw.dir = obj['unixHomeDirectory'][0]
    else:
      pw.dir = ''

    # hack
    pw.passwd = 'x'

    return pw


class GroupUpdateGetter(UpdateGetter):
  """Get group updates."""

  def __init__(self, conf):
    super(GroupUpdateGetter, self).__init__(conf)
    # TODO: Merge multiple rcf2307bis[_alt] options into a single option.
    if self.conf.get('ad'):
      self.attrs = ['sAMAccountName', 'gidNumber', 'member', 'objectSid']
      self.essential_fields = ['sAMAccountName']
    else:
      if conf.get('rfc2307bis'):
        self.attrs = ['cn', 'gidNumber', 'member']
      elif conf.get('rfc2307bis_alt'):
        self.attrs = ['cn', 'gidNumber', 'uniqueMember']
      else:
        self.attrs = ['cn', 'gidNumber', 'memberUid']
      if 'groupregex' in conf:
        self.groupregex = re.compile(self.conf['groupregex'])
      self.essential_fields = ['cn']
    self.log = logging.getLogger(self.__class__.__name__)

  def CreateMap(self):
    """Return a GroupMap instance."""
    return group.GroupMap()

  def Transform(self, obj):
    """Transforms a LDAP posixGroup object into a group(5) entry."""

    gr = group.GroupMapEntry()

    if 'sAMAccountName' in obj:
      gr.name = obj['sAMAccountName'][0]
    else:
      gr.name = obj['cn'][0]
    # group passwords are deferred to gshadow
    gr.passwd = '*'
    base = self.conf.get("base")
    members = []
    group_members = []
    if 'memberUid' in obj:
      if hasattr(self, 'groupregex'):
        members.extend(''.join([x for x in self.groupregex.findall(obj['memberUid'])]))
      else:
        members.extend(obj['memberUid'])
    elif 'member' in obj:
      for member_dn in obj['member']:
        member_uid = member_dn.split(',')[0].split('=')[1]
        # Note that there is not currently a way to consistently distinguish
        # a group from a person
        group_members.append(member_uid)
        if hasattr(self, 'groupregex'):
          members.append(''.join([x for x in self.groupregex.findall(member_uid)]))
        else:
          members.append(member_uid)
    elif 'uniqueMember' in obj:
      """ This contains a DN and is processed in PostProcess in GetUpdates."""
      members.extend(obj['uniqueMember'])
    members.sort()

    if self.conf.get('ad'):
      gr.gid = int(sidToStr(obj['objectSid'][0]).split('-')[-1])
    else:
      gr.gid = int(obj['gidNumber'][0])

    gr.members = members
    gr.groupmembers = group_members

    return gr

  def PostProcess(self, data_map, source, search_filter, search_scope):
    """Perform some post-process of the data."""
    if 'uniqueMember' in self.attrs:
      for gr in data_map:
        uidmembers=[]
        for member in gr.members:
          source.Search(search_base=member,
                        search_filter='(objectClass=*)',
                        search_scope=ldap.SCOPE_BASE,
                        attrs=['uid'])
          for obj in source:
            if 'uid' in obj:
              uidmembers.extend(obj['uid'])
        del gr.members[:]
        gr.members.extend(uidmembers)

    _group_map = {i.name: i for i in data_map}
    
    def _expand_members(obj, visited=None):
      """Expand all subgroups recursively"""
      for member_name in obj.groupmembers:
        if member_name in _group_map and member_name not in visited:
          gmember = _group_map[member_name]
          for member in gmember.members:
            if member not in obj.members:
              obj.members.append(member)
          for submember_name in gmember.groupmembers:
            if submember_name in _group_map and submember_name not in visited:
              visited.append(submember_name)
              _expand_members(_group_map[submember_name], visited)
    
    if self.conf.get("nested_groups"):
      self.log.info("Expanding nested groups")
      for gr in data_map:
        _expand_members(gr, [gr.name])


class ShadowUpdateGetter(UpdateGetter):
  """Get Shadow updates from the LDAP Source."""

  def __init__(self, conf):
    super(ShadowUpdateGetter, self).__init__(conf)
    self.attrs = ['uid', 'shadowLastChange', 'shadowMin',
                  'shadowMax', 'shadowWarning', 'shadowInactive',
                  'shadowExpire', 'shadowFlag', 'userPassword']
    if self.conf.get('ad'):
      self.attrs.extend(('sAMAccountName', 'pwdLastSet'))
      self.essential_fields = ['sAMAccountName']
    else:
      if 'uidattr' in self.conf:
        self.attrs.append(self.conf['uidattr'])
      if 'uidregex' in self.conf:
        self.uidregex = re.compile(self.conf['uidregex'])
      self.essential_fields = ['uid']
    self.log = logging.getLogger(self.__class__.__name__)

  def CreateMap(self):
    """Return a ShadowMap instance."""
    return shadow.ShadowMap()

  def Transform(self, obj):
    """Transforms an LDAP shadowAccont object into a shadow(5) entry."""
    shadow_ent = shadow.ShadowMapEntry()
    if 'sAMAccountName' in obj:
      shadow_ent.name = obj['sAMAccountName'][0]
    elif 'uidattr' in self.conf:
      shadow_ent.name = obj[uidattr][0]
    else:
      shadow_ent.name = obj['uid'][0]

    if hasattr(self, 'uidregex'):
      shadow_ent.name = ''.join([x for x in self.uidregex.findall(shadow_end.name)])

    # TODO(jaq): does nss_ldap check the contents of the userPassword
    # attribute?
    shadow_ent.passwd = '*'
    if 'pwdLastSet' in obj:
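      # pwdLastSet is a Windows FILETIME (100 ns intervals since 1601-01-01 UTC);
      # convert it to days since the Unix epoch for the shadow lstchg field.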
      shadow_ent.lstchg = int((int(obj['pwdLastSet'][0])/10000000 - 11644473600) / 86400 )
    elif 'shadowLastChange' in obj:
      shadow_ent.lstchg = int(obj['shadowLastChange'][0])
    if 'shadowMin' in obj:
      shadow_ent.min = int(obj['shadowMin'][0])
    if 'shadowMax' in obj:
      shadow_ent.max = int(obj['shadowMax'][0])
    if 'shadowWarning' in obj:
      shadow_ent.warn = int(obj['shadowWarning'][0])
    if 'shadowInactive' in obj:
      shadow_ent.inact = int(obj['shadowInactive'][0])
    if 'shadowExpire' in obj:
      shadow_ent.expire = int(obj['shadowExpire'][0])
    if 'shadowFlag' in obj:
      shadow_ent.flag = int(obj['shadowFlag'][0])
    if shadow_ent.flag is None:
      shadow_ent.flag = 0
    if 'userPassword' in obj:
      passwd = obj['userPassword'][0]
      if passwd[:7].lower() == '{crypt}':
        shadow_ent.passwd = passwd[7:]
      else:
        logging.info('Ignored password that was not in crypt format')
    return shadow_ent


class NetgroupUpdateGetter(UpdateGetter):
  """Get netgroup updates."""

  def __init__(self, conf):
    super(NetgroupUpdateGetter, self).__init__(conf)
    self.attrs = ['cn', 'memberNisNetgroup', 'nisNetgroupTriple']
    self.essential_fields = ['cn']

  def CreateMap(self):
    """Return a NetgroupMap instance."""
    return netgroup.NetgroupMap()

  def Transform(self, obj):
    """Transforms an LDAP nisNetgroup object into a netgroup(5) entry."""
    netgroup_ent = netgroup.NetgroupMapEntry()
    netgroup_ent.name = obj['cn'][0]

    entries = set()
    if 'memberNisNetgroup' in obj:
      entries.update(obj['memberNisNetgroup'])
    if 'nisNetgroupTriple' in obj:
      entries.update(obj['nisNetgroupTriple'])

    # final data is stored as a string in the object
    netgroup_ent.entries = ' '.join(sorted(entries))

    return netgroup_ent


class AutomountUpdateGetter(UpdateGetter):
  """Get specific automount maps."""

  def __init__(self, conf):
    super(AutomountUpdateGetter, self).__init__(conf)
    self.attrs = ['cn', 'automountInformation']
    self.essential_fields = ['cn']

  def CreateMap(self):
    """Return a AutomountMap instance."""
    return automount.AutomountMap()

  def Transform(self, obj):
    """Transforms an LDAP automount object into an autofs(5) entry."""
    automount_ent = automount.AutomountMapEntry()
    automount_ent.key = obj['cn'][0]

    automount_information = obj['automountInformation'][0]

    if automount_information.startswith('ldap'):
      # we are creating an autmount master map, pointing to other maps in LDAP
      automount_ent.location = automount_information
    else:
      # we are creating normal automount maps, with filesystems and options
      automount_ent.options = automount_information.split(' ')[0]
      automount_ent.location = automount_information.split(' ')[1]

    return automount_ent


class SshkeyUpdateGetter(UpdateGetter):
  """Fetches SSH keys."""

  def __init__(self, conf):
    super(SshkeyUpdateGetter, self).__init__(conf)
    self.attrs = ['uid', 'sshPublicKey']
    if 'uidattr' in self.conf:
      self.attrs.append(self.conf['uidattr'])
    if 'uidregex' in self.conf:
       self.uidregex = re.compile(self.conf['uidregex'])
    self.essential_fields = ['uid']

  def CreateMap(self):
    """Returns a new SshkeyMap instance to have SshkeyMapEntries added to it."""
    return sshkey.SshkeyMap()

  def Transform(self, obj):
    """Transforms a LDAP posixAccount data structure into a SshkeyMapEntry."""

    skey = sshkey.SshkeyMapEntry()

    if 'uidattr' in self.conf:
      skey.name = obj[uidattr][0]
    else:
      skey.name = obj['uid'][0]

    if hasattr(self, 'uidregex'):
      skey.name = ''.join([x for x in self.uidregex.findall(pw.name)])

    if 'sshPublicKey' in obj:
      skey.sshkey = obj['sshPublicKey']
    else:
      skey.sshkey = ''

    return skey

nsscache.conf

# Example /etc/nsscache.conf - configuration for nsscache
#
# nsscache loads a config file from the environment variable NSSCACHE_CONFIG
#
# By default this is /etc/nsscache.conf
#
# Commented values are overrideable defaults, uncommented values
# require you to set them.

[DEFAULT]

# Default NSS data source module name
source = ldap
ldap_ad = 1

# Default NSS data cache module name; 'files' is compatible with the
# libnss-cache NSS module.  'nssdb' is deprecated, and should not be used for
# new installations.
cache = files

# NSS maps to be cached
maps = passwd, group, shadow, netgroup, automount

# Directory to store our update/modify timestamps
timestamp_dir = /var/lib/nsscache

# Lockfile to use for update/repair operations
#lockfile = /var/run/nsscache

# Defaults for specific modules; prefaced with "modulename_"

##
# ldap module defaults.
#

# LDAP URI to query for NSS data
ldap_uri = ldaps://ldap

# Base for LDAP searches
ldap_base = ou=people,dc=example,dc=com

# Default LDAP search filter for maps
ldap_filter = (objectclass=posixAccount)

# Default LDAP search scope
#ldap_scope = one

# Default LDAP BIND DN, empty string is an anonymous bind
#ldap_bind_dn = ""

# Default LDAP password, empty DN and empty password is used for
# anonymous binds
#ldap_bind_password = ""

# Default timelimit for LDAP queries, in seconds.
# The query will block for this number of seconds, or indefinitely if negative.
#ldap_timelimit = -1

# Default number of retry attempts
#ldap_retry_max = 3

# Default delay in between retry attempts
#ldap_retry_delay = 5

# Default setting for requiring tls certificates, one of:
# never, hard, demand, allow, try
#ldap_tls_require_cert = 'demand'

# Default directory for trusted CAs
#ldap_tls_cacertdir = '/usr/share/ssl'

# Default filename for trusted CAs
#ldap_tls_cacertfile = '/usr/share/ssl/cert.pem'

# If you wish to use mTLS, set these to the paths of the TLS certificate and key.
#ldap_tls_certfile = ''
#ldap_tls_keyfile = ''

# Should we issue STARTTLS?
#ldap_tls_starttls = 1

# Default uid-like attribute
#ldap_uidattr = 'uid'

# A Python regex to extract uid components from the uid-like attribute.
# All matching groups are concatenated without spaces.
# For example:  '(.*)@example.com' would return a uid to the left of
# the @example.com domain.  Default is no regex.
#ldap_uidregex = ''

# A Python regex to extract group member components from the member or
# memberOf attributes.  All matching groups are concatenated without spaces.
# For example:  '(.*)@example.com' would return a member without the
# the @example.com domain.  Default is no regex.
#ldap_groupregex = ''

# Replace all users' shells with the specified one.
#ldap_override_shell='/bin/bash'

# Create home directory for all users.
ldap_homedir = 1

# Default uses rfc2307 schema. If rfc2307bis (groups stored as a list of DNs
# in 'member' attr), set this to 1
#ldap_rfc2307bis = 0

# Default uses rfc2307 schema. If rfc2307bis_alt (groups stored as a list of DNs
# in 'uniqueMember' attr), set this to 1
#ldap_rfc2307bis_alt = 0

# Debug logging
#ldap_debug = 3

# SASL
# Use SASL for authentication
#ldap_use_sasl = False

# SASL mechanism. Only 'gssapi' is supported now
#ldap_sasl_mech = 'gssapi'
#ldap_sasl_authzid = ''

##
# nssdb module defaults

# Directory to store nssdb databases.  Current libnss_db code requires
# the path below
nssdb_dir = /var/lib/misc

# Path to `makedb', supplied by the nss_db module
#nssdb_makedb = /usr/bin/makedb

##
# files module defaults

# Directory to store the plain text files
files_dir = /etc

# Suffix used on the files module database files
files_cache_filename_suffix = cache

###
# Optional per-map sections, if present they will override the above
# defaults.  The examples below show you some common values to override
#
# [passwd]
#
# ldap_base = ou=people,dc=example,dc=com

[group]

ldap_base = ou=group,dc=example,dc=com
ldap_filter = (objectclass=posixGroup)
# If ldap_nested_groups is enabled, any groups that are members of other groups
# will be expanded recursively.
# Note: This will only work with full updates. Incremental updates will not
# propagate changes in child groups to their parents.
# ldap_nested_groups = 1

[shadow]

ldap_filter = (objectclass=shadowAccount)

[netgroup]

ldap_base = ou=netgroup,dc=example,dc=com
ldap_filter = (objectclass=nisNetgroup)
files_cache_filename_suffix =

[automount]

ldap_base = ou=automounts,dc=example,dc=com
files_cache_filename_suffix =
cache = files

# Files module has an option that lets you leave the local master map alone
# (e.g. /etc/auto.master) so that maps can be enabled/disabled locally.
#
# This also causes nsscache to limit automount updates to only the maps which
# are defined both in the local master map (/etc/auto.master) and in the source
# master map -- versus pulling local copies of all maps defined in the source,
# regardless.  Effectively this makes for local control of which automount maps
# are used and updated.
#
# files_local_automount_master = no

##
## SSH Keys stored in LDAP
##
# For SSH keys stored in LDAP under the sshPublicKey attribute.
# sshd_config should contain a config option for AuthorizedKeysCommand that
# runs a script like:
#
# awk -F: -v name="$1" '$0 ~ name { print $2 }' /etc/sshkey.cache | \
#   tr -d "[']" | \
#   sed -e 's/, /\n/g'
#
# A featureful example is in examples/authorized-keys-command.py

#[sshkey]
#
#ldap_base = ou=people,dc=yourdomain,dc=com

[suffix]
prefix = ""
suffix = ""

It still needs improvements. The nsscache.conf needs to be modified for AD support as well.

I'll be happy for any suggestions.

EDIT:
Sorry for the many corrections, I just keep finding better ways to map the AD attributes.

3c2b2ff5 commented on August 19, 2024

Please let me know if I should rearrange the if statements and try/except blocks; I have the feeling they could be placed/used much better.
I am also considering separating the AD entries from the LDAP entries, and maybe putting all the AD handling in one if statement for passwd as well as for group and shadow:

  if self.conf.get('ad'):
    pw.name = obj['sAMAccountName'][0]
    ...............
  else:
    pw.name = obj['uid'][0]
    ...............

and so on. What do you think?

3c2b2ff5 commented on August 19, 2024

I think this should be changed to the following if we want to run nsscache update --full from any directory in the filesystem:

parser.read('/etc/nsscache.conf')

unless we use nsscache -c /etc/nsscache.conf update --full. Otherwise there is an error:

Traceback (most recent call last):
  File "/usr/sbin/nsscache", line 28, in <module>
    from nss_cache import app
  File "/usr/lib/python2.7/dist-packages/nss_cache/app.py", line 34, in <module>
    from nss_cache import command
  File "/usr/lib/python2.7/dist-packages/nss_cache/command.py", line 35, in <module>
    from nss_cache.caches import cache_factory
  File "/usr/lib/python2.7/dist-packages/nss_cache/caches/cache_factory.py", line 28, in <module>
    from nss_cache.caches import files
  File "/usr/lib/python2.7/dist-packages/nss_cache/caches/files.py", line 57, in <module>
    prefix = parser.get('suffix', 'prefix')
  File "/usr/lib/python2.7/ConfigParser.py", line 607, in get
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'suffix'

3c2b2ff5 commented on August 19, 2024

There is also the possibility of adding a new section to nsscache.conf with the AD attributes to be mapped. That would make it more flexible: admins would be able to map NSS attributes to any AD object attribute of their choice.
Furthermore, providing an offset option to map the uid/gid/RID to a higher number, to avoid conflicts with already existing local users, would be nice to have. A sketch of how such options could be consumed follows the example below.

# Default offset option to map uid and gid to a higher number.
#ldap_offset = 10000

# Default Active Directory attribute mapping options.
# Only set these to override the default attributes
#ldap_ad_user_name = sAMAccountName
#ldap_ad_user_uid = RID
#ldap_ad_user_gid = RID
#ldap_ad_group_name = sAMAccountName
#ldap_ad_group_uid = RID
#ldap_ad_group_gid = RID
#ldap_ad_shadow_name = sAMAccountName
#ldap_ad_shadow_last_change = pwdLastSet
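
A rough sketch of how such options might be consumed inside PasswdUpdateGetter.Transform() (option names as proposed above, with the ldap_ prefix stripped like the other ldap options; treating 'RID' as a special value and the defaults shown are only illustrative):

    # Sketch only: read the proposed mapping options with illustrative defaults.
    name_attr = self.conf.get('ad_user_name', 'sAMAccountName')
    uid_attr = self.conf.get('ad_user_uid', 'RID')
    offset = int(self.conf.get('offset', 0))

    pw.name = obj[name_attr][0]
    if uid_attr == 'RID':
      # 'RID' here means: derive the id from the last component of the objectSid.
      pw.uid = int(sidToStr(obj['objectSid'][0]).split('-')[-1]) + offset
    else:
      pw.uid = int(obj[uid_attr][0]) + offset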

jaqx0r commented on August 19, 2024

Hey, you've created several pull requests and keep closing them -- I am happy you are working on it, but just wanted to know what your plan is. If you like, I can merge small changes as there's less to review?

3c2b2ff5 commented on August 19, 2024

Hi, I am sorry for the mess!
There were multiple issues: I found some typos and some logical and implementation errors, which I detected after adding the test cases to ldapsource_test.py. I had also used a function from someone else, which was not my own code, so I had to rewrite it to conform with Google's CLA.

Please delete these useless pull requests.

I have the final changes here including the tests in ldapsource_test.py.

Would you please review before I open a new pull request?

3c2b2ff5 commented on August 19, 2024

I found a bug at two lines, Line 795 and Line 911; they should be:

if 'uidattr' in self.conf:
  shadow_ent.name = obj[self.conf['uidattr']][0]
......
......
......

and:

if 'uidattr' in self.conf:
  skey.name = obj[self.conf['uidattr']][0]
......
......
......

EDIT: I fixed it in my AD_support branch

3c2b2ff5 commented on August 19, 2024

I just tested the ldap_nested_groups option for Active Directory nested groups. It doesn't work like it does with OpenLDAP/slapd; a nested-group search in Active Directory should be done within the search filter.
The following filter searches recursively for all users who are either direct members of the Admin group or members of groups that are members of the Admin group, and whose accounts are not disabled. So we should be safe here:

 (&(objectCategory=Person)(memberOf:1.2.840.113556.1.4.1941:=CN=Admin,CN=Groups,DC=example,DC=com)(!(UserAccountControl:1.2.840.113556.1.4.803:=2)))
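
For example, it could go straight into the per-map sections of nsscache.conf (base DN and group name here are placeholders):

[passwd]
ldap_base = DC=example,DC=com
ldap_filter = (&(objectCategory=Person)(memberOf:1.2.840.113556.1.4.1941:=CN=Admin,CN=Groups,DC=example,DC=com)(!(UserAccountControl:1.2.840.113556.1.4.803:=2)))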

3c2b2ff5 commented on August 19, 2024

According to Line 673, a shadow.cache should always exist, regardless of the authentication method (password, PAM, or SSH-key based authentication).
I am thinking of adding an option to nsscache.conf to either create a shadow map or not, depending on the authentication method. Both PAM and SSH-key based authentication do not need a shadow map.
Maybe something like this:

# By default a shadow map will be created. This is essential
# for password authentication. Enable (set to 1) if using
# different authentication methods
#ldap_no_shadow = (0/1)

3c2b2ff5 commented on August 19, 2024

I started the tests today in our company AD. As expected, a few things don't work. For instance, referrals should be disabled in the LDAP options, and only objects of type dict should be passed here.
Furthermore, have you considered migrating to python-ldap3?

3c2b2ff5 commented on August 19, 2024

I pushed the changes to the AD_support branch; unfortunately I cannot test against either OpenLDAP or slapd.
Please give me a heads up if you think the AD_support branch is ready for a pull request.

3c2b2ff5 commented on August 19, 2024

Hi,

so far everything works well, at my side at least, except for one annoying error when I run nsscache -d verify. I believe it comes from here; nevertheless passwd, group and shadow are being cached correctly. The following fragment might be responsible:

    if self.conf.get('ad'):
      pw.gecos = obj['displayName'][0]

I see the following when I do nsscache -d verify:

  File "/usr/lib/python2.7/dist-packages/nss_cache/sources/ldapsource.py", line 691, in Transform
    pw.gecos = obj['displayName'][0]
KeyError: 'displayName'

And when I just print pw.gecos I get the correct result, and it is mapped in passwd, shadow and group. Changing it as follows:

    if self.conf.get('ad'):
      if 'displayName' in obj:
        pw.gecos = obj['displayName'][0]
        print(obj)

makes the KeyError disappear, though there are still errors:

INFO:LdapSource:Unknown result type 115, ignoring.
INFO:LdapSource:Unknown result type 115, ignoring.
INFO:LdapSource:Unknown result type 115, ignoring.
DEBUG:LdapSource:Returning due to RES_SEARCH_RESULT
..............................
..............................
..............................
DEBUG:LdapSource:Returning due to RES_SEARCH_RESULT
INFO:Verify:Verifying data caches.
INFO:Verify:Verifying map: passwd.
DEBUG:Verify:built NSS map of 29 entries
DEBUG:FilesPasswdMapHandler:Opening '/etc/passwd.cache' for reading existing cache
DEBUG:Verify:built cache map of 1 entries
INFO:Verify:Verifying map: group.
DEBUG:Verify:built NSS map of 58 entries
DEBUG:FilesGroupMapHandler:Opening '/etc/group.cache' for reading existing cache
DEBUG:Verify:built cache map of 2 entries
INFO:Verify:Verifying map: shadow.
DEBUG:Verify:built NSS map of 29 entries
DEBUG:FilesShadowMapHandler:Opening '/etc/shadow.cache' for reading existing cache
DEBUG:Verify:built cache map of 1 entries
INFO:Verify:Verification result: 0 warnings, 4 errors
INFO:Verify:Verification failed!
INFO:NSSCacheApp:Exiting nsscache with value 4 runtime 0.038139

Any idea what this could be?

3c2b2ff5 commented on August 19, 2024

OK. I got it. It comes from the group map, when I modify it this way:

    if self.conf.get('ad'):
      if 'displayName' not in obj:
        print(obj)

I get the group I am mapping in the ldap filter, but it doesn't make sense to me:

[passwd]
ldap_base = DC= DC=example,DC=com
#ldap_base = CN=Users,DC=example,DC=com
ldap_filter = (&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com))

[group]
ldap_base = DC=example,DC=com
#ldap_base = CN=Users,DC=example,DC=com
ldap_filter = (|(&(objectClass=group)(CN=Admins))(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com)))

[shadow]
ldap_base =  DC=example,DC=com
#ldap_base = CN=Users,DC=example,DC=com
ldap_filter = (&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com))

It looks up the group 4 times and doesn't find the attribute displayName.
Could this be fixed?

3c2b2ff5 commented on August 19, 2024

When printing the objects at line 279 with

print(result_type, data, _, serverctrls)

I get the following:

(100, [('CN=user,CN=Users,DC=example,DC=com', {'pwdLastSet': ['132144982087435710'], 'objectSid': ['\x01\x05\x00\x00\x00\x00\x00\x05\x15\x00\x00\x00\xa0e\xcf~xK\x9b_\xe7|\x87p\t\x1c\x01\x00'], 'displayName': ['User Test'], 'sAMAccountName': ['user'], 'modifyTimeStamp': ['20191020171614.0Z']})], 2, [])
(101, [], 2, [<ldap.controls.libldap.SimplePagedResultsControl instance at 0x7f94b1009fc8>])
(100, [('CN=Admins,CN=Users,DC=example,DC=com', {'member': ['CN=user,CN=Users,DC=example,DC=com'], 'objectSid': ['\x01\x05\x00\x00\x00\x00\x00\x05\x15\x00\x00\x00e\x98\xb3\x96J\x14=\xbe\xe2}e\x9eX\x04\x00\x00'], 'sAMAccountName': ['Admins'], 'modifyTimeStamp': ['20191022175622.0Z']})], 2, [])
(100, [('CN=user,CN=Users,DC=example,DC=com', {'objectSid': ['\x01\x05\x00\x00\x00\x00\x00\x05\x15\x00\x00\x00\xa0e\xcf~xK\x9b_\xe7|\x87p\t\x1c\x01\x00'], 'sAMAccountName': ['user'], 'modifyTimeStamp': ['20191020171614.0Z']})], 2, [])
(115, [(None, ['ldap://example.com/CN=Configuration,DC=example,DC=com'])], 2, [])
(115, [(None, ['ldap://example.com/DC=DomainDnsZones,DC=example,DC=com'])], 2, [])
(115, [(None, ['ldap://example.com/DC=ForestDnsZones,DC=example,DC=com'])], 2, [])
(101, [], 2, [<ldap.controls.libldap.SimplePagedResultsControl instance at 0x7f94b1009f38>])
(100, [('CN=user,CN=Users,DC=example,DC=com', {'pwdLastSet': ['132144982087435710'], 'sAMAccountName': ['user'], 'modifyTimeStamp': ['20191020171614.0Z']})], 2, [])
(101, [], 2, [<ldap.controls.libldap.SimplePagedResultsControl instance at 0x7f94b1001cf8>])

The type 115 results are referral responses (RES_SEARCH_REFERENCE), and the type 101 results just mark the end of a search page (RES_SEARCH_RESULT). I think the referrals need to be filtered out, even though everything else succeeds.
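
A minimal sketch of how those referrals could be skipped while draining results from python-ldap; this is an illustration, not the actual nsscache patch, and the iter_search_entries name is made up for the example:

    import ldap

    def iter_search_entries(conn, msgid):
      """Yield only real search entries, dropping referrals."""
      # msgid comes from a prior conn.search_ext() call.
      while True:
        # result3() returns (result_type, data, msgid, serverctrls).
        result_type, data, _, serverctrls = conn.result3(msgid, all=0)
        if result_type == ldap.RES_SEARCH_REFERENCE:  # type 115: referral
          continue  # e.g. the Configuration/DnsZones naming contexts
        if result_type == ldap.RES_SEARCH_RESULT:     # type 101: page done
          return
        if result_type == ldap.RES_SEARCH_ENTRY:      # type 100: a real entry
          for dn, attrs in data:
            yield dn, attrs

Another common option against AD is conn.set_option(ldap.OPT_REFERRALS, 0), which stops the client from returning referrals in the first place.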

from nsscache.

3c2b2ff5 avatar 3c2b2ff5 commented on August 19, 2024

I fixed the referrals issue in my last commit, though I don't know if it is the correct way. I still need to fix the following error. The generated files look correct and I can log in to the servers without any problem; I just have no idea what the cause could be. Do you have any ideas?

INFO:Verify:Verifying data caches.
INFO:Verify:Verifying map: passwd.
DEBUG:Verify:built NSS map of 29 entries
DEBUG:FilesPasswdMapHandler:Opening '/etc/passwd.cache' for reading existing cache
DEBUG:Verify:built cache map of 1 entries
INFO:Verify:Verifying map: group.
DEBUG:Verify:built NSS map of 58 entries
DEBUG:FilesGroupMapHandler:Opening '/etc/group.cache' for reading existing cache
DEBUG:Verify:built cache map of 2 entries
INFO:Verify:Verifying map: shadow.
DEBUG:Verify:built NSS map of 29 entries
DEBUG:FilesShadowMapHandler:Opening '/etc/shadow.cache' for reading existing cache
DEBUG:Verify:built cache map of 1 entries
INFO:Verify:Verification result: 0 warnings, 4 errors
INFO:Verify:Verification failed!
INFO:NSSCacheApp:Exiting nsscache with value 4 runtime 0.039021

from nsscache.

3c2b2ff5 avatar 3c2b2ff5 commented on August 19, 2024

After some debugging, it seems the VerifySource() function returns the 4 errors, one for each map entry.
The errors are caused by this call.
Printing source.Verify() here shows clearly how they get incremented: 1 for passwd, 1 for shadow and 2 for group, since the group map has two entries.
Printing retval at the end of the function gives the total.

INFO:Verify:Verifying program and system configuration.
INFO:Verify:Verifying data sources.
DEBUG:LdapSource:opening ldap connection and binding to ldaps://ad.example.com
DEBUG:LdapSource:searching for base='CN=Users,DC=example,DC=com', filter='(&(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com))(modifyTimestamp>=20191025205210Z))', scope=2, attrs=['sAMAccountName', 'objectSid', 'displayName', 'unixHomeDirectory', 'pwdLastSet', 'loginShell', 'modifyTimestamp']
DEBUG:LdapSource:Returning due to RES_SEARCH_RESULT
1
DEBUG:LdapSource:searching for base='CN=Users,DC=example,DC=com', filter='(&(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com))(modifyTimestamp>=20191025205210Z))', scope=2, attrs=['sAMAccountName', 'objectSid', 'displayName', 'unixHomeDirectory', 'pwdLastSet', 'loginShell', 'modifyTimestamp']
DEBUG:LdapSource:Returning due to RES_SEARCH_RESULT
DEBUG:LdapSource:opening ldap connection and binding to ldaps://ad.example.com
DEBUG:LdapSource:searching for base='CN=Users,DC=example,DC=com', filter='(&(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com))(modifyTimestamp>=20191025205210Z))', scope=2, attrs=['sAMAccountName', 'objectSid', 'displayName', 'unixHomeDirectory', 'pwdLastSet', 'loginShell', 'modifyTimestamp']
DEBUG:LdapSource:Returning due to RES_SEARCH_RESULT
1
DEBUG:LdapSource:searching for base='CN=Users,DC=example,DC=com', filter='(&(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com))(modifyTimestamp>=20191025205210Z))', scope=2, attrs=['sAMAccountName', 'objectSid', 'displayName', 'unixHomeDirectory', 'pwdLastSet', 'loginShell', 'modifyTimestamp']
DEBUG:LdapSource:Returning due to RES_SEARCH_RESULT
DEBUG:LdapSource:opening ldap connection and binding to ldaps://ad.example.com
DEBUG:LdapSource:searching for base='DC=example,DC=com', filter='(&(|(&(objectClass=group)(CN=Admins))(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com)))(modifyTimestamp>=20191025205210Z))', scope=2, attrs=['sAMAccountName', 'objectSid', 'displayName', 'unixHomeDirectory', 'pwdLastSet', 'loginShell', 'modifyTimestamp']
DEBUG:LdapSource:searching for base='DC=example,DC=com', filter='(&(|(&(objectClass=group)(CN=Admins))(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com)))(modifyTimestamp>=20191025205210Z))', scope=2, attrs=['sAMAccountName', 'member', 'objectSid', 'modifyTimestamp']
DEBUG:LdapSource:Returning due to RES_SEARCH_RESULT
2
DEBUG:LdapSource:searching for base='DC=example,DC=com', filter='(&(|(&(objectClass=group)(CN=Admins))(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com)))(modifyTimestamp>=20191025205210Z))', scope=2, attrs=['sAMAccountName', 'objectSid', 'displayName', 'unixHomeDirectory', 'pwdLastSet', 'loginShell', 'modifyTimestamp']
DEBUG:LdapSource:searching for base='DC=example,DC=com', filter='(&(|(&(objectClass=group)(CN=Admins))(&(objectClass=user)(memberOf=CN=Admins,CN=Users,DC=example,DC=com)))(modifyTimestamp>=20191025205210Z))', scope=2, attrs=['sAMAccountName', 'member', 'objectSid', 'modifyTimestamp']
DEBUG:LdapSource:Returning due to RES_SEARCH_RESULT
4
INFO:Verify:Verifying data caches.
INFO:Verify:Verifying map: passwd.
DEBUG:Verify:built NSS map of 29 entries
DEBUG:FilesPasswdMapHandler:Opening '/etc/passwd.cache' for reading existing cache
DEBUG:Verify:built cache map of 1 entries
INFO:Verify:Verifying map: group.
DEBUG:Verify:built NSS map of 58 entries
DEBUG:FilesGroupMapHandler:Opening '/etc/group.cache' for reading existing cache
DEBUG:Verify:built cache map of 2 entries
INFO:Verify:Verifying map: shadow.
DEBUG:Verify:built NSS map of 29 entries
DEBUG:FilesShadowMapHandler:Opening '/etc/shadow.cache' for reading existing cache
DEBUG:Verify:built cache map of 1 entries
INFO:Verify:Verification result: 0 warnings, 4 errors
INFO:Verify:Verification failed!
INFO:NSSCacheApp:Exiting nsscache with value 4 runtime 0.043154

from nsscache.

3c2b2ff5 avatar 3c2b2ff5 commented on August 19, 2024

The Verify() function returns the 4 AD objects, which is why I get the 4 errors; with plain LDAP objects the same call returns an empty result.
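
To make the arithmetic explicit, a rough sketch of the behaviour described above (an assumption about how the counts add up, not the actual nsscache source):

    def verify_source(per_map_counts):
      """Sum per-map Verify() results; 0 means the caches look in sync."""
      retval = 0
      for count in per_map_counts.values():
        retval += count  # each entry returned since the timestamp counts as an error
      return retval

    # 1 (passwd) + 1 (shadow) + 2 (group) == the reported "4 errors".
    print(verify_source({'passwd': 1, 'shadow': 1, 'group': 2}))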

from nsscache.

3c2b2ff5 avatar 3c2b2ff5 commented on August 19, 2024

The problem was related to the timestamp format as well as the attribute used on AD objects, which is whenChanged, with the format %Y%m%d%H%M%S.0Z. I finally have this:

INFO:Verify:Verification result: 0 warnings, 0 errors
INFO:Verify:Verification passed!
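
A minimal sketch of that timestamp handling, assuming the goal is just to render the incremental-update filter the way AD expects; the ad_timestamp name is made up for the example:

    import time

    def ad_timestamp(epoch_seconds):
      """Format a Unix time as AD generalized time, e.g. 20191020171614.0Z."""
      return time.strftime('%Y%m%d%H%M%S.0Z', time.gmtime(epoch_seconds))

    # Filter on whenChanged instead of modifyTimestamp for incremental updates.
    since = ad_timestamp(time.time() - 3600)
    ldap_filter = '(&(objectClass=user)(whenChanged>=%s))' % since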

from nsscache.

jaqx0r avatar jaqx0r commented on August 19, 2024

#93 merged!

from nsscache.

jaqx0r avatar jaqx0r commented on August 19, 2024

@3c2b2ff5 if you'd like to fetch the latest and test again, that would be fantastic.

Regarding Python3, yeah I need to do this but haven't had time to do it yet. The Debian people are threatening to pull nsscache because of this :/

from nsscache.

3c2b2ff5 avatar 3c2b2ff5 commented on August 19, 2024

Cool, that feels great. I will definitely fetch the new version and test it at the company in a staging environment until the end of the year, then I'll go live.
When will the Debian package be available? Would you please add the auto-completion script to the new package as well? We are using Puppet for deployment; I could use git, but for production I prefer a Debian package.

I actually started today to make the code Python 3 compatible. I would like to help, but this time I'll start with small changes, i.e. the print function, exceptions and module imports (see the sketch below).
Do you have a plan for how to do it? Should I open a new issue?
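
For illustration, the kind of mechanical Python 2-to-3 edits meant above (example lines only, not taken from the nsscache code):

    import configparser  # Python 2 name: ConfigParser

    count = 3
    print('synced %d entries' % count)  # Python 2: print 'synced %d entries' % count

    try:
      raise ValueError('example')
    except ValueError as err:  # Python 2 also allowed: except ValueError, err:
      print(err)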

from nsscache.

jaqx0r avatar jaqx0r commented on August 19, 2024

from nsscache.

3c2b2ff5 avatar 3c2b2ff5 commented on August 19, 2024

will do. Issue can be closed.

from nsscache.
