This post will be edited over time; please feel free to come back and check for new content.

Last edit: 12-28-2023

Goal: A HomeLab setup that protects itself

This example HomeLab has at its core an OPNsense router, smart switches with subnet zones, several VMs, and a few Docker environments. Specifically for this version of the post, Suricata is enabled with nearly all rules and kept updated via cron, and after designing the zones, firewall rules, port forwards, and the like, we installed OPNsense's CrowdSec plugin.

Once installed, since we will have additional agents on the various services in the HomeLab, we will likely want to transition from the SQLite DB to a Postgres DB or something similar.

Nearly all additional services are actually deployed as Docker stacks or containers.

Further, we have designed this using MACVLAN, since an IDS is involved and we want Suricata to parse the network traffic as-is. Also worth noting: we designed this with an SSL-terminating reverse proxy in front of some of the publicly available 'HomeLab services', allowing Suricata to see everything.

The DB has been set up in a Docker environment on an adjacent subnet, and I've used the 'Custom LAPI' config option on the OPNsense CrowdSec plugin settings page, plus a persistent '/usr/local/etc/crowdsec/config.yaml.local', which CrowdSec supports (it only overrides the settings from the original config.yaml that you set in it). I will expand on this portion shortly, as it is likely of hot interest and it also enables the rest of the multi-server setup. A search for 'config.yaml.local', CrowdSec, and a Postgres DB with multi-server should get you close; I will update this detail on this page in time, but a rough sketch follows below.
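
As a minimal sketch of that piece, assuming a PostgreSQL backend for the LAPI: the host, credentials, and db_name below are placeholders (not values from this lab), and the exact keys are worth double-checking against the CrowdSec database documentation for your version. As with any '.local' override, only the keys set here replace their counterparts in config.yaml.

/usr/local/etc/crowdsec/config.yaml.local:

db_config:
  type: postgresql
  user: crowdsec
  password: CHANGE_ME          # placeholder credentials
  db_name: crowdsec
  host: 192.0.2.10             # placeholder: adjacent-subnet Docker host running Postgres
  port: 5432
  sslmode: disable             # consider requiring TLS if the DB is reachable beyond the lab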

Now we are going to skip a few details about the beginning and many of the middle sections, and jump to the most recent realization…

Suricata EVE without payload: perfect for the CrowdSec parser

The 'payload_printable' element of the eve.json JSON output (controlled by the 'payload-printable' config key) is a bit much for most any parser. With this in mind, it is highly advisable to enable an additional EVE output from Suricata without this field, so CrowdSec's agent can easily parse the events and respond quickly.

In the example below we demonstrate doing so, with an 'overwrite' X-Forwarded-For setting for the reverse proxy that is part of the network stack.

To do this persistently, you must create or edit the following files on your OPNsense router.

Step 1

The OPNsense CrowdSec plugin installs watching a few default OPNsense logs (lighttpd/sshd/pf) but does not come configured to watch any Suricata logs. A CrowdSec acquisition ('acquis') file must be created or modified for the feature we are adding here; without it, the evexff.json file will go unobserved. You will also need the CrowdSec Hub elements that enable parsing/alerting for Suricata, so console into the OPNsense box (SSH in, select option '8') and enter the following commands:

cscli collections install crowdsecurity/suricata
cscli collections install crowdsecurity/whitelist-good-actors
cscli parsers install crowdsecurity/whitelists
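
Optionally, a quick sanity check that the Hub items landed before moving on (the same cscli CLI as above; the grep patterns are just a convenience):

cscli collections list | grep suricata
cscli parsers list | grep whitelists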

Then, after creating the following file, my recommendation is to hit 'Save' in the CrowdSec plugin GUI of your OPNsense; this appears to reload the service rather than restarting it, as desired.

/usr/local/etc/crowdsec/acquis.d/suricata.yaml:

---
filenames:
  - /var/log/suricata/evexff.json
labels:
  type: suricata-evelogs
---

Step 2

Now that CrowdSec is aware and 'listening', if you will, we want to create and rotate those evexff.json logs. Let's set up rotation of '/var/log/suricata/evexff.json' as a custom newsyslog config:

/usr/local/etc/newsyslog.conf.d/suricataxff.conf:

# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/suricata/evexff.json      root:wheel      640     1       500000  $W0D23  B       /var/run/suricata.pid   1
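
A small sanity check, assuming newsyslog on OPNsense behaves like stock FreeBSD: a verbose dry run ('-n' takes no action, '-v' prints each log it evaluates) should show the new entry being considered. As I read the newsyslog.conf(5) fields, the line above gives a weekly (Sunday 23:00) rotation with a roughly 500 MB size threshold, keeps one archive, and sends signal 1 (SIGHUP) to the PID in /var/run/suricata.pid so Suricata reopens the freshly rotated file.

newsyslog -nv | grep evexff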

Step 3

To note: it appears the 'custom.yaml' file you will likely edit needs the entire 'outputs:' stanza of the original Suricata config (/usr/local/etc/suricata/suricata.yaml), edited as desired. The example below is the working 'custom.yaml', with the upgrade being the additional EVE output.

Worst case, for now, an admin will have to be aware of updates to the original and diff the two carefully; as I develop that workflow I will share it here. It appears you replace at the 'stanza' level, so the other features (threading, etc.) of Suricata appear to be performing as expected, but my experience with this is still fairly fresh. It would possibly be better to have this config exposed at the surface of OPNsense, in the same places as the 'EVE' logs for Suricata. I might try to develop/contribute that in the future; I barely have time for this post lol. ^_^
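
As a sketch of a quick validation step, assuming OPNsense re-renders its templates (and merges custom.yaml into the generated config) when you hit Apply on the Intrusion Detection page: Suricata's test mode loads the resulting config and rules and then exits, without disturbing the running instance.

# validate the generated config after the template has been re-rendered
suricata -T -c /usr/local/etc/suricata/suricata.yaml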

/usr/local/opnsense/service/templates/OPNsense/IDS/custom.yaml:

%YAML 1.1
---
# empty stub for custom modifications, add custom persistent config below
# Configure the type of alert (and other) logging you would like.
outputs:

  # a line based alerts log similar to Snort's fast.log
  - fast:
      enabled: no
      filename: fast.log
      append: yes
      #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'

  # Extensible Event Format (nicknamed EVE) to CrowdSec
  - eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: evexff.json
      xff:
        enabled: yes
        # Two operation modes are available, "extra-data" and "overwrite".
        mode: overwrite
        # Two proxy deployments are supported, "reverse" and "forward". In
        # a "reverse" deployment the IP address used is the last one, in a
        # "forward" deployment the first IP address is used.
        deployment: reverse
        # Header name where the actual IP address will be reported, if more
        # than one IP address is present, the last IP address will be the
        # one taken into consideration.
        header: X-Forwarded-For
      types:
        - alert:
            payload: no
            payload-buffer-size: 100kb
            payload-printable: no
            # packet: yes              # enable dumping of packet (without stream segments)
            metadata: yes             # enable inclusion of app layer metadata with alert. Default yes
            # http-body: yes           # Requires metadata; enable dumping of http body in Base64
            # http-body-printable: yes # Requires metadata; enable dumping of http body in printable format
            # Enable the logging of tagged packets for rules using the
            # "tag" keyword.
            tagged-packets: yes
            http: yes
            tls: yes

  # Extensible Event Format (nicknamed EVE) event log in JSON format
  - eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.json
      #prefix: "@cee: " # prefix to prepend to each log entry
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      #redis:
      #  server: 127.0.0.1
      #  port: 6379
      #  async: true ## if redis replies are read asynchronously
      #  mode: list ## possible values: list|lpush (default), rpush, channel|publish
      #             ## lpush and rpush are using a Redis list. "list" is an alias for lpush
      #             ## publish is using a Redis channel. "channel" is an alias for publish
      #  key: suricata ## key or channel to use (default to suricata)
      # Redis pipelining set up. This will enable to only do a query every
      # 'batch-size' events. This should lower the latency induced by network
      # connection at the cost of some memory. There is no flushing implemented
      # so this setting has to be reserved to high traffic suricata.
      #  pipelining:
      #    enabled: yes ## set enable to yes to enable query pipelining
      #    batch-size: 10 ## number of entry to keep in buffer

      # Include top level metadata. Default yes.
      #metadata: no

      # include the name of the input pcap file in pcap file processing mode
      pcap-file: false


      # Community Flow ID
      # Adds a 'community_id' field to EVE records. These are meant to give
      # records a predictable flow id that can be used to match records to
      # output of other tools such as Bro.
      #
      # Takes a 'seed' that needs to be same across sensors and tools
      # to make the id less predictable.

      # enable/disable the community id feature.
      community-id: false
      # Seed value for the ID output. Valid values are 0-65535.
      community-id-seed: 0


      # HTTP X-Forwarded-For support by adding an extra field or overwriting
      # the source or destination IP address (depending on flow direction)
      # with the one reported in the X-Forwarded-For HTTP header. This is
      # helpful when reviewing alerts for traffic that is being reverse
      # or forward proxied.
      xff:
        enabled: no
        # Two operation modes are available, "extra-data" and "overwrite".
        mode: extra-data
        # Two proxy deployments are supported, "reverse" and "forward". In
        # a "reverse" deployment the IP address used is the last one, in a
        # "forward" deployment the first IP address is used.
        deployment: reverse
        # Header name where the actual IP address will be reported, if more
        # than one IP address is present, the last IP address will be the
        # one taken into consideration.
        header: X-Forwarded-For


      types:
        - alert:
             payload: yes
             payload-buffer-size: 100kb
             payload-printable: yes
             # packet: yes              # enable dumping of packet (without stream segments)
             # metadata: no             # enable inclusion of app layer metadata with alert. Default yes
             # http-body: yes           # Requires metadata; enable dumping of http body in Base64
             # http-body-printable: yes # Requires metadata; enable dumping of http body in printable format

             # Enable the logging of tagged packets for rules using the
             # "tag" keyword.
             tagged-packets: yes

             http: yes
             tls: yes

        - anomaly:
            # Anomaly log records describe unexpected conditions such
            # as truncated packets, packets with invalid IP/UDP/TCP
            # length values, and other events that render the packet
            # invalid for further processing or describe unexpected
            # behavior on an established stream. Networks which
            # experience high occurrences of anomalies may experience
            # packet processing degradation.
            #
            # Anomalies are reported for the following:
            # 1. Decode: Values and conditions that are detected while
            # decoding individual packets. This includes invalid or
            # unexpected values for low-level protocol lengths as well
            # as stream related events (TCP 3-way handshake issues,
            # unexpected sequence number, etc).
            # 2. Stream: This includes stream related events (TCP
            # 3-way handshake issues, unexpected sequence number,
            # etc).
            # 3. Application layer: These denote application layer
            # specific conditions that are unexpected, invalid or are
            # unexpected given the application monitoring state.
            #
            # By default, anomaly logging is disabled. When anomaly
            # logging is enabled, applayer anomaly reporting is
            # enabled.
            # enabled: yes
            #
            # Choose one or more types of anomaly logging and whether to enable
            # logging of the packet header for packet anomalies.
            types:
              # decode: no
              # stream: no
              # applayer: yes
            #packethdr: no


#        - http:
#            #extended: yes     # enable this for extended logging information
#            # custom allows additional http fields to be included in eve-log
#            # the example below adds three additional fields when uncommented
#            #custom: [Accept-Encoding, Accept-Language, Authorization]
#            # set this value to one and only one among {both, request, response}
#            # to dump all http headers for every http request and/or response
#            # dump-all-headers: none
#        - dns:
#            # This configuration uses the new DNS logging format,
#            # the old configuration is still available:
#            # https://suricata.readthedocs.io/en/latest/output/eve/eve-json-output.html#dns-v1-format
#
#            # As of Suricata 5.0, version 2 of the eve dns output
#            # format is the default.
#            #version: 2
#
#            # Enable/disable this logger. Default: enabled.
#            #enabled: yes
#
#            # Control logging of requests and responses:
#            # - requests: enable logging of DNS queries
#            # - responses: enable logging of DNS answers
#            # By default both requests and responses are logged.
#            #requests: no
#            #responses: no
#
#            # Format of answer logging:
#            # - detailed: array item per answer
#            # - grouped: answers aggregated by type
#            # Default: all
#            #formats: [detailed, grouped]
#
#            # Types to log, based on the query type.
#            # Default: all.
#            #types: [a, aaaa, cname, mx, ns, ptr, txt]
#        - tls:
#            extended: yes     # enable this for extended logging information
#            # output TLS transaction where the session is resumed using a
#            # session id
#            #session-resumption: no
#            # custom allows to control which tls fields that are included
#            # in eve-log
#            #custom: [subject, issuer, session_resumed, serial, fingerprint, sni, version, not_before, not_after, certificate, chain, ja3, ja3s]
#        - files:
#            force-magic: no   # force logging magic on all logged files
#            # force logging of checksums, available hash functions are md5,
#            # sha1 and sha256
#            #force-hash: [md5]

        - drop:
            alerts: yes      # log alerts that caused drops
            flows: start     # start or all: 'start' logs only a single drop
                             # per flow direction. All logs each dropped pkt.

#        - smtp:
#            #extended: yes # enable this for extended logging information
#            # this includes: bcc, message-id, subject, x_mailer, user-agent
#            # custom fields logging from the list:
#            #  reply-to, bcc, message-id, subject, x-mailer, user-agent, received,
#            #  x-originating-ip, in-reply-to, references, importance, priority,
#            #  sensitivity, organization, content-md5, date
#            #custom: [received, x-mailer, x-originating-ip, relays, reply-to, bcc]
#            # output md5 of fields: body, subject
#            # for the body you need to set app-layer.protocols.smtp.mime.body-md5
#            # to yes
#            #md5: [body, subject]

        #- dnp3
#        - ftp
        #- rdp
#        - nfs
#        - smb
#        - tftp
#        - ikev2
#        - krb5
#        - snmp
        #- sip
#        - dhcp:
#            enabled: yes
#            # When extended mode is on, all DHCP messages are logged
#            # with full detail. When extended mode is off (the
#            # default), just enough information to map a MAC address
#            # to an IP address is logged.
#            extended: no
#        - ssh
#        # - stats:
#        #     totals: yes       # stats for all threads merged together
#        #     threads: no       # per thread stats
#        #     deltas: no        # include delta values
#        # bi-directional flows
#        - flow
#        # uni-directional flows
#        #- netflow

        # Metadata event type. Triggered whenever a pktvar is saved
        # and will include the pktvars, flowvars, flowbits and
        # flowints.
        #- metadata

  # Extensible Event Format (nicknamed EVE) to syslog
  - eve-log:
      enabled: yes
      type: syslog
      identity: "suricata"
      facility: local5
      level: Info
      types:
        - alert:
             payload: no
             payload-buffer-size: 4kb
             payload-printable: yes
             http: yes
             tls: yes

  # deprecated - unified2 alert format for use with Barnyard2
  - unified2-alert:
      enabled: no
      # for further options see:
      # https://suricata.readthedocs.io/en/suricata-5.0.0/configuration/suricata-yaml.html#alert-output-for-use-with-barnyard2-unified2-alert

  # a line based log of HTTP requests (no alerts)
  - http-log:
      enabled: no
      filename: http.log
      append: yes
      #extended: yes     # enable this for extended logging information
      #custom: yes       # enabled the custom logging format (defined by customformat)
      #customformat: "%{%D-%H:%M:%S}t.%z %{X-Forwarded-For}i %H %m %h %u %s %B %a:%p -> %A:%P"
      #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'

  # a line based log of TLS handshake parameters (no alerts)
  - tls-log:
      enabled: no  # Log TLS connections.
      filename: tls.log # File to store TLS logs.
      append: yes
      #extended: yes     # Log extended information like fingerprint
      #custom: yes       # enabled the custom logging format (defined by customformat)
      #customformat: "%{%D-%H:%M:%S}t.%z %a:%p -> %A:%P %v %n %d %D"
      #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
      # output TLS transaction where the session is resumed using a
      # session id
      #session-resumption: no

  # output module to store certificates chain to disk
  - tls-store:
      enabled: no
      #certs-log-dir: certs # directory to store the certificates files

  # Packet log... log packets in pcap format. 3 modes of operation: "normal"
  # "multi" and "sguil".
  #
  # In normal mode a pcap file "filename" is created in the default-log-dir,
  # or are as specified by "dir".
  # In multi mode, a file is created per thread. This will perform much
  # better, but will create multiple files where 'normal' would create one.
  # In multi mode the filename takes a few special variables:
  # - %n -- thread number
  # - %i -- thread id
  # - %t -- timestamp (secs or secs.usecs based on 'ts-format'
  # E.g. filename: pcap.%n.%t
  #
  # Note that it's possible to use directories, but the directories are not
  # created by Suricata. E.g. filename: pcaps/%n/log.%s will log into the
  # per thread directory.
  #
  # Also note that the limit and max-files settings are enforced per thread.
  # So the size limit when using 8 threads with 1000mb files and 2000 files
  # is: 8*1000*2000 ~ 16TiB.
  #
  # In Sguil mode "dir" indicates the base directory. In this base dir the
  # pcaps are created in the directory structure Sguil expects:
  #
  # $sguil-base-dir/YYYY-MM-DD/$filename.<timestamp>
  #
  # By default all packets are logged except:
  # - TCP streams beyond stream.reassembly.depth
  # - encrypted streams after the key exchange
  #
  - pcap-log:
      enabled: no
      filename: log.pcap

      # File size limit.  Can be specified in kb, mb, gb.  Just a number
      # is parsed as bytes.
      limit: 1000mb

      # If set to a value will enable ring buffer mode. Will keep Maximum of "max-files" of size "limit"
      max-files: 2000

      # Compression algorithm for pcap files. Possible values: none, lz4.
      # Enabling compression is incompatible with the sguil mode. Note also
      # that on Windows, enabling compression will *increase* disk I/O.
      compression: none

      # Further options for lz4 compression. The compression level can be set
      # to a value between 0 and 16, where higher values result in higher
      # compression.
      #lz4-checksum: no
      #lz4-level: 0

      mode: normal # normal, multi or sguil.

      # Directory to place pcap files. If not provided the default log
      # directory will be used. Required for "sguil" mode.
      #dir: /nsm_data/

      #ts-format: usec # sec or usec second format (default) is filename.sec usec is filename.sec.usec
      use-stream-depth: no #If set to "yes" packets seen after reaching stream inspection depth are ignored. "no" logs all packets
      honor-pass-rules: no # If set to "yes", flows in which a pass rule matched will stop being logged.

  # a full alerts log containing much information for signature writers
  # or for investigating suspected false positives.
  - alert-debug:
      enabled: no
      filename: alert-debug.log
      append: yes
      #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'

  # alert output to prelude (https://www.prelude-siem.org/) only
  # available if Suricata has been compiled with --enable-prelude
  - alert-prelude:
      enabled: no
      profile: suricata
      log-packet-content: no
      log-packet-header: yes

  # Stats.log contains data from various counters of the suricata engine.
  - stats:
      enabled: yes
      filename: stats.log
      append: yes       # append to file (yes) or overwrite it (no)
      totals: yes       # stats for all threads merged together
      threads: no       # per thread stats
      #null-values: yes  # print counters that have value 0

  # a line based alerts log similar to fast.log into syslog
  - syslog:
      enabled: yes
      # reported identity to syslog. If omitted the program name (usually
      # suricata) will be used.
      #identity: "suricata"
      facility: local5
      level: Notice ## possible levels: Emergency, Alert, Critical,
                    ## Error, Warning, Notice, Info, Debug

  # deprecated a line based information for dropped packets in IPS mode
  - drop:
      enabled: no
      # further options documented at:
      # https://suricata.readthedocs.io/en/suricata-5.0.0/configuration/suricata-yaml.html#drop-log-a-line-based-information-for-dropped-packets

  # Output module for storing files on disk. Files are stored in a
  # directory names consisting of the first 2 characters of the
  # SHA256 of the file. Each file is given its SHA256 as a filename.
  #
  # When a duplicate file is found, the existing file is touched to
  # have its timestamps updated.
  #
  # Unlike the older filestore, metadata is not written out by default
  # as each file should already have a "fileinfo" record in the
  # eve.log. If write-fileinfo is set to yes, each file will have
  # one more associated .json file that consists of the fileinfo
  # record. A fileinfo file will be written for each occurrence of the
  # file seen using a filename suffix to ensure uniqueness.
  #
  # To prune the filestore directory see the "suricatactl filestore
  # prune" command which can delete files over a certain age.
  - file-store:
      version: 2
      enabled: no

      # Set the directory for the filestore. If the path is not
      # absolute it will be relative to the default-log-dir.
      #dir: filestore

      # Write out a fileinfo record for each occurrence of a
      # file. Disabled by default as each occurrence is already logged
      # as a fileinfo record to the main eve-log.
      #write-fileinfo: yes

      # Force storing of all files. Default: no.
      #force-filestore: yes

      # Override the global stream-depth for sessions in which we want
      # to perform file extraction. Set to 0 for unlimited.
      #stream-depth: 0

      # Uncomment the following variable to define how many files can
      # remain open for filestore by Suricata. Default value is 0 which
      # means files get closed after each write
      #max-open-files: 1000

      # Force logging of checksums, available hash functions are md5,
      # sha1 and sha256. Note that SHA256 is automatically forced by
      # the use of this output module as it uses the SHA256 as the
      # file naming scheme.
      #force-hash: [sha1, md5]
      # NOTE: X-Forwarded configuration is ignored if write-fileinfo is disabled
      # HTTP X-Forwarded-For support by adding an extra field or overwriting
      # the source or destination IP address (depending on flow direction)
      # with the one reported in the X-Forwarded-For HTTP header. This is
      # helpful when reviewing alerts for traffic that is being reverse
      # or forward proxied.
      xff:
        enabled: no
        # Two operation modes are available, "extra-data" and "overwrite".
        mode: extra-data
        # Two proxy deployments are supported, "reverse" and "forward". In
        # a "reverse" deployment the IP address used is the last one, in a
        # "forward" deployment the first IP address is used.
        deployment: reverse
        # Header name where the actual IP address will be reported, if more
        # than one IP address is present, the last IP address will be the
        # one taken into consideration.
        header: X-Forwarded-For

  # deprecated - file-store v1
  - file-store:
      enabled: no
      # further options documented at:
      # https://suricata.readthedocs.io/en/suricata-5.0.0/file-extraction/file-extraction.html#file-store-version-1


  # Log TCP data after stream normalization
  # 2 types: file or dir. File logs into a single logfile. Dir creates
  # 2 files per TCP session and stores the raw TCP data into them.
  # Using 'both' will enable both file and dir modes.
  #
  # Note: limited by stream.reassembly.depth
  - tcp-data:
      enabled: no
      type: file
      filename: tcp-data.log

  # Log HTTP body data after normalization, dechunking and unzipping.
  # 2 types: file or dir. File logs into a single logfile. Dir creates
  # 2 files per HTTP session and stores the normalized data into them.
  # Using 'both' will enable both file and dir modes.
  #
  # Note: limited by the body limit settings
  - http-body-data:
      enabled: no
      type: file
      filename: http-data.log

  # Lua Output Support - execute lua script to generate alert and event
  # output.
  # Documented at:
  # https://suricata.readthedocs.io/en/latest/output/lua-output.html
  - lua:
      enabled: no
      #scripts-dir: /etc/suricata/lua-output/
      scripts:
      #   - script1.lua
