Process and Visualize ModSecurity Logs on ELK Stack

In this tutorial, you will learn how to process and visualize ModSecurity logs on the ELK Stack. ModSecurity is an open-source, cross-platform web application firewall (WAF) module developed by Trustwave's SpiderLabs. Known as the “Swiss Army Knife” of WAFs, it gives web application defenders visibility into HTTP(S) traffic and provides a powerful rules language and API to implement advanced protections.

ModSecurity uses the OWASP Core Rule Set, which can be used to protect web applications against a wide range of attacks, including the OWASP Top Ten, such as SQL injection, cross-site scripting, and local file inclusion.

In its recent OWASP Top Ten 2017 report, OWASP included Insufficient Logging & Monitoring as one of the top ten web application security risks.

“Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems, and tamper, extract, or destroy data. Most breach studies show time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.”

The ELK/Elastic Stack, on the other hand, is a powerful platform that collects and processes data from multiple sources, stores it in a search and analytics engine, and enables users to visualize it using charts, tables, and graphs.


Thus, as a way of ensuring that web application logs are recorded and monitored, we will learn how to process and visualize ModSecurity WAF logs on the ELK/Elastic Stack.

Prerequisites

Install and Set Up the ELK/Elastic Stack

Before you proceed, ensure that your ELK Stack is already set up. You can follow the links below to install and set up the ELK/Elastic Stack on Linux.

Install ELK Stack on Ubuntu 20.04

Install ELK Stack on CentOS 8

Install and Set Up ModSecurity on CentOS/Ubuntu

Install and enable web application protection with ModSecurity. You can check the guides below on installing ModSecurity 3 (libModSecurity);

Configure LibModsecurity with Apache on CentOS 8

Configure LibModsecurity with Nginx on CentOS 8

Install LibModsecurity with Apache on Ubuntu 18.04

Configure Logstash to Process ModSecurity Logs

Logstash is a server-side data processing pipeline that ingests data from multiple sources, transforms it, and then ships it to a search and analytics engine such as Elasticsearch.

A Logstash pipeline consists of three sections:

  • Input: collects data from different sources.
  • Filter: optionally performs further processing on the data.
  • Output: ships the received data to a destination data store, such as Elasticsearch.
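
Put together, a minimal pipeline configuration file mirrors these three sections. Below is a bare skeleton only; the actual plugins and options used in this tutorial follow in the next sections;

input {
  # Collect events, e.g. from Beats, files or syslog
}
filter {
  # Optionally parse and enrich events, e.g. with grok and mutate
}
output {
  # Ship events to a destination, e.g. Elasticsearch or stdout
}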

Configure the Logstash Input Plugin

Logstash supports a variety of input plugins that enable it to read event data from a variety of sources.

In this tutorial, we will use Filebeat to collect the ModSecurity logs and push them to Logstash. Therefore, we will configure Logstash to receive events from the Elastic Beats framework, in this case Filebeat.

To configure Logstash to collect events from Filebeat or any other Elastic Beat, create an input plugin configuration file;

vim /etc/logstash/conf.d/modsecurity-filter.conf
input {
  beats {
    port => 5044
  }
}

Configure the Logstash Filter Plugin

We will take advantage of Logstash Grok filters to process our ModSecurity logs.

Below is a sample ModSecurity audit log line. An audit log line provides the details of a transaction that was flagged or blocked, and the reason for it.

[Sun Jul 12 21:22:47.978339 2020] [:error] [pid 78188:tid 140587634259712] [client 37.233.77.228:54556] ModSecurity: Warning. Matched "Operator `PmFromFile' with parameter `scanners-user-agents.data' against variable `REQUEST_HEADERS:User-Agent' (Value: `Mozilla/5.0 zgrab/0.x' ) [file "/etc/httpd/conf.d/modsecurity.d/owasp-crs/rules/REQUEST-913-SCANNER-DETECTION.conf"] [line "33"] [id "913100"] [rev ""] [msg "Found User-Agent associated with security scanner"] [data "Matched Data: zgrab found within REQUEST_HEADERS:User-Agent: mozilla/5.0 zgrab/0.x"] [severity "2"] [ver "OWASP_CRS/3.2.0"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-reputation-scanner"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "OWASP_CRS/AUTOMATION/SECURITY_SCANNER"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"] [hostname "kifarunix-demo.com"] [uri "/"] [unique_id "15945817674.229523"] [ref "o12,5v48,21t:lowercase"]

Whenever a ModSecurity alert appears in the Apache error log, it follows this format;

[Sun Jun 24 10:19:58 2007] [error] [client 192.168.0.1] ModSecurity: ALERT_MESSAGE

In order to get good visualizations out of such unstructured logs, we need to create filters that extract only the specific, meaningful fields that provide information about the attacks.

You can take advantage of the ready-made Logstash grok patterns, or use regular expressions if no grok pattern exists for the part of the log you need to extract.

You can test your grok patterns/regular expressions using the Grok Debugger on Kibana, under Dev Tools > Grok Debugger, or using the http://grokdebug.herokuapp.com and http://grokconstructor.appspot.com/ apps.
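
For instance, to verify the timestamp portion of the pattern used below, you could paste a sample line into the Kibana Grok Debugger. A sketch (the sample data is trimmed from the audit log line above; the event_time field name is our own choice);

Sample data: [Sun Jul 12 21:22:47.978339 2020] [:error]
Grok pattern: \[%{DAY} (?<event_time>%{MONTH}\s%{MONTHDAY}\s%{TIME}\s%{YEAR})\] \[:%{LOGLEVEL:log_level}\]

Structured data:

{
  "event_time": "Jul 12 21:22:47.978339 2020",
  "log_level": "error"
}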

We create Logstash grok patterns to process the ModSecurity audit logs and extract the fields below. From the alert message (alert_message), we will then further extract the following fields;

  • Event time (event_time)
  • Log severity level (log_level)
  • Client IP address (the source of the attack, src_ip)
  • Rules file (rules_file)
  • Attack type, derived from the rules file (attack_type)
  • Rule ID (rule_id)
  • Attack message (alert_msg)
  • User agent (user_agent)
  • Hostname (dst_host)
  • Request URI (request_uri)
  • Referer, if present (referer)

To extract these parts, below is our Logstash grok filter.

filter {
    # Extract event time, log severity level, source of attack (client), and the alert message.
    grok {
      match => { "message" => "(?<event_time>%{MONTH}\s%{MONTHDAY}\s%{TIME}\s%{YEAR})\] \[:%{LOGLEVEL:log_level}.*client\s%{IPORHOST:src_ip}:\d+\]\s(?<alert_message>.*)" }
    }
    # Extract Rules File from Alert Message
    grok {
      match => { "alert_message" => "(?<rulesfile>\[file \"(/.+.conf)\"\])" }
    }
    grok {
      match => { "rulesfile" => "(?<rules_file>/.+.conf)" }
    }
    # Extract Attack Type from Rules File
    grok {
      match => { "rulesfile" => "(?<attack_type>[A-Z]+-[A-Z][^.]+)" }
    }
    # Extract Rule ID from Alert Message
    grok {
      match => { "alert_message" => "(?<ruleid>\[id \"(\d+)\"\])" }
    }
    grok {
      match => { "ruleid" => "(?<rule_id>\d+)" }
    }
    # Extract Attack Message (msg) from Alert Message
    grok {
      match => { "alert_message" => "(?<msg>\[msg \S(.*?)\"\])" }
    }
    grok {
      match => { "msg" => "(?<alert_msg>\"(.*?)\")" }
    }
    # Extract the User/Scanner Agent from Alert Message
    grok {
      match => { "alert_message" => "(?<scanner>User-Agent' \SValue: `(.*?)')" }
    }
    grok {
      match => { "scanner" => "(?<user_agent>:(.*?)')" }
    }
    grok {
      match => { "alert_message" => "(?<agent>User-Agent: (.*?)')" }
    }
    grok {
      match => { "agent" => "(?<user_agent>: (.*?)')" }
    }
    # Extract the Target Host
    grok {
      match => { "alert_message" => "(hostname \"%{IPORHOST:dst_host})" }
    }
    # Extract the Request URI
    grok {
      match => { "alert_message" => "(uri \"%{URIPATH:request_uri})" }
    }
    # Extract the Referer, if present
    grok {
      match => { "alert_message" => "(?<ref>referer: (.*))" }
    }
    grok {
      match => { "ref" => "(?<referer> (.*))" }
    }
    mutate {
      # Remove unnecessary characters from the fields.
      gsub => [
        "alert_msg", "[\"]", "",
        "user_agent", "[:\"'`]", "",
        "user_agent", "^\s*", "",
        "referer", "^\s*", ""
      ]
      # Remove the unnecessary fields so we only remain with the general message,
      # rules_file, attack_type, rule_id, alert_msg, user_agent, hostname (being attacked), request URI and referer.
      remove_field => [ "alert_message", "rulesfile", "ruleid", "msg", "scanner", "agent", "ref" ]
    }
}

So how does this filter work?

In the first grok filter;

    grok {
      match => { "message" => "(?<event_time>%{MONTH}\s%{MONTHDAY}\s%{TIME}\s%{YEAR})\] \[:%{LOGLEVEL:log_level}.*client\s%{IPORHOST:src_ip}:\d+\]\s(?<alert_message>.*)" }
    }

we extract the event time, the log severity level, the source of the attack (client), and the alert message;

{
  "src_ip": "37.233.77.228",
  "alert_message": "ModSecurity: Warning. Matched "Operator `PmFromFile' with parameter `scanners-user-agents.data' against variable `REQUEST_HEADERS:User-Agent' (Value: `Mozilla/5.0 zgrab/0.x' ) [file "/etc/httpd/conf.d/modsecurity.d/owasp-crs/rules/REQUEST-913-SCANNER-DETECTION.conf"] [line "33"] [id "913100"] [rev ""] [msg "Found User-Agent associated with security scanner"] [data "Matched Data: zgrab found within REQUEST_HEADERS:User-Agent: mozilla/5.0 zgrab/0.x"] [severity "2"] [ver "OWASP_CRS/3.2.0"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-reputation-scanner"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "OWASP_CRS/AUTOMATION/SECURITY_SCANNER"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"] [hostname "kifarunix-demo.com"] [uri "/"] [unique_id "15945817674.229523"] [ref "o12,5v48,21t:lowercase"]",
  "log_level": "error",
  "event_time": "Jul 12 21:22:47.978339 2020"
}

From the alert_message field, we then proceed to extract the other fields.

For example, we begin by extracting the rules file using the pattern;

(?<rulesfile>\[file \"(/.+.conf)\"\])
{
  "rulesfile": "[file "/etc/httpd/conf.d/modsecurity.d/owasp-crs/rules/REQUEST-913-SCANNER-DETECTION.conf"]"
}

This gives us [file "/etc/httpd/conf.d/modsecurity.d/owasp-crs/rules/REQUEST-913-SCANNER-DETECTION.conf"].

We only need the file path itself, /etc/httpd/conf.d/modsecurity.d/owasp-crs/rules/REQUEST-913-SCANNER-DETECTION.conf. Hence, we use the pattern below to extract it further, renaming the field from rulesfile to rules_file;

(?<rules_file>/.+.conf)
{
  "rules_file": "/etc/httpd/conf.d/modsecurity.d/owasp-crs/rules/REQUEST-913-SCANNER-DETECTION.conf"
}

The same applies to the other fields.

We also mutate some of the fields to strip unnecessary characters, such as leading whitespace and double quotes, using the gsub option. For example, on the alert_msg field we remove the double quotes, and on the user_agent field we remove leading whitespace.

    mutate {
      # Remove unnecessary characters from the fields.
      gsub => [
        "alert_msg", "[\"]", "",
        "user_agent", "[:\"'`]", "",
        "user_agent", "^\s*", "",
        "referer", "^\s*", ""
      ]

We also dropped the unnecessary intermediate fields using the remove_field mutate option.

      # Remove the Unnecessary fields so we can only remain with
      # General message, rules_file, attack_type, rule_id, alert_msg, user_agent, hostname (being attacked), Request URI and Referer. 
      remove_field => [ "alert_message", "rulesfile", "ruleid", "msg", "scanner", "agent", "ref" ]
    }

Configure the Logstash Output Plugin

In this setup, we will ship the data processed by Logstash to the Elasticsearch search and analytics engine.

Hence, our output plugin is configured as follows;

output {
   elasticsearch {
     hosts => ["192.168.56.119:9200"]
     manage_template => false
     index => "modsec-%{+YYYY.MM}"
   }
}

In general, therefore, our Logstash ModSecurity processing configuration file looks like this;

cat /etc/logstash/conf.d/modsecurity-filter.conf
input {
  beats {
    port => 5044
  }
}
filter {
    # Extract event time, log severity level, source of attack (client), and the alert message.
    grok {
      match => { "message" => "(?<event_time>%{MONTH}\s%{MONTHDAY}\s%{TIME}\s%{YEAR})\] \[:%{LOGLEVEL:log_level}.*client\s%{IPORHOST:src_ip}:\d+\]\s(?<alert_message>.*)" }
    }
    # Extract Rules File from Alert Message
    grok {
      match => { "alert_message" => "(?<rulesfile>\[file \"(/.+.conf)\"\])" }
    }
    grok {
      match => { "rulesfile" => "(?<rules_file>/.+.conf)" }
    }
    # Extract Attack Type from Rules File
    grok {
      match => { "rulesfile" => "(?<attack_type>[A-Z]+-[A-Z][^.]+)" }
    }
    # Extract Rule ID from Alert Message
    grok {
      match => { "alert_message" => "(?<ruleid>\[id \"(\d+)\"\])" }
    }
    grok {
      match => { "ruleid" => "(?<rule_id>\d+)" }
    }
    # Extract Attack Message (msg) from Alert Message
    grok {
      match => { "alert_message" => "(?<msg>\[msg \S(.*?)\"\])" }
    }
    grok {
      match => { "msg" => "(?<alert_msg>\"(.*?)\")" }
    }
    # Extract the User/Scanner Agent from Alert Message
    grok {
      match => { "alert_message" => "(?<scanner>User-Agent' \SValue: `(.*?)')" }
    }
    grok {
      match => { "scanner" => "(?<user_agent>:(.*?)')" }
    }
    grok {
      match => { "alert_message" => "(?<agent>User-Agent: (.*?)')" }
    }
    grok {
      match => { "agent" => "(?<user_agent>: (.*?)')" }
    }
    # Extract the Target Host
    grok {
      match => { "alert_message" => "(hostname \"%{IPORHOST:dst_host})" }
    }
    # Extract the Request URI
    grok {
      match => { "alert_message" => "(uri \"%{URIPATH:request_uri})" }
    }
    # Extract the Referer, if present
    grok {
      match => { "alert_message" => "(?<ref>referer: (.*))" }
    }
    grok {
      match => { "ref" => "(?<referer> (.*))" }
    }
    mutate {
      # Remove unnecessary characters from the fields.
      gsub => [
        "alert_msg", "[\"]", "",
        "user_agent", "[:\"'`]", "",
        "user_agent", "^\s*", "",
        "referer", "^\s*", ""
      ]
      # Remove the unnecessary fields so we only remain with the general message,
      # rules_file, attack_type, rule_id, alert_msg, user_agent, hostname (being attacked), request URI and referer.
      remove_field => [ "alert_message", "rulesfile", "ruleid", "msg", "scanner", "agent", "ref" ]
    }
}
output {
   elasticsearch {
     hosts => ["192.168.56.119:9200"]
     manage_template => false
     index => "modsec-%{+YYYY.MM}"
   }
}
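
Before loading the pipeline, you can optionally verify the configuration syntax first. A quick check, assuming the default installation paths used in this tutorial;

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/modsecurity-filter.conf --path.settings /etc/logstash/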

Note that the fields extracted here are not exhaustive. Feel free to process your logs further and extract any other fields you are interested in.
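
For example, the sample audit log line above also carries a severity section, [severity "2"]. A grok filter along the lines below (our own sketch, not part of the configuration above) could pull it into a severity field, provided it is placed before the mutate block that removes alert_message;

    # Extract the numeric rule severity, e.g. [severity "2"] (illustrative addition)
    grok {
      match => { "alert_message" => "\[severity \"(?<severity>\d+)\"\]" }
    }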

Forward ModSecurity Logs to the Elastic Stack

Once you have installed and set up libModSecurity on your web server, forward the logs to the Elastic Stack, in this case to the Logstash data processing pipeline.

In this setup, we use Filebeat to collect and push the logs to the Elastic Stack. Follow the links below to install and set up Filebeat.

Install and Configure Filebeat on CentOS 8

Install Filebeat on Fedora 30/Fedora 29/CentOS 7

Install and Configure Filebeat 7 on Ubuntu 18.04/Debian 9.8

Ensure that you configure the correct path to the file where the ModSecurity logs are written, as in the sketch below.
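
As a rough sketch, the relevant parts of /etc/filebeat/filebeat.yml would look like the snippet below. The audit log path matches the Apache setup seen in the debug output further down; adjust both the path and the Logstash address to your environment;

filebeat.inputs:
- type: log
  enabled: true
  paths:
    # ModSecurity audit log as written by Apache in this setup
    - /var/log/httpd/modsec_audit.log

output.logstash:
  # The Logstash beats input configured on port 5044 above
  hosts: ["192.168.56.119:5044"]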

Debug Grok Filters

If you need to debug your filters, replace the output section of the Logstash configuration file as shown below so that the processed events are sent to standard output;

output {
   #elasticsearch {
   #  hosts => ["192.168.56.119:9200"]
   #  manage_template => false
   #  index => "modsec-%{+YYYY.MM}"
   #}
  stdout { codec => rubydebug }

}

Stop Logstash;

systemctl stop logstash

Run Logstash against your filter configuration file;

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/modsecurity-filter.conf --path.settings /etc/logstash/

While this runs, ensure that ModSecurity logs are streaming in, so that events are processed as you watch what happens.

Below is sample debug output;

{
            "ecs" => {
        "version" => "1.5.0"
    },
         "src_ip" => "253.63.240.101",
       "dst_host" => "kifarunix-demo.com",
     "@timestamp" => 2020-07-31T08:47:36.940Z,
           "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ],
          "input" => {
        "type" => "log"
    },
...
...
    },
     "rules_file" => "/etc/httpd/conf.d/modsecurity.d/owasp-crs/rules/REQUEST-913-SCANNER-DETECTION.conf",
            "log" => {
        "offset" => 10787,
          "file" => {
            "path" => "/var/log/httpd/modsec_audit.log"
        }
    },
        "rule_id" => "913100",
      "alert_msg" => "Found User-Agent associated with security scanner",
     "user_agent" => "sqlmap/1.2.4#stable (http//sqlmap.org)",
        "message" => "[Tue Jul 14 16:39:28.086871 2020] [:error] [pid 83149:tid 139784827672320] [client 253.63.240.101:50102] ModSecurity: Warning. Matched "Operator `PmFromFile' with parameter `scanners-user-agents.data' against variable `REQUEST_HEADERS:User-Agent' (Value: `sqlmap/1.2.4#stable (http://sqlmap.org)' ) [file "/etc/httpd/conf.d/modsecurity.d/owasp-crs/rules/REQUEST-913-SCANNER-DETECTION.conf"] [line "33"] [id "913100"] [rev ""] [msg "Found User-Agent associated with security scanner"] [data "Matched Data: sqlmap found within REQUEST_HEADERS:User-Agent: sqlmap/1.2.4#stable (http://sqlmap.org)"] [severity "2"] [ver "OWASP_CRS/3.2.0"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-reputation-scanner"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "OWASP_CRS/AUTOMATION/SECURITY_SCANNER"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"] [hostname "kifarunix-demo.com"] [uri "/"] [unique_id "159473756862.024448"] [ref "o0,6v179,39t:lowercase"], referer: http://kifarunix-demo.com:80/?p=1006",
    "attack_type" => "SCANNER-DETECTION",
    "request_uri" => "/",
     "event_time" => "Jul 14 16:39:28.086871 2020",
        "referer" => "http://kifarunix-demo.com:80/?p=1006",
      "log_level" => "error",
       "@version" => "1"
}

Once you are done debugging, you can re-enable the Elasticsearch output. Note that the _grokparsefailure tag in the sample output above simply indicates that at least one of the chained grok patterns did not match that particular event; the fields from the patterns that did match are still populated.

Check that your Elasticsearch index has been created;

curl -XGET 192.168.56.119:9200/_cat/indices/modsec-*?
yellow open modsec-2020.07 bKLGOrJ8SFCAnewz89XvUA 1 1 46 0 104.4kb 104.4kb
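
You can also retrieve a sample indexed document to confirm that the fields were extracted as expected (the index name is taken from the output above);

curl -XGET "192.168.56.119:9200/modsec-2020.07/_search?pretty&size=1"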

Visualize ModSecurity Logs on Kibana

Create a Kibana Index Pattern

On the Kibana dashboard, navigate to Management > Kibana > Index Patterns > Create index pattern. An index pattern can match the name of a single index, or include a wildcard (*) to match multiple indices.

On the next page, select @timestamp as the time filter field, then click Create index pattern.

After that, click on the Discover tab on the left pane to view the data. Select your index pattern, in this case modsec-*.

The screenshot below shows the fields selected based on our Logstash filters.

There you go. You now have the ModSecurity log fields populated on Kibana.

To wrap up this tutorial on how to process and visualize ModSecurity logs on the ELK Stack, you can now proceed to create ModSecurity Kibana visualization dashboards.

Create Kibana Visualization Dashboards for ModSecurity Logs

Click on the link below to create ModSecurity Kibana visualization dashboards.

Further Reading

ModSecurity Logging and Debugging
