logstash.conf examples (code snippets)

PoetryAndTheDistance · 2022-11-29


A good introductory guide to Logstash: http://doc.yonyoucloud.com/doc/logstash-best-practice-cn/index.html

On defining the index template created for the Elasticsearch output: https://www.cnblogs.com/you-you-111/p/9844131.html

https://www.cnblogs.com/cangqinglang/p/12187801.html

 

ongdb_query_log.conf

```
# Input
input {
	beats {
		port => "5044"
		client_inactivity_timeout => 36000
	}
}

# Filter
filter {
	grok {
		match => {
			"message" => [
				"%{TIMESTAMP_ISO8601:log_timestamp} %{LOGLEVEL:log_level}  %{INT:time_consuming} %{USERNAME:time_consuming_unit}:.*client/%{IP:client_ip}:%{INT:client_port}.*%{IP:server_ip}:%{INT:server_port}.* -[ |\r\n]%{GREEDYDATA:cypher} - .* - .*",
				"%{TIMESTAMP_ISO8601:log_timestamp} %{LOGLEVEL:log_level}  %{INT:time_consuming} %{USERNAME:time_consuming_unit}:.*(ongdb|graph-user1|graph-user2|neo4j|graph-user3|techfin|esg) -[ |\r\n]%{GREEDYDATA:cypher} - ",
				"%{TIMESTAMP_ISO8601:log_timestamp} %{LOGLEVEL:log_level}  %{INT:time_consuming} %{USERNAME:time_consuming_unit}:.* -[ |\r\n]%{GREEDYDATA:cypher} - .* - .*"
			]
		}

#		add_field => ["day", "%{+YYYY.MM.dd}"]
#		add_field => ["received_at", "%{@timestamp}"]
#		add_field => ["received_log", "%{host}"]
#		remove_field => ["host"]
#		add_field => ["received_logstash", "%{host}"]
#		remove_field => ["message","@timestamp","tags","log","input","agent","ecs"]

		add_field => ["received_at", "%{@timestamp}"]
		add_field => ["received_from", "%{host}"]
		add_field => ["day", "%{+YYYY.MM.dd}"]
		remove_field => ["message","@timestamp","tags","log","input","agent","ecs","host"]
	}
#	mutate {
#		convert => ["time_consuming", "integer"]
#	}
#	date {
#		match => [ "log_timestamp", "YYYY-MMM-dd HH:mm:ss.SSS Z" ]
#	}
}

# Output
output {
	elasticsearch {
#		hosts => "http://10.20.13.130:9200"
		hosts => "http://10.20.8.155:9200"
		index => "logstash_ongdb_querylog_%{day}"
#		index => "ongdb_querylog"
		template => "/home/ubuntu/ongdbETL/logstash-7.5.1/bin/conf/logstash_ongdb_querylog.json"
		template_name => "logstash_ongdb_querylog_*"
		template_overwrite => true
	}
	stdout {}
}
```
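To sanity-check the grok patterns before wiring up Filebeat, the third (most generic) pattern can be approximated as a plain Python regex. The sample line below is hypothetical, constructed to follow the query.log layout the patterns assume, not taken from a real log:

```python
import re

# Hypothetical query.log line laid out the way the grok patterns expect
# (timestamp, level, duration, unit, client/server endpoints, Cypher, params).
line = ("2022-11-29 10:15:30.123+0000 INFO  37 ms: bolt-session bolt "
        "client/127.0.0.1:52611 server/127.0.0.1:7687 "
        "- MATCH (n) RETURN n LIMIT 1 - {} - {}")

# Plain-regex approximation of the third (generic) grok pattern above
pattern = re.compile(
    r"(?P<log_timestamp>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}\.\d+\S*)\s+"
    r"(?P<log_level>[A-Z]+)\s+"
    r"(?P<time_consuming>\d+) (?P<time_consuming_unit>\w+):"
    r".* -[ \r\n](?P<cypher>.*) - .* - .*"
)

m = pattern.match(line)
print(m.group("time_consuming"), m.group("time_consuming_unit"))  # 37 ms
print(m.group("cypher"))  # MATCH (n) RETURN n LIMIT 1
```

If this kind of check fails against real log lines, the spacing between the level and duration columns is usually the first thing to adjust, since Neo4j/ONgDB pads those columns.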

filebeat.yml

```yaml
# Run with: ./filebeat -c filebeat_neo4j_log.yml -e
filebeat.inputs:
- type: log
  enabled: true
  encoding: utf-8
  paths:
    - /home/ongdb/ongdb-enterprise-3.5.22/logs/query.*
  # A new event starts with a YYYY-MM-DD date; other lines are continuations
  multiline.pattern: '^\d{4}-\d{2}-\d{2}.*'
  multiline.negate: true
  multiline.match: after

output.logstash:
  hosts: ["10.20.4.28:5044"]
```
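The multiline settings above mean: any line that does not start with a `YYYY-MM-DD` date is glued onto the previous event, which keeps multi-line Cypher statements together as one log entry. A small sketch of that behavior (the sample lines are hypothetical):

```python
import re

# Same pattern as multiline.pattern in the filebeat.yml above
starts_event = re.compile(r'^\d{4}-\d{2}-\d{2}')

def merge_multiline(lines):
    """Mimic multiline.negate: true + multiline.match: after —
    lines NOT matching the pattern are appended to the previous event."""
    events = []
    for line in lines:
        if starts_event.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

sample = [
    "2022-11-29 10:15:30.123+0000 INFO  37 ms: ... - MATCH (n)",
    "RETURN n LIMIT 1 - {} - {}",   # continuation line, no leading date
    "2022-11-29 10:15:31.001+0000 INFO  5 ms: ... - RETURN 1 - {} - {}",
]
events = merge_multiline(sample)
print(len(events))  # 2 — the first event spans two physical lines
```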

logstash_neo4j_querylog.json 

```json
{
  "template": "logstash_ongdb_querylog_*",
  "order": 1,
  "settings": {
    "number_of_replicas": 0,
    "number_of_shards": 1,
    "refresh_interval": "60s",
    "translog": {
      "flush_threshold_size": "256mb"
    },
    "merge": {
      "scheduler": {
        "max_thread_count": "1"
      }
    },
    "index": {
      "routing": {
        "allocation": {
          "total_shards_per_node": "1"
        }
      }
    },
    "analysis": {
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "time_consuming":      { "index": true, "store": true, "type": "integer" },
      "time_consuming_unit": { "index": true, "store": true, "type": "keyword" },
      "client_ip":           { "index": true, "store": true, "type": "keyword" },
      "client_port":         { "index": true, "store": true, "type": "keyword" },
      "server_ip":           { "index": true, "store": true, "type": "keyword" },
      "server_port":         { "index": true, "store": true, "type": "keyword" },
      "cypher":              { "index": true, "store": true, "type": "text" },
      "received_from":       { "index": true, "store": true, "type": "keyword" },
      "received_at":         { "index": true, "store": true, "type": "keyword" },
      "log_level":           { "index": true, "store": true, "type": "keyword" },
      "log_timestamp":       { "index": true, "store": true, "type": "keyword" }
    }
  },
  "aliases": {
    "logstash_neo4j_querylog": {}
  }
}
```

Note: on Elasticsearch 6+ the legacy `"template"` key was replaced by `"index_patterns"`, so depending on the ES version this file may need `"index_patterns": ["logstash_ongdb_querylog_*"]` instead.
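One consistency check worth doing: the template's wildcard pattern has to cover the daily index names the Logstash output generates. Assuming the `day` field comes from the Joda-style sprintf pattern `%{+YYYY.MM.dd}` (roughly strftime `%Y.%m.%d`), the naming works out as:

```python
import fnmatch
from datetime import datetime, timezone

# Joda "+YYYY.MM.dd" from the filter maps roughly to strftime "%Y.%m.%d"
def index_name(ts: datetime) -> str:
    return "logstash_ongdb_querylog_" + ts.strftime("%Y.%m.%d")

name = index_name(datetime(2022, 11, 29, tzinfo=timezone.utc))
print(name)  # logstash_ongdb_querylog_2022.11.29

# Shell-style glob as a stand-in for Elasticsearch's template wildcard match
print(fnmatch.fnmatch(name, "logstash_ongdb_querylog_*"))  # True
```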
 
