# cat /etc/redhat-release    (in practice, ELK 5.0.1 needs a 3.x-or-later kernel)
CentOS Linux release 7.2.1511 (Core)
IP: local machine 192.168.1.73
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.73 centos7.spring.study
# cd /usr/local/src
# tar zxf elasticsearch-5.0.1.tar.gz
# mv elasticsearch-5.0.1 /usr/local/
# ln -s /usr/local/elasticsearch-5.0.1 /usr/local/elasticsearch
# vim /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: ranruichun
node.name: "linux-node1"
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
bootstrap.memory_lock: true
#groupadd elk
#useradd elk -g elk
# su elk /usr/local/services/elk/elasticsearch-5.0.1/bin/elasticsearch
Write a startup script:
# cat /usr/local/elasticsearch/run.sh
su elk -l -c "nohup /usr/local/elasticsearch/bin/elasticsearch > /usr/local/elasticsearch/log.out &"
# vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
# vim /etc/sysctl.conf
vm.max_map_count=655360
vm.swappiness = 0
# curl http://192.168.1.73:9200
{
"name" : "linux-node1",
"cluster_name" : "xxxxxx",
"cluster_uuid" : "IRmR9sPtSBqIqj5gA7oUiw",
"version" : {
"number" : "5.0.1",
"build_hash" : "080bb47",
"build_date" : "2016-11-11T22:08:49.812Z",
"build_snapshot" : false,
"lucene_version" : "6.2.1"
},
"tagline" : "You Know, for Search"
}
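As a quick sanity check you can pull the version number out of that JSON response with plain sed (no jq on a minimal CentOS box). This is just a sketch; the `response` variable below stands in for `curl -s http://192.168.1.73:9200` and is abridged from the output above:

```shell
# Abridged copy of the ES root-endpoint response shown above.
response='{
  "name" : "linux-node1",
  "version" : {
    "number" : "5.0.1"
  }
}'
# Print only the value of the "number" field (the ES version).
version=$(printf '%s\n' "$response" | sed -n 's/.*"number" : "\([^"]*\)".*/\1/p')
echo "$version"
```

The same one-liner works piped straight from curl on the live node.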
# su elk -l -c "/usr/local/elasticsearch/bin/elasticsearch -d"    # start in the background
# cd /usr/local/src/
# git clone https://github.com/elastic/elasticsearch-servicewrapper.git
# mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/
# /usr/local/elasticsearch/bin/service/elasticsearch install
Detected RHEL or Fedora:
Installing the Elasticsearch daemon..
# ls /etc/init.d/elasticsearch
/etc/init.d/elasticsearch
# curl -i -XGET 'http://192.168.1.73:9200/_count?pretty' -d '
{
"query":{
"match_all":{}
}
}
'
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 95
{
"count" : 0,
"_shards" : {
"total" : 0,
"successful" : 0,
"failed" : 0
}
}
Install the head cluster-management plugin:
# git clone git://github.com/mobz/elasticsearch-head.git
Reference articles:
http://blog.csdn.net/reblue520/article/details/53909409
http://blog.csdn.net/sulei12341/article/details/52935271?locationNum=4&fps=1
http://hnr520.blog.51cto.com/4484939/1867033
# cd /usr/local/elasticsearch-head && npm install grunt --save
npm WARN package.json [email protected] license should be a valid SPDX license expression
[email protected] node_modules/grunt
├── [email protected]
# /usr/local/elasticsearch-head/node_modules/grunt/bin/grunt server
... head still needs several more parameter changes before it can reach ES (see the articles above)
Logstash
# cd /usr/local/src
# wget https://download.elastic.co/logstash/logstash/logstash-1.5.4.tar.gz
# tar zxf logstash-1.5.4.tar.gz
# mv logstash-1.5.4 /usr/local/logstash
# java -version    # verify the Java environment
java version "1.7.0_09-icedtea"
-- For production, installation via yum is recommended (so say people online who know more than I do); the method is:
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# cat /etc/yum.repos.d/logstash.repo
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
# yum install logstash
Start it:
/usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
# /usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
Logstash startup completed
hehe
{
"message" => "hehe",
"@version" => "1",
"@timestamp" => "2016-12-13T21:50:51.837Z",
"host" => "rui.study.com"
}
# /usr/local/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { host => "192.168.1.104" protocol => "http" } }'
Configuration file -- official documentation:
https://www.elastic.co/guide/en/logstash/current/configuration.html
For production use:
The Grok Debugger is great for testing grok rules:
https://grokdebug.herokuapp.com/
https://grokdebug.herokuapp.com/patterns#
http://www.open-open.com/lib/view/open1453623562651.html
Apache logs --> standard format
Tomcat custom log sample:
# tail -1 /root/tomcat/tomcat1/logs/fblive-web-www.log.2017-01-22.log
[iZ2535e0vgsZ|10.24.190.246|[fblive-web-www]|2017-01-22 18:44:04.665|[pool-6-thread-1]|WARN |org.hibernate.internal.util.xml.DTDEntityResolver|DTDEntityResolver.java|org.hibernate.internal.util.xml.DTDEntityResolver|resolveEntity|75|1485078461818|HHH000223: Recognized obsolete hibernate namespace http://hibernate.sourceforge.net/. Use namespace http://www.hibernate.org/dtd/ instead. Refer to Hibernate 3.6 Migration Guide!||||
Modified Tomcat grok rules:
# Log Levels
LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo |INFO |[Ww]arn?(?:ing)?|WARN?(?:ING)? |[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)
TOMCAT_DATESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND})
TOMCATLOG %{TOMCAT_DATESTAMP:timestamp} \| %{LOGLEVEL:level} \| %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}
TOMCATFBLOG \[%{IPORHOST:hostname}\|%{IP:serverip}\|%{SYSLOG5424SD:application}\|%{TOMCAT_DATESTAMP:timestamp}\|\[%{DATA:thread}\]\|%{LOGLEVEL:level}\|%{JAVACLASS:logger}\|%{JAVACLASS:file}\|%{JAVACLASS:class}\|%{HOSTNAME:method}\|%{NUMBER:line}\|%{NUMBER:lsn}\|%{GREEDYDATA:msg}
Sample Logstash configuration (alerting is done with plain Linux mail: errors are written to files, and a cron job runs every 5 minutes and mails any errors it finds; a more robust approach is left for later study, for now errors go straight out):
input {
  beats {
    port => 9500
    #mode => "server"
    ssl => false
  }
}
filter {
  if [type] == "apache-accesslog" {
    grok {
      patterns_dir => "/usr/local/services/elk/logstash-5.0.1/logstash-patterns-core/patterns"
      match => { "message" => "%{COMMONAPACHELOG}" }
    }
    geoip {
      source => "clientip"
      add_tag => [ "geoip" ]
    }
    if [clientip] =~ "^100.109" {
      drop {}
    }
    if [request] =~ "server-status" {
      drop {}
    }
    mutate {
      split => ["request", "?"]
    }
    mutate {
      add_field => {
        "requesturl" => "%{[request][0]}"
        "requestparams" => "%{[request][1]}"
      }
    }
    mutate {
      join => ["request", "?"]
    }
    date {
      match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
  }
  if [type] == "tomcat-accesslog" {
    grok {
      patterns_dir => ["/usr/local/services/elk/logstash-5.0.1/logstash-patterns-core/patterns"]
      match => { "message" => "%{TOMCATFBLOG}" }
    }
    if "fblive-api-web" in [application] {
      mutate { replace => { type => "tomcat-fblive-api-web-accesslog" } }
    } else if "fblive-web-www" in [application] {
      mutate { replace => { type => "tomcat-fblive-web-www-accesslog" } }
    }
  }
}
output {
  if [type] == "apache-accesslog" {
    elasticsearch {
      template_overwrite => "true"
      hosts => ["127.0.0.1:9200"]
      index => "logstash-apache-accesslog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "tomcat-fblive-api-web-accesslog" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "tomcat-fblive-api-web-accesslog-%{+YYYY.MM.dd}"
    }
    if [level] == "ERROR" {
      file {
        path => "/root/elk/error_mail/%{+yyyyMMdd}/fblive-api-web%{+HH}.log"
      }
    }
  }
  if [type] == "tomcat-fblive-web-www-accesslog" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "tomcat-fblive-web-www-accesslog-%{+YYYY.MM.dd}"
    }
    if [level] == "ERROR" {
      file {
        path => "/root/elk/error_mail/%{+yyyyMMdd}/fblive-web-www%{+HH}.log"
      }
    }
  }
}
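The three mutate blocks in the apache filter above split `request` on `?`, copy the two halves into `requesturl` and `requestparams`, then join the array back into the original string. The same transformation in plain shell, as a sketch of what happens to each event (the query string here is invented for illustration):

```shell
# Hypothetical "request" value from an Apache access-log event; the path is
# the one used in the elasticdump example later, the query string is made up.
request='/redenv_AfterShare.ss?uid=42&from=share'
# split => ["request", "?"] followed by add_field of [request][0] / [request][1]:
requesturl=${request%%\?*}     # everything before the first '?'
requestparams=${request#*\?}   # everything after the first '?'
# join => ["request", "?"] then restores the original string.
echo "$requesturl"
echo "$requestparams"
```

Keeping `requesturl` as its own field is what makes it easy to aggregate on the URL path in Kibana without the query-string noise.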
# cat /root/elk/error_mail/sendmail.sh
#!/bin/sh
#sendmail error log to someone
function sendErrorMail(){
    file=/root/elk/error_mail/$(date -d '-8 hour' +%Y%m%d/$1%H.log)
    # echo $file
    if [ -f "$file" ]; then
        echo 'send mail: '$file
        mail -s '[error]'$1 [email protected],[email protected] < $file
        mv $file $file.send
    else
        echo 'no file: '$file
    fi
}
#end
sendErrorMail fblive-api-web
sendErrorMail fblive-web-www
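The `date -d '-8 hour'` in sendmail.sh is there, as I read it (the original does not spell this out), because Logstash's `%{+yyyyMMdd}/%{+HH}` path sprintf uses the UTC `@timestamp` while the server clock runs CST (UTC+8): shifting local time back 8 hours reproduces the UTC-based file name. With GNU date you can check the arithmetic against a fixed instant:

```shell
# 2017-01-23 02:00 CST is 2017-01-22 18:00 UTC, so the shifted format
# string reproduces the path fragment Logstash would have written.
fragment=$(date -u -d '2017-01-23 02:00 8 hours ago' +%Y%m%d/%H)
echo "$fragment"
```

If the box ever moves to a UTC clock, the `-8 hour` offset would need to be dropped.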
# crontab -l
00 10 * * * /root/elk/elasticsearch-5.0.1/rm_es_tomcat_7_day_ago.sh delete_tomcat
*/5 * * * * /root/elk/error_mail/sendmail.sh
*/30 * * * * /root/elk/error_mail/stat15m.sh
Kibana
http://kibana.logstash.es/content/index.html    (Chinese-language ELK guide)
https://github.com/chenryn/ELKstack-guide-cn/releases/tag/ELK    (download the Chinese guide to read offline)
# cd /usr/local/src
# tar zxf kibana-5.0.1-linux-x86_64.tar.gz
# mv kibana-5.0.1-linux-x86_64 /usr/local/kibana
# cd /usr/local/kibana/config/
# ll
total 8
-rw-rw-r--. 1 spring spring 4426 Dec 30 10:01 kibana.yml
# vim kibana.yml
elasticsearch.url: "http://192.168.1.73:9200"
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"
In production I also put an Nginx reverse proxy in front of Kibana to handle access authentication.
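A minimal sketch of such a reverse proxy with HTTP basic auth; the server name and htpasswd path are placeholders I made up, and Kibana is assumed to listen on its default port 5601:

```nginx
server {
    listen 80;
    server_name kibana.example.com;                    # placeholder

    auth_basic           "Kibana";
    auth_basic_user_file /etc/nginx/kibana.htpasswd;   # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:5601;              # Kibana's default port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With this in place Kibana itself can be bound to 127.0.0.1 so it is only reachable through the authenticated proxy.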
Filebeat configuration (with multiline settings for the Tomcat logs):
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /root/apache/logs/access*
  document_type: apache-accesslog
-
  paths:
    - /root/tomcat/tomcat1/logs/fblive*
    - /root/data/../bin/logs/fblive*
  document_type: tomcat-accesslog
  multiline:
    pattern: '^\['
    negate: true
    match: after
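With this multiline setup, a line matching `^\[` starts a new event, and (because of `negate: true` plus `match: after`) any line that does not match is appended to the previous event, so multi-line Java stack traces stay attached to their log entry. A quick grep sketch of which lines count as event starts (the continuation line here is invented):

```shell
# First line of the sample Tomcat entry shown earlier (abridged).
start='[iZ2535e0vgsZ|10.24.190.246|[fblive-web-www]|2017-01-22 18:44:04.665|...'
# A hypothetical continuation line, e.g. part of a Java stack trace.
cont='    at org.hibernate.internal.util.xml.DTDEntityResolver.resolveEntity(DTDEntityResolver.java:75)'
pattern='^\['
printf '%s\n' "$start" | grep -q "$pattern" && echo "starts a new event"
printf '%s\n' "$cont"  | grep -q "$pattern" || echo "appended to previous event"
```

If the pattern were left as an unescaped `^[` it would be an invalid regex (an unterminated character class), so the backslash matters.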
Then a real request came in, and ELK proved its worth!
elasticdump: migrating and exporting ES data
Export the documents whose request field matches "/redenv_AfterShare.ss":
/root/software/node_modules/elasticdump/bin/elasticdump --input=http://127.0.0.1:9200/logstash-apache-accesslog-2016.12.13 --output=logstash-apache-accesslog-2016.12.13.json --searchBody '{"query": {"match":{"request":"/redenv_AfterShare.ss"}}}' --type=data --sourceOnly
Export only the message field:
/root/software/node_modules/elasticdump/bin/elasticdump --input=http://127.0.0.1:9200/logstash-apache-accesslog-2016.12.04 \
  --output=/root/software/node_modules/elasticdump/bin/apache_accesslog/logstash-apache-accesslog-2016-12-04.json \
  --searchBody='{ "_source": "message", "query": {"match_all": {}} }' --type=data
This article comes from the "小松鼠" (Little Squirrel) blog; please retain this attribution: http://8295531.blog.51cto.com/8285531/1896417