Installing the Elasticsearch Service on CentOS 7 from the tar.gz Archive
Unlike installation from the distribution's package repositories, this article describes how to install the Elasticsearch service from the tar.gz archive; the Linux distribution used is CentOS 7.
Download
https://www.elastic.co/downloads/elasticsearch
Choose Linux x86_64 and download elasticsearch-8.8.0-linux-x86_64.tar.gz
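If you prefer the command line, the direct download link generally follows the pattern below (URL assumed from the download page above; the .sha512 file lets you verify the archive after downloading):
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.8.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.8.0-linux-x86_64.tar.gz.sha512
sha512sum -c elasticsearch-8.8.0-linux-x86_64.tar.gz.sha512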
Install
Extract the archive, move it under /opt/elasticsearch, and add a symlink
tar xvf elasticsearch-8.8.0-linux-x86_64.tar.gz
cd /opt/
sudo mkdir elasticsearch
cd elasticsearch/
sudo mv ~/backup/elasticsearch-8.8.0 .
sudo chown -R milton:milton elasticsearch-8.8.0/
sudo ln -s elasticsearch-8.8.0 latest
This version of Elasticsearch ships with its own JVM: openjdk version "20.0.1" 2023-04-18
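To confirm the bundled JDK version yourself, run the java binary under the jdk directory of the extracted folder (the path below assumes the symlink layout created above):
/opt/elasticsearch/latest/jdk/bin/java -version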
Configuration
Settings in config/elasticsearch.yml that may need to be changed
# Use a descriptive name for your cluster:
#cluster.name: my-application
# Use a descriptive name for the node:
node.name: centos7001
# Add custom attributes to the node:
#node.attr.rack: r1
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /home/milton/es_run/data
# Path to log files:
path.logs: /home/milton/es_run/logs
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
network.host: 192.168.9.10
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#http.port: 9200
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.seed_hosts: ["centos7001"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["centos7001"]
# For more information, consult the discovery and cluster formation module documentation.
# Allow wildcard deletion of indices:
#action.destructive_requires_name: false
xpack.security.enabled: false
- cluster.name: my-application, the cluster name
- node.name, set this to the hostname of the current server
- path.data: /somewhere/data, the data directory
- path.logs: /somewhere/logs, the log directory
- network.host: 192.168.123.123, the address to listen on; by default only 127.0.0.1 is used
- http.port: 9200, the port to listen on; the default is 9200
- discovery.seed_hosts: ["192.168.123.123"], the list of cluster hosts. At least one of this setting and cluster.initial_master_nodes below must be configured, otherwise startup fails. For a single node this line can be commented out.
- cluster.initial_master_nodes: ["centos7001"], the hostnames of the master-eligible nodes used to bootstrap the cluster on first startup; they must resolve to IPs
node.name and cluster.initial_master_nodes accept either IPs or hostnames, but the two must match. A minimal single-node example is sketched below.
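Putting the settings together, a minimal single-node configuration might look like this (paths, hostname and address are the placeholder values from the sample above; disabling security is only acceptable for local testing):
node.name: centos7001
path.data: /home/milton/es_run/data
path.logs: /home/milton/es_run/logs
network.host: 192.168.9.10
cluster.initial_master_nodes: ["centos7001"]
xpack.security.enabled: false
Alternatively, discovery.type: single-node can be used in place of cluster.initial_master_nodes for a node that will never join a multi-node cluster.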
System configuration
The following changes address the startup errors listed below
- max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
- max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
- the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
- Transport SSL must be enabled if security is enabled. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]
1. max file descriptors 65535
Edit /etc/security/limits.conf (or /etc/security/limits.d/20-nproc.conf) and add or change the following lines
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
root soft nproc unlimited
The change takes effect after logging in again (or rebooting); check with ulimit -n
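After logging back in, both limits can be checked from the shell that will launch Elasticsearch:
ulimit -n    # open files, should now report 65535
ulimit -u    # max user processes, should now report 65535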
2. vm.max_map_count 262144
Edit /etc/sysctl.conf (or /etc/sysctl.d/99-sysctl.conf) and add or change the following line
vm.max_map_count=262144
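The new value can be applied without a reboot and then verified, for example:
sudo sysctl --system
sysctl vm.max_map_count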
3. the default discovery settings are unsuitable for production use
At least one of discovery.seed_hosts, discovery.seed_providers and cluster.initial_master_nodes must be configured
- discovery.seed_hosts: list of cluster hosts
- discovery.seed_providers: load the list of cluster hosts from a file
- cluster.initial_master_nodes: master-eligible nodes used to bootstrap the cluster on first startup
Edit config/elasticsearch.yml and set the following two entries
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]
4. Transport SSL must be enabled if security is enabled
Edit config/elasticsearch.yml and add
xpack.security.enabled: false
5. WARN: This node is a fully-formed single-node cluster
If the log contains a warning like this
[2023-06-09T07:29:43,781][WARN ][o.e.c.c.Coordinator ] [centos7001] This node is a fully-formed single-node cluster with cluster UUID [6ejfGD71SVe6OpypK-1HmA], but it is configured as if to discover other nodes and form a multi-node cluster via the [discovery.seed_hosts=[192.168.123.123]] setting. Fully-formed clusters do not attempt to discover other nodes, and nodes with different cluster UUIDs cannot belong to the same cluster. The cluster UUID persists across restarts and can only be changed by deleting the contents of the node's data path(s). Remove the discovery configuration to suppress this message.
it means this node is already a fully-formed single-node cluster, yet the configuration tells it to discover other nodes. Remove the hosts configured in discovery.seed_hosts (or comment the setting out).
Run
Run in the foreground; logs are written directly to the console
/opt/elasticsearch/latest/bin/elasticsearch
Run in the background by appending -d -p pid-file; after printing some startup logs to the console, the process detaches into the background if no errors occur
/opt/elasticsearch/latest/bin/elasticsearch -d -p /opt/elasticsearch/latest/logs/pid
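To confirm a clean start you can follow the log file; with the default cluster.name of elasticsearch and the path.logs from the sample configuration, the main log file would be at the path below (an assumption based on that configuration):
tail -f /home/milton/es_run/logs/elasticsearch.log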
Stop
Stop it via the recorded pid; use whichever pid file was given at startup
pkill -F /opt/elasticsearch/latest/logs/pid
Accessing the service
Open http://192.168.123.123:9200/ in a browser; if you see output like the following from Elasticsearch, it is running correctly
{
"name" : "centos70",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "_na_",
"version" : {
"number" : "8.8.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "c01029875a091076ed42cdb3a41c10b1a9a5a22f",
"build_date" : "2023-05-23T17:16:07.179039820Z",
"build_snapshot" : false,
"lucene_version" : "9.6.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
Check cluster health
curl -XGET "127.0.0.1:9200/_cat/health?v"
List all indices in the cluster
$ curl -XGET "192.168.123.123:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open commodity002 XIrCTL_XQq2vteuEflY6vA 1 1 0 0 247b 247b
yellow open commodity001 Z-LKjzsuR8uMLgVlYYALEw 1 1 0 0 247b 247b
yellow open commodity004 sSxEiwNBSvernMH6EYsEvw 1 1 0 0 247b 247b
yellow open commodity003 JSRUndkHQ8mQVdTkN9eCPw 1 1 0 0 247b 247b
Create an index
Without a request body; ?pretty formats the JSON response
curl -X PUT "localhost:9200/commodity?pretty"
With settings
curl -H 'Content-Type: application/json' -X PUT 'http://192.168.123.123:9200/commodity007?pretty' \
--data '{
"settings": {
"number_of_shards": 3,
"number_of_replicas": 2
}
}'
With field mappings
curl -H 'Content-Type: application/json' -X PUT 'http://192.168.123.123:9200/commodity008?pretty' \
--data '{
"settings": {
"number_of_shards": 2,
"number_of_replicas": 1
},
"mappings": {
"properties": {
"name":{
"type": "text"
},
"studymodel":{
"type": "keyword"
},
"price":{
"type": "double"
},
"timestamp": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
},
"pic":{
"type":"text",
"index": false
}
}
}
}'
For fields that hold nested objects, mappings can themselves be hierarchical, for example declaring obj1 as a nested field. Note that mapping types such as type1 from pre-7.x examples no longer exist in Elasticsearch 8, so the nested field sits directly under properties:
{
    "mappings": {
        "properties": {
            "obj1": {
                "type": "nested"
            }
        }
    }
}
View index mappings and settings
curl -X GET 'http://192.168.123.123:9200/commodity008?pretty'
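Deleting an index uses the same path with the DELETE verb (irreversible; commodity007 here is just one of the example indices created above):
curl -X DELETE 'http://192.168.123.123:9200/commodity007?pretty'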
Index a document
The path sets _id = 1; calling PUT again with the same _id updates the document, and _version in the response increases
curl --location --request PUT 'http://192.168.123.123:9200/commodity008/_doc/1?pretty' \
--header 'Content-Type: application/json' \
--data '{
"name": "commodity008001",
"studymodel": "202306",
"price": 123.12,
"timestamp": "2023-05-25 19:11:35",
"pic": "23/06/01/a123b1fde0428.jpg"
}'
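A stored document can be fetched back by its _id, and a document can also be indexed without choosing an _id: POST to _doc and Elasticsearch generates one. Both calls below assume the commodity008 index from above, with made-up field values:
curl -X GET 'http://192.168.123.123:9200/commodity008/_doc/1?pretty'
curl -H 'Content-Type: application/json' -X POST 'http://192.168.123.123:9200/commodity008/_doc?pretty' \
--data '{
    "name": "commodity008002",
    "studymodel": "202306",
    "price": 45.60,
    "timestamp": "2023-06-01 10:00:00",
    "pic": "23/06/01/b456c2d.jpg"
}'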
Search
Different indices can be targeted through the URL path
- /_search, all indices
- /commodity008/_search, the commodity008 index
- /commodity007,commodity008/_search, commodity007 and commodity008
- /commodity*/_search, indices whose names start with commodity
Search across all indices
curl -X GET 'http://192.168.123.123:9200/_search?pretty'
Search within one index
curl -X GET 'http://192.168.123.123:9200/commodity008/_search?pretty'
Search with a query condition
curl -H 'Content-Type: application/json' -X GET 'http://192.168.123.123:9200/commodity008/_search?pretty' \
--data '{
"query" : {
"match" : {
"name": "commodity008001"
}
}
}'
Offset and result count: add from and size parameters to the request body
{
"from":10,
"size":20,
"query":{
"match_all": {}
}
}
Sorting: add a sort parameter to the request body
{
"sort":[{"year":"desc"}],
"query":{
"match_all": {}
}
}
Restricting the returned fields: add a _source parameter to the request body
{
"_source":["title"],
"query":{
"match_all": {}
}
}
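These body parameters can be combined in a single request; a complete call against the example index might look like this (field names follow the commodity008 mapping above):
curl -H 'Content-Type: application/json' -X GET 'http://192.168.123.123:9200/commodity008/_search?pretty' \
--data '{
    "from": 0,
    "size": 10,
    "sort": [{"price": "desc"}],
    "_source": ["name", "price"],
    "query": {
        "match_all": {}
    }
}'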
Response format
{
"took": 422,
"timed_out": false,
"_shards": {
"total": 2,
"successful": 2,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 2,
"relation": "eq"
},
"max_score": 1.0,
"hits": [
{
"_index": "commodity008",
"_id": "1",
"_score": 1.0,
"_source": {
"name": "commodity008001",
"studymodel": "202307",
"price": 123.53,
"timestamp": "2023-05-25 19:11:35",
"pic": "23/06/01/a123b1fde0428.jpg"
}
},
...
]
}
}
References
- Installing Elasticsearch from the tar.gz archive: https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html