ELK Part 5: Document APIs with the Java API 6.0.0

This article walks through document operations with the Elasticsearch 6.0 Java API: indexing, retrieval, update, and delete. It focuses on the TransportClient, and notes that it is slated to be replaced by the REST clients.

Overview

There are three main Java client options:

1. TransportClient
2. JestClient
3. RestClient

There was also a NodeClient in 2.3, which was removed after 5.5.1, and spring-data-elasticsearch is another option.

The Java API version must match the version of the Elasticsearch server it talks to: the API changes significantly between releases and does not guarantee forward compatibility, so it is best to use the exact matching version.
This article targets:
elasticsearch 6.0.0

TransportClient deprecation warning (7.0)

We plan on deprecating the TransportClient in Elasticsearch 7.0 and removing it completely in 8.0. Instead, you should be using the Java High Level REST Client, which executes HTTP requests rather than serialized Java requests. The migration guide describes all the steps needed to migrate.
[Updated 2019-05-24; thanks to commenter guofangyao for the reminder.]
For new code, call Elasticsearch over REST instead; the TransportClient approach is no longer recommended.

Maven

Add the dependencies below.
Make sure the version numbers match your server version, or you will get errors, especially on the client connection:

<!-- elasticsearch -->
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.0.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <version>6.0.0</version>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.8</version>
</dependency>

Java API usage

import java.io.IOException;
import java.net.InetAddress;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkProcessor.Listener;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.script.Script;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

/***********
 * 
 * Elasticsearch version: 6.0.0
 * Document APIs usage
 * 
 * @author bamboo
 *
 */
public class ElkMain {

	
	private static TransportClient client;
	
	 public static void getClient() {
		 try {
		      // set the cluster name
		      Settings settings = Settings.builder().put("cluster.name", "my-application").build();
		      // create the client
		      //@SuppressWarnings("resource")
		      client = new PreBuiltTransportClient(settings)
		              .addTransportAddress(new TransportAddress(InetAddress.getByName("192.168.0.91"), 9300));

		      // write data
		      // createDate(client);
		      // search data
//		      GetResponse response = client.prepareGet("accounts", "person", "1").execute().actionGet();
//		      // print the result
//		      System.out.println(response.getSource());

		      // close the client
		      //client.close();
		  } catch (Exception e) {
		      e.printStackTrace();
		  }
	}
	
	
	
	
	/** 
	   * Insert a single document, e.g.:
	   * {"account_number":44,"balance":34487,"firstname":"Aurelia","lastname":"Harding","age":37,"gender":"M","address":"502 Baycliff Terrace","employer":"Orbalix","email":"aureliaharding@orbalix.com","city":"Yardville","state":"DE"}
	   */  
   public static void addDate(){
    Map<String, Object> map = new HashMap<String, Object>();
    map.put("account_number", 45);
    map.put("balance", 34487);
    map.put("firstname", "Aurelia");
    map.put("lastname", "Harding");
    map.put("age", 37);
    map.put("gender", "M");
    map.put("address", "浙江省杭州市江干区人民公园中国梦");
    map.put("employer", "小李");
    map.put("email", "aureliaharding@orbalix.com");
    map.put("city", "Yardville");
    map.put("state", "DE");

    try {
	    // as a JSON string
    	// String json = JSON.toJSONString(map);
    	// IndexResponse response = client.prepareIndex("accounts", "person", "47").setSource(json, XContentType.JSON).get();
	    // as a Map
        IndexResponse response = client.prepareIndex("accounts", "person", "45")  // or UUID.randomUUID().toString()
                   .setSource(map).execute().actionGet();
        System.out.println(response.toString());
        System.out.println("index result=" + response.status().getStatus() + ", id=" + response.getId());
    } catch (Exception e) {
        e.printStackTrace();
    }
   } 
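
   /* For reference, a sketch (not part of the original program) of the REST
    * request equivalent to the prepareIndex call above:
    *
    *   PUT /accounts/person/45
    *   {
    *     "account_number": 45,
    *     "balance": 34487,
    *     "firstname": "Aurelia",
    *     "lastname": "Harding",
    *     "age": 37,
    *     "gender": "M",
    *     ... remaining fields exactly as put into the map ...
    *   }
    */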
	
   
   
   
   // fetch a single document by id
   public static void getDataByID(String id) {
		 try {
		      // look up the document
		      GetResponse response = client.prepareGet("accounts", "person", id).execute().actionGet();
		      // print the source
		      System.out.println(response.getSource());

		      // close the client
		      //client.close();
		  } catch (Exception e) {
		      e.printStackTrace();
		  }
	}
   
   /**
    * Delete a single document (Delete API)
    */
   public static void deleteById(String id) {
       DeleteResponse response = client.prepareDelete("accounts", "person", id)
               .get();
       String index = response.getIndex();
       String type = response.getType();
       String rid = response.getId();
       long version = response.getVersion();
       System.out.println(response.toString());  
       System.out.println(index + " : " + type + ": " + rid + ": " + version);
   }
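
   /* For reference (sketch): the prepareDelete call above corresponds to the
    * REST call
    *   DELETE /accounts/person/{id}
    * whose response carries the index, type, id and new version — the same
    * fields printed above.
    */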
	
   
   
   
   /**
    * Update API
    * using an UpdateRequest object
    */
   public static void update(String id)  {
		UpdateRequest updateRequest = new UpdateRequest();

		// client.prepareUpdate("accounts", "person", "1") is equivalent to the following:
//	       updateRequest.index("accounts");
//	       updateRequest.type("person");
//	       updateRequest.id(id);
//	       updateRequest.doc(XContentFactory.jsonBuilder()
//	               .startObject()
//	               // adds a missing field, replaces an existing one
//	               .field("employer", "小李子22")
//	               .field("gender", "M")
//	               .endObject());
//	       UpdateResponse response = client.update(updateRequest).get();

		// update with a Script object
//		UpdateResponse response = client.prepareUpdate("accounts", "person", id)
//		     .setScript(new Script("ctx._source.employer = \"小李子22\""))
//		     .get();

		// update with XContentFactory.jsonBuilder()
		UpdateResponse response = null;
		try {
			response = client.prepareUpdate("accounts", "person", id)
			         .setDoc(XContentFactory.jsonBuilder()
			                 .startObject()
			                     .field("employer", "小李子333")
			                 .endObject()).get();
		} catch (IOException e) {
			e.printStackTrace();
		}

		// update with an UpdateRequest plus a script
//		UpdateRequest updateRequest = new UpdateRequest("accounts", "person", id)
//             .script(new Script("ctx._source.gender=\"male\""));
//		UpdateResponse response = client.update(updateRequest).get();

		// print the result (guard against a failed update)
		if (response != null) {
			String index = response.getIndex();
			String type = response.getType();
			String rid = response.getId();
			long version = response.getVersion();
			System.out.println(response.toString());
			System.out.println(index + " : " + type + ": " + rid + ": " + version);
		}
   }
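
   /* For reference (sketch): the setDoc-based partial update above corresponds
    * to this REST request:
    *
    *   POST /accounts/person/{id}/_update
    *   { "doc": { "employer": "小李子333" } }
    *
    * Only the fields listed under "doc" are merged into the existing document.
    */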
   
   
   /**
   * Upsert
   * if the document is not found, the index request below is applied instead
   */
  public static void upSet(String id) {
      try {
		// indexed only when the document does not exist yet
		  IndexRequest indexRequest = new IndexRequest("accounts", "person", id)
		      .source(XContentFactory.jsonBuilder()
		          .startObject()
		              .field("name", "qergef")
		              .field("gender", "malfdsae")
		          .endObject());
		  // merged into the document when it is found
		  UpdateRequest upSet = new UpdateRequest("accounts", "person", id)
		      .doc(XContentFactory.jsonBuilder()
		              .startObject()
		                  .field("user", "wenbronk")
		              .endObject())
		      .upsert(indexRequest);

		  client.update(upSet).get();
	} catch (IOException | InterruptedException | ExecutionException e) {
		e.printStackTrace();
	}
  }
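
  /* For reference (sketch): the doc + upsert pair above corresponds to this
   * REST request:
   *
   *   POST /accounts/person/{id}/_update
   *   {
   *     "doc":    { "user": "wenbronk" },
   *     "upsert": { "name": "qergef", "gender": "malfdsae" }
   *   }
   *
   * If the document exists, only "doc" is merged; otherwise "upsert" is indexed.
   */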
  
   /**
   * Multi Get API
   * fetch documents from different indices, types and ids
   */ 
  public static void testMultiGet() {
      MultiGetResponse multiGetResponse = client.prepareMultiGet()
      .add("twitter", "tweet", "1")
      .add("accounts", "person",  "6", "13", "18") // these ids happen to exist in my data; adjust for your own
      .add("index", "type", "foo")
      .get();
      
      for (MultiGetItemResponse itemResponse : multiGetResponse) {
          GetResponse response = itemResponse.getResponse();
          if (response!=null&&response.isExists()) {
              String sourceAsString = response.getSourceAsString();
              System.out.println(sourceAsString);
          }
      }
  }
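
  /* For reference (sketch): the multi-get above corresponds to this REST
   * request:
   *
   *   GET /_mget
   *   {
   *     "docs": [
   *       { "_index": "twitter",  "_type": "tweet",  "_id": "1" },
   *       { "_index": "accounts", "_type": "person", "_id": "6" },
   *       { "_index": "accounts", "_type": "person", "_id": "13" },
   *       { "_index": "accounts", "_type": "person", "_id": "18" },
   *       { "_index": "index",    "_type": "type",   "_id": "foo" }
   *     ]
   *   }
   */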
  
  
  
  
  
  /**
   * Bulk: index, update or delete several documents in one request
   */
  public static void testBulk()  {
      try {
		BulkRequestBuilder bulkRequest = client.prepareBulk();
		  bulkRequest.add(client.prepareIndex("accounts", "person", "50")
		          .setSource(XContentFactory.jsonBuilder()
		                  .startObject()
		                      .field("user", "kimchy")
		                      .field("postDate", new Date())
		                      .field("message", "trying out Elasticsearch")
		                  .endObject()));
		  bulkRequest.add(client.prepareIndex("accounts", "person", "51")
		          .setSource(XContentFactory.jsonBuilder()
		                  .startObject()
		                      .field("user", "kimchy")
		                      .field("postDate", new Date())
		                      .field("message", "another post")
		                  .endObject()));
		  BulkResponse response = bulkRequest.get();
		  System.out.println(response.toString());
	} catch (IOException e) {
		e.printStackTrace();
	}
  }
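
  /* For reference (sketch): the bulk request above serializes to the
   * newline-delimited _bulk payload:
   *
   *   POST /_bulk
   *   { "index": { "_index": "accounts", "_type": "person", "_id": "50" } }
   *   { "user": "kimchy", "postDate": "...", "message": "trying out Elasticsearch" }
   *   { "index": { "_index": "accounts", "_type": "person", "_id": "51" } }
   *   { "user": "kimchy", "postDate": "...", "message": "another post" }
   */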
   
  
  
  /**
   * BulkProcessor usage
   */
  public static void testBulkProcessor() throws Exception {
      // create the BulkProcessor
      BulkProcessor bulkProcessor = BulkProcessor.builder(client, new Listener() {

		@Override
		public void beforeBulk(long executionId, BulkRequest request) {
			// called before each bulk execution
		}

		@Override
		public void afterBulk(long executionId, BulkRequest request,
				BulkResponse response) {
			// called after each successful bulk execution
		}

		@Override
		public void afterBulk(long executionId, BulkRequest request,
				Throwable failure) {
			// called when a bulk execution fails
		}

      })
      // flush after 10,000 requests
      .setBulkActions(10000)
      // flush once 1 GB of data has accumulated
      .setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB))
      // flush at least every 5 seconds regardless
      .setFlushInterval(TimeValue.timeValueSeconds(5))
      // concurrency: 0 executes serially, 1 allows one bulk to run while the next accumulates
      .setConcurrentRequests(1)
      // exponential backoff: start at 100 ms, retry at most 3 times
      .setBackoffPolicy(
              BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3))
      .build();

      // add individual requests (note: an IndexRequest also needs a source before it can be indexed)
      bulkProcessor.add(new IndexRequest("accounts", "person", "50"));
      bulkProcessor.add(new DeleteRequest("accounts", "person", "51"));

      // close, waiting for in-flight requests to finish
      bulkProcessor.awaitClose(10, TimeUnit.MINUTES);
      // or
      bulkProcessor.close();
  }
   
	public static void main(String[] args) throws Exception {
		getClient();
		// addDate();
		// getDataByID("44");
		// deleteById("45");
//		update("44");
//		getDataByID("44");

//		upSet("46"); // upsert
//		getDataByID("46");

//		testMultiGet();

		// testBulk(); // bulk execution
		// getDataByID("51");

		testBulkProcessor(); // bulk processing; below 10,000 actions it flushes on the 5s interval or on close
		getDataByID("51");
	}

}

References

Official API 6.0.0 documentation (Chinese version)
Java API [6.0] » Document APIs
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-docs.html
