Java 8: Deduplicating a List with the Stream API

This post shows how to use the Stream API's distinct() method in Java 8 to remove duplicate elements from a List, looks at how distinct() relies on hashCode() and equals(), and provides a custom helper for deduplicating by a specific field.

Case 1
User user1 = new User(1,"安那","anna");
User user2 = new User(2,"杰克","jake");
User user3 = new User(3,"卢卡斯","Lucas");
List<User> userList = new ArrayList<>();
userList.add(user1);
userList.add(user1);
userList.add(user2);
userList.add(user3);
List<User> distinctUserList = userList.stream().distinct().collect(Collectors.toList());
distinctUserList.forEach(user -> System.out.println(User.outputUserInfo(user)));
Output:
1安那anna
2杰克jake
3卢卡斯Lucas
Case 2
User user1 = new User(1,"安那","anna");
User user2 = new User(2,"杰克","jake");
User user3 = new User(3,"卢卡斯","Lucas");
List<User> userList = new ArrayList<>();
userList.add(user1);
userList.add(new User(1,"安那","anna"));
userList.add(user2);
userList.add(user3);
List<User> distinctUserList = userList.stream().distinct().collect(Collectors.toList());
distinctUserList.forEach(user -> System.out.println(User.outputUserInfo(user)));
Output:
1安那anna
1安那anna
2杰克jake
3卢卡斯Lucas
distinct() works in terms of hashCode() and equals(). In Case 1 the same user1 reference is added twice, so the duplicate is removed; in Case 2 the two User objects with id 1 are separate instances, and because User does not override equals()/hashCode(), distinct() treats them as different elements and keeps both.
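For reference, here is a minimal sketch of a User class that would support the snippets above. The field names, getId() and outputUserInfo() are assumptions inferred from the calls; the equals()/hashCode() overrides at the end are not part of the original class, and adding them would make distinct() in Case 2 drop the second id-1 user as well.

import java.util.Objects;

public class User {
    private final int id;
    private final String name;      // e.g. "安那"
    private final String nickname;  // e.g. "anna"

    public User(int id, String name, String nickname) {
        this.id = id;
        this.name = name;
        this.nickname = nickname;
    }

    public int getId() { return id; }

    // Produces the "1安那anna" style strings printed in the examples.
    public static String outputUserInfo(User user) {
        return user.id + user.name + user.nickname;
    }

    // Not in the original class: with these overrides, distinct() compares
    // users by field values instead of object identity, so Case 2 would
    // also print each user only once.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof User)) return false;
        User other = (User) o;
        return id == other.id
                && Objects.equals(name, other.name)
                && Objects.equals(nickname, other.nickname);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name, nickname);
    }
}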
A custom helper: filtering duplicates by a specific field
// Requires java.util.Map, java.util.concurrent.ConcurrentHashMap,
// java.util.function.Function and java.util.function.Predicate.
static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
    // putIfAbsent returns null only the first time a key is seen,
    // so only the first element with each key passes the filter.
    Map<Object, Boolean> seen = new ConcurrentHashMap<>();
    return t -> seen.putIfAbsent(keyExtractor.apply(t), Boolean.TRUE) == null;
}
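Note that the helper is stateful: the seen map lives as long as the returned predicate. Using ConcurrentHashMap with putIfAbsent keeps it correct even if the stream is processed in parallel, although in a parallel stream there is no guarantee which of several duplicates survives.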
Usage:
List<User> distinctUserList2 = userList.stream()
        .filter(distinctByKey(user -> user.getId()))
        .collect(Collectors.toList());
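Assuming the userList from Case 2 and the same forEach print as above, only the first user with each id is kept, so the output matches Case 1:

distinctUserList2.forEach(user -> System.out.println(User.outputUserInfo(user)));

1安那anna
2杰克jake
3卢卡斯Lucas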