Problem
Given a sorted array, remove the duplicates in place such that each element appears only once, and return the new length.
Do not allocate extra space for another array; you must do this in place with constant memory.
For example,
Given input array nums = [1,1,2],
Your function should return length = 2, with the first two elements of nums being 1 and 2 respectively.
It doesn't matter what you leave beyond the new length.
In short: deduplicate an already sorted array in place; whatever sits beyond the new length does not matter.
This one is fairly simple. There was a deduplication problem earlier, so it felt familiar. Here I used a TreeSet, because a TreeSet keeps its elements in sorted order automatically. I first tried a HashSet, but a HashSet organizes its elements by hash code, so its iteration order is not the sorted order; that is why I switched to a TreeSet.
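For the curious, here is a small sketch, separate from the solution (the class name SetOrderDemo is just for illustration), contrasting the two: only a TreeSet is guaranteed to iterate in ascending order, while a HashSet's iteration order follows its hash table and carries no ordering guarantee.
import java.util.HashSet;
import java.util.TreeSet;

public class SetOrderDemo
{
    public static void main(String[] args)
    {
        int[] values = {3, 1, 2, 1};

        HashSet<Integer> hashSet = new HashSet<Integer>();
        TreeSet<Integer> treeSet = new TreeSet<Integer>();
        for (int v : values)
        {
            hashSet.add(v);
            treeSet.add(v);
        }

        // A HashSet gives no ordering guarantee: iteration order depends on the hash table.
        System.out.println("HashSet: " + hashSet);
        // A TreeSet always iterates in ascending order: prints [1, 2, 3].
        System.out.println("TreeSet: " + treeSet);
    }
}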
import java.util.Iterator;
import java.util.Set;
import java.util.TreeSet;

public class Solution
{
    public static int removeDuplicates(int[] nums)
    {
        int length = nums.length;
        if (length == 0)
            return 0;
        else if (length == 1)
            return 1;
        else
        {
            int num = 0;    // number of duplicate elements found
            Set<Integer> uniqueSet = new TreeSet<Integer>();
            uniqueSet.add(nums[0]);
            for (int i = 1; i < length; i++)
            {
                if (uniqueSet.contains(nums[i]))
                    num += 1;                   // duplicate: just count it
                else
                    uniqueSet.add(nums[i]);     // first occurrence: keep it
            }
            // A TreeSet iterates in ascending order, so copying it back
            // leaves the unique elements sorted at the front of nums.
            Iterator<Integer> iter = uniqueSet.iterator();
            int j = 0;
            while (iter.hasNext())
            {
                nums[j] = iter.next();
                j++;
            }
            return length - num;    // new length = total - duplicates
        }
    }
}
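One thing worth noting: the problem statement asks for constant extra memory, and the TreeSet above is O(n) extra space. A common alternative that does meet the constraint (offered here only as a sketch, not as the approach used above) is the two-pointer technique: since the array is sorted, duplicates are adjacent, so a write pointer can overwrite them in place.
public class SolutionTwoPointer
{
    // Two-pointer sketch: j marks the position of the last written unique element.
    public static int removeDuplicates(int[] nums)
    {
        if (nums.length == 0)
            return 0;
        int j = 0;
        for (int i = 1; i < nums.length; i++)
        {
            if (nums[i] != nums[j])
            {
                j++;
                nums[j] = nums[i];  // move the next unique value forward
            }
        }
        return j + 1;   // length of the unique prefix
    }
}
For nums = [1,1,2] this returns 2 with the first two elements left as 1 and 2, matching the expected output, and it uses only O(1) extra space.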
With the previous problem as groundwork, the next one is a very simple variant:
Follow up for "Remove Duplicates":
What if duplicates are allowed at most twice?
For example,
Given sorted array nums = [1,1,1,2,2,3],
Your function should return length = 5, with the first five elements of nums being 1, 1, 2, 2 and 3.
It doesn't matter what you leave beyond the new length.
// Requires: import java.util.HashMap;
public int removeDuplicates(int[] nums)
{
    int length = nums.length;
    int[] result = new int[length];     // elements kept so far, in order
    HashMap<Integer, Integer> hashMap = new HashMap<Integer, Integer>();
    int j = 0;                          // number of elements kept
    for (int i = 0; i < length; i++)
    {
        if (hashMap.containsKey(nums[i]))
        {
            if (hashMap.get(nums[i]) < 2)
            {
                // second occurrence: still allowed, bump the count and keep it
                hashMap.put(nums[i], hashMap.get(nums[i]) + 1);
                result[j] = nums[i];
                j++;
            }
            // third or later occurrence: skip it
        }
        else
        {
            // first occurrence of this value
            hashMap.put(nums[i], 1);
            result[j] = nums[i];
            j++;
        }
    }
    // Copy the kept elements back to the front of nums.
    for (int k = 0; k < j; k++)
        nums[k] = result[k];
    return j;
}
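Again, the HashMap plus temporary array above costs O(n) extra space. If constant memory is wanted for this variant too, the two-pointer idea extends naturally: because the array is sorted, an element may be kept whenever it differs from the element written two positions earlier. A minimal sketch under that observation (an alternative, not the method above):
public class SolutionTwoPointerII
{
    // Each value may appear at most twice; j is the length of the kept prefix.
    public static int removeDuplicates(int[] nums)
    {
        int j = 0;
        for (int i = 0; i < nums.length; i++)
        {
            // Keep nums[i] if fewer than two elements are kept so far,
            // or if it differs from the element two slots behind the write position.
            if (j < 2 || nums[i] != nums[j - 2])
            {
                nums[j] = nums[i];
                j++;
            }
        }
        return j;
    }
}
For nums = [1,1,1,2,2,3] this returns 5 with the first five elements being 1, 1, 2, 2 and 3.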
This post walks through removing duplicates from a sorted array and returning the deduplicated length: the first problem uses a TreeSet, which keeps its elements automatically sorted, and the follow-up uses a HashMap to count how many times each value has been kept. Both solutions trade the constant-memory constraint for the convenience of these collections.