Medium, Problem 62: 86. Partition List

This post presents a linked-list partition algorithm that places all nodes with values less than a given x before all nodes with values greater than or equal to x, while preserving the original relative order, and walks through a concrete Java implementation.

Problem statement:
Given a linked list and a value x, partition it such that all nodes less than x come before nodes greater than or equal to x.

You should preserve the original relative order of the nodes in each of the two partitions.
Summary:
Given a linked list and a value x, move every node with a value less than x in front of every node with a value greater than or equal to x, preserving the original relative order within each partition.
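For example, given the list 1->4->3->2->5->2 and x = 3, the expected output is 1->2->2->4->3->5 (the sample case from the original problem).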
Analysis:
This resembles the partition step of quicksort; the main work is handling boundary and special cases. Studying the failing test cases and sketching the pointer movements on paper helps.
Source code (Java):

public class Solution {
    public static ListNode partition(ListNode head, int x) {
        if (head == null || head.next == null) return head;
        ListNode beforex = null, afterx = null; // heads of the "< x" and ">= x" lists
        for (ListNode node = head; node != null; node = node.next) {
            if (node.val < x) {
                if (beforex == null) {
                    beforex = new ListNode(node.val);
                } else {
                    // walk to the tail of the "< x" list and append a copy of the node
                    ListNode temp = beforex;
                    while (temp.next != null)
                        temp = temp.next;
                    temp.next = new ListNode(node.val);
                }
            } else {
                if (afterx == null) {
                    afterx = new ListNode(node.val);
                } else {
                    // walk to the tail of the ">= x" list and append a copy of the node
                    ListNode temp = afterx;
                    while (temp.next != null)
                        temp = temp.next;
                    temp.next = new ListNode(node.val);
                }
            }
        }
        // link the ">= x" list after the "< x" list
        ListNode temp = beforex;
        if (temp != null) {
            while (temp.next != null)
                temp = temp.next;
            temp.next = afterx;
            return beforex;
        } else {
            return afterx;
        }
    }
}

Result:
1 ms, beats 4.36% (the mode is 1 ms, covering 95.64% of submissions).
cmershen's notes:
The code for this problem turned out overly complicated; in the future it is worth checking which boundary cases can be merged, as in the simplified sketch below.
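A common way to merge those boundary cases is to use two dummy head nodes with tail pointers and splice the original nodes in place instead of allocating copies. The following is a minimal sketch of that idea, assuming the same LeetCode-provided ListNode class as above; it is not the original solution from this post:

public class Solution {
    public ListNode partition(ListNode head, int x) {
        // Dummy heads eliminate the null checks for the first node of each sublist.
        ListNode beforeHead = new ListNode(0), before = beforeHead;
        ListNode afterHead = new ListNode(0), after = afterHead;
        while (head != null) {
            if (head.val < x) {        // append to the "< x" list
                before.next = head;
                before = before.next;
            } else {                   // append to the ">= x" list
                after.next = head;
                after = after.next;
            }
            head = head.next;
        }
        after.next = null;             // terminate the ">= x" list
        before.next = afterHead.next;  // splice the two lists together
        return beforeHead.next;
    }
}

Because each list keeps its own tail pointer and the existing nodes are reused, this version runs in O(n) time with O(1) extra space, versus the O(n^2) tail walks and extra allocations in the code above.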
