【MOS】EVENT: DROP_SEGMENTS - Forcing cleanup of TEMPORARY segments (Doc ID 47400.1)




***Checked for relevance on 14-Jun-2012***
 The DROP_SEGMENTS event
~~~~~~~~~~~~~~~~~~~~~~~
 Available from 8.0 onwards.
 

   DESCRIPTION Finds all the temporary segments in a tablespace that are not
     currently locked and drops them.
     For the purpose of this event a "temp" segment is defined as a 
     segment (seg$ entry) with TYPE#=3. Sort space in a TEMPORARY
     tablespace does not qualify under this definition as such
     space is managed independently of SEG$ entries.

   PARAMETERS
     level - tablespace number+1. If the value is 2147483647 then
             temp segments in ALL tablespaces are dropped, otherwise, only
             segments in a tablespace whose number is equal to the LEVEL
             specification are dropped.

   NOTES
     This routine does what SMON does in the background, i.e. drops
     temporary segments. It is provided as a manual intervention tool which
     the user may invoke if SMON misses the post and does not get to
     clean the temp segments for another 2 hours. We do not know whether
     a missed post is a real possibility or a more theoretical situation, so
     we provide this event as insurance against SMON misbehaviour.

     Under normal operation there is no need to use this event.

     It may be a good idea to 
        alter tablespace <tablespace> coalesce;
     after dropping lots of extents to tidy things up.

 *SQL Session (if you can connect to the database):      
    alter session set events 'immediate trace name DROP_SEGMENTS level TS#+1';

     The TS# can be obtained from v$tablespace view:
     select ts# from v$tablespace where name = '<Tablespace name>';

     Or from SYS.TS$:

     select ts# from sys.ts$ where name = '<Tablespace name>' and online$ != 3;
     
     If ts# is 5, an example of dropping the temporary segments in that tablespace 
     would be:

    alter session set events 'immediate trace name DROP_SEGMENTS level 6'; 




Master Note: Database System Monitor Process (SMON) (Doc ID 1495163.1)

Temporary Segment Cleanup


Oracle Database often requires temporary workspace for intermediate stages of SQL statement execution. Typical operations that may require a temporary segment include sorting, hashing, and merging bitmaps. While creating an index, Oracle Database also places index segments into temporary segments and then converts them into permanent segments when the index is complete.

Oracle Database does not create a temporary segment if an operation can be performed in memory. However, if memory use is not possible, then the database automatically allocates a temporary segment on disk.
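The memory-versus-disk decision above can be observed directly. A minimal sketch using the standard sort statistics in V$SYSSTAT (statistic names as in standard Oracle releases):

```sql
-- Compare sorts completed in memory with sorts that spilled to disk;
-- a growing "sorts (disk)" value means temporary segments are being allocated.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('sorts (memory)', 'sorts (disk)');
```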

Temporary segments are also created for the following operations:

  • CREATE TABLE AS SELECT
  • ALTER INDEX REBUILD
  • DROP TABLE
  • CREATE SNAPSHOT
  • CREATE PARTITION TABLE
When does SMON clean up temporary segments?

During normal operations, user processes that create temporary segments are responsible for cleanup. If the user process dies before cleaning them up, or the user process receives an error causing the statement to fail, SMON is posted to do the cleanup.

  • Sort segments residing in PERMANENT tablespace are cleaned up by SMON after the sort is completed.
  • For performance reasons, extents in TEMPORARY tablespaces are not released or deallocated once the operation is complete. Instead, the extent is simply marked as available for the next sort operation. SMON cleans up these segments at instance startup.
SMON may also get tied up rolling back uncommitted transactions and be too busy to service requests to grow an existing sort segment. Starting with Oracle 8i, tuning FAST_START_PARALLEL_ROLLBACK may work around that. In addition, operations such as CREATE INDEX create a temporary segment for the index and only convert it to permanent once the index has been built. Similarly, DROP <object> converts the segment to temporary and then cleans up that temporary segment.
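Segments still in use by a session cannot be cleaned up by SMON (or by the DROP_SEGMENTS event). A sketch for listing the sessions currently holding temporary space, using the V$SORT_USAGE view (renamed V$TEMPSEG_USAGE from 9i onwards):

```sql
-- Sessions that currently own space in a temporary tablespace;
-- these segments are locked and will not be dropped by any cleanup attempt.
SELECT s.sid, s.serial#, s.username,
       u.tablespace, u.segtype, u.extents, u.blocks
  FROM v$session s, v$sort_usage u
 WHERE s.saddr = u.session_addr;
```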
Temporary Segments in a Permanent Tablespace

The background process System Monitor (SMON) frees the temporary segments when the statement has completed. If a large number of sort segments has been created, SMON may take some time to drop them, which implies a loss of overall database performance. After SMON has freed a temporary segment, the space is released for use by other objects.

Temporary Segments in a Temporary Tablespace

The background process SMON actually de-allocates the sort segment after the instance has been started and the database has been opened. Thus, after the database has been opened, SMON may be seen to consume large amounts of CPU as it first de-allocates the (extents from the) temporary segment, and after that performs free space coalescing of the free extents created by the temporary segment cleanup. This behavior will be exaggerated if the temporary tablespace, in which the sort segment resides, has inappropriate (small) default NEXT storage parameters.

How to identify whether SMON is cleaning up temporary extents ?

Check whether there is a large number of temporary extents that might be being cleaned up by running the following query a few times:

SELECT COUNT(*) FROM DBA_EXTENTS WHERE SEGMENT_TYPE='TEMPORARY';

If the count returned by the above query is dropping while SMON is working, it is likely that SMON is performing temp segment cleanup.
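To see where the backlog is concentrated, the same count can be broken down by tablespace (a minimal variation on the query above):

```sql
-- Per-tablespace view of outstanding temporary extents; re-run a few times.
-- A falling count in a tablespace means SMON is cleaning it up.
SELECT tablespace_name,
       COUNT(*)             AS temp_extents,
       SUM(bytes)/1024/1024 AS mb
  FROM dba_extents
 WHERE segment_type = 'TEMPORARY'
 GROUP BY tablespace_name
 ORDER BY temp_extents DESC;
```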

Effects on Database
  • SMON will continually acquire and then release the ST enqueue in exclusive mode. This can cause contention with other processes and lead to ORA-01575 errors.
  • It should be noted that a normal/immediate shutdown will not complete until all temporary segments have been cleaned up. Shutdown will 'kick' SMON to complete cleanup.
  • If you are using TEMPORARY type temporary tablespaces, SMON's cleanup of the segment can be a problem, because it will not service sort segment requests while performing cleanup.
  • If SMON is busy cleaning up a TEMP segment containing a lot of extents it cannot service 'sort segment requests' from other sessions. Pointing the users at a PERMANENT tablespace as their temporary tablespace can help keep the system running until SMON is free again.
  • Starting with Oracle8i, rather than reverting to a PERMANENT tablespace while SMON is cleaning up an old sort segment at startup, you can potentially drop and recreate the tempfiles of the existing TEMPORARY tablespace. The cleanup should be faster anyway since, by rule, a TEMPORARY tablespace made of tempfiles needs to be LOCALLY MANAGED. You can remove tempfiles from TEMPORARY tablespaces and keep the logical structure empty.
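The ST enqueue contention described in the first bullet can be confirmed from V$LOCK; a sketch (a non-zero REQUEST against type 'ST' indicates a waiter blocked behind cleanup):

```sql
-- Sessions holding or waiting for the ST (space transaction) enqueue.
-- LMODE > 0 is a holder; REQUEST > 0 is a waiter.
SELECT sid, type, lmode, request, ctime
  FROM v$lock
 WHERE type = 'ST';
```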
Avoidance
  • Do not create temporary tablespaces with small initial and next default storage parameters.
  • Use tablespaces of type TEMPORARY. Sort segments in these tablespaces are not cleaned up while the instance is running. This reduces contention on the ST enqueue and also reduces CPU usage by SMON **UNLESS** the database is shut down and restarted.
  • Beware of creating large objects with inappropriate (small) extents. If the creation of the object fails, SMON cleans up. Also, dropping such an object will create a lot of cleanup work for the user process.
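The first two avoidance points can be combined. A hypothetical example of a locally managed TEMPORARY tablespace with uniform extents (the tablespace name, file path, and sizes below are illustrative only, not from the note):

```sql
-- Hypothetical: locally managed temporary tablespace with uniform 1 MB extents,
-- avoiding both small dictionary-managed extents and ST enqueue contention.
CREATE TEMPORARY TABLESPACE temp_new
  TEMPFILE '/u01/oradata/ORCL/temp_new01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```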
Force Temp Segment cleanup

The DROP_SEGMENTS event can be set to force the cleanup of temporary segments.

This routine does what SMON does in the background, i.e. drops temporary segments. It is provided as a manual intervention tool which the user may invoke if SMON misses the post and does not get to clean the temp segments for another 2 hours.

level - tablespace number+1. If the value is 2147483647 then temp segments in ALL tablespaces are dropped, otherwise, only
segments in a tablespace whose number is equal to the LEVEL specification are dropped.

alter session set events 'immediate trace name DROP_SEGMENTS level TS#+1';

If ts# is 5, an example of dropping the temporary segments in that tablespace would be:

alter session set events 'immediate trace name DROP_SEGMENTS level 6';

 

Note 177334.1 Overview of Temporary Segments 
Note 35513.1 Removing 'Stray' TEMPORARY Segments
Note 61997.1 SMON - Temporary Segment Cleanup and Free Space Coalescing
Note 160426.1 TEMPORARY Tablespaces: Tempfiles or Datafiles?
Note 102339.1 Temporary Segments: What Happens When a Sort Occurs 
Note 1039341.6 Temporary Segments Are Not Being De-Allocated After a Sort 
Note 68836.1 How To Efficiently Drop (or Truncate) A Table With Many Extents
Note 47400.1 EVENT: DROP_SEGMENTS - Forcing cleanup of TEMPORARY segments
Note 132913.1 How To Free Temporary Segment in Temporary Tablespace Dynamically

 




About Me

This article is reproduced from MOS (My Oracle Support).

Source: ITPUB blog, http://blog.itpub.net/26736162/viewspace-2137022/
