Some Scripts of Our Own

Through the years, we've been writing scripts to do all kinds of things. The ones included in this section are our favorite database tuning scripts--developed, begged, and borrowed over the last few years. These scripts aren't definitive in any way; they could have been, and probably have been, written in a dozen alternative ways. But they do get the work done. (You will find all of these scripts on the O'Reilly web site.)

What Version of Oracle?

The version of Oracle that you are running can have a significant impact on your performance. Use the following command to show version information:

SELECT *
  FROM v$version;
-------------------------------------------------------------- 
Oracle7 Server Release 7.2.3.2.0 - Production Release                          
PL/SQL Release 2.2.2.2.0 - Production
CORE Version 2.3.7.1.0 - Production (LSF Alpha)
TNS for SVR4: Version 2.1.6.0.0 - Production 
NLSRTL Version 2.3.6.0.0 - Production             

What Are the INIT.ORA Settings?

In addition to the standard INIT.ORA parameters, Oracle also has a number of undocumented parameters, which appear with an underscore in front of them in parameter displays. Several of the undocumented parameters, such as _DB_BLOCK_WRITE_BATCH and _LOG_ENTRY_PREBUILD_THRESHOLD, can be used to improve your performance. (See the discussion of such parameters in Chapters and .)

Run this script to show your system's parameter settings; then compare your settings to the recommendations in this book.

# undoc.sql
SELECT name, value 
  FROM v$parameter;
SELECT ksppinm, ksppivl 
  FROM x$ksppi  
 WHERE SUBSTR(ksppinm,1,1) = '_';

We don't show the output from this script because of its excessive length.

Looking Inside the SGA

This script shows details of the System Global Area in memory. The most significant figure is the reading for "free memory": if free memory is excessive, the SHARED_POOL_SIZE INIT.ORA parameter can probably be reduced. If free memory is low, you should not decrease the SHARED_POOL_SIZE.

WARNING: Oracle tends to maintain some free memory even when the shared pool is flooded with activity and needs to be made larger.

Other useful figures are "db_block_buffers," the size of the buffer cache, which usually needs to be at least 20 megabytes for optimum performance; the "sql area," which is where all of the shared SQL is placed; the "dictionary cache," which is where Oracle's dictionary is held; and the "log buffer," which is where all changes are written before they are written to your redo logs.

The log buffer should typically be at least 32,768 bytes. The "sql area" and "dictionary cache" sizes are affected by the size of your SHARED_POOL_SIZE INIT.ORA parameter. Unfortunately, the dictionary cache is tuned automatically by the kernel, and not very well. The majority of sites operate most efficiently with a shared pool size of at least 30,000,000 bytes.

# sgastat.sql
SELECT * 
FROM v$sgastat
WHERE name IN ('free memory', 'db_block_buffers', 'log_buffer',
               'dictionary cache', 'sql area', 'library cache');
 
NAME                                BYTES                                   
---------------------------------- ----------                                
free memory                             88652                             
db_block_buffers                     20480000                             
log_buffer                             512000                           
dictionary cache                      2528868                          
sql area                             43658416                          
library cache                        13177800                          

Identifying Database Extents

One of the most common activities you'll find yourself doing as a DBA is scanning the physical database, looking for new table and index extents. You ought to do this on a regular basis, ideally as part of an automated daily or weekly overnight procedure. Your goal is to minimize the number of extents on disk. Access to contiguous areas of disk is much faster than access to noncontiguous areas. In one test that we did on a 4,000-row table, we found that when the entire table fit on one extent, it took 0.76 second to scan it; when the table was spread over 10 extents, it took 3.62 seconds.

We have found that the existence of a small number of such extents (fewer than five) doesn't seem to affect performance very much, but it is still good practice to store your objects in a single extent. (In the next section, we describe how to size the table for the reorganization while not wasting valuable disk space.)

The following script assumes that the operating system Oracle block size is 4 kilobytes and that all ROLLBACK segments were created with 10 initial extents (MINEXTENTS parameter).

# objstor.sql
SELECT SUBSTR(s.segment_name,1,20) OBJECT_NAME, 
        SUBSTR(s.segment_type,1,5) TYPE, 
        SUBSTR(s.tablespace_name,1,10) T_SPACE,   
        NVL(NVL(t.initial_extent, i.initial_extent),r.initial_extent)/ 4096
            FST_EXT,
        NVL(NVL(t.next_extent,i.next_extent),R.NEXT_EXTENT) / 4096 NXT_EXT,
            s.extents - 1  tot_ext,
            s.blocks  tot_blks
        FROM    
            dba_rollback_segs R,
            dba_indexes I,
            dba_tables T,
            dba_segments S
    WHERE s.segment_name     LIKE  UPPER('&S_NAME')  || '%'
      AND s.tablespace_name  LIKE  UPPER('&T_SPACE') || '%'
      AND s.extents             >  1
      AND s.owner            =     t.owner (+)
      AND s.segment_name     =     t.table_name (+)
      AND s.tablespace_name  =     t.tablespace_name (+)
      AND s.owner            =     i.owner (+)
      AND s.segment_name     =     i.index_name (+)
      AND s.tablespace_name  =     i.tablespace_name (+)
      AND s.owner            =     r.owner (+)
      AND s.segment_name     =     r.segment_name (+)
      AND s.tablespace_name  =     r.tablespace_name (+)
    ORDER BY s.segment_name,   s.segment_type;
 
    OBJECT_NAME             TYPE     T_SPACE    FST    NXT    TOT    TOT
                                                EXT    EXT    EXT    BLKS
    ------------------      ------   --------   ---    ---    ---    ----
    ALL_TRAN_AUDX_INDX      INDEX    DEV_IDX    125    63     2      251
    OBJ$                    TABLE    SYSTEM     13     13     1      26
    PRODUCT_PROFILE         TABLE    SYSTEM     13     13     1      26
    RBACK1                  ROLLB    RBK        25     25     9      300
    RBACK2                  ROLLB    RBK        25     25     9      525
    RBACK_BIG               ROLLB    RBK        256    256    9      2560
    XREF$                   TABLE    SYSTEM     13     13     1      26

Performing Database Table Sizing

This section contains several scripts that we use to size a database.

Looking for tablespace space shortages

When tables, indexes, and rollback segments are created, they are preassigned a storage allocation (extent), which is reserved and cannot be used by any other object. Although the objects may not use all of the space allocated at the start, as more information is placed into the area the amount of available free space diminishes. This query helps you instantly find application problems resulting from space shortages. Run it at regular intervals for the best information. Note that this script assumes that you have Oracle data blocks of 4 kilobytes (4,096 bytes). This size is operating system dependent, and you will have to modify the query if your block sizes differ.

## tspuse.sql
SELECT  SUBSTR(D.tablespace_name,1,15)                        TSPACE,
            D.file_id                                         FILE_ID,
            D.bytes / 1024 / 1024                             TOT_MB,
            D.bytes / 4096                                    ORA_BLKS,
            SUM(E.blocks)                                     TOT_USED,
            ROUND(SUM(E.blocks) / (D.bytes / 4096) * 100, 1)  PCT_USED
    FROM   sys.dba_extents   E, 
           sys.dba_data_files D
    WHERE  D.file_id = E.file_id (+)
    GROUP  BY D.tablespace_name, D.file_id, D.bytes
/
 
TSPACE    FILE_ID    TOT_MB     ORA_BLKS      TOT_USED      PCT_USED
-------   -------    ------     --------      --------      --------
DEV       4          250        64000         36633         57.2 
DEV_AUD   6          100        25600         3691          14.4 
DEV_IDX   5          300        76800         61317         79.8 
HST       7          200        51200         38400         75.0 
INV       8          80         20480         13739         67.1 
INV_IDX   9          50         12800         7673          59.9 
RBK       3          25         6400          4110          64.2 
SYSTEM    1          20         5120          2366          46.2 
TMP       2          50         12800
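
The PCT_USED arithmetic above is simply TOT_USED / ORA_BLKS x 100. As a quick sanity check on the DEV row, here is a minimal sketch (Python used purely for illustration; the figures come from the sample output above):

```python
# Recompute the DEV row of the tspuse.sql output.
tot_mb = 250                              # TOT_MB: file size in megabytes
ora_blks = tot_mb * 1024 * 1024 // 4096   # 4 KB Oracle blocks
tot_used = 36633                          # TOT_USED: blocks holding extents

pct_used = round(tot_used / ora_blks * 100, 1)
print(ora_blks, pct_used)                 # 64000 57.2
```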

Looking for tablespace fragmentation

This query of the database gives a detailed breakdown of the fragmentation of each tablespace file within the database.

# fragment.sql
SELECT    SUBSTR(ts.name,1,10)                                TSPACE,
              tf.blocks                                       BLOCKS,
              SUM(f.length)                                   FREE,
              COUNT(*)                                        PIECES,
              MAX(f.length)                                   BIGGEST,
              MIN(f.length)                                   SMALLEST,
              ROUND(AVG(f.length))                            AVERAGE, 
              SUM(DECODE(SIGN(f.length-5), -1, f.length, 0))  DEAD
    FROM      sys.fet$           F, 
              sys.file$          TF, 
              sys.ts$            TS
    WHERE     ts.ts#  =  f.ts#
    AND       ts.ts#  =  tf.ts# 
    GROUP BY  ts.name, tf.blocks;
 
Tspace   Blocks      Free    Pieces  Biggest   Smallest    Average    Dead

DEV      64000       27366   9       25614     105         3041       0
DEV_AUD  25600       21908   1       21908     21908       21908      0
DEV_IDX  76800       15482   16      175       4           968        2
HST      51200       12799   1       12799     12799       12799      0
INV      20480       6740    12      6740      6740        6740       0
INV_IDX  12800       5126    4       2565      63          1282       0
RBK      6400        2289    1       2289      2289        2289       0
SYSTEM   5120        2753    74      487       3           16         12 
TMP      12800       12799   41      1536      11          312        0

The last column, "Dead," is based on the assumption that any contiguous block smaller than five Oracle blocks (20 kilobytes for the operating system that we used for testing) cannot be used. That is, no table or index has an INITIAL or NEXT extent size less than 20 kilobytes.
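
The "Dead" computation can be sketched outside the database. Assuming, as above, that any free chunk smaller than five Oracle blocks is unusable (the extent lengths below are hypothetical):

```python
# Free-extent lengths in Oracle blocks for one tablespace (hypothetical).
free_extents = [487, 3, 2, 105, 1, 4, 16]

DEAD_THRESHOLD = 5   # chunks below 5 blocks (20 KB at 4 KB/block) are unusable

free = sum(free_extents)
dead = sum(length for length in free_extents if length < DEAD_THRESHOLD)
print(free, dead)    # 618 10
```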

Looking at space use by individual tables

This query reports how full a particular table actually is. It compares the number of Oracle blocks that have at least one record against the total number of blocks allocated to the table extent(s). You can use this query to interrogate table after table; the table name replaces the name &TAB_NAME in each statement execution.

# tabused.sql
SELECT   BLOCKS                                       ALLOCATED_BLKS,
             COUNT(DISTINCT SUBSTR(T.ROWID,1,8) 
                         || SUBSTR(T.ROWID,15,4))     USED, 
            (COUNT(DISTINCT SUBSTR(T.ROWID,1,8) 
                         || SUBSTR(T.ROWID,15,4)) 
             / BLOCKS) * 100                          PCT_USED
    FROM     SYS.DBA_SEGMENTS E,
             &TAB_NAME T
    WHERE    E.SEGMENT_NAME = UPPER ('&TAB_NAME')
    AND      E.SEGMENT_TYPE = 'TABLE' 
    GROUP BY E.BLOCKS;
 
    ALLOCATED_BLKS         USED          PCT_USED
    --------------         ----          --------
    2560                   1728          67.50

Looking at the average number of records per block

This query reports the number of rows physically residing in Oracle blocks of a table. This query can be used to calculate how much space a table will ultimately require.

# blokrows.sql
SELECT   SUBSTR(T.ROWID,1,8)  || '-' ||SUBSTR(T.ROWID,15,4)      BLOCK,
             COUNT(*)                                            ROW_CNT
    FROM     &TAB_NAME T
    WHERE    ROWNUM  <  2000
    GROUP BY SUBSTR(T.ROWID,1,8) || '-' || SUBSTR(T.ROWID,15,4);

Output from this query is as follows:

    BLOCK                   ROW_CNT 
    -------------           -------
    00001F52-0002           93 
    00001F53-0002           85 
    00001F54-0002           82
    00001F55-0002           100 
    00001F56-0002           83
    00001F57-0002           71 
    00001F58-0002           82 
    00001F59-0002           91 
    00001F5A-0002           93 
    00001F5B-0002           91 
    00001F5C-0002           63
    00001F5D-0002           69 
    00001F5E-0002           75 
    00001F5F-0002           1 
    00001F60-0002           4 
    00001F61-0002           5

Putting it together

By looking at the results of the set of scripts that are included in this section, you can do a good job of calculating future table requirements:

Script 1
Tells us that the DEV tablespace is only 57% used. Of the initial 250-megabyte allocation, more than 107 megabytes are still free. This tablespace should not be a problem for some time.

Script 2
Tells us that the 107 megabytes of free space within the DEV tablespace comprise only nine contiguous segments and no dead blocks. This tells us that the free space is indeed free and usable.

Script 3
Tells us that the table being analyzed currently has consumed only 67% of its current extent allocations.

Script 4
Tells us that the table being analyzed can store an average of 80 to 90 records per Oracle block (4 kilobytes per block). Therefore the current volume represents one year of growth and is already 40,000 records in size. What storage will be needed for 10 years' growth?

We advise you to be relatively conservative in making estimates so that you don't consume needless amounts of disk space, while being sensible enough not to run out of space too soon. This is not always an easy task! Our calculation is as follows:

Total records      = 40,000 x 10
                   = 400,000 records  (adjust to 500,000)

Records Per Block  = 80 . . . 90      (adjust to 75)

Space Requirements = (Total Records / Records Per Block) x Block Size
                   = ( 500,000 / 75 ) x ( 1024 x 4 )
                   = 27,306,667 bytes
                   = 26.0 megabytes
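
The same estimate, expressed as a short sketch (Python used for illustration; the inputs are the adjusted figures above):

```python
# Ten-year storage estimate using the adjusted figures.
total_records = 500_000        # 40,000/year x 10, rounded up for safety
records_per_block = 75         # 80-90 observed, adjusted down
block_size = 4 * 1024          # 4 KB Oracle block

bytes_needed = total_records / records_per_block * block_size
print(round(bytes_needed))                    # 27306667
print(round(bytes_needed / 1024 / 1024, 1))   # 26.0
```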

Checking Extent Sizes and PCTINCREASE

If the default extent sizes and PCTINCREASE on your tablespaces are set incorrectly, these can have a marked impact on your performance. The SYSTEM tablespace has a default PCTINCREASE of 50%. If you decrease that value to 0, as many sites do, the number of extents on the dictionary objects becomes excessive and degrades performance. The default INITIAL and NEXT parameters on your temporary tablespace can also cause a lot of extents to be thrown if they are set to a value smaller than the SORT_AREA_SIZE. Ideally, the NEXT extent sizes should be equal to, or a multiple of, the SORT_AREA_SIZE.

SELECT SUBSTR(tablespace_name, 1,18), initial_extent, 
      next_extent, pct_increase 
  FROM dba_tablespaces
 ORDER BY tablespace_name;

Another troublesome problem that can affect performance is when a user other than SYSTEM or SYS has his or her default tablespace set to the SYSTEM tablespace. Use the following query to ensure that all of your users have been assigned to an appropriate tablespace:

SELECT username
  FROM dba_users
 WHERE username NOT IN ('SYS', 'SYSTEM')
   AND (default_tablespace = 'SYSTEM'
                OR
        temporary_tablespace = 'SYSTEM');

Looking at Objects That Can't Throw an Extent

Many scripts list all objects for which no free extent is large enough, assuming that the object has filled its current space allocation and is forced to throw an extent. The following script provides the same information but runs 20 times faster:

# exent.sql
SELECT seg.owner, seg.segment_name,
       seg.segment_type, seg.tablespace_name, t.next_extent
  FROM sys.dba_segments seg, sys.dba_tables t
 WHERE seg.segment_type = 'TABLE'
   AND seg.segment_name = t.table_name
   AND seg.owner        = t.owner
   AND NOT EXISTS
       (SELECT tablespace_name
          FROM dba_free_space free
         WHERE free.tablespace_name =  t.tablespace_name
           AND bytes               >=  t.next_extent)
UNION
SELECT seg.owner, seg.segment_name,
       seg.segment_type, seg.tablespace_name,
       DECODE(seg.segment_type, 'CLUSTER', c.next_extent)
  FROM sys.dba_segments seg, sys.dba_clusters c
 WHERE seg.segment_type = 'CLUSTER'
   AND seg.segment_name = c.cluster_name
   AND seg.owner        = c.owner
   AND NOT EXISTS
       (SELECT tablespace_name
          FROM dba_free_space free
         WHERE free.tablespace_name =  c.tablespace_name
           AND bytes               >=  c.next_extent)
UNION
SELECT seg.owner, seg.segment_name,
       seg.segment_type, seg.tablespace_name,
       DECODE(seg.segment_type, 'INDEX', i.next_extent)
  FROM sys.dba_segments seg, sys.dba_indexes i
 WHERE seg.segment_type = 'INDEX'
   AND seg.segment_name = i.index_name
   AND seg.owner        = i.owner
   AND NOT EXISTS
       (SELECT tablespace_name
          FROM dba_free_space free
         WHERE free.tablespace_name =  i.tablespace_name
           AND bytes               >=  i.next_extent)
UNION
SELECT seg.owner, seg.segment_name,
       seg.segment_type, seg.tablespace_name,
       DECODE(seg.segment_type, 'ROLLBACK', r.next_extent)
  FROM sys.dba_segments seg, sys.dba_rollback_segs r
 WHERE seg.segment_type = 'ROLLBACK'
   AND seg.segment_name = r.segment_name
   AND seg.owner        = r.owner
   AND NOT EXISTS
       (SELECT tablespace_name
          FROM dba_free_space free
         WHERE free.tablespace_name =  r.tablespace_name
           AND bytes               >=  r.next_extent);

Determining Archive Log Disk Location

From both a performance and recovery perspective, it's important to put the archive logs on a different disk from that used by your other data files and redo logs. If the archive logs are on the same disk as other data files or the redos, there is a high likelihood of disk I/O bottlenecks.

# datafile.sql
SELECT value     
FROM   v$parameter
WHERE  name like 'log_archive_dest' 
UNION
SELECT name
FROM   v$datafile
UNION 
SELECT member
FROM   v$logfile
/

Which User Is Using the CPU?

The following query lists your users' CPU usage, ordered by largest usage first. You must have TIMED_STATISTICS set to TRUE for the readings to appear. If TIMED_STATISTICS is set to FALSE, all of the readings will be zero.

# sesscpu.sql
SELECT SUBSTR(name,1,30) parameter,
       ss.username||'('||se.sid||') ' user_process, value
FROM   v$session ss, v$sesstat se, v$statname sn
WHERE  se.statistic# = sn.statistic#
AND    name  like '%CPU used by this session%'
AND    se.sid = ss.sid
ORDER BY substr(name,1,25), value DESC
/

Computing the Hit Ratio

Throughout this book, we have emphasized how proper sizing of the Oracle cache buffers can help to reduce I/O. Computing the hit ratio is a very helpful way to do this sizing. The hit ratio tells us how many times Oracle has needed to retrieve a database block and has found it in memory (rather than having to access it on disk). Because memory access is so much faster than disk access, the higher the hit ratio, the better your performance.

You can ordinarily obtain the hit ratio for your application only by looking at either the UTLBSTAT/UTLESTAT statistics or the DBA MONITOR screens, as described earlier in this chapter. The script included below shows how to get the hit ratio from SQL*Plus. If you do this, you can automatically schedule hit ratio queries and can direct the output to a report or another database table. That table can then be used to produce application statistics or management reporting.

# hitrate.sql
SELECT 
   SUM(DECODE(name, 'consistent gets',value, 0))  "Consis Gets",
   SUM(DECODE(name, 'db block gets',value, 0))    "DB Blk Gets",
   SUM(DECODE(name, 'physical reads',value, 0))   "Phys Reads",
  (SUM(DECODE(name, 'consistent gets',value, 0))  
    + SUM(DECODE(name, 'db block gets',value, 0))  
    -  SUM(DECODE(name, 'physical reads',value, 0)))
						/
  (SUM(DECODE(name, 'consistent gets',value, 0)) 
     + SUM(DECODE(name, 'db block gets',value, 0))  )  * 100 "Hit Ratio" 
FROM v$sysstat;

The following output is taken from an actual site. Before the buffer cache was increased and a handful of SQL statements were tuned, the hit ratio was at 26%.

Consis Gets DB Blk Gets Phys Reads  Hit Ratio
----------- ----------- ---------- ----------
  436987321      877262    2142974 99.5105852
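
The Hit Ratio column follows directly from the three counters: (consistent gets + db block gets - physical reads) / (consistent gets + db block gets) x 100. Checking the figures above with a short sketch:

```python
# Recompute the hit ratio from the v$sysstat figures above.
consis_gets = 436987321
db_blk_gets = 877262
phys_reads  = 2142974

logical_reads = consis_gets + db_blk_gets
hit_ratio = (logical_reads - phys_reads) / logical_reads * 100
print(round(hit_ratio, 2))   # 99.51
```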

The next step you must take is to find out which user is causing the poor hit ratio.

# userhit.sql
SELECT se.username||'('|| se.sid||')' "User Session",
 SUM(DECODE(name, 'consistent gets',value, 0))  "Consis Gets", 
 SUM(DECODE(name, 'db block gets',value, 0))    "DB Blk Gets",
 SUM(DECODE(name, 'physical reads',value, 0))   "Phys Reads",
 (SUM(DECODE(name, 'consistent gets',value, 0)) 
   + SUM(DECODE(name, 'db block gets',value, 0))  
   - SUM(DECODE(name, 'physical reads',value, 0)))
					/
 (sum(DECODE(name, 'consistent gets',value, 0))
  + SUM(DECODE(name, 'db block gets',value, 0))  )  * 100 "Hit Ratio" 
 FROM  v$sesstat ss, v$statname sn, v$session se
WHERE  ss.sid    = se.sid
  AND  sn.statistic# = ss.statistic#
 AND   value != 0
 AND   sn.name IN ('db block gets', 'consistent gets', 'physical reads')
GROUP BY se.username||'('|| se.sid||')' ;

User Session  Consis Gets DB Blk Gets Phys Reads Hit Ratio
------------- ----------- ----------- ---------- ---------           
(5)               27679     8934      11012         69.92
(6)                  36      272         24         92.21
CORRIGANP(16)    173176      385        521         99.70            
GURRYM(18)      1265544     2187      11959         99.06 
OREILLYT(21)      22705      149         21         99.91
RUSSELLD(61)     128754      317        185         99.86

Looking at the Dictionary Cache

This script lets you interrogate the Oracle data dictionary performance tables via SQL*Plus. These tables give you information about all the objects stored in your dictionary (tablespaces, files, users, rollback segments, constraints, synonyms, etc.). This information is available in other ways, but getting at it through SQL*Plus lets you automate your queries, as we described for the hit ratio in the previous section.

If the dictionary cache is perfectly tuned, the query below would return no rows. When entries are loaded into the dictionary cache for the first time, the "Getmisses" figure is incremented by one. If the "Count," which is the number of entries set aside for each dictionary cache type, is set too small, entries will have to be thrown out to make way for new dictionary entries being read from disk into memory. This will cause rows to appear in the following query.

# rowcache.sql
SELECT parameter, count, getmisses 
FROM    v$rowcache
WHERE getmisses > count;
 
         Dictionary Cache (Part of Shared Buffer Pool)
PARAMETER                           COUNT     GETMISSES           
-------------------------------- ---------- ----------        
dc_free_extents                      41        172            
dc_used_extents                      18        150             
dc_segments                         125        202                 
dc_objects                         1798       1815                  
dc_columns                         4428       4639                  

Some people prefer to work with ratios and percentages. The dictionary cache miss ratio should ideally be less than 1%, although when the database is first started, the miss ratio will be higher because each dictionary item loaded into memory records a miss. If the miss ratio is greater than 2% and you have spare memory, increase your SHARED_POOL_SIZE. If the ratio then decreases, you have improved your performance.

# rowcache.sql
SELECT SUM(gets) "Gets", SUM(getmisses) "Misses",
   TO_CHAR(SUM(getmisses) / SUM(gets) * 100, '999.99')||'%' "Miss Ratio"
 FROM v$rowcache;
 
       Gets     Misses  Miss Ratio
----------- ---------- -----------
119,929,181    314,100       2.61%
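
The miss ratio is simply SUM(getmisses) / SUM(gets) x 100. A minimal sketch with hypothetical per-cache counts (the real values come from V$ROWCACHE):

```python
# Miss ratio = total getmisses / total gets * 100 (hypothetical counts).
caches = {
    "dc_objects": (181_500, 1_815),   # (gets, getmisses)
    "dc_columns": (463_900, 4_639),
}

gets   = sum(g for g, _ in caches.values())
misses = sum(m for _, m in caches.values())
print(f"{misses / gets * 100:.2f}%")   # 1.00%
```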

Looking at Rollback Segment Usage

It is sometimes useful to know which users are accessing the rollback segments. This is important information when a user is continually filling the rollbacks and causing extents to be thrown.

# rolbusrs.sql
SELECT r.name "Rollback Segment Name ",
               p.spid "System Process Id ",
               s.username||'('||l.sid||')' "Oracle User Pid"
FROM v$lock l, v$process p, v$rollname r, v$session s
WHERE  l.sid = p.pid(+)
AND s.sid=l.sid
AND TRUNC (l.id1(+)/65536) = r.usn
AND l.type(+) = 'TX'
AND l.lmode(+) = 6
ORDER BY r.name
/

Finding Foreign Key Relationships

It's useful to know the foreign keys and the unique or primary keys to which they relate. Foreign keys produce potentially damaging locking problems if the foreign key columns on the child table are not indexed, as we describe in Chapter 8, Selecting a Locking Strategy. The first query below lists all of the foreign keys and the parent table and columns to which they relate.

# forgnkey.sql
SELECT  a.owner , a.table_name , c.column_name ,
        b.owner , b.table_name , d.column_name 
FROM    dba_constraints a, dba_constraints b,
        dba_cons_columns c, dba_cons_columns d
WHERE   a.r_constraint_name = b.constraint_name
  AND   a.constraint_type = 'R'
  AND   b.constraint_type = 'P'
  AND   a.r_owner=b.owner
  AND   a.constraint_name = c.constraint_name
  AND   b.constraint_name=d.constraint_name
  AND   a.owner = c.owner
  AND   a.table_name=c.table_name
  AND   b.owner = d.owner
  AND   b.table_name=d.table_name;

The second query lists all of the foreign keys that do not have the appropriate indexes in place on the child table. It shows the foreign key constraints that cause locking problems.

# forgnkey.sql
SELECT acc.owner||'-> '||acc.constraint_name||'('||acc.column_name
        ||'['||acc.position||'])'||' ***** Missing Index'
  FROM  all_cons_columns acc, all_constraints ac
 WHERE  ac.constraint_name = acc.constraint_name
   AND  ac.constraint_type = 'R'
   AND  (acc.owner, acc.table_name, acc.column_name, acc.position) 
             IN
 (SELECT acc.owner, acc.table_name, acc.column_name, acc.position 
    FROM   all_cons_columns acc, all_constraints ac
   WHERE  ac.constraint_name = acc.constraint_name
     AND   ac.constraint_type = 'R'
  MINUS
  SELECT table_owner, table_name, column_name, column_position
    FROM all_ind_columns)
ORDER BY acc.owner, acc.constraint_name, 
         acc.column_name, acc.position;

Listing Columns with Inconsistent Data Types or Lengths

The following query lists all columns that have differing lengths or data types but that have the same column name. For example, ACCOUNT_NO may be NUMBER(9) in one table and VARCHAR(9) in another. Having different data types can cause data casting problems and result in indexes not being used. (See Chapter 6 for details on data casting.)

# coldiffs.sql
SELECT owner, column_name
      , table_name||' '||data_type||'('||
      DECODE(data_type, 'NUMBER', data_precision, data_length)||')'
      "Characteristics"
  FROM all_tab_columns 
 WHERE (column_name, owner)
   IN
  (SELECT column_name, owner
     FROM all_tab_columns
    GROUP BY column_name, owner
    HAVING MIN(DECODE(data_type, 'NUMBER', data_precision, 
        data_length))
            < MAX(DECODE(data_type, 'NUMBER', data_precision, 
        data_length)) )
  AND  owner NOT IN ('SYS', 'SYSTEM');

Listing Tables That Are Cached

Oracle 7.1 and later provide a mechanism for caching tables in the buffer cache, using the following command:

ALTER TABLE tablename CACHE;

Caching a table speeds up data access and improves performance, because the data is found in memory and disk reads are avoided. To determine which tables have been cached, run the following command:

# tabcache.sql
SELECT owner, table_name, cache
FROM   all_tables
WHERE  owner not in ('SYS', 'SYSTEM')
AND    cache not like 'N%';

Listing Invalid Objects

Having invalid objects in your database usually indicates that your underlying tables have been altered to add a new column or have had DDL operations performed on them. The most common objects that become invalid are views, packages, and procedures. Invalid packages and procedures can cause a long response delay because they have to be recompiled. The user is forced to wait for the recompilation to complete. If you do alter your tables, you should always recompile your invalid packages and procedures to avoid user frustration. To obtain a list of all of the invalid objects, run the following query:

# invalobj.sql
SELECT owner, object_type, object_name, status 
FROM   all_objects
WHERE  status = 'INVALID'
ORDER BY owner, object_type, object_name
/

Listing All Triggers

Triggers can significantly affect performance. They can cause major problems if a trigger has been disabled and someone has forgotten to reenable it. In this case, the changes that the trigger should have made will be difficult to re-create, because it is hard to determine which updates were applied to the table while the trigger was disabled. To obtain a list of all of the triggers and their status, run this query:

# triggers.sql
TTITLE 'List All Triggers'
SELECT table_name, trigger_name, status
    FROM all_triggers
 ORDER BY table_name, trigger_name;

Doing Latch Analysis

Latches are low-level locking mechanisms that protect Oracle data and memory structures, such as the least-recently-used list in the buffer cache or the allocation of space in the log buffer. (We describe latch tuning in Chapter 11.) The following script shows who is holding the latches:

# latchhld.sql
SELECT l.name "Latch Held", p.username "User Holding the Latch"
  FROM v$process p,v$latchholder l
 WHERE l.pid  = p.pid;

Checking the Number of Objects

This script lists the number of objects on a per-user basis. Run it regularly to make sure that your database is operating as you expect it to.

# objcount.sql
SELECT  username,
        COUNT(DECODE(o.type, 2, o.obj#, '')) Tab,
        COUNT(DECODE(o.type, 1, o.obj#, '')) Ind,
        COUNT(DECODE(o.type, 5, o.obj#, '')) Syn,
        COUNT(DECODE(o.type, 4, o.obj#, '')) Vew,
        COUNT(DECODE(o.type, 6, o.obj#, '')) Seq,
        COUNT(DECODE(o.type, 7, o.obj#, '')) Prc,
        COUNT(DECODE(o.type, 8, o.obj#, '')) Fun,
        COUNT(DECODE(o.type, 9, o.obj#, '')) Pck,
        COUNT(DECODE(o.type,12, o.obj#, '')) Trg,
        COUNT(DECODE(o.type,10, o.obj#, '')) Dep
  FROM  sys.obj$ o,  sys.dba_users U
 WHERE  u.user_id = o.owner# (+)
 GROUP  BY username;
http://www.oreilly.com/catalog/oracle2/chapter/ch10.html#73980
[@more@]

来自 “ ITPUB博客 ” ,链接:http://blog.itpub.net/257699/viewspace-814993/,如需转载,请注明出处,否则将追究法律责任。

转载于:http://blog.itpub.net/257699/viewspace-814993/

您可能感兴趣的与本文相关的镜像

Stable-Diffusion-3.5

Stable-Diffusion-3.5

图片生成
Stable-Diffusion

Stable Diffusion 3.5 (SD 3.5) 是由 Stability AI 推出的新一代文本到图像生成模型,相比 3.0 版本,它提升了图像质量、运行速度和硬件效率

The script in question is `scripts/compile_sass.py` from edx-platform:

```python
#!/usr/bin/env python
"""
Defines a CLI for compiling Sass (both default and themed) into CSS.

Should be run from the root of edx-platform using `npm run` wrapper.
Requirements for this script are stored in requirements/edx/assets.in.

Get more details:

    npm run compile-sass -- --help
    npm run compile-sass -- --dry

Setup (Tutor and Devstack will do this for you):

    python -m venv venv
    . venv/bin/activate
    pip install -r requirements/edx/assets.txt

Usage:

    npm run compile-sass              # prod, no args
    npm run compile-sass -- ARGS      # prod, with args
    npm run compile-sass-dev          # dev, no args
    npm run compile-sass-dev -- ARGS  # dev, with args

This script is intentionally implemented in a very simplistic way. It prefers
repetition over abstraction, and its dependencies are minimal (just click and
libsass-python, ideally). We do this because:

* If and when we migrate from libsass-python to something less ancient like
  node-sass or dart-sass, we will want to re-write this script in Bash or
  JavaScript so that it can work without any backend tooling. By keeping the
  script dead simple, that will be easier.

* The features this script supports (legacy frontends & comprehensive theming)
  are on the way out, in favor of micro-frontends, branding, and Paragon
  design tokens. We're not sure how XBlock view styling will fit into that,
  but it probably can be much simpler than comprehensive theming. So, we
  don't need this script to be modular and extensible. We just need it to be
  obvious, robust, and easy to maintain until we can delete it.

See docs/decisions/0017-reimplement-asset-processing.rst for more details.
"""
from __future__ import annotations

import glob
import os
import subprocess
import sys
from pathlib import Path

import click

# Accept both long- and short-forms of these words, but normalize to long form.
# We accept both because edx-platform asset build scripts historically use the
# short form, but NODE_ENV uses the long form, so to make them integrate more
# seamlessly we accept both.
NORMALIZED_ENVS = {
    "prod": "production",
    "dev": "development",
    "production": "production",
    "development": "development",
}


@click.option(
    "-T",
    "--theme-dir",
    "theme_dirs",
    metavar="PATH",
    multiple=True,
    envvar="COMPREHENSIVE_THEME_DIRS",
    type=click.Path(path_type=Path),
    help=(
        "Consider sub-dirs of PATH as themes. "
        "Multiple theme dirs are accepted. "
        "If none are provided, we look at colon-separated paths on the "
        "COMPREHENSIVE_THEME_DIRS env var."
    ),
)
@click.option(
    "-t",
    "--theme",
    "themes",
    metavar="NAME",
    multiple=True,
    type=str,
    help=(
        "A theme to compile. "
        "NAME should be a sub-dir of a PATH provided by --theme-dir. "
        "Multiple themes are accepted. "
        "If none are provided, all available themes are compiled."
    ),
)
@click.option(
    "--skip-default",
    is_flag=True,
    help="Don't compile default Sass.",
)
@click.option(
    "--skip-themes",
    is_flag=True,
    help="Don't compile any themes (overrides --theme* options).",
)
@click.option(
    "--skip-lms",
    is_flag=True,
    help="Don't compile any LMS Sass.",
)
@click.option(
    "--skip-cms",
    is_flag=True,
    help="Don't compile any CMS Sass.",
)
@click.option(
    "--env",
    type=click.Choice(["dev", "development", "prod", "production"]),
    default="prod",
    help="Optimize CSS for this environment. Defaults to 'prod'.",
)
@click.option(
    "--dry",
    is_flag=True,
    help="Print what would be compiled, but don't compile it.",
)
@click.option(
    "-h",
    "--help",
    "show_help",
    is_flag=True,
    help="Print this help.",
)
@click.command()
@click.pass_context
def main(
    context: click.Context,
    theme_dirs: list[Path],
    themes: list[str],
    skip_default: bool,
    skip_themes: bool,
    skip_lms: bool,
    skip_cms: bool,
    env: str,
    dry: bool,
    show_help: bool,
) -> None:
    """
    Compile Sass for edx-platform and its themes.

    Default Sass is compiled unless explicitly skipped. Additionally, any
    number of themes may be specified using --theme-dir and --theme.

    Default CSS is compiled to css/ directories in edx-platform. Themed CSS
    is compiled to css/ directories in their source themes.
    """

    def compile_sass_dir(
        message: str,
        source_root: Path,
        target_root: Path,
        includes: list[Path],
        tolerate_missing: bool = False,
    ) -> None:
        """
        Compile a directory of Sass into a target CSS directory, and generate
        any missing RTL CSS.

        Structure of source dir is mirrored in target dir.

        IMPLEMENTATION NOTES:
        =====================
        libsass is a C++ library for compiling Sass
        (ref: https://github.com/sass/libsass).

        libsass-python is a small PyPI package wrapping libsass, including:

        * The `_sass` module, which provides direct Python bindings for the
          C++ library.
          (ref: https://github.com/sass/libsass-python/blob/0.10.0/pysass.cpp)

        * The `sass` module, which adds some friendly Pythonic wrapper
          functions around `_sass`, notably `sass.compile_dirname(...)`.
          (ref: https://github.com/sass/libsass-python/blob/0.10.0/sass.py#L198-L201)

        Our legacy Sass code only works with a super old version of libsass
        (3.3.2), which is provided to us by a super old version of
        libsass-python (0.10.0). In this super old libsass-python version:

        * the `sass` module DOESN'T support Python 3.11+, but
        * the `_sass` module DOES support Python 3.11+.

        Upgrading our Sass to work with a newer libsass version would be
        arduous and would potentially break comprehensive themes, so we don't
        want to do that. Forking libsass-python at v0.10.0 and adding Python
        3.11+ support would mean adding another repo to the openedx org.
        Rather than do either of those, we've decided to hack around the
        problem by just reimplementing what we need of `sass.compile_dirname`
        here, directly on top of the `_sass` C++ binding module.

        Eventually, we may eschew libsass-python altogether by switching to
        SassC@3.3.2, a direct CLI for libsass@3.3.2
        (ref: https://github.com/sass/sassc). This would be nice because it
        would allow us to remove Python from the Sass build pipeline
        entirely. However, it would mean explicitly compiling & installing
        both libsass and SassC within the edx-platform build environment,
        which has its own drawbacks.
        """
        # Constants from libsass-python
        SASS_STYLE_NESTED = 0
        _SASS_STYLE_EXPANDED = 1
        _SASS_STYLE_COMPACT = 2
        SASS_STYLE_COMPRESSED = 3
        SASS_COMMENTS_NONE = 0
        SASS_COMMENTS_LINE_NUMBERS = 1

        # Defaults from libsass-python
        precision = 5
        source_map_filename = None
        custom_functions = []
        importers = None

        use_dev_settings: bool = NORMALIZED_ENVS[env] == "development"
        fs_encoding: str = sys.getfilesystemencoding() or sys.getdefaultencoding()
        output_style: int = SASS_STYLE_NESTED if use_dev_settings else SASS_STYLE_COMPRESSED
        source_comments: int = SASS_COMMENTS_LINE_NUMBERS if use_dev_settings else SASS_COMMENTS_NONE
        include_paths: bytes = os.pathsep.join(str(include) for include in includes).encode(fs_encoding)

        click.secho(f"  {message}...", fg="cyan")
        click.secho(f"    Source: {source_root}")
        click.secho(f"    Target: {target_root}")
        if not source_root.is_dir():
            if tolerate_missing:
                click.secho("    Skipped because source directory does not exist.", fg="yellow")
                return
            else:
                raise FileNotFoundError(f"missing Sass source dir: {source_root}")
        click.echo("    Include paths:")
        for include in includes:
            click.echo(f"      {include}")
        click.echo("    Files:")
        for dirpath, _, filenames in os.walk(str(source_root)):
            for filename in filenames:
                if filename.startswith('_'):
                    continue
                if not filename.endswith(('.scss', '.sass')):
                    continue
                source = Path(dirpath) / filename
                target = (target_root / source.relative_to(source_root)).with_suffix('.css')
                click.echo(f"      {source} -> {target}")
                if not dry:
                    # Import _sass late so that this script can be dry-run
                    # without installing libsass, which takes a while as it
                    # must be compiled from its C source.
                    from _sass import compile_filename  # pylint: disable=protected-access
                    success, output, _ = compile_filename(
                        str(source).encode(fs_encoding),
                        output_style,
                        source_comments,
                        include_paths,
                        precision,
                        source_map_filename,
                        custom_functions,
                        importers,
                    )
                    output_text = output.decode('utf-8')
                    if not success:
                        raise Exception(f"Failed to compile {source}: {output_text}")
                    target.parent.mkdir(parents=True, exist_ok=True)
                    with open(target, 'w', encoding="utf-8") as target_file:
                        target_file.write(output_text)
                    click.secho("      Done.", fg="green")

        # For Sass files without explicit RTL versions, generate
        # an RTL version of the CSS using the rtlcss library.
        for sass_path in glob.glob(str(source_root) + "/**/*.scss"):
            if Path(sass_path).name.startswith("_"):
                # Don't generate RTL CSS for partials
                continue
            if sass_path.endswith("-rtl.scss"):
                # Don't generate RTL CSS if the file is itself an RTL version
                continue
            if Path(sass_path.replace(".scss", "-rtl.scss")).exists():
                # Don't generate RTL CSS if there is an explicit Sass version for RTL
                continue
            click.echo("    Generating missing right-to-left CSS:")
            source_css_file = sass_path.replace(str(source_root), str(target_root)).replace(
                ".scss", ".css"
            )
            target_css_file = source_css_file.replace(".css", "-rtl.css")
            click.echo(f"      Source: {source_css_file}")
            click.echo(f"      Target: {target_css_file}")
            if not dry:
                subprocess.run(["rtlcss", source_css_file, target_css_file])
                click.secho("      Generated.", fg="green")

    # Information
    click.secho(f"USING ENV: {NORMALIZED_ENVS[env]}", fg="blue")
    if dry:
        click.secho("DRY RUN: Will print compile steps, but will not compile anything.", fg="blue")
    click.echo()

    # Warnings
    if show_help:
        click.echo(context.get_help())
        return
    if skip_lms and skip_cms:
        click.secho("WARNING: You are skipping both LMS and CMS... nothing will be compiled!", fg="yellow")
    if skip_default and skip_themes:
        click.secho("WARNING: You are skipping both default Sass and themed Sass... nothing will be compiled!", fg="yellow")
    click.echo()

    # Build a list of theme paths:
    if skip_themes:
        theme_paths = []
    else:
        theme_paths = [
            theme_dir / theme
            # For every theme dir,
            for theme_dir in theme_dirs
            for theme in (
                # for every theme name (if theme names provided),
                themes or
                # or for every subdir of theme dirs (if no theme name provided),
                (theme_dir_entry.name for theme_dir_entry in theme_dir.iterdir())
            )
            # consider the path a theme if it has a lms/ or cms/ subdirectory.
            if (theme_dir / theme / "lms").is_dir() or (theme_dir / theme / "cms").is_dir()
        ]

    # We expect this script to be run from the edx-platform root.
    repo = Path(".")
    if not (repo / "xmodule").is_dir():
        # Sanity check: If the xmodule/ folder is missing, we're definitely
        # not at the root of edx-platform, so save the user some headache by
        # exiting early.
        raise Exception(f"{__file__} must be run from the root of edx-platform")

    # Every Sass compilation will have these directories as lookup paths,
    # regardless of theme.
    common_includes = [
        repo / "common" / "static",
        repo / "common" / "static" / "sass",
        repo / "node_modules" / "@edx",
        repo / "node_modules",
    ]

    if not skip_default:
        click.secho("Compiling default Sass...", fg="cyan", bold=True)
        if not skip_lms:
            compile_sass_dir(
                "Compiling default LMS Sass",
                repo / "lms" / "static" / "sass",
                repo / "lms" / "static" / "css",
                includes=[
                    *common_includes,
                    repo / "lms" / "static" / "sass" / "partials",
                    repo / "lms" / "static" / "sass",
                ],
            )
            compile_sass_dir(
                "Compiling default certificate Sass",
                repo / "lms" / "static" / "certificates" / "sass",
                repo / "lms" / "static" / "certificates" / "css",
                includes=[
                    *common_includes,
                    repo / "lms" / "static" / "sass" / "partials",
                    repo / "lms" / "static" / "sass",
                ],
            )
        if not skip_cms:
            compile_sass_dir(
                "Compiling default CMS Sass",
                repo / "cms" / "static" / "sass",
                repo / "cms" / "static" / "css",
                includes=[
                    *common_includes,
                    repo / "lms" / "static" / "sass" / "partials",
                    repo / "cms" / "static" / "sass" / "partials",
                    repo / "cms" / "static" / "sass",
                ],
            )
        click.secho("Done compiling default Sass!", fg="cyan", bold=True)
        click.echo()

    for theme in theme_paths:
        click.secho(f"Compiling Sass for theme at {theme}...", fg="cyan", bold=True)
        if not skip_lms:
            compile_sass_dir(
                "Compiling default LMS Sass with themed partials",
                repo / "lms" / "static" / "sass",
                theme / "lms" / "static" / "css",
                includes=[
                    *common_includes,
                    theme / "lms" / "static" / "sass" / "partials",
                    repo / "lms" / "static" / "sass" / "partials",
                    repo / "lms" / "static" / "sass",
                ],
                tolerate_missing=True,
            )
            compile_sass_dir(
                "Compiling themed LMS Sass as overrides to CSS from previous step",
                theme / "lms" / "static" / "sass",
                theme / "lms" / "static" / "css",
                includes=[
                    *common_includes,
                    theme / "lms" / "static" / "sass" / "partials",
                    repo / "lms" / "static" / "sass" / "partials",
                    repo / "lms" / "static" / "sass",
                ],
                tolerate_missing=True,
            )
            compile_sass_dir(
                "Compiling themed certificate Sass",
                theme / "lms" / "static" / "certificates" / "sass",
                theme / "lms" / "static" / "certificates" / "css",
                includes=[
                    *common_includes,
                    theme / "lms" / "static" / "sass" / "partials",
                    theme / "lms" / "static" / "sass",
                ],
                tolerate_missing=True,
            )
        if not skip_cms:
            compile_sass_dir(
                "Compiling default CMS Sass with themed partials",
                repo / "cms" / "static" / "sass",
                theme / "cms" / "static" / "css",
                includes=[
                    *common_includes,
                    repo / "lms" / "static" / "sass" / "partials",
                    theme / "cms" / "static" / "sass" / "partials",
                    repo / "cms" / "static" / "sass" / "partials",
                    repo / "cms" / "static" / "sass",
                ],
                tolerate_missing=True,
            )
            compile_sass_dir(
                "Compiling themed CMS Sass as overrides to CSS from previous step",
                theme / "cms" / "static" / "sass",
                theme / "cms" / "static" / "css",
                includes=[
                    *common_includes,
                    repo / "lms" / "static" / "sass" / "partials",
                    theme / "cms" / "static" / "sass" / "partials",
                    repo / "cms" / "static" / "sass" / "partials",
                    repo / "cms" / "static" / "sass",
                ],
                tolerate_missing=True,
            )
        click.secho(f"Done compiling Sass for theme at {theme}!", fg="cyan", bold=True)
        click.echo()

    # Report what we did.
    click.secho("Successfully compiled:", fg="green", bold=True)
    if not skip_default:
        click.secho(f"  - {repo.absolute()} (default Sass)", fg="green")
    for theme in theme_paths:
        click.secho(f"  - {theme}", fg="green")
    if skip_lms:
        click.secho("(skipped LMS)", fg="yellow")
    if skip_cms:
        click.secho("(skipped CMS)", fg="yellow")


if __name__ == "__main__":
    main(prog_name="npm run compile-sass --")
```

Running it on Windows fails while compiling `lms-main-v1-rtl.scss`:

```text
$ python scripts/compile_sass.py --env=development
USING ENV: development

Compiling default Sass...
  Compiling default LMS Sass...
    Source: lms\static\sass
    Target: lms\static\css
    Include paths:
      common\static
      common\static\sass
      node_modules\@edx
      node_modules
      lms\static\sass\partials
      lms\static\sass
    Files:
      lms\static\sass\lms-course-rtl.scss -> lms\static\css\lms-course-rtl.css
      lms\static\sass\lms-course.scss -> lms\static\css\lms-course.css
      lms\static\sass\lms-footer-edx-rtl.scss -> lms\static\css\lms-footer-edx-rtl.css
      lms\static\sass\lms-footer-edx.scss -> lms\static\css\lms-footer-edx.css
      lms\static\sass\lms-footer-rtl.scss -> lms\static\css\lms-footer-rtl.css
      lms\static\sass\lms-footer.scss -> lms\static\css\lms-footer.css
      lms\static\sass\lms-main-v1-rtl.scss -> lms\static\css\lms-main-v1-rtl.css
Traceback (most recent call last):
  File "C:\Users\chaoguog\Downloads\edx-platform-master\scripts\compile_sass.py", line 451, in <module>
    main(prog_name="npm run compile-sass --")
  File "C:\Users\chaoguog\Downloads\edx-platform-master\.venv\Lib\site-packages\click\core.py", line 1442, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\chaoguog\Downloads\edx-platform-master\.venv\Lib\site-packages\click\core.py", line 1363, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "C:\Users\chaoguog\Downloads\edx-platform-master\.venv\Lib\site-packages\click\core.py", line 1226, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\chaoguog\Downloads\edx-platform-master\.venv\Lib\site-packages\click\core.py", line 794, in invoke
    return callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\chaoguog\Downloads\edx-platform-master\.venv\Lib\site-packages\click\decorators.py", line 34, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\chaoguog\Downloads\edx-platform-master\scripts\compile_sass.py", line 335, in main
    compile_sass_dir(
  File "C:\Users\chaoguog\Downloads\edx-platform-master\scripts\compile_sass.py", line 252, in compile_sass_dir
    raise Exception(f"Failed to compile {source}: {output_text}")
Exception: Failed to compile lms\static\sass\lms-main-v1-rtl.scss: Error: File to import not found or unreadable: mixins
       Parent style sheet: C:/Users/chaoguog/Downloads/edx-platform-master/lms/static/sass/_build-lms-v1.scss
        on line 81 of lms/static/sass/_build-lms-v1.scss
>> @import 'mixins';
   ^
```
The error means the Sass compilation failed because **the `@import 'mixins'` on line 81 of `_build-lms-v1.scss` could not be resolved to an actual file**. Here is a systematic way to fix it:

---

### 1. Check whether the `mixins` file actually exists

#### Cause

`@import 'mixins'` in `_build-lms-v1.scss` cannot be matched to a real `_mixins.scss` file.

#### Fix

- Confirm the file exists:

  ```bash
  # Check the default location (LMS static assets)
  dir lms\static\sass\partials\_mixins.scss

  # Check the shared location
  dir common\static\sass\partials\_mixins.scss
  ```

- If the file is missing:

  1. Restore it from Git history:

     ```bash
     git checkout <commit-hash> -- lms/static/sass/partials/_mixins.scss
     ```

  2. Or create it manually (consult the project documentation).

---

### 2. Correct the `@import` path

#### Cause

The import path in `_build-lms-v1.scss` does not name the directory the partial lives in.

#### Fix

- Edit `_build-lms-v1.scss`:

  ```scss
  // Import relative to a directory on the include path
  @import 'partials/mixins';
  ```

- Why this works: Sass resolves `@import 'partials/mixins'` to `partials/_mixins.scss`; the leading `_` and the `.scss` extension are supplied automatically, so only the directory portion of the path needs to be spelled out.

---

### 3. Make sure `include_paths` is configured correctly

#### Cause

`compile_sass.py` may not be passing the `partials` directory to Sass as a lookup path.

#### Fix

- Check `common_includes` in `compile_sass.py`:

  ```python
  common_includes = [
      repo / "common" / "static",
      repo / "common" / "static" / "sass",
      repo / "node_modules" / "@edx",
      repo / "node_modules",
  ]
  ```

- If the `partials` path is missing, add it:

  ```python
  common_includes.append(repo / "lms" / "static" / "sass" / "partials")
  ```

---

### 4. Clear stale output and recompile

```bash
# Remove previously compiled CSS
rm -rf lms/static/css/*.css

# Re-run the compilation
python scripts/compile_sass.py --env=development
```

---

### 5. Verify the fix

1. **Watch the output:** if CSS files are generated without a `File to import not found` error, the fix worked.
2. **Test in the browser:** load a page and confirm the styles render correctly.

---
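To see concretely why both fixes work, here is a rough sketch of how libsass looks up an `@import` target. This is *not* libsass itself, just a simplified model under the assumption that the importing file's directory is searched first, followed by each configured include path, trying the plain and partial (`_`-prefixed) forms of the name:

```python
from __future__ import annotations

from pathlib import Path


def resolve_sass_import(name: str, parent_dir: Path, include_paths: list[Path]) -> Path | None:
    """Simplified model of libsass's lookup for `@import '<name>'`.

    Searches the importing file's directory first, then each include path,
    trying the plain and `_`-prefixed forms of the name with the `.scss`
    and `.sass` extensions. Returns the first match, or None if not found.
    """
    rel = Path(name)
    candidates = []
    for ext in (".scss", ".sass"):
        candidates.append(rel.parent / (rel.name + ext))        # e.g. mixins.scss
        candidates.append(rel.parent / ("_" + rel.name + ext))  # e.g. _mixins.scss
    for base in [parent_dir, *include_paths]:
        for candidate in candidates:
            path = base / candidate
            if path.is_file():
                return path
    return None
```

With only `lms/static/sass` on the include path, `'mixins'` resolves to nothing (the reported error); either appending the `partials` directory to the include paths (fix 3) or qualifying the import as `'partials/mixins'` (fix 2) makes the lookup succeed.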
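Before recompiling, it can save time to scan the whole Sass tree for unresolved imports at once rather than fixing them one compile failure at a time. The sketch below is a simplified pre-flight check, not a substitute for the compiler: it only handles single-target `@import '...'` statements (not comma-separated lists, `.css` imports, or imports written with an explicit extension):

```python
from __future__ import annotations

import re
from pathlib import Path

# Matches the quoted target of a simple `@import '...'` or `@import "..."`.
IMPORT_RE = re.compile(r"""@import\s+['"]([^'"]+)['"]""")


def unresolved_imports(sass_root: Path, include_paths: list[Path]) -> list[tuple[Path, str]]:
    """Report (file, import name) pairs whose `@import` cannot be resolved.

    For every `@import 'name'` in every .scss file under sass_root, look for
    name.scss / _name.scss / name.sass / _name.sass next to the importing
    file and under each include path.
    """
    problems = []
    for scss in sorted(sass_root.rglob("*.scss")):
        for name in IMPORT_RE.findall(scss.read_text(encoding="utf-8")):
            rel = Path(name)
            found = any(
                (base / rel.parent / (prefix + rel.name + ext)).is_file()
                for base in [scss.parent, *include_paths]
                for prefix in ("", "_")
                for ext in (".scss", ".sass")
            )
            if not found:
                problems.append((scss, name))
    return problems
```

Running it with the same include paths the failing step printed (e.g. `unresolved_imports(Path("lms/static/sass"), [Path("common/static/sass"), ...])`) lists every import that would fail, including the `mixins` one from the traceback.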