SQL*Plus FAQ

source: http://www.orafaq.com/wiki/SQL*Plus_FAQ

What is SQL*Plus and where does it come from?

SQL*Plus is a command line SQL and PL/SQL language interface and reporting tool that ships with the Oracle Database Client and Server software. It can be used interactively or driven from scripts. SQL*Plus is frequently used by DBAs and Developers to interact with the Oracle database.

If you are familiar with other databases, sqlplus is equivalent to tools such as MySQL's mysql client, PostgreSQL's psql, and SQL Server's sqlcmd.

SQL*Plus's predecessor was called UFI (User Friendly Interface). UFI was included in the first Oracle releases up to Oracle 4. The UFI interface was extremely primitive and, in today's terms, anything but user friendly. If a statement was entered incorrectly, UFI issued an error and rolled back the entire transaction.

How does one use the SQL*Plus utility?

Start using SQL*Plus by executing the "sqlplus" command-line utility from the $ORACLE_HOME/bin directory. Some of the command line options:

userid/password@db -- Connection details
/nolog             -- Start sqlplus without logging in to Oracle; connect later with CONNECT
-s or -silent      -- Start sqlplus in silent mode. Not recommended for beginners!
@myscript          -- Start executing the script called "myscript.sql"

Look at this example session:

sqlplus /nolog
SQL> connect scott/tiger
SQL> select * from tab;
SQL> disconnect
SQL> exit

Please note that one must prepare the environment (ORACLE_HOME, ORACLE_SID, PATH, etc.) before starting sqlplus.

Linux/Unix example:

$ . oraenv
ORACLE_SID = [orcl]? orcl
$ sqlplus scott/tiger

Windows Example:

Click on "Start" -> "Run" and enter "cmd"
C:> set ORACLE_SID=orcl
C:> sqlplus scott/tiger

or...

C:> sqlplus scott/tiger@orcl

What commands can be executed from SQL*Plus?

One can enter three kinds of commands from the SQL*Plus command prompt:

SQL*Plus commands

SQL*Plus commands are used to set options for SQL*Plus, format reports, edit files, edit the command buffer, and so on. SQL*Plus commands do not interact with the database. Unlike SQL commands, they do not have to be terminated with a semicolon (;). Examples:

SQL> CLEAR SCREEN
SQL> SHOW USER
SQL> SET PAGESIZE 100
SQL> START myscript.sql

SQL commands

For more information see the Oracle SQL FAQ. Eg:

SQL> SELECT * FROM user_tables;

PL/SQL blocks

For more information see the Oracle PL/SQL FAQ. Eg:

BEGIN
  dbms_output.put_line('Hello World!');
END;
/

What are the basic SQL*Plus commands?

Here is a list of some of the most frequently used SQL*Plus commands:

  • ACCEPT - Get input from the user
  • DEFINE - Declare a variable (short: DEF)
  • DESCRIBE - Lists the attributes of tables and other objects (short: DESC)
  • EDIT - Places you in an editor so you can edit a SQL command (short: ED)
  • EXIT or QUIT - Disconnect from the database and terminate SQL*Plus
  • GET - Retrieves a SQL file and places it into the SQL buffer
  • HOST - Issue an operating system command (short: !)
  • LIST - Displays the command in the SQL buffer, i.e. the last command entered (short: L)
  • PROMPT - Display a text string on the screen. Eg prompt Hello World!!!
  • RUN - List and Run the command stored in the SQL buffer (short: /)
  • SAVE - Saves command in the SQL buffer to a file. Eg "save x" will create a script file called x.sql
  • SET - Modify the SQL*Plus environment eg. SET PAGESIZE 23
  • SHOW - Show environment settings (short: SHO). Eg SHOW ALL, SHO PAGESIZE etc.
  • SPOOL - Send output to a file. Eg "spool x" will save STDOUT to a file called x.lst
  • START - Run a SQL script file (short: @)
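
For example, a short interactive session combining a few of these commands (a sketch; the file names are illustrative and DEPT is from the SCOTT demo schema):

SQL> ACCEPT dept PROMPT 'Enter a department number: '
Enter a department number: 10
SQL> SPOOL dept_report
SQL> SELECT * FROM dept WHERE deptno = &dept;
SQL> SPOOL OFF
SQL> SAVE dept_query

This creates dept_report.lst (the spooled output) and dept_query.sql (the saved SQL buffer).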

What is AFIEDT.BUF?

AFIEDT.BUF is the SQL*Plus default edit save file. When you issue the command "ed" or "edit" without arguments, the last SQL or PL/SQL command will be saved to a file called AFIEDT.BUF and opened in the default editor.

In the prehistoric days when SQL*Plus was called UFI (User Friendly Interface) this file was named "ufiedt.buf", short for UFI editing buffer. When new features were added to UFI, it was initially named Advanced UFI and the filename was changed to "aufiedt.buf" and then to "afiedt.buf". They presumably needed to keep the name short for compatibility with some of the odd operating systems that Oracle supported in those days. The name "Advanced UFI" was never used officially, as the name was changed to SQL*Plus before this version was released.

You can override the default edit save file's name like this:

SET EDITFILE "afiedt.buf"

One can edit SQL scripts and the command buffer (the last command entered) with the EDIT (or ED) command. However, sometimes one needs to select an editor before using this command. Examples:

Use the Unix/Linux vi-editor:

DEFINE _EDITOR=vi

Use the Notepad on Windows:

DEFINE _EDITOR=notepad

Use the gVim vi-editor on Windows and return afiedt.buf back to SQL*Plus when done:

DEFINE _EDITOR="gvim -f"

TIP: Add this command in your login.sql or glogin.sql scripts so it executes every time you start sqlplus.
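
A typical buffer-editing workflow then looks like this (illustrative):

SQL> SELECT * FROM dept;
SQL> LIST
  1* SELECT * FROM dept
SQL> ED
(edit the statement in the editor, save and quit)
SQL> /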

How does one enable the SQL*Plus HELP facility?

To enable HELP for SQL*Plus, run the supplied "helpins" script from the $ORACLE_HOME/bin directory. The "helpins" command will prompt you for the SYSTEM password and load the help data into the SYSTEM schema.

Alternatively you can load the help facility manually like this:

cd $ORACLE_HOME/sqlplus/admin/help
sqlplus system/manager @helpdrop.sql        # Drop the HELP table
sqlplus system/manager @hlpbld.sql helpus.sql # Create the HELP table and load the data

If the HELP command is not supported on your operating system, you can access the help table with a simple script like this (let's call it help.sql):

select info
from   system.help
where  upper(topic)=upper('&1');

Whenever you need help, you can now run the help.sql script:

@help SELECT

What is the difference between ? and HELP?

There is no difference. Both "?" and HELP will read the SYSTEM.HELP table (if available) and show help text on the screen.

To use the help facility, type HELP followed by the command you need to learn more about. For example, to get help on the SELECT statement, type:

HELP SELECT

One can also extend the help system by inserting information into the HELP table. Look at this example:

SQL> insert into help values ('MYTOPIC', 1, 'Detail line 1');
1 row created.

SQL> insert into help values ('MYTOPIC', 2, 'Detail line 2');
1 row created.

SQL> help MYTOPIC
Detail line 1
Detail line 2

SQL> ? MYTOPIC
Detail line 1
Detail line 2

What is the difference between @ and @@?

The @ (at symbol) is equivalent to the START command and is used to run SQL*Plus command scripts.

SQL> @myscript.sql

A single @ symbol runs a script in the current directory (or one specified with a full or relative path, or one that is found in your SQLPATH or ORACLE_PATH).

@@ will start a sqlplus script that is in the same directory as the script that called it. This is normally used for nested command files; for example, scripts commonly use it to call subscripts in the ?/rdbms/admin directory (where ? expands to ORACLE_HOME). The @@ reference does not support relative directory references such as @@dir/file.sql or @@./file.sql.
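
For example, given two hypothetical scripts in /u01/scripts:

-- /u01/scripts/main.sql
PROMPT Running subscript...
@@sub.sql

-- /u01/scripts/sub.sql
SELECT sysdate FROM dual;

Running @/u01/scripts/main.sql from any working directory still finds sub.sql, because @@ resolves it relative to main.sql's own location.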

What is the difference between & and &&?

"&" is used to create a temporary substitution variable that will prompt you for a value every time it is referenced. Example:

SQL> SELECT sal FROM emp WHERE ename LIKE '&NAME';
Enter value for name: SCOTT
old   1: SELECT sal FROM emp WHERE ename LIKE '&NAME'
new   1: SELECT sal FROM emp WHERE ename LIKE 'SCOTT'
       SAL
----------
      3000

SQL> /
Enter value for name: SCOTT
old   1: SELECT sal FROM emp WHERE ename LIKE '&NAME'
new   1: SELECT sal FROM emp WHERE ename LIKE 'SCOTT'
       SAL
----------
      3000

"&&" is used to create a permanent substitution variable. Once you have entered a value (defined the variable) its value will used every time the variable is referenced. Example:

SQL> SELECT sal FROM emp WHERE ename LIKE '&&NAME';
Enter value for name: SCOTT
old   1: SELECT sal FROM emp WHERE ename LIKE '&&NAME'
new   1: SELECT sal FROM emp WHERE ename LIKE 'SCOTT'

       SAL
----------
      3000

SQL> /
old   1: SELECT sal FROM emp WHERE ename LIKE '&&NAME'
new   1: SELECT sal FROM emp WHERE ename LIKE 'SCOTT'
       SAL
----------
      3000

The "&&" will actually define the variable similarly to what the DEFINE command or OLD_VALUE/ NEW_VALUE clauses of a COLUMN statement would have done.

SQL> define
DEFINE NAME            = "SCOTT" (CHAR)
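
To make SQL*Plus prompt for the value again, clear the variable with UNDEFINE:

SQL> UNDEFINE NAME
SQL> SELECT sal FROM emp WHERE ename LIKE '&&NAME';
Enter value for name: SCOTT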

What is the difference between ! and HOST?

Both "!" and "HOST" will execute operating system commands as child processes of SQL*Plus. The difference is that "HOST" will perform variable substitution (& and && symbols), whereas "!" will not. Examples:

SQL> !whoami
oracle
SQL> DEFINE cmd="whoami"
SQL> HOST &&cmd
oracle

Note: use "$" under OS/390, VMS, and Windows environments, not "!".

Can one run commands when SQL*Plus starts up?

When SQL*Plus starts up, it looks for a global login script called glogin.sql in the $ORACLE_HOME/sqlplus/admin directory. If found, this script will be executed.

Thereafter, sqlplus will try to find a local login script called login.sql in the directory from which you started sqlplus, or alternatively in the directories listed in the SQLPATH environment variable. When found, sqlplus will execute it.

NOTE: From Oracle 10g onwards, SQL*Plus will attempt to execute glogin.sql and login.sql after each successful connection. This is handy if you want to change the SQL prompt to include the current user. Here is an example (g)login.sql file:

prompt Loading login.sql file...
set sqlprompt "&&_USER@&&_CONNECT_IDENTIFIER SQL>"
define _editor=vi

A bit of history: when SQL*Plus was still called UFI, this file was called login.ufi (located in $ORACLE_HOME/demo).

Can one set the SQL*Plus command prompt to something more useful?

One can change the default 'SQL> ' prompt by changing the SQLPROMPT setting. For example:

SET SQLPROMPT 'Enter SQLPlus Command> '

The following example scripts can be used to include the connected username and database name into the prompt:

For Oracle 10g and above:

set sqlprompt "_USER'@'_CONNECT_IDENTIFIER _PRIVILEGE> "

Pre 10g:

undefine usr db
col usr new_value usr
col db  new_value db

set termout off
select lower(user) usr,
       substr(global_name, 1, instr(global_name, '.')-1) db
  from   global_name
/
set termout on

set sqlprompt '&&usr.@&&db.> '

NOTE: For the above example it might be better to get the database name from v$database. However, not all users have access to the v$database view.
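
For users who do have that privilege, a minimal sketch of the v$database variant:

set termout off
col db new_value db
select name db from v$database;
set termout on
set sqlprompt '&&db.> '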

How does one disable interactive prompting in SQL*Plus?

If you run a script that contains "&" symbols, SQL*Plus thinks that you want to prompt the user for a value. Some clients allow one to escape the ampersand character with a backslash; SQL*Plus, however, only honours an escape character after SET ESCAPE is enabled. Here are a couple of solutions:

SET ESCAPE ON
SET ESCAPE "\"
SELECT 'You \& me' FROM DUAL;

or

SET DEFINE ?
SELECT 'You & me' FROM DUAL;

Atanas Kebedjiev provided this solution:

SELECT 'You '||Chr(38)||' Me' FROM DUAL;

Note: You can disable substitution variable prompting altogether by issuing the SET DEFINE OFF or SET SCAN OFF command.

Homer Vargas provided this solution: place the "&" sign at the end of a string, where it cannot start a substitution variable name.

SELECT 'You &'||' me' FROM DUAL;

How does one trap errors in SQL*Plus?

Use the "WHENEVER SQLERROR ..." command to trap SQL and PL/SQL errors, and the "WHENEVER OSERROR ..." to trap operating system errors. Eg:

SQL> WHENEVER OSERROR  EXIT 9
SQL> WHENEVER SQLERROR EXIT SQL.SQLCODE
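
For example, a hypothetical batch script that aborts on the first error and passes the Oracle error code back to the shell (the script, user, and table names are illustrative):

-- load.sql
WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK
INSERT INTO mytable VALUES (1);
COMMIT;
EXIT

$ sqlplus -s scott/tiger @load.sql
$ echo $?    # non-zero if the INSERT failed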

How does one trace (and explain) SQL statements from SQL*Plus?

Method 1: Autotrace Facility

When the AUTOTRACE setting is enabled, SQL*Plus will print an EXPLAIN PLAN and execution statistics after each SQL statement. Look at this example:

SQL> set autotrace on
SQL> select * from dept where deptno = 40;
    DEPTNO DNAME          LOC
---------- -------------- -------------
        40 OPERATIONS     BOSTON

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=18)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'DEPT' (Cost=1 Card=1 Bytes=18)
   2    1     INDEX (UNIQUE SCAN) OF 'PK_DEPT' (UNIQUE)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        499  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Follow these steps to enable this feature:

  • Run the PLUSTRCE.SQL script as the SYS database user and grant the resulting PLUSTRACE role to the users who need it. This script is located in the $ORACLE_HOME/sqlplus/admin directory.
  • Create a PLAN_TABLE table by running the UTLXPLAN.SQL script. This script is in $ORACLE_HOME/rdbms/admin.
  • Use the "SET AUTOTRACE ON" command to trace SQL execution. This will print an explain plan and high level trace information after your query results.

Method 2: DBMS_XPLAN Package

SQL> EXPLAIN PLAN FOR select * from dept where deptno = 40;
Explained.

SQL> set linesize 132

SQL> SELECT * FROM TABLE( dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 2852011669

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |     1 |    20 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPT    |     1 |    20 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | PK_DEPT |     1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPTNO"=40)

14 rows selected.

How can one prevent SQL*Plus connection warning messages?

When I start SQL*Plus, I get the following errors:

Error accessing PRODUCT_USER_PROFILE
Warning:  Product user profile information not loaded!
You may need to run PUPBLD.SQL as SYSTEM

These messages will stop appearing once you create the PRODUCT_USER_PROFILE table in the SYSTEM schema. This is done by the PUPBLD.SQL script.

Go to the $ORACLE_HOME/sqlplus/admin directory, connect as SYSTEM and run @PUPBLD.SQL from the SQL prompt.

Can one prevent users from executing devious commands?

Yes, command authorization is verified against the SYSTEM.PRODUCT_USER_PROFILE table. This table is created by the PUPBLD.SQL script. Note that this table is not used when someone signs on as user SYSTEM.

E.g. to prevent all users whose names start with OPS$ from executing the CONNECT command:

SQL> INSERT INTO SYSTEM.PRODUCT_USER_PROFILE 
     VALUES ('SQL*Plus', 'OPS$%', 'CONNECT', NULL, NULL, 'DISABLED', NULL, NULL);
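
To re-enable the command, delete the corresponding row again (a sketch, assuming the table's standard USERID and ATTRIBUTE column names):

SQL> DELETE FROM SYSTEM.PRODUCT_USER_PROFILE
     WHERE userid = 'OPS$%' AND attribute = 'CONNECT';
SQL> COMMIT;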

How does one restore session state in SQL*Plus?

Use the "STORE SET" command to write current settings (SHOW ALL) to a file. This file can later be executed to restore all settings. Look at the following example (Oracle8 and above):

SQL> STORE SET filename REPLACE
SQL> (do whatever you like)
SQL> @filename

How does one disable SQL*Plus formatting?

SQL*Plus tries to format data from the database into a human friendly format. This formatting can be disabled by issuing the following SET commands:

SET ECHO OFF
SET NEWPAGE 0
SET SPACE 0
SET PAGESIZE 0
SET FEEDBACK OFF
SET HEADING OFF
SET TRIMSPOOL ON
SET TAB OFF

These settings can also be abbreviated and entered on one line, eg.:

SET ECHO OFF NEWP 0 SPA 0 PAGES 0 FEED OFF HEAD OFF TRIMS ON TAB OFF
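
Combined with SPOOL, this produces clean flat-file extracts. A minimal sketch, assuming the SCOTT demo schema's EMP table:

SET ECHO OFF NEWPAGE 0 SPACE 0 PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMSPOOL ON
SPOOL emp.csv
SELECT empno || ',' || ename || ',' || sal FROM emp;
SPOOL OFF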

You may also want to run the "show all" command to display all the other sqlplus settings that can be changed.

Can one pass operating system parameters to SQL*Plus?

One can pass operating system variables to sqlplus using this syntax:

sqlplus username/password @cmdfile.sql var1 var2 var3

Parameter var1 will be mapped to SQL*Plus variable &1, var2 to &2, etc. Look at this example:

sqlplus scott/tiger @x.sql  '"test parameter"' dual

Where x.sql consists of:

select '&1' from &2;
exit 5;

Example passing the Windows User's Temp Path Location to sqlplus:

sqlplus username/password @cmdfile.sql %TEMP%

Within cmdfile.sql:

SPOOL &1\myscript.log
-- Your script commands
SPOOL OFF

Something's wrong with my CLOB/NCLOB/LONG column. Where is my data?

SQL*Plus by default only shows the first 80 bytes of any LONG, CLOB and NCLOB datatypes. The data is there, but since sqlplus is a command-line tool it tries not to print out too much data. You can override this to tell sqlplus exactly how many bytes you want to see:

SET LONG 32000
SELECT text FROM user_views WHERE rownum = 1;
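
For very large values, you may also want to raise the fetch increment with SET LONGCHUNKSIZE (the view name below is illustrative):

SET LONG 2000000
SET LONGCHUNKSIZE 2000000
SELECT text FROM user_views WHERE view_name = 'MY_VIEW';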

How does one copy data from one database to another in SQL*Plus?

The SQL*Plus COPY command is one of the fastest ways of copying data between databases and schemas. This is also one of the few methods that will handle LONG columns correctly. Look at this example:

SQL> COPY FROM scott/tiger@db1 TO scott/tiger@db2 INSERT mytable USING select * from mytable;

From SQL*Plus help:

COPY
----

Copies data from a query to a table in the same or another
database. COPY supports CHAR, DATE, LONG, NUMBER and VARCHAR2.

COPY {FROM database | TO database | FROM database TO database}
           {APPEND|CREATE|INSERT|REPLACE} destination_table
           [(column, column, column, ...)] USING query

where database has the following syntax:
    username[/password]@connect_identifier

Note that the COPY command has not been enhanced since Oracle 8.0 and may be declared obsolete in a future version (it is still present in 11gR2). However, there is no replacement that copies tables with LONG columns as easily (the LONG datatype itself is obsolete).

Can one generate HTML reports from SQL*Plus?

One can generate static HTML pages from SQL*Plus (8.1.6 and above) by setting the MARKUP option to HTML ON. This can be done by specifying -MARKUP "HTML ON" from command line, or with the "SET MARKUP HTML ON" command. Look at this example SQL Script:

set markup HTML on
spool index.html
select * from tab;
spool off
set markup HTML off

The spooled index.html file, rendered in a browser, should look something like this:

TNAME    TABTYPE    CLUSTERID
BONUS    TABLE
DEPT     TABLE
EMP      TABLE

Note: You can deploy this file on your web site or edit it in an HTML editor (like FrontPage or Dreamweaver). Another good idea is to develop a CSS stylesheet to present the data more elegantly. One can also embed HTML tags in the SELECT statement to create hyperlinks and add other HTML features.
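
For example, to embed hyperlinks as suggested above, switch entity mapping off so the tags are written literally (a sketch; the linked pages are hypothetical):

set markup HTML on entmap off
spool emp_links.html
select '<a href="emp_' || empno || '.html">' || ename || '</a>' employee from emp;
spool off
set markup HTML off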

How to have optional parameters and default values in a SQL*Plus script

If you have a simple script like the following:

prompt 1="&1"
prompt 2="&2"
prompt 3="&3"
undef 1
undef 2
undef 3

and call it with only 1 or 2 parameters, SQL*Plus prompts you for the missing ones:

SQL> @c:\michel v1 v2
1="v1"
2="v2"
Enter value for 3: 
3=""

It would often be useful to be able to call the script with a variable number of parameters. This can be done by modifying the previous script as follows:

set termout off
col p1 new_value 1
col p2 new_value 2
col p3 new_value 3
select null p1, null p2, null p3 from dual where  1=2;
set termout on
prompt 1="&1"
prompt 2="&2"
prompt 3="&3"
undef 1
undef 2
undef 3

So you can call it with a variable number of parameters:

SQL> @c:\michel v1 
1="v1"
2=""
3=""
SQL> @c:\michel v1 v2
1="v1"
2="v2"
3=""
SQL> @c:\michel v1 v2 v3
1="v1"
2="v2"
3="v3"

You can give default values to these optional parameters by modifying the script as follows:

set termout off
col p1 new_value 1
col p2 new_value 2
col p3 new_value 3
select null p1, null p2, null p3 from dual where  1=2;
select nvl('&1','def1') p1, nvl('&2','def2') p2, nvl('&3','def3') p3 from dual;
set termout on
prompt 1="&1"
prompt 2="&2"
prompt 3="&3"
undef 1
undef 2
undef 3

And so:

SQL> @c:\michel v1 
1="v1"
2="def2"
3="def3"
