About Database Development Fundamentals

This overview describes why database application design matters for performance, scalability, security, and availability. Design should begin with an efficient data model, well-defined performance goals, and a suitable benchmarking strategy. It also discusses the characteristics of data warehouse and online transaction processing (OLTP) systems, features for special scenarios such as partitioning and materialized views, and Temporal Validity Support, which lets data visibility be managed along a time dimension.
  • Design Basics
  • Connection Strategies for Database Applications
  • Performance and Scalability
  • Designing Applications for Oracle Real-World Performance
  • Security
  • High Availability
  • Advanced PL/SQL Features

1 Design Basics

1.1 Design for Performance

The key to database and application performance is design, not tuning. While tuning is quite valuable, it cannot make up for poor design. Your design must start with an efficient data model, well-defined performance goals and metrics, and a sensible benchmarking strategy. Otherwise, you will encounter problems during implementation, when no amount of tuning will produce the results that you could have obtained with good design. You might have to redesign the system later, despite having tuned the original poor design.

1.2 Design for Scalability

Scalability is the ability of a system to perform well as its load increases. Load is a combination of data volumes, number of users, and other relevant factors. To design for scalability, you must use an effective benchmarking strategy, appropriate application development techniques (such as bind variables), and appropriate Oracle Database architectural features like shared server connections, clustering, partitioning, and parallel operations.
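
For example, binding values at run time lets the database reuse one parsed cursor for many executions instead of hard parsing each literal. The following is a minimal sketch only; the orders table, order_id column, and bind variable name are hypothetical:

-- Hard-coded literal: each distinct value can force a new hard parse.
SELECT order_total FROM orders WHERE order_id = 10045;

-- Bind variable (SQL*Plus syntax): one shared cursor serves all values.
VARIABLE v_order_id NUMBER
EXEC :v_order_id := 10045
SELECT order_total FROM orders WHERE order_id = :v_order_id;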

1.3 Design for Extensibility

Extensibility is the ease with which a database or database application accommodates future growth. The more extensible the database or application, the easier it is to add or change functionality with minimal impact on existing functionality.

1.3.1 Data Cartridges

1.3.2 External Procedures

1.3.3 User-Defined Functions and Aggregate Functions

1.3.4 Object-Relational Features

1.4 Design for Security

Database security involves a wide range of potential activities, including:

  • Designing and implementing security policies to protect the data of an organization, users, and applications from accidental, inappropriate, or unauthorized actions

  • Creating and enforcing policies and practices of auditing and accountability for inappropriate or unauthorized actions

  • Creating, maintaining, and terminating user accounts, passwords, roles, and privileges (see the sketch after this list)

  • Developing applications that provide desired services securely in a variety of computational models, leveraging database and directory services to maximize both efficiency and ease of use
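
As an illustration of the account and privilege tasks listed above, the following is a minimal sketch; the user name, role name, password, and the grant on the HR sample schema are assumptions, not recommendations:

-- Create an application account and grant it only what it needs.
CREATE USER app_user IDENTIFIED BY "ChangeMe_2024";
GRANT CREATE SESSION TO app_user;

CREATE ROLE app_read_role;
GRANT SELECT ON hr.employees TO app_read_role;
GRANT app_read_role TO app_user;

-- Terminate the account when it is no longer needed.
DROP USER app_user CASCADE;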

1.5 Design for Availability

Availability is the degree to which an application, service, or function is accessible on demand. A system designed for high availability provides uninterrupted computing services during essential time periods, during most hours of the day throughout the year, with minimal downtime for operations such as upgrading the system's hardware or software. The main characteristics of a highly available system are:

  • Reliability

  • Recoverability

  • Timely error detection

  • Continuous operation

1.6 Design for Portability

PL/SQL stored procedures provide some application portability across multiple databases. Although using stored procedures written in the language of a given vendor may seem to tie you to that vendor to some extent, stored procedures make the application's visual component (user interface) and application logic portable. The data logic is encoded optimally for the database on which the application runs. Because the data logic is hidden in stored procedures, you can use the vendor's extensions and features to optimize the data layer.

After the application is developed and deployed on a database, it can remain on that database indefinitely. If the application is later moved to another database, the visual component and application logic can move independently of the data logic in the stored procedures, which simplifies the move. (Reworking the application at the same time as the move complicates it.)
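
For instance, the data logic can be encapsulated in a stored procedure so that the client code calls only a stable interface. A minimal sketch, assuming a hypothetical customers table and get_customer_balance procedure:

CREATE OR REPLACE PROCEDURE get_customer_balance (
  p_cust_id  IN  NUMBER,
  p_balance  OUT NUMBER
) AS
BEGIN
  -- Vendor-specific SQL and optimizations stay hidden behind this interface.
  SELECT balance
  INTO   p_balance
  FROM   customers
  WHERE  cust_id = p_cust_id;
END;
/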

1.7 Design for Diagnosability

Oracle Database includes a fault diagnosability infrastructure for preventing, detecting, diagnosing, and resolving database problems. Problems include critical errors such as code bugs, metadata corruption, and customer data corruption. The goals of the diagnosability infrastructure are to detect problems proactively, limit damage and interruptions after a problem is detected, reduce the time required to diagnose and resolve problems, and simplify any possible interaction with Oracle Support.

Automatic Diagnostic Repository (ADR) is a file-based repository that stores database diagnostic data such as trace files, the alert log, and Health Monitor reports. ADR is located outside the database, which enables Oracle Database to access and manage ADR when the physical database is unavailable.
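
For example, you can locate the ADR directories for the current instance by querying the V$DIAG_INFO view:

-- Shows locations such as ADR Base, ADR Home, Diag Trace, and Diag Alert.
SELECT name, value
FROM   v$diag_info
ORDER BY name;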

1.8 Design for Special Environments

1.8.1 Data Warehousing

A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but can include data from other sources. Data warehouses separate analysis workload from transaction workload and enable an organization to consolidate data from several sources. This strategy helps the organization maintain historical records and analyze the data to better understand and improve its business.

In addition to a relational database, a data warehouse environment can include:

  • An extraction, transportation, transformation, and loading (ETL) solution

  • Statistical analysis

  • Reporting

  • Data mining capabilities

  • Client analysis tools

  • Applications that manage the process of gathering data; transforming it into useful, actionable information; and delivering it to business users

Data warehousing systems typically:

  • Use many indexes

  • Use some (but not many) joins

  • Use denormalized or partially denormalized schemas (such as a star schema) to optimize query and analytical performance, as illustrated after this list

  • Use derived data and aggregates

  • Have workloads designed to accommodate ad hoc queries and data analysis

    Because you might not know the workload of your data warehouse in advance, you must optimize the data warehouse to perform well for a wide variety of possible query and analytical operations.

  • Are updated regularly (nightly or weekly) by the ETL process, using bulk data modification techniques
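
As an illustration of the star-schema and aggregate points above, a typical warehouse query joins a large fact table to small dimension tables and computes derived measures. A minimal sketch against the SH sample schema (also used later in this section); whether it performs well depends on indexing, partitioning, and statistics:

-- Revenue per calendar quarter and customer country.
SELECT t.calendar_quarter_desc,
       c.country_id,
       SUM(s.amount_sold) AS total_revenue,
       AVG(s.amount_sold) AS avg_sale
FROM   sales s
JOIN   times t     ON t.time_id = s.time_id
JOIN   customers c ON c.cust_id = s.cust_id
GROUP BY t.calendar_quarter_desc, c.country_id;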

1.8.2 Online Transaction Processing (OLTP)

Online transaction processing (OLTP) systems are optimized for fast and reliable transaction handling. Compared to data warehouse systems, most OLTP interactions involve a relatively small number of rows, but a larger group of tables. In OLTP systems, performance requirements require that historical data be frequently moved to an archive.

OLTP systems typically:

  • Use few indexes.

  • Use many joins.

  • Use fully normalized schemas to optimize update, insert, and delete performance, and to guarantee data consistency.

  • Rarely use derived data and aggregates.

  • Have workloads consisting of predefined operations.

  • Have users routinely issuing individual data modification statements to the database, so that the OLTP database always reflects the current state of each transaction.

1.9 Features for Special Scenarios

1.9.1 SQL Analytic Functions

A SQL analytic function computes an aggregate value based on a group of rows. A SQL analytic function differs from an aggregate function in that it returns multiple rows for each group. For each row, a window of rows is defined. The window determines the range of rows used to perform the calculations for the current row. Window sizes can be based on either a physical number of rows or a logical interval such as time.
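
For example, a window can cover a fixed number of preceding rows (a physical window) or all rows within a time interval (a logical window). A minimal sketch, assuming a hypothetical daily_sales table with sale_date (DATE) and amount columns:

SELECT sale_date,
       amount,
       -- Physical window: this row and the two preceding rows.
       AVG(amount) OVER (ORDER BY sale_date
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS avg_3_rows,
       -- Logical window: all rows in the preceding 7 days.
       SUM(amount) OVER (ORDER BY sale_date
                         RANGE BETWEEN INTERVAL '7' DAY PRECEDING AND CURRENT ROW) AS sum_7_days
FROM   daily_sales;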

SQL analytic functions bring to set-oriented SQL the ability to use array semantics on result sets. They provide coding efficiency, because they enable concise, straightforward coding of logic that is otherwise cumbersome or impossible. They also provide processing efficiency, because they are integral to Oracle Database and use internal optimizations.

A typical use of analytic functions is to retrieve the most current information in a table. For example, a query of the following form returns information from the row with the most recent update time for each customer with records in a table:

SELECT ... FROM my_table t1
  WHERE upd_time = ( SELECT MAX(UPD_TIME)
                     FROM my_table t2
                     WHERE t2.cust_id = t1.cust_id );

The preceding query uses a correlated subquery to find the MAX(UPD_TIME) by cust_id, record by record. Therefore, the correlated subquery could be evaluated once for each row in the table. If the table has very few records, performance may be adequate; if the table has tens of thousands of records, the cumulative cost of repeatedly executing the correlated subquery is high.

The following query makes a single pass on the table and computes the maximum UPD_TIME during that pass. Depending on various factors, such as table size and number of rows returned, the following query may be much more efficient than the preceding query:

SELECT ...
  FROM ( SELECT t1.*,
         MAX(UPD_TIME) OVER (PARTITION BY cust_id) max_time
         FROM my_table t1
       )
  WHERE  upd_time = max_time;

The available analytic functions are: AVG, CORR, COUNT, COVAR_POP, COVAR_SAMP, CUME_DIST, DENSE_RANK, FIRST, FIRST_VALUE, LAG, LAST, LAST_VALUE, LEAD, LISTAGG, MAX, MIN, NTH_VALUE, NTILE, PERCENT_RANK, PERCENTILE_CONT, PERCENTILE_DISC, RANK, RATIO_TO_REPORT, the REGR_ (linear regression) functions, ROW_NUMBER, STDDEV, STDDEV_POP, STDDEV_SAMP, SUM, VAR_POP, VAR_SAMP, and VARIANCE.

1.9.2 Materialized Views

Materialized views are query results that have been stored ("materialized") as schema objects. Like tables and views, materialized views can appear in the FROM clauses of queries.

Materialized views are used to summarize, compute, replicate, and distribute data. They are useful for pre-answering general classes of questions—users can query the materialized views instead of individually aggregating detail records. Some environments where materialized views are useful are data warehousing, replication, and mobile computing.

Materialized views require time to create and update, and disk space for storage, but these costs are offset by dramatically faster queries. In these respects, materialized views are like indexes, and they are called "the indexes of your data warehouse." Unlike indexes, materialized views can be queried directly (with SELECT statements) and sometimes updated with DML statements (depending on the type of update needed).

A major benefit of creating and maintaining materialized views is the ability to take advantage of query rewrite, which transforms a SQL statement expressed in terms of tables or views into a statement accessing one or more materialized views that are defined on the detail tables. The transformation is transparent to the end user or application, requiring no intervention and no reference to the materialized view in the SQL statement. Because query rewrite is transparent, materialized views can be added or dropped like indexes without invalidating the SQL in the application code.

The following statement creates and populates a materialized aggregate view based on three master tables in the SH sample schema:

CREATE MATERIALIZED VIEW sales_mv AS 
  SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) AS sum_sales
  FROM   times t, products p, sales s
  WHERE  t.time_id = s.time_id 
  AND    p.prod_id = s.prod_id
  GROUP BY t.calendar_year, p.prod_id;
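
To let the optimizer rewrite queries against the detail tables to use this materialized view, query rewrite must be enabled for it (and the QUERY_REWRITE_ENABLED initialization parameter must be TRUE, its default). The following is a sketch only; whether a particular query is actually rewritten depends on the optimizer and the rewrite integrity settings:

ALTER MATERIALIZED VIEW sales_mv ENABLE QUERY REWRITE;

-- A query like this, written against the detail tables only,
-- may be answered from sales_mv by rolling up over prod_id.
SELECT t.calendar_year, SUM(s.amount_sold)
FROM   times t, sales s
WHERE  t.time_id = s.time_id
GROUP BY t.calendar_year;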

1.9.3 Partitioning

Partitioning is the ability of the database to physically break a very large table, index, or materialized view into smaller pieces that it can manage independently. Partitioning is similar to parallel processing, which breaks a large process into smaller pieces that can be processed independently.

Each partition is an independent object with its own name and, optionally, its own storage characteristics. Partitioning is useful for many different types of database applications, particularly those that manage large volumes of data. Benefits include increased availability, easier administration of schema objects, reduced contention for shared resources in OLTP systems, and enhanced query performance in data warehouses.

To partition a table, specify the PARTITION BY clause in the CREATE TABLE statement. SELECT and DML statements do not need special syntax to benefit from the partitioning.

A common strategy is to partition records by date ranges. The following statement creates four partitions, one for records from each of four years of sales data (2008 through 2011):

CREATE TABLE time_range_sales
   ( prod_id        NUMBER(6)
   , cust_id        NUMBER
   , time_id        DATE
   , channel_id     CHAR(1)
   , promo_id       NUMBER(6)
   , quantity_sold  NUMBER(3)
   , amount_sold    NUMBER(10,2)
   )
PARTITION BY RANGE (time_id)
 (PARTITION SALES_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY')),
  PARTITION SALES_2009 VALUES LESS THAN (TO_DATE('01-JAN-2010','DD-MON-YYYY')),
  PARTITION SALES_2010 VALUES LESS THAN (TO_DATE('01-JAN-2011','DD-MON-YYYY')),
  PARTITION SALES_2011 VALUES LESS THAN (MAXVALUE)
 ); 
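
Queries against the partitioned table then benefit from partition pruning without special syntax, and a single partition can also be addressed explicitly. A brief sketch against the table above:

-- Partition pruning: only SALES_2009 needs to be scanned for this predicate.
SELECT SUM(amount_sold)
FROM   time_range_sales
WHERE  time_id >= TO_DATE('01-JAN-2009','DD-MON-YYYY')
AND    time_id <  TO_DATE('01-JAN-2010','DD-MON-YYYY');

-- Addressing one partition explicitly:
SELECT COUNT(*) FROM time_range_sales PARTITION (SALES_2009);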

1.9.4 Temporal Validity Support

Temporal Validity Support lets you associate one or more valid time dimensions with a table and have data be visible depending on its time-based validity, as determined by the start and end dates or time stamps of the period for which a given record is considered valid. Examples of time-based validity are the hire and termination dates of an employee in a Human Resources application, the effective date of coverage for an insurance policy, and the effective date of a change of address for a customer or client.

Temporal Validity Support is typically used with Oracle Flashback Technology, for queries that specify the valid time period in AS OF and VERSIONS BETWEEN clauses. You can also use the DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME procedure to specify an option for the visibility of table data: all table data (the default), data valid at a specified time, or currently valid data within the valid time period at the session level.
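
For example, row visibility can be restricted at the session level and later restored. A minimal sketch using that procedure (EXEC is SQL*Plus shorthand):

-- Show only rows whose valid time period contains the current time.
EXEC DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT');

-- Show rows that were valid as of a specific time.
EXEC DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ASOF', TO_TIMESTAMP('01-Jun-11'));

-- Restore the default: all rows are visible regardless of valid time.
EXEC DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ALL');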

Temporal Validity Support is useful in Information Lifecycle Management (ILM) and any other application where it is important to know when certain data becomes valid (from the application's perspective) and when it becomes invalid (if ever).

 The following example creates a table with Temporal Validity Support, inserts rows, and issues queries whose results depend on the valid start date and end date for individual rows.

CREATE TABLE my_emp(
  empno NUMBER,
  last_name VARCHAR2(30),
  start_time TIMESTAMP,
  end_time TIMESTAMP,
PERIOD FOR user_valid_time (start_time, end_time));
 
INSERT INTO my_emp VALUES (100, 'Ames', '01-Jan-10', '30-Jun-11');
INSERT INTO my_emp VALUES (101, 'Burton', '01-Jan-11', '30-Jun-11');
INSERT INTO my_emp VALUES (102, 'Chen', '01-Jan-12', null);
 
-- Valid Time Queries --
 
-- AS OF PERIOD FOR queries:
 
-- Returns only Ames.
SELECT * from my_emp AS OF PERIOD FOR user_valid_time TO_TIMESTAMP('01-Jun-10');

-- Returns  Ames and Burton, but not Chen.
SELECT * from my_emp AS OF PERIOD FOR user_valid_time TO_TIMESTAMP('01-Jun-11');

-- Returns no one.
SELECT * from my_emp AS OF PERIOD FOR user_valid_time TO_TIMESTAMP( '01-Jul-11');

-- Returns only Chen.
SELECT * from my_emp AS OF PERIOD FOR user_valid_time TO_TIMESTAMP('01-Feb-12');
 
-- VERSIONS PERIOD FOR ... BETWEEN queries:
 
-- Returns only Ames.
SELECT * from my_emp VERSIONS PERIOD FOR user_valid_time BETWEEN 
   TO_TIMESTAMP('01-Jun-10') AND TO_TIMESTAMP('02-Jun-10');

-- Returns Ames and Burton.
SELECT * from my_emp VERSIONS PERIOD FOR user_valid_time BETWEEN 
   TO_TIMESTAMP('01-Jun-10') AND TO_TIMESTAMP('01-Mar-11');

-- Returns only Chen.
SELECT * from my_emp VERSIONS PERIOD FOR user_valid_time BETWEEN 
   TO_TIMESTAMP('01-Nov-11') AND TO_TIMESTAMP('01-Mar-12');

-- Returns no one.
SELECT * from my_emp VERSIONS PERIOD FOR user_valid_time BETWEEN 
   TO_TIMESTAMP('01-Jul-11') AND TO_TIMESTAMP('01-Sep-11');

The example can be extended to show how changing a row's valid time period changes query results. In the following continuation, a row for employee 103 named Davis is inserted, its end_time is then set to 01-Feb-12, and a replacement row named Smith valid from 02-Feb-12 is inserted; AS OF queries then return whichever version was valid at the specified time. (Attempting to add the user_valid_time period again with ALTER TABLE my_emp ADD PERIOD FOR user_valid_time fails with ORA-55603, since the table already defines that period.)

INSERT INTO my_emp VALUES (103, 'Davis', '01-Jan-12', null);
UPDATE my_emp SET end_time = '01-Feb-12' WHERE empno = 103;
INSERT INTO my_emp VALUES (103, 'Smith', '02-Feb-12', null);

-- Returns Smith (the currently valid row).
SELECT * from my_emp AS OF PERIOD FOR user_valid_time SYSDATE WHERE empno = 103;

-- Returns Smith (still valid on 01-Jul-12, because its end_time is null).
SELECT * from my_emp AS OF PERIOD FOR user_valid_time TO_TIMESTAMP('01-Jul-12')
  WHERE empno = 103;

-- Returns Davis (the version valid on 20-Jan-12).
SELECT * from my_emp AS OF PERIOD FOR user_valid_time TO_TIMESTAMP('20-Jan-12')
  WHERE empno = 103;
