Multiple Block Sizes (53)

This article covers how block sizes are configured in an Oracle database and why the choice matters. The standard block size is controlled by the DB_BLOCK_SIZE parameter and ranges from 2K to 32K. It also notes that subcaches for the different block sizes can be configured in the initialization parameter file, and that all partitions of a partitioned object must reside in tablespaces of a single block size.

Oracle supports multiple block sizes in a database. The standard block size is used for
the SYSTEM tablespace. This is set when the database is created and can be any valid
size. You specify the standard block size by setting the initialization parameter
DB_BLOCK_SIZE. Legitimate values are from 2K to 32K.
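As a minimal sketch (the 8K value below is only an example, not taken from the article), the standard block size is a single entry in the initialization parameter file and cannot be changed after the database is created:

    # init.ora -- example value only; DB_BLOCK_SIZE is fixed at database creation
    DB_BLOCK_SIZE = 8192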
In the initialization parameter file or server parameter file, you can configure subcaches
within the buffer cache for each of these block sizes. Subcaches can also be configured
while an instance is running. You can create tablespaces having any of these block
sizes. The standard block size is used for the SYSTEM tablespace and most other
tablespaces. An example of both steps follows the note below.
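For example, assuming a hypothetical datafile path and tablespace name (neither appears in the original article), one way to combine these steps is to allocate a subcache for 16K blocks and then create a tablespace that uses that block size:

    -- Allocate a buffer subcache for 16K blocks; this can be done while the instance is running
    ALTER SYSTEM SET DB_16K_CACHE_SIZE = 64M;

    -- Create a tablespace whose datafile uses the nonstandard 16K block size
    CREATE TABLESPACE ts_16k
        DATAFILE '/u01/oradata/orcl/ts_16k01.dbf' SIZE 100M
        BLOCKSIZE 16K;

A tablespace with a nonstandard block size can only be created once the matching DB_nK_CACHE_SIZE subcache has been set.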
Note: All partitions of a partitioned object must reside in tablespaces
of a single block size.

Multiple block sizes
1. The DB_BLOCK_SIZE initialization parameter controls the standard block size; legitimate values range from 2K to 32K.
2. All partitions of a partitioned object must use a single block size.


Source: ITPUB blog, http://blog.itpub.net/10599713/viewspace-975629/
