Tuning 15: Application Tuning

This post covers Oracle database application tuning: table MOVE operations, B-Tree index optimization, reverse key indexes, and the characteristics of index-organized tables (IOTs), with practical examples of how these techniques improve query efficiency.

Oracle itself is the product of many years of development: general-purpose and thoroughly tested. The application, by contrast, is custom-built for one customer, often written once and never revisited, so its quality is where problems tend to appear.

A table move means copying the old table into a new segment and then giving the new segment the old table's name; in effect, the table is rebuilt.

Why move a table? Perhaps its physical structure has degraded and it is heavily fragmented, or its PCTFREE / PCTUSED settings need to be changed.
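
A minimal sketch of such a move (the table name emp, index name emp_pk, and tablespace users are illustrative):

```sql
-- Rebuild EMP in a new segment, defragmenting it and changing
-- its block-space settings in the same statement.
ALTER TABLE emp MOVE TABLESPACE users PCTFREE 20 PCTUSED 40;

-- The move assigns new ROWIDs to every row, which marks the
-- table's indexes UNUSABLE, so rebuild them afterwards.
ALTER INDEX emp_pk REBUILD;
```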

In practice this is done with a PL/SQL package (see the Oracle Database 11g Administrator's Guide, Oracle's official documentation).
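
The package the guide describes for reorganizing a table online is presumably DBMS_REDEFINITION; a minimal sketch, assuming a table scott.emp and an interim table scott.emp_interim already created with the desired new layout:

```sql
BEGIN
  -- Verify the table can be redefined online (using its primary key).
  DBMS_REDEFINITION.CAN_REDEF_TABLE('scott', 'emp');

  -- Begin copying rows into the interim table.
  DBMS_REDEFINITION.START_REDEF_TABLE('scott', 'emp', 'emp_interim');

  -- Swap the tables: 'emp' now has the interim table's physical layout.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('scott', 'emp', 'emp_interim');
END;
/
```

In real use, dependent objects (indexes, constraints, triggers) would also be carried over with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS; unlike a plain ALTER TABLE ... MOVE, the table stays available for DML throughout.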

A B-Tree is a balanced tree: all of its leaf nodes sit at the same depth.

Take ascending numbers such as 1234, 1235, 1236: their leading digits are identical. Could we store the shared prefix 123 just once and store only the differing suffixes individually?

In other words, compression applies when index entries share the same leading values.

[Figure: compressing on the first column of the index]
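
A minimal sketch of index key compression, assuming a hypothetical orders(region, order_no) table; COMPRESS 1 stores each distinct value of the first column once per leaf block instead of repeating it in every entry:

```sql
-- The repeated leading column is stored once per leaf block;
-- each index entry then holds only the distinct suffix.
CREATE INDEX orders_region_no_idx ON orders (region, order_no) COMPRESS 1;
```

Note that Oracle compresses on whole leading columns, so the 1234/1235/1236 example above illustrates the idea rather than digit-level behaviour.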

A reverse key index stores each key with its bytes reversed: 7566 becomes 6657, and 7782 becomes 2877.

Why reverse the keys? Because such values are usually generated in ascending order, the entries 7499 through 7782 in the figure above all land in the same leaf block. If three sessions then want to work on 7499, 7566, and 7782 at the same time, two of them must wait, because a block can only be modified by one session at a time. Once the keys are reversed, those entries are scattered across several different blocks, all of which can be accessed concurrently.
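
A minimal sketch on the emp table from the figures (the index name is illustrative):

```sql
-- Store empno with its bytes reversed, so sequential inserts
-- no longer pile into the same "hot" right-most leaf block.
CREATE INDEX emp_empno_rev ON emp (empno) REVERSE;
```

The trade-off is that adjacent values are no longer adjacent in the index, so range scans (e.g. empno BETWEEN 7499 AND 7782) can no longer use it efficiently.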

Why create an index-organized table? In the example above we build an index on x, and that is fine. But if we then also index y and z, the indexes end up duplicating most of the table's data and adding a great deal of disk I/O. At that point it is worth considering an index-organized table instead.

An IOT must have a primary key; the IOT is organized around that primary key.
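
A minimal sketch with hypothetical names; ORGANIZATION INDEX stores the entire table inside a B-Tree built on the mandatory primary key:

```sql
CREATE TABLE iot_demo (
  x NUMBER PRIMARY KEY,  -- the mandatory key the IOT is built on
  y VARCHAR2(30),
  z VARCHAR2(200)
) ORGANIZATION INDEX;
```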

OVERFLOW means taking the rarely used columns of the table and moving them to a separate segment, so they are not stored physically together with the rest of the row.

PCTTHRESHOLD 20 limits how much of a leaf block a single row may occupy (here, 20%); the portion of a row that exceeds the threshold is pushed out to the OVERFLOW segment.
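
A minimal sketch combining the two clauses (names are illustrative); INCLUDING additionally says that every column after y always goes to the overflow segment:

```sql
CREATE TABLE iot_ovf (
  x NUMBER PRIMARY KEY,
  y VARCHAR2(30),
  z VARCHAR2(4000)       -- large, rarely used column
) ORGANIZATION INDEX
  PCTTHRESHOLD 20        -- a row may occupy at most 20% of a leaf block
  INCLUDING y            -- columns after y are stored in the overflow segment
  OVERFLOW TABLESPACE users;
```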

Rows in an IOT have no physical ROWID; they are addressed by logical ROWIDs derived from the primary key.

OLTP vs. OLAP

This part is mainly about tuning the SQL statements themselves.

In practice, the OLTP and OLAP systems are almost always kept separate, unless the budget does not allow it.

Reposted from: https://www.cnblogs.com/moveofgod/p/3641261.html
