Introduction to Oracle Coherence

This article introduces Oracle Coherence, a distributed in-memory data management solution: its features and typical use cases, including how it provides a reliable data tier, dynamically expandable data capacity, and data capacity that scales with processing capacity. It then goes into the technical details, such as how the cluster works, the failure-recovery mechanism, and the different cache topologies and when each applies, and includes a number of practical code examples.

Preface: Thanks to a client with deep pockets and a strong preference for Oracle, I had the chance to work with Oracle Coherence on a project. During that time I gave an internal talk on Coherence at my company; to keep the non-Chinese-speaking colleagues in the audience from being completely lost, the material was written in English. I am sharing that material here, lightly reorganized and left untranslated to keep it authentic, so please bear with the English slides ^_^

----------------------------------------------------------------

 

Agenda

 

What is Coherence?
Demonstration
Technical
Code Examples
Architectural Patterns

 

What is Coherence?


Distributed Memory Data Management Solution (aka Data Grid)

 

How Can a Data Grid Help?
1. Provides a reliable data tier with a single, consistent view of data
2. Enables dynamic data capacity including fault tolerance and load balancing
3. Ensures that data capacity scales with processing capacity

 

Oracle Grid Computing: Enterprise Ready

1. Common Shared Application Infrastructure (Application Virtualization)
2. Data Virtualization (Data as a Service)
3. Middle tier scale out for Grid Based OLTP
4. Massive Persistent scale out with Oracle RAC

 

Requirements of Enterprise Data Grid

Reliable

1. Built for continuous operation
2. Data Fault Tolerance
3. Self-Diagnosis and Healing
4. “Once and Only Once” Processing

Scalable
1. Dynamically Expandable
2. No data loss at any volume
3. No interruption of service
4. Leverage Commodity Hardware
5. Cost Effective

Universal
1. Single view of data
2. Single management view
3. Simple programming model
4. Any Application
5. Any Data Source

Data
1. Data Caching
2. Analytics
3. Transaction Processing
4. Event Processing

 

How Does Coherence Data Grid Work?

1. Cluster of nodes holding % of primary data locally
2. Back-up of primary data is distributed across all other nodes
3. Logical view of all data from any node



1. All nodes verify health of each other
2. In the event a node is unhealthy, other nodes diagnose state



1. Unhealthy node isolated from cluster
2. Remaining nodes redistribute primary and back-up responsibilities to healthy nodes
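The ownership-and-failover idea behind these steps can be sketched in plain Java. This is a hypothetical simplification (the `PartitionSketch` class and its modulo scheme are illustrative only; Coherence's real partition assignment and rebalancing are far more sophisticated):

```java
// Hypothetical sketch: each key has a primary owner and a backup owner on a
// different node. When the cluster shrinks, ownership is recomputed over the
// surviving nodes, which is the essence of the redistribution step above.
public class PartitionSketch {

    // Primary owner: hash the key onto one of the cluster nodes.
    static int primaryOwner(Object key, int nodeCount) {
        return Math.floorMod(key.hashCode(), nodeCount);
    }

    // Backup owner: the next node in the ring, so primary and backup
    // never coincide while more than one node is alive.
    static int backupOwner(Object key, int nodeCount) {
        return (primaryOwner(key, nodeCount) + 1) % nodeCount;
    }
}
```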


Customers & Coherence?
Caching: Applications request data from the Data Grid rather than backend data sources

Analytics: Applications ask the Data Grid questions from simple queries to advanced scenario modeling

Transactions: Data Grid acts as a transactional System of Record, hosting data and business logic

Events: Automated processing based on event

 

Coherence Demonstration


Topology #1 - Replicated Cache



Topology #2 - Partitioned Cache

Topology #2 - Guaranteed Cluster Resiliency

Topology #2 - Partitioned Failover

Topology #2a – Cache Client/Cache Server

 

Topology #3 - Near Cache

Use Case: Coherence*Web
1. Coherence*Web is an HTTP session-management module (built-in feature of Coherence)
2. Supports a wide range of application servers.
3. Does not require any changes to the application.
4. Coherence*Web uses the NearCache technology to provide fully fault-tolerant caching, with almost unlimited scalability (to several hundred cluster nodes without issue).
5. Heterogeneous applications running on mixed hardware/OS/application servers can share common user session data. This dramatically simplifies supporting Single-Sign-On across applications.

 

Coherence*Web: Session State Management

Session state is recoverable from the data grid. There are several important points here; the biggest is the ability to separate session state into a tier independent of the application: you offload processing requirements from the middle-tier application server to the grid, and gain significant reliability by making that state coherent.

 

Read-Through Caching

 

Write-Through Caching

 

Write-Behind Caching
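The three topologies above share one idea: the cache sits between the application and the data source. A minimal read-through sketch in plain Java follows; the `ReadThroughCache` class is hypothetical (in Coherence this behavior comes from a configured CacheLoader/CacheStore, not from application code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical illustration of read-through caching: on a cache miss the
// cache loads the value from the backing source itself, so callers never
// touch the data source directly.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader; // stands in for the backend data source
    public int loads = 0;                // counts backend hits, for demonstration

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // Only consult the loader when the key is absent from the cache.
        return cache.computeIfAbsent(key, k -> { loads++; return loader.apply(k); });
    }
}
```

Write-through would additionally push each `put()` synchronously to the source, and write-behind would queue the write and flush it asynchronously after a configurable delay.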
 

 

Coherence Code Examples

 

Clustering Java Processes 

Cluster cluster = CacheFactory.ensureCluster();

·Joins an existing cluster or forms a new cluster
Time “to join” configurable

·cluster contains information about the Cluster
Cluster Name
Members
Locations
Processes

·No “master” servers
·No “server registries”

 

Leaving a Cluster 

CacheFactory.shutdown();

·Leaves the current cluster
·shutdown blocks until “data” is safe

·Failing to call shutdown results in Coherence having to detect process death/exit and recover information from another process. 

·Death detection and recovery is automatic

 

Using a Cache get, put, size & remove  

NamedCache nc = CacheFactory.getCache("mine");

Object previous = nc.put("key", "hello world");

Object current = nc.get("key");

int size = nc.size();

Object value = nc.remove("key");

 

·CacheFactory resolves cache names (e.g. "mine") to configured NamedCaches

·NamedCache provides data-topology-agnostic access to information
·The NamedCache interface extends several interfaces:
  ·java.util.Map, JCache,

  ·ObservableMap*,

  ·ConcurrentMap*,

  ·QueryMap*,

  ·InvocableMap*

(* Coherence Extensions)
 

Using a Cache keySet, entrySet, containsKey 

NamedCache nc = CacheFactory.getCache("mine");

Set keys = nc.keySet();

Set entries = nc.entrySet();

boolean exists = nc.containsKey("key");

 

·Using a NamedCache is like using a java.util.Map

·What is the difference between a Map and a Cache data-structure?

   · Both use (key,value) pairs for entries
   · Map entries don’t expire
   · Cache entries may expire
   · Maps are typically limited by heap space
   · Caches are typically size limited (by number of entries or memory)
   · Map content is typically in-process (on heap)
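The expiry difference can be made concrete. Coherence's CacheMap actually offers a three-argument `put(key, value, cMillis)` for per-entry expiry; the `ExpiringMap` below is a hypothetical plain-Java sketch of that behavior, with an injectable clock so the expiry is observable without sleeping:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongSupplier;

// Hypothetical sketch: a map whose entries expire after a time-to-live,
// which is the key behavioral difference between a Cache and a plain Map.
public class ExpiringMap<K, V> {
    private static class Holder<V> {
        final V value; final long expiresAt;
        Holder(V v, long t) { value = v; expiresAt = t; }
    }

    private final Map<K, Holder<V>> map = new HashMap<>();
    private final LongSupplier clock; // injectable for testing

    public ExpiringMap(LongSupplier clock) { this.clock = clock; }

    public void put(K key, V value, long ttlMillis) {
        map.put(key, new Holder<>(value, clock.getAsLong() + ttlMillis));
    }

    public V get(K key) {
        Holder<V> h = map.get(key);
        if (h == null || h.expiresAt <= clock.getAsLong()) {
            map.remove(key); // lazily evict expired entries
            return null;
        }
        return h.value;
    }
}
```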

 

Observing Cache Changes ObservableMap 

NamedCache nc = CacheFactory.getCache("stocks");

nc.addMapListener(new MapListener() {
    public void entryInserted(MapEvent mapEvent) {
    }
    public void entryUpdated(MapEvent mapEvent) {
    }
    public void entryDeleted(MapEvent mapEvent) {
    }
});

 

·Observe changes in real-time as they occur in a NamedCache
·Options exist to optimize events by using Filters (including pre- and post-condition checking) and by reducing on-the-wire payload (Lite Events)

·Several MapListeners are provided out-of-the-box. 
  ·Abstract, Multiplexing...
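The dispatch model behind these listeners can be sketched as a tiny observable map. This is a hypothetical simplification (the `ObservableSketch` class and its `Listener` interface are illustrative only; Coherence's real contract is ObservableMap/MapListener with MapEvent arguments):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of observable-map dispatch: registered listeners are
// notified on insert, update and delete, mirroring the MapListener idea.
public class ObservableSketch<K, V> {
    public interface Listener<K, V> {
        void inserted(K key, V value);
        void updated(K key, V oldValue, V newValue);
        void deleted(K key, V oldValue);
    }

    private final Map<K, V> map = new HashMap<>();
    private final List<Listener<K, V>> listeners = new ArrayList<>();

    public void addListener(Listener<K, V> l) { listeners.add(l); }

    public V put(K key, V value) {
        V old = map.put(key, value);
        for (Listener<K, V> l : listeners) {
            if (old == null) l.inserted(key, value);
            else l.updated(key, old, value);
        }
        return old;
    }

    public V remove(K key) {
        V old = map.remove(key);
        if (old != null) for (Listener<K, V> l : listeners) l.deleted(key, old);
        return old;
    }
}
```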

 

Querying Caches QueryMap 

NamedCache nc = CacheFactory.getCache("people");

Set keys = nc.keySet(new LikeFilter("getLastName", "%Stone%"));

Set entries = nc.entrySet(new EqualsFilter("getAge", 35));

 

·Query NamedCache keys and entries across a cluster (Data Grid) in parallel* using Filters
·Results may be ordered using natural ordering or custom comparators
·Filters support almost all SQL constructs
·Query using non-relational data representations and models
·Create your own Filters 

( * Requires Enterprise Edition or above)
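Conceptually, `keySet(filter)` evaluates a predicate against each cached value and returns the matching keys. A hypothetical single-JVM sketch (the `QuerySketch` class is illustrative only; in Coherence the evaluation runs in parallel on the nodes that own the data):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

// Hypothetical single-node sketch of QueryMap.keySet(Filter): return the
// keys whose values satisfy a predicate.
public class QuerySketch {
    static <K, V> Set<K> keySet(Map<K, V> cache, Predicate<V> filter) {
        Set<K> result = new HashSet<>();
        for (Map.Entry<K, V> e : cache.entrySet()) {
            if (filter.test(e.getValue())) result.add(e.getKey());
        }
        return result;
    }
}
```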

 

 Continuous Observation Continuous Query Caches 

NamedCache nc = CacheFactory.getCache("stocks");

NamedCache expensiveItems = new ContinuousQueryCache(nc, new GreaterFilter("getPrice", 1000));

 

·ContinuousQueryCache provides a real-time, in-process copy of filtered cached data
·Use standard or custom Filters to limit the view
·Access to the "view" of cached information is instant

·May be used with MapListeners to render real-time local views (aka Thick Client) of Data Grid information

 

Aggregating Information InvocableMap 

NamedCache nc = CacheFactory.getCache("stocks");

Double total = (Double)nc.aggregate(AlwaysFilter.INSTANCE, new DoubleSum("getQuantity"));

Set symbols = (Set)nc.aggregate(new EqualsFilter("getOwner", "Larry"), new DistinctValues("getSymbol"));

 

·Aggregate values in a NamedCache across a cluster (Data Grid) in parallel* using Filters
·Aggregation constructs include: Distinct, Sum, Min, Max, Average, Having, Group By
·Aggregate using non-relational data models
·Create your own aggregators
(* Requires Enterprise Edition or above)
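The "in parallel" part means each partition computes a partial result locally, and the partials are then combined. A hypothetical sketch of a distributed sum (the `AggregateSketch` class is illustrative only; Coherence's real aggregators implement this split/combine contract via EntryAggregator):

```java
import java.util.List;

// Hypothetical sketch of parallel aggregation: each partition computes a
// partial sum where the data lives, and the partials are combined into the
// final result on the caller.
public class AggregateSketch {
    static double sum(List<List<Double>> partitions) {
        double total = 0;
        for (List<Double> partition : partitions) {
            double partial = 0;                 // computed locally on the owning node
            for (double v : partition) partial += v;
            total += partial;                   // combined on the caller
        }
        return total;
    }
}
```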

 

Mutating Information InvocableMap 

NamedCache nc = CacheFactory.getCache("stocks");

nc.invokeAll(new EqualsFilter("getSymbol", "ORCL"), new StockSplitProcessor());

...

class StockSplitProcessor extends AbstractProcessor {

    public Object process(InvocableMap.Entry entry) {
        Stock stock = (Stock)entry.getValue();
        stock.quantity *= 2;
        entry.setValue(stock);
        return null;
    }
}

 

·Invoke EntryProcessors on zero or more entries in a NamedCache across a cluster (Data Grid) in parallel* (using Filters) to perform operations

·Execution occurs where the entries are managed in the cluster, not in the thread calling invoke

·This permits Data + Processing Affinity
(* Requires Enterprise Edition or above)
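The key point above is that `process()` runs where each entry lives, mutating it in place rather than shipping values back and forth. A hypothetical single-JVM sketch of the `invokeAll` semantics (the `InvokeSketch` class is illustrative only):

```java
import java.util.Map;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Hypothetical single-node sketch of InvocableMap.invokeAll: for each entry
// whose value matches a filter, apply a processor to the entry in place.
// In Coherence the processor executes on the node owning the entry.
public class InvokeSketch {
    static <K, V> int invokeAll(Map<K, V> cache, Predicate<V> filter, UnaryOperator<V> processor) {
        int processed = 0;
        for (Map.Entry<K, V> e : cache.entrySet()) {
            if (filter.test(e.getValue())) {
                e.setValue(processor.apply(e.getValue()));
                processed++;
            }
        }
        return processed;
    }
}
```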

 

 

Oracle Coherence Architectural Patterns

 

Single Application Process
         

Coherence as "Data Structure". A single application may use Coherence's java.util.Map implementations (and extensions) for high-performance, highly configurable caching. Clustering is not required!
 

Clustered Processes

Coherence ensures that there is a consistent view of the data in-memory to all processes.

This is sometimes referred to as a “single-system-image”


Multi Platform Cluster

 

Clustered Application Servers

 

 

With Data Source Integration (Cache Stores)

 

Clustered Second Level Cache (for Hibernate) 

 

Remote Clients connected to Coherence Cluster

 

Interconnected WAN Clusters


 

Getting Oracle Coherence

 

Search:   

http://search.oracle.com

 
Download:

http://www.oracle.com/technology/products/coherence

 
Support:
http://forums.tangosol.com
http://wiki.tangosol.com

  
Read More:
http://www.tangosol.com/
