Does big data really need custom hardware?

IBM and Cisco have launched custom hardware aimed squarely at big data processing, promising efficient storage and fast data handling. These offerings target enterprise customers who can afford steep SAP license fees, using pre-validated systems to cut deployment time and the demand for in-house IT talent.


From: http://gigaom.com/data/does-big-data-really-need-custom-hardware/

IBM and Cisco have both launched specialized hardware designed to securely and efficiently handle big data, but is there a large market for specialized big data gear? If there is such a market, are these the boxes that will fill it?

IBM and Cisco are betting big on big data with new boxes designed specifically to store a lot of data, with the networking capabilities to move that data around really quickly. As the glut of information inside businesses grows and the desire to analyze it becomes more pressing, big enterprise IT shops see an opportunity.

Where the generic server market has been commoditized by low-end x86 servers, companies like Teradata and EMC are doing their best to hold onto their hardware margins with specially designed systems. And it looks like IBM and Cisco have decided this is an opportunity not to be missed, and are taking it further. Cisco has released a Unified Computing System specifically designed to run SAP’s HANA database. Oracle is also heading down this path.

This is clearly aimed at enterprise customers who can afford the SAP licences as well as the Cisco gear, and the two companies worked with NetApp to pre-validate the box so customers can just plug it in without worrying about building their own Hadoop cluster or other technical feats that would require time and IT talent to get right.


[Image caption: IBM’s big box for big data.]

IBM’s new PureData System, an addition to its older PureSystems converged hardware family, is an effort to cram security features for HIPAA and PCI compliance in at the chip level. eWeek’s coverage of the box notes that IBM is planning even more boxes:

IBM officials said the PureData System is the next step forward in the company’s overall strategy to deliver a family of systems with built-in expertise that leverages its decades of experience to reduce the cost and complexity associated with information technology. According to IBM, users can have the system up and running in 24 hours and handle more than 100 databases on a single system.

Both of these boxes are advertised as being specialized to tackle big data, but do big data workloads need such highly custom boxes? There are many who think that data processing will require something above and beyond a typical x86 setup, such as a box from SeaMicro or a Calxeda machine with low-power cores that are networked to work in parallel to parse many bits of data in small chunks. Others are thinking farther ahead and envision new architectures that mimic the human brain.
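The "many low-power cores chewing on many small chunks" idea is easy to picture in ordinary code, well before any custom silicon enters the picture. The sketch below is not from the article and has nothing to do with SeaMicro or Calxeda hardware; it is a minimal Python illustration, with a hypothetical access.log file and a made-up parse function, of splitting a dataset into fixed-size chunks, parsing each chunk on a separate worker, and merging the partial results.

```python
# Minimal sketch (not from the article): split a dataset into chunks,
# parse each chunk on a separate worker process, merge the partial results.
# The file name and the log-line layout are hypothetical placeholders.
from multiprocessing import Pool
from collections import Counter

def parse_chunk(lines):
    """Count status codes in one chunk of (hypothetical) web-log lines."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) > 8:          # crude guard against malformed lines
            counts[fields[8]] += 1   # assume index 8 holds the status code
    return counts

def chunked(iterable, size):
    """Yield successive lists of `size` items from an iterable."""
    chunk = []
    for item in iterable:
        chunk.append(item)
        if len(chunk) == size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

if __name__ == "__main__":
    with open("access.log") as f:            # hypothetical input file
        with Pool() as pool:                 # one worker per local core
            partials = pool.map(parse_chunk, chunked(f, 100_000))
    total = sum(partials, Counter())         # merge the per-chunk counts
    print(total.most_common(5))
```

The same split-parse-merge pattern, scaled out across networked machines instead of local cores, is roughly what a Hadoop cluster automates; the appliances discussed here are selling the convenience of not having to assemble that plumbing yourself.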

Rather than representing new hardware for big data, these two boxes really represent the capitulation of the major hardware vendors to a services model. Technically these boxes may have different chips than commodity servers, but what these vendors are actually selling is the plug-and-play aspect. Sure, a customer can buy cheaper boxes and download Hadoop or other open source software (or pay a licensing fee and have someone like Cloudera manage it for them), but they want something that works with little or no effort.

So these boxes aren’t about the whiz-bang tech inside; they’re an admission that services wrapped in a box are the main opportunity ahead for larger vendors. The question is how long that will be enough, especially as the cloud, public or private, continues its advance.

