Is Fabric Computing the Future of Cloud?

Fabric computing is an emerging concept that is rapidly gaining attention. It improves data center efficiency by tightly integrating functions such as storage, networking and processing. This article looks at the shift from separating these functions to re-integrating them, and at how the trend combines with software-defined infrastructure to push cloud computing forward.

 

Original article: http://cloudcomputing.sys-con.com/node/1755128. Thanks to the author: GREGOR PETRI

 

Cloud computing has barely found its footing, and already a new concept has arrived!

 

The term fabric computing is rapidly gaining popularity, but currently mostly within the hardware community. In fact, according to a recent report, over 50% of attendees at the recent Datacenter Summit have implemented, or are in the process of implementing, fabric computing. Time to take a look at what fabric computing means for software and for (cloud) computing as a whole.

 

Depending on which dictionary you choose, you can find anywhere between two and seven meanings for "fabric." Etymology-wise, it comes from the French fabrique and the Latin fabricare, and the Dutch fabriek actually means factory. But in an IT context, fabric has little to do with our often-used manufacturing or supply chain analogies; instead it relates much more closely to fabric in its meaning of cloth, a material produced (fabricated) by weaving fibers.

If we check our handy Wikipedia for fabric computing, we get:

Fabric computing or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a 'weave' or a 'fabric' when viewed collectively from a distance.[1]

 

Usually this refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high bandwidth interconnects ...

In the context of data centers, it means a move from having distinct boxes for handling storage, network and processing towards a fabric where these functions are much more intertwined or even integrated. Most people started to notice the move to fabric or unified computing when Cisco started to include servers inside their switches, which they did partly in response to HP including more and more switches in their server deals. Cisco's UCS (Unified Computing System), and its bigger sibling VCE, are the first hardware examples of this trend (although inside the box you can still distinguish the original components).

One reason to move to such a fabric design is that by moving data, network and compute closer together (integrating them), you can improve performance. Juniper's recent QFabric architecture announcement is a similar example. But the idea of closer integration of data, processing and communication is actually much older. In some respects, we may even conclude that IT is coming full circle with this trend.

Let me explain.

Many years ago I spoke to Professor Scheer, founder of IDS Scheer and a pioneer in the field of Business Process Management (BPM). (Disclosure: years later IDS Scheer became part of my former employer: Software AG.) He spoke about how - in the old days of IT - data and logic were seen as one. Literally! If - while walking with your stack of punch cards to the computer room (back then it was a computer the size of a room, not a room with a computer in it) - you dropped your stack of punch cards, both data and logic would be in one pile on the floor. You would spend the rest of your afternoon sorting them again. There was just one stack: first the processing/algorithm logic, and then the data. Scheer's point was that just like we figured out after a while that data did not belong there and we moved it to its own place (typically a relational database), we should now separate the process flow instructions from the algorithms and move these to a workflow process engine (preferably of course his BPM engine). All valid and true - at that time.

 

But not long after, object-oriented programming became the norm, and we started to move data back in with the logic that understood how to handle it, treating the two together as objects. This of course created a new challenge: getting these objects to perform in even a remotely acceptable way, since we still used relational databases to store or persist the data inside them. You could compare this to disassembling your car into its original pieces every night in order to put it in your garage. Over the years the industry figured out how to do this better, in part by creating new databases that, design-wise, looked remarkably similar to the (hierarchical) databases we used back in the days of punch cards.
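To make that nightly "disassembly" concrete, here is a minimal sketch of my own (nothing from the original article, using an invented Order class): the object bundles data with the logic that understands it, yet persisting it to a relational store still means taking it apart into flat rows.

    # Illustrative sketch only: an object bundles data and behavior,
    # but persisting it to a relational store means taking it apart
    # into flat rows -- the nightly "car disassembly" from the text.
    import sqlite3

    class Order:
        def __init__(self, order_id, items):
            self.order_id = order_id
            self.items = items                                  # the data ...

        def total(self):
            return sum(price for _, price in self.items)        # ... plus the logic that understands it

    def persist(order, conn):
        # "Disassemble" the object into flat relational rows.
        conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER)")
        conn.execute("CREATE TABLE IF NOT EXISTS order_items (order_id INTEGER, name TEXT, price REAL)")
        conn.execute("INSERT INTO orders VALUES (?)", (order.order_id,))
        conn.executemany("INSERT INTO order_items VALUES (?, ?, ?)",
                         [(order.order_id, name, price) for name, price in order.items])

    conn = sqlite3.connect(":memory:")
    order = Order(1, [("widget", 9.95), ("gadget", 24.50)])
    persist(order, conn)
    print(order.total())   # 34.45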

 

And now, under the shiny new name of fabric computing, we are moving all of this back into the same physical box.

 

But this is not the whole story -- there is another revolution happening. As an industry we are moving from using dedicated hardware for specialized tasks to generic hardware with specialized software instead. For example, you might use a software virtualization layer to simply emulate a certain piece of specific hardware.
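As a rough sketch of what "emulating hardware in software" amounts to (my illustration only, with a made-up register map for a hypothetical network card), the emulator simply re-implements the device's external interface as ordinary code running on a generic processor:

    # Illustrative sketch only: emulating dedicated hardware means
    # implementing its external interface -- here an invented register
    # map for a hypothetical NIC -- in plain software on a generic CPU.
    class EmulatedNIC:
        STATUS_REG = 0x00   # hypothetical register offsets
        TX_REG = 0x04

        def __init__(self):
            self.registers = {self.STATUS_REG: 0x1, self.TX_REG: 0x0}
            self.tx_queue = []

        def read(self, offset):
            return self.registers.get(offset, 0)

        def write(self, offset, value):
            if offset == self.TX_REG:
                self.tx_queue.append(value)   # "transmit" by queueing in software
            self.registers[offset] = value

    nic = EmulatedNIC()
    nic.write(EmulatedNIC.TX_REG, 0xCAFE)
    print(hex(nic.read(EmulatedNIC.STATUS_REG)))   # -> 0x1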

 

Or, look at a firewall: traditionally it was a piece of dedicated hardware built to do one thing (keeping disallowed traffic out). Today, most firewalls are software-based: we use a generic processor to take care of that task. And we're seeing this trend unfold with more equipment in the data center. Even switches, load balancers and network-attached storage are becoming software-based ("virtual appliance" seems to be the preferred marketing buzzword for this trend).
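To see how little dedicated silicon the task really needs, here is a deliberately simplified sketch (illustrative only, far from a production firewall): at its core, packet filtering is rule matching, which any generic CPU can do in software.

    # Illustrative sketch only: the essence of a firewall is matching
    # packets against rules on a generic CPU -- no dedicated silicon required.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_port: int
        protocol: str            # e.g. "tcp" or "udp"

    # (action, protocol, destination port); None acts as a wildcard
    RULES = [
        ("allow", "tcp", 443),   # HTTPS in
        ("allow", "tcp", 22),    # SSH in
        ("deny",  None,  None),  # default: drop everything else
    ]

    def filter_packet(pkt):
        for action, proto, port in RULES:
            if proto not in (None, pkt.protocol):
                continue
            if port not in (None, pkt.dst_port):
                continue
            return action == "allow"
        return False

    print(filter_packet(Packet("10.0.0.5", 443, "tcp")))   # True
    print(filter_packet(Packet("10.0.0.5", 23,  "tcp")))   # False (default deny)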

 

Using software is more efficient than having loads of dedicated hardware, and we can't ignore the fact that software, because of its completely different economic and management characteristics, has numerous inherent advantages over hardware. For example, you can copy, change, delete and distribute software, all remotely, without having to leave your seat, and even do so automatically. You'd need some pretty advanced robots to do that with hardware (if it could be done at all today).
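As a small illustration of that manageability gap (my own example, with hypothetical file names), "distributing" a software appliance can be as mundane as copying an image file in a loop, something that is fully scriptable in a way racking physical boxes never will be:

    # Illustrative sketch only: rolling out ten copies of a virtual
    # appliance is just file copying plus a naming convention --
    # scriptable, repeatable, and doable without leaving your seat.
    import shutil
    from pathlib import Path

    GOLDEN_IMAGE = Path("firewall-appliance.qcow2")   # hypothetical "golden" appliance image
    DEPLOY_DIR = Path("deployed")

    def roll_out(copies):
        DEPLOY_DIR.mkdir(exist_ok=True)
        for i in range(copies):
            target = DEPLOY_DIR / f"firewall-{i:02d}.qcow2"
            shutil.copyfile(GOLDEN_IMAGE, target)      # "ship" a new appliance
            print(f"deployed {target}")

    if __name__ == "__main__":
        roll_out(10)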

 

So how do these two trends relate to cloud computing?

By combining the idea of moving things that need to work together closer together (the fabric idea) with the idea of doing so in software instead of hardware (which gives us the economics and manageability of software), we can build higher-performance, lower-cost and easier-to-manage clouds.

 

Virtualization has been on a similar path. First we virtualized servers, then storage and networking, but each remained in its own silo. Now we are virtualizing all of it in the same "fabric." This means that managing the entire stack gets simpler, with one tool to define it, make it work and monitor it. And that's something that should make any IT pro smile.
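Purely as an illustration of what "one tool to define it" could look like (an invented, simplified format, not any particular vendor's product), compute, storage and network become sections of a single declarative description that one piece of management software acts on:

    # Illustrative sketch only: a single declarative definition covering
    # compute, storage and network, consumed by one management tool.
    FABRIC_SPEC = {
        "compute": [{"name": "web", "vcpus": 4, "memory_gb": 16, "count": 3}],
        "storage": [{"name": "web-data", "size_gb": 500, "tier": "ssd"}],
        "network": [{"name": "frontend", "vlan": 10, "firewall": "default-deny"}],
    }

    def apply(spec):
        # A real tool would call out to hypervisors, storage and switches;
        # here we only print what it would provision.
        for layer, resources in spec.items():
            for res in resources:
                print(f"provision {layer}: {res}")

    apply(FABRIC_SPEC)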

 

In my next post, I'll share my thoughts on why I think this approach has the power to change IT as we know it, based on some of my own epiphanies.
