Friday, May 25, 2012

Why IBM Turned Off Siri and Dropbox


http://www.informationweek.com/byte/news/radio/personal-tech/240000962

Why IBM Turned Off Siri (and Dropbox and Lots Of Other Things)

Serdar Yegulalp, BYTE, May 23, 2012 08:35 PM


If there's one company I didn't expect to have massive growing pains with BYOD, it's IBM. Then again, maybe they're more of a poster child for the promise and peril of BYOD than we might have expected.

Their problems with the consumerization of IT are, in the abstract, no different from the issues any other company faces when it starts a BYOD policy: access to potentially disruptive services; a proliferation of unwanted and unauthorized software within the organization; unclear consequences for many actions.
But everything I've heard about IBM's corporate culture tells me it's a place where IBM comes first--their tools, their software, their processes, their systems, their everything. Small wonder they get antsy when people bring in third-party solutions like Dropbox, even if those products and services provide valuable benefits to the business.
This might have worked in decades past, but it's becoming increasingly untenable for companies of IBM's size -- or, for that matter, any size. Heterogeneity is the way IT works now, with BYOD only one part of that picture. Once upon a time, nobody -- IBM least of all -- got fired for buying IBM. But what about now?
So what's behind IBM's sudden reassessment of BYOD?
They're worried about leaks. And rightfully so. One of the major challenges of any BYOD arrangement is how to keep insiders from walking out with the company's intellectual property -- which is the single biggest way corporate espionage continues to be committed. (It isn't hackers, Anonymous notwithstanding.) Shutting off access to Siri was apparently part of this, as they didn't know what happened to the queries once they were made.
But the newest trend in COIT, and a rising one, is professional versions of the same services with management policies built in. Box.com, for instance, has all this and more. I suspect just about every "personal" service launched from now on will come with a "professional" tier--and if it does, it better have disclaimers about what's done with data gathered from both regular and corporate customers.
Their BYOD policy wasn't as well-thought-out as they hoped. Based on what the above-linked article says, it sounds like IBM's BYOD initiative was rolled out with the expectations that end users would know how to deal with their own devices; but they didn't, for the most part, have that knowledge. (Says the article: "'We found a tremendous lack of awareness as to what constitutes a risk,' says Horan. So now, she says, 'we're trying to make people aware.'")
Their expectations were wrong. What you expect to get from BYOD is as important as how you go about implementing it. One telling quote from the piece: "The trend toward employee-owned devices isn't saving IBM any money" (according to IBM's CIO, Jeanette Horan). The problem, as I've seen elsewhere, is how you define savings. Perhaps for them the projected costs of supporting BYOD -- and especially, the cost of setting up retroactive protection measures -- exceed any imagined gains in productivity.
But until they produce some hard numbers to back that up, I'm going to go out on a limb and say the gains provided through BYOD (and everything that goes with it) are more than worth the hassle, if only in terms of employee satisfaction and comfort. Some of those things cannot be quantified easily or conventionally, especially if you're only looking at the current quarter or a season or two ahead.
I'm sure even IBM recognizes it can't keep its finger in the dike forever. COIT is something you either make happen or that happens to you -- and there's only so far they can turn their own clock back before it breaks. But if IBM gets it right, they could serve as one of the better models for others to follow, instead of a classic example of what not to do.


Report: IBM bans Siri and Dropbox internally
By Chen Hsiao-li (compiled) 2012-05-24

MIT Technology Review, citing IBM CIO Jeanette Horan, reports that many popular mobile applications could pose internal security risks, so IBM has drawn up a list of banned mobile apps, including Dropbox as well as Apple's iCloud and Siri.

Banning cloud storage services like Dropbox or iCloud may well be reasonable: Horan says the company worries that employees using public file-sharing services from mobile devices could leak confidential data. Siri was banned out of concern that users' queries might be stored somewhere without their knowledge. Horan concedes that IBM may be overly conservative, but conservatism is in IBM's nature.

Although IBM is itself an advocate of bring-your-own-device (BYOD) policies, it imposes rules on BYOD. Before an employee's device connects to the network, the IT department configures it and enables remote wipe, so confidential information can be erased if the device is lost or stolen. Different rules apply to different device brands and job roles: some employees may only access IBM email, calendars, and contact lists from their own devices, while others may also access internal applications and files -- provided security software is installed on the device to prevent leaks.

BYOD fits the broader consumerization-of-IT trend. A recent Cisco survey found that 95% of enterprises allow employees to use their own devices at work, improving productivity and job satisfaction. Gartner, meanwhile, predicts the trend will push enterprise IT budgets beyond the IT department's direct control, demanding better coordination from IT.

Commentators have noted that beyond worrying about data leaks, IBM also found that employees did not know how to secure their own devices as well as expected. Horan even said that BYOD has not saved IBM any money, contradicting the claim that BYOD lowers enterprise costs, since companies may spend even more to support BYOD and keep it secure. IBM's experience neatly illustrates the challenges facing enterprises embracing BYOD today. (Compiled by Chen Hsiao-li)

Wednesday, May 23, 2012

Zuckerberg



Zuckerberg requires that his day include at least one hour of exercise, one hour of studying Chinese, and six hours of sleep, with the rest of his time devoted to product and technology. At just 28, he can read and write French, Hebrew, Latin, ancient Greek, and Chinese. Despite amassing a fortune in the hundreds of millions at a young age, Zuckerberg has little appetite for material things: he drives no luxury cars and favors a philosophy of simple living.
According to Fortune magazine, Zuckerberg eats only animals he has slaughtered himself, and has shared photos on Facebook of a chicken he killed and then cooked. He says that since he took up this challenge, friends he invites over for dinner have all become reluctant to eat meat.
A chef who lives near Zuckerberg's home said: "He kills goats by cutting their throats -- the most humane method of slaughter." Zuckerberg said: "Because I only eat animals I kill myself, I've basically become a vegetarian. I think a lot of people forget that for you to eat meat, an animal has to die." He hopes to always keep a grateful heart.

Tuesday, May 22, 2012

Intel® Advanced Encryption Standard (AES)


Intel® Advanced Encryption Standard (AES) Instructions Set - Rev 3

January 24, 2010 10:00 PM PST

Introduction

Intel® AES instructions are a new set of instructions available beginning with the 2010 Intel® Core™ processor family based on the 32nm Intel® microarchitecture codenamed Westmere. These instructions enable fast and secure data encryption and decryption using the Advanced Encryption Standard (AES), which is defined by FIPS Publication 197. Since AES is currently the dominant block cipher and is used in various protocols, the new instructions are valuable for a wide range of applications.

The architecture consists of six instructions that offer full hardware support for AES. Four instructions support AES encryption and decryption, and the other two support AES key expansion.

The AES instructions have the flexibility to support all usages of AES, including all standard key lengths, standard modes of operation, and even some nonstandard or future variants. They offer a significant increase in performance compared to the current pure-software implementations.

Beyond improving performance, the AES instructions provide important security benefits. Because they run in data-independent time and use no lookup tables, they help eliminate the major timing and cache-based attacks that threaten table-based software implementations of AES. In addition, they make AES simple to implement, with reduced code size, which helps reduce the risk of inadvertently introducing security flaws such as difficult-to-detect side-channel leaks.

This paper gives an overview of the AES algorithm and Intel's new AES instructions. It provides guidelines and demonstrations for using these instructions to write secure, high-performance AES implementations. This version of the paper also provides a high-performance library for implementing AES in the ECB/CBC/CTR modes, and discloses, for the first time, the measured performance numbers.
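The ECB/CBC/CTR modes mentioned above are constructions built around a block cipher, and CTR in particular benefits from the new instructions because each counter block can be encrypted independently and pipelined. As an illustration of the mode's structure only, here is a minimal Python sketch in which the AES block-encryption step is replaced by a truncated-hash stand-in (an assumption for self-containment, not the paper's library and not real AES):

```python
import hashlib

def ctr_keystream_block(key: bytes, nonce: bytes, counter: int) -> bytes:
    # Stand-in for encrypting one 16-byte block of (nonce || counter).
    # On Westmere this step would be a sequence of AESENC rounds;
    # a truncated SHA-256 merely illustrates the keyed-PRF role it plays.
    block = nonce + counter.to_bytes(8, "big")   # 8-byte nonce + 8-byte counter
    return hashlib.sha256(key + block).digest()[:16]

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # CTR mode: XOR the data against successive keystream blocks.
    # Encryption and decryption are the same operation, and every
    # block's keystream is independent -- hence easy to parallelize.
    out = bytearray()
    for i in range(0, len(data), 16):
        ks = ctr_keystream_block(key, nonce, i // 16)
        chunk = data[i:i + 16]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key, nonce = b"0" * 16, b"fixed-8B"
msg = b"CTR mode turns a block cipher into a stream cipher."
ct = ctr_crypt(key, nonce, msg)
assert ctr_crypt(key, nonce, ct) == msg   # decryption is the same operation
```

Swapping the stand-in for hardware AES changes only `ctr_keystream_block`; the surrounding mode logic is unchanged, which is why the paper can ship one library covering several modes.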

Unified Storage


unified storage (network unified storage or NUS)

Unified storage (sometimes termed network unified storage or NUS) is a storage system that makes it possible to run and manage files and applications from a single device. To this end, a unified storage system consolidates file-based and block-based access in a single storage platform and supports fibre channel SAN, IP-based SAN (iSCSI), and NAS (network attached storage).
A unified storage system simultaneously enables storage of file data and handles the block-based I/O (input/output) of enterprise applications. In actual practice, unified storage is often implemented in a NAS platform that is modified to add block-mode support. For example, Reldata Inc offers the SANnet universal IP storage system and Network Appliance Inc. offers a unified storage architecture. Numerous other products based on Microsoft's WUDSS (Windows Unified Data Storage Server) have been configured to support both block and file I/O.
One advantage of unified storage is reduced hardware requirements. Instead of separate storage platforms, like NAS for file-based storage and a RAID disk array for block-based storage, unified storage combines both modes in a single device. Alternatively, a single device could be deployed for either file or block storage as required.
In addition to lower capital expenditures for the enterprise, unified storage systems can also be simpler to manage than separate products. However, the actual management overhead depends on the full complement of features and functionality provided in the platform. Furthermore, unified storage often limits the level of control in file-based versus block-based I/O, potentially leading to reduced or variable storage performance. For these reasons, mission-critical applications should continue to be deployed on block-based storage systems.
Unified storage systems generally cost the same and enjoy the same level of reliability as dedicated file or block storage systems. Users can also benefit from advanced features such as storage snapshots and replication, although heterogeneous support between different storage platforms should be considered closely. While experts predict a bright outlook for unified storage products, it is likely that dedicated block-based storage systems will remain a popular choice when consistent high performance and fine control granularity are important considerations.
This was last updated in December 2006
Editorial Director: Margaret Rouse

Monday, May 21, 2012


Web 2.0: Article

Citrix Buys Virtual Computer

Citrix intends to combine the acquisition's NxTop widgetry with its XenClient hypervisor

Citrix has acquired Virtual Computer, a little Massachusetts outfit with enterprise-scale management solutions for client-side virtualization.
Citrix intends to combine the acquisition's NxTop widgetry with its XenClient hypervisor to create a new Citrix XenClient Enterprise edition that can manage "large fleets" of corporate laptops across a distributed enterprise and give users a virtual desktop "to go."
It's due this quarter as a standalone product at a reported $175 a user.
Citrix said it's getting the management piece faster by buying it.
Virtual Computer has historically focused on solutions for Xen-based client hypervisors. Its technology includes backup, disaster recovery, provisioning, security and monitoring capabilities. The merger also promises greater integration between XenClient and XenDesktop.


Citrix Announces XenClient Enterprise and Acquisition of Virtual Computer

New Offering Combines Power of XenClient Hypervisor with Enterprise-Class Management of Virtual Computer

San Francisco, CA » 5/9/2012 » Today, at Citrix Synergy™, the conference where mobile workstyles and cloud services meet, Citrix announced the acquisition of Virtual Computer, provider of enterprise-scale management solutions for client-side virtualization. Citrix will combine the newly-acquired Virtual Computer technology with its market-leading XenClient® hypervisor to create the new Citrix XenClient Enterprise edition. The new XenClient Enterprise will combine all the power of the XenClient hypervisor with a rich set of management functionality designed to help enterprise customers manage large fleets of corporate laptops across a distributed enterprise. The combined solution will give corporate laptop users the power of virtual desktops “to go”, while making it far more secure and cost-effective for IT to manage thousands of corporate laptops across today’s increasingly mobile enterprise.
The number of highly mobile workers as a segment of total employees is growing dramatically; IDC expects them to make up nearly 40 percent of the workforce by 2015*. As a result, the number of laptops used by professional workers is exploding. Industry analysts see the growth in mobile devices like tablets and smartphones as complementary to PCs, making it more important than ever to have a holistic, enterprise-wide desktop virtualization strategy that enables anywhere, anytime access to desktops, applications and data from any device. IT will continue to invest in laptops for mobile and office-based workers, and must address the deployment, management and security challenges that go with these devices, while faced with the added demands mobile devices introduce to the enterprise.

CDN content delivery network


A content delivery network (CDN) is a large distributed system of servers deployed in multiple data centers across the Internet. The goal of a CDN is to serve content to end users with high availability and high performance. CDNs serve a large fraction of Internet content today, including web objects (text, graphics, URLs and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social networks.
A CDN operator gets paid by content providers such as media companies and e-commerce vendors for delivering their content to their audience of end users. In turn, a CDN pays ISPs, carriers, and network operators for hosting its servers in their datacenters. Besides better performance and availability, CDNs also offload the traffic served directly from the content provider's origin infrastructure, resulting in cost savings for the content provider.[1] In addition, CDNs provide the content provider a degree of protection from DoS attacks by using their large distributed server infrastructure to absorb the attack traffic. While most early CDNs served content using dedicated servers owned and operated by the CDN, there is a recent trend[2] to use a hybrid model that uses P2P technology. In the hybrid model, content is served using both the dedicated servers and other peer user-owned computers as applicable.
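A core problem behind the description above is request routing: deciding which of the CDN's many servers should serve a given object, in a way that stays stable as servers are added or fail. One common building block for this (an illustrative sketch, not a description of any particular CDN's system) is consistent hashing:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map content URLs to edge servers so that adding or removing a
    server remaps only a small fraction of the content -- the property
    a CDN needs to keep its caches warm as its server fleet changes."""

    def __init__(self, servers, vnodes=100):
        # Each server gets many virtual points on the ring so load
        # spreads evenly; names here are hypothetical.
        self.ring = []  # sorted list of (hash, server)
        for s in servers:
            for v in range(vnodes):
                h = int(hashlib.md5(f"{s}#{v}".encode()).hexdigest(), 16)
                self.ring.append((h, s))
        self.ring.sort()

    def server_for(self, url):
        # Hash the URL, then walk clockwise to the next server point.
        h = int(hashlib.md5(url.encode()).hexdigest(), 16)
        i = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["edge-us", "edge-eu", "edge-asia"])
# The same URL always routes to the same edge server:
assert ring.server_for("/video/clip.mp4") == ring.server_for("/video/clip.mp4")
```

Real CDNs layer geography, load, and DNS-based redirection on top of a placement scheme like this, but the stability-under-change property is the same.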

Friday, May 18, 2012

bring your own device (BYOD) phenomenon


Will BYOD revive the network-access control idea? Gartner thinks it will

First integrated NAC/mobile-device management products announced; Gartner says expect others to follow

By Ellen Messmer, Network World
May 08, 2012 06:04 AM ET
Is the BYOD craze going to bring a revival of NAC, the policy-based network-access control that was hyped a decade ago but didn't end up widely adopted for endpoint security?
Gartner, for one, is predicting the bring your own device (BYOD) phenomenon, in which employees are being allowed to use their own personal Apple iPads, iPhones, Google Android devices and other mobile-ware for business purposes, will lead to a revival of NAC.
NAC, you may recall, was supposed to be widely used for employee and guest worker computer access to enterprise networks, doing things like checking to make sure antivirus or patch updates were in place before allowing users on. Though a respected technology, NAC just didn't catch on in a big way. This time around, though, NAC will be wedded to mobile-device management (MDM) software, and the NAC function will be there to ensure MDM requirements are being met before allowing that Android, iPhone or Windows Mobile device onto the network -- at least that's the idea.
"NAC has been around for almost 10 years," says Gartner analyst Lawrence Orans, who acknowledges the "first wave" of NAC crested with a fairly modest adoption, mainly by financial institutions and some high-security situations, plus a few universities.
But NAC is getting a second chance to go mainstream because of BYOD, and this time it will gain much more ground as a security approach, Orans predicts. "BYOD is an unstoppable trend," he predicts, with businesses in ever greater numbers allowing employees to carry enterprise data on personal tablets.
It seems the software industry may be willing to bet on it, too. The first integrated NAC/MDM was announced today as Fiberlink, which provides MDM via its cloud-based MaaS360 mobile-device management service, detailed how it's partnering with ForeScout with its agentless CounterAct appliance for NAC.
According to Scott Gordon, ForeScout vice president of worldwide marketing, anyone with the Fiberlink MDM will now be able to exert NAC controls for Apple iOS or Google Android devices with a CounterAct add-on module. And ForeScout in turn will soon be selling what it calls "ForeScout MDM powered by MaaS360" under a licensing arrangement with Fiberlink. ForeScout anticipates similar arrangements with other MDM vendors.
There are a lot of MDM vendors today -- London-based consultancy Ovum estimates there are about 70 MDM vendors of varying types angling for attention.
NAC being forged into MDM offers some advantages, says Orans, in terms of allowing IT managers to set policy-based controls on BYOD tablets and smartphones in the enterprise. In the mobile-device context, NAC might check to see if there's BYOD "containerization" in place, for instance, to make sure personal and business data is cordoned off in some way before granting network access.
Fiberlink and ForeScout say their approach for BYOD allows for a policy to isolate personally owned devices in a limited access zone, where they may access a subset of applications and data.
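The policy described above -- check MDM-reported device posture, then assign the device to a full, limited, or quarantine zone -- can be sketched as a small decision function. This is an illustration of the general NAC pattern, not the Fiberlink/ForeScout product; all field and zone names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    # Attributes an MDM agent might report; names are illustrative.
    mdm_enrolled: bool
    os_patched: bool
    containerized: bool    # business data cordoned off from personal data
    corporate_owned: bool

def network_zone(p: DevicePosture) -> str:
    """Return the access zone a NAC policy engine might assign.
    Mirrors the scheme in the article: unmanaged or unpatched devices
    are quarantined for remediation, managed personal devices land in
    a limited zone with a subset of applications and data, and fully
    compliant corporate devices get full access."""
    if not p.mdm_enrolled or not p.os_patched:
        return "quarantine"
    if not p.corporate_owned:
        return "limited"        # the BYOD zone described above
    return "full" if p.containerized else "limited"

byod = DevicePosture(mdm_enrolled=True, os_patched=True,
                     containerized=True, corporate_owned=False)
assert network_zone(byod) == "limited"
```

The point of fusing NAC with MDM is that the inputs to a function like this come from the MDM agent, while the enforcement of the returned zone happens at the network layer.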
Employees may find advantages in the NAC/MDM controls, too, said Fiberlink's Neil Florio, vice president of marketing, because it will allow for the enforcement of privacy settings. "Employees have the fear that management will have the ability to see things on their devices they wish they wouldn't," he noted. But an IT organization can set that, determining not to look into personal data on a BYOD tablet.
Orans says the Fiberlink/ForeScout partnership may be the first to meld NAC and MDM but there are going to be several more to follow in the future.
Ellen Messmer is senior editor at Network World, an IDG publication and website, where she covers news and technology trends related to information security.

Friday, May 04, 2012

HP Converged Cloud and HP Public Cloud: Details Confirmed by Joe Panettieri on 4.10.12


Hewlett-Packard officially launched the HP Converged Cloud and HP Public Cloud strategies today. HP Converged Cloud seeks to offer channel partners and customers a hybrid delivery approach that spans traditional IT, private, managed and public clouds. Plus, the HP Public Cloud is built on OpenStack and will include a MySQL relational database service that competes with Microsoft SQL Azure and other cloud database platforms.
Missing from the HP Converged Cloud and HP Public Cloud announcements: A clear channel partner program message that describes margins and recurring revenue opportunities to cloud integrators, cloud consultants, cloud brokers, VARs and MSPs.
Still, let’s cut HP some slack. Today’s HP Converged Cloud and HP Public Cloud announcements were designed to deliver big-picture statements rather than deeper partner program details.
HP Converged Cloud Details
The HP Converged Cloud effort leverages OpenStack, the open source cloud platform. But this is far more than an OpenStack strategy. The HP Converged Cloud effort focuses on three core themes, according to the company:
Choice – through an open, standards-based approach supporting multiple hypervisors, operating systems and development environments as well as a heterogeneous infrastructure and an extensible partner ecosystem.
Confidence – through a management and security offering that spans information, applications and infrastructure.
Consistency – through a single common architecture.
HP Public Cloud Details
Also announced by HP today:
An HP Public Cloud that will focus on Infrastructure as a Service (IaaS). A public beta opens May 10. Plus, a private beta will allow partners and customers to test MySQL as a relational database service — essentially countering Microsoft SQL Azure and cloud database platforms from Amazon and Rackspace, among others.
To manage hybrid environments, HP has launched HP Cloud Maps, which offers pre-packaged application templates for “push-button” deployment, HP claims.
An HP Service Virtualization 2.0 platform will allow partners and customers to test cloud and mobile applications without disrupting production systems, HP claims.
HP Virtual Application Networks, which HP claims speeds application deployment and automates management.
HP Virtual Network Protection Service provides security at the network virtualization management layer.
HP Network Cloud Optimization Service helps customers to enhance their network to improve cloud-based service delivery.
HP Enterprise Cloud Services will allow customers to outsource cloud management to HP.
Gaining Clarity
The arrival of HP Public Cloud and HP Converged Cloud could help Hewlett-Packard to end confusion about the company’s cloud computing and cloud services strategy.
While still CEO of HP, Leo Apotheker in early 2011 delivered a rambling HP cloud strategy speech at the HP Americas Partner Conference in Las Vegas. Apotheker claimed the HP cloud effort would compete with everything from Apple iTunes to Rackspace. But he offered no details on actual cloud deliverables, and Apotheker ultimately was ousted in September 2011.
Fast forward to the present, and HP has unveiled a massive portfolio of cloud services for partners and customers. But how will the HP Cloud strategy play with partners and customers? Talkin’ Cloud expects to gain some early clues during the OpenStack Design Summit and Conference (April 16-20, San Francisco). HP is expected to speak at the conference, which will also include insights from Dell, IBM, Rackspace and other cloud rivals.

Intel: Scale-out storage will dominate by 2015
By Cheng Yi-ning 2012-05-01

Intel predicts that scale-out storage systems will displace today's scale-up architectures, reaching an 80% share of the global storage-networking market by 2015 and addressing the big-data challenge.
Overall, Intel divides the evolution of enterprise storage, from past to future, into three stages: scale-up architecture, scale-out architecture, and ubiquitous storage services. David Tuhy, general manager of Intel's storage division, says scale-out is currently replacing the incumbent scale-up architecture and will reach 80% global market share by 2015, becoming the mainstream storage architecture.

Storage shifts from scale-up to scale-out, but high prices remain a barrier to adoption
The first stage is the scale-up architecture that is most common today. Under this architecture, an enterprise can expand storage capacity only through the single controller running the storage system, so it is constrained by that controller's hardware specifications. If storage demand grows beyond what one controller can handle, the enterprise must retire its existing equipment and purchase a higher-spec controller. Moreover, each storage system is interconnected only over its own internal network, making it difficult to centrally manage and allocate resources across different storage systems.

Because of these characteristics, scale-up storage often limits an enterprise's flexibility in expansion and management. Intel notes, for example, that scale-up architectures struggle to move data between public- and private-cloud data centers and are ill-suited to rapid expansion; an enterprise that needs to grow to petabyte-scale capacity is forced into continual investment in storage hardware. In addition, IT staff can only manage each storage system in isolation: they cannot centrally control multiple systems from a single platform, and cannot organize them into a storage resource pool for efficient allocation.

Scale-out architecture emerged to overcome these limits, and Intel believes it is now gradually replacing scale-up systems. A scale-out storage system is not bound by the hardware specifications of any single controller node: each controller running the storage software acts as a modular building block of the data center's storage network, interconnected over an external network and managed centrally as one storage system. Most importantly, when the enterprise later needs more capacity, it simply adds more storage controllers to grow the capacity of the same system.

Each controller also offers server-class hardware capable of higher-level management functions -- including hot swapping, compression, thin provisioning, and data deduplication -- letting the enterprise centrally manage the operation of all controller nodes and flexibly expand and allocate storage resources.
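The scale-up versus scale-out distinction described above can be sketched as a toy model: under scale-up, growing capacity means replacing the one controller with a bigger one, while under scale-out, growth means adding another node to the same pooled system. The class and node names below are purely illustrative:

```python
class ScaleOutPool:
    """Toy model of scale-out storage: capacity grows by adding
    controller nodes, and all nodes present one pooled namespace
    managed as a single system."""

    def __init__(self):
        self.nodes = {}  # controller name -> capacity in TB

    def add_node(self, name, capacity_tb):
        # Adding a node is the scale-out expansion path: no existing
        # controller is retired or replaced.
        self.nodes[name] = capacity_tb

    @property
    def total_tb(self):
        # The pooled capacity the enterprise sees as one system.
        return sum(self.nodes.values())

pool = ScaleOutPool()
pool.add_node("ctrl-1", 100)
pool.add_node("ctrl-2", 100)
assert pool.total_tb == 200
# Scale-up would mean swapping ctrl-1 for a bigger controller;
# scale-out just adds ctrl-3 to the same system:
pool.add_node("ctrl-3", 100)
assert pool.total_tb == 300
```

The model leaves out everything that makes scale-out hard in practice -- data placement across nodes, rebalancing, and the external interconnect -- which is precisely where the vendor integration costs discussed below come from.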

However, the high initial purchase cost of scale-out storage has so far kept enterprises from adopting it widely. Kao Chen-wei, server and storage analyst at IDC Taiwan, says scale-out architectures require many controllers equipped with higher-spec processors in order to deliver advanced expansion and management features, which keeps whole-system prices high. In addition, adopting scale-out storage usually requires re-architecting the storage network, and therefore vendors' system-integration services to consolidate existing storage resources -- yet another cost burden.

Kao says that over the past two years, it has mainly been large vendors such as HP and IBM shipping scale-out storage systems, with adoption so far by high-tech manufacturers such as TSMC and MediaTek, as well as research institutions and government agencies. These organizations value scale-out's expansion flexibility: they use it not only for ever-growing structured databases, but also, in the high-tech sector, for large volumes of unstructured customer data and wafer-design data that demand rapid capacity growth. He concedes that smaller storage vendors still lead with traditional scale-up products and will only begin rolling out scale-out products next year, which should push prices down and lower the barrier to adoption.

By 2015, storage systems built on scale-out architecture will deliver ubiquitous storage services, supporting hybrid clouds, automation, and client awareness.

With these three capabilities, enterprise data will be able to move securely across public- and private-cloud infrastructure, with automated management functions including dynamic expansion, search, and recovery of data within the system. Storage systems will also become client-aware: users' many devices will generate and consume large amounts of data -- browsing, taking photos, watching video -- which not only swells data volumes but also demands fast access and transfer. Storage systems must therefore effectively manage how client devices produce and consume data in order to realize ubiquitous storage services. (By Cheng Yi-ning)