Taiwan Tobacco & Liquor Corporation President Lin Tsan-feng (林讚峰): Even the CEO Must Push Himself to Learn IT
Taiwan Tobacco & Liquor Corporation (TTL) has been investing NT$200 million since 2009 to roll out SAP ERP across more than a hundred locations throughout Taiwan. The project was proposed by TTL president Lin Tsan-feng (林讚峰), who believes that "the greatest benefit of adopting ERP at TTL is making sales-side data more transparent and managing the points of sale."

This does not mean TTL ignores production-side management. Lin explains that, compared with the sales side, TTL's production processes are far simpler, and the need for data there is much less urgent. He also has great confidence in the products TTL makes, but "in this day and age, making a good product is not enough; what matters is how you sell it."

For TTL, this mindset is a major shift. It is hard to imagine now, but twenty years ago, when TTL was still the Taiwan Provincial Tobacco and Wine Monopoly Bureau, the director of a winery or cigarette factory had to hold a senior civil-service rank, equivalent to the highest-ranking career official in a local government; the power and influence of the post is plain to see, as is how much weight production units once carried within the organization. After the Monopoly Bureau was corporatized into Taiwan Tobacco & Liquor Corporation, however, things began to change, and the importance of the sales side has grown steadily ever since.

In the CEO's eyes, the ultimate purpose of ERP is to support sales, not production

TTL has a very complex channel system: hypermarkets, retailers, seafood restaurants, and nightclubs all serve as its distribution channels. In the past, TTL maintained sales offices across Taiwan that distributed products to the points of sale; today it has accumulated more than 40,000 resellers.

These numerous resellers used to be managed by the individual sales offices, each with its own data formats, form designs, and workflows, so the data was nearly impossible to consolidate. In other words, real-time information at headquarters was simply out of reach.

After corporatization, TTL's management mindset changed with it. "Adopting ERP is about strengthening our fighting power," Lin says; in his view, the first step is to unify the data, which is what makes real-time response possible.

Lin gives an interesting example. Beer is heavy and low-margin, and for reasons of product-quality accountability, beer brewed at the northern plant may not be sold outside the north; the same rule applies to the central and southern plants. Each regional brewery therefore produces the quantity its own region needs, and TTL has historically estimated each plant's output from past sales data.

Beer demand, however, is strongly tied to temperature. Between the northern and southern ends of Taiwan, winter beer sales can differ by as much as a factor of three, because people north of Taichung drink far less beer in winter; different breweries are therefore busy to very different degrees in winter.

The trouble is that temperatures change from year to year. Estimates based on historical data always carry errors, Lin notes, leaving the factories unable to keep up with actual market sales. If, say, winter in the south is unusually cold one year, producing strictly from past data creates an overproduction problem. "All of this happens because we have had no way to collect up-to-the-minute information from the market and respond on the basis of accurate data."

Lin therefore wants TTL's responsiveness to sales-side data "to match 7-Eleven's: the moment a product sells out, the replenishment arrives, with no time lag." All of this he plans to accomplish with ERP.

Competing with the world's big players through information systems

TTL is not the first state-owned enterprise to adopt ERP, nor, by scale, the largest, but its business is surely the most complex, spanning both production and sales. More importantly, unlike other state-owned enterprises, which typically run monopolies, "TTL not only faces many competitors; its competitors come from all over the world," Lin says.

On competing with the world, Lin is rather rueful. Three of Taiwan's cigarette factories, besides making TTL's own products, also contract-manufacture cigarettes for well-known Japanese, British, and American brands. What astonished him was that the information flows of these three foreign manufacturers resembled one another more closely than TTL's own three factories did. Lin attributes this to the standardized processes the international firms have built on their information systems. "That is why the international tobacco companies can clearly track how each local market responds to cigarette prices anywhere in the world and adjust prices at any time based on sales data. Compared with them, no wonder our market share falls year after year."

For Lin, the falling market share is a fact, but the international players' ability to feed accurate, real-time data into operating decisions through their information systems showed him how vital IT is to running a business, and it has made him pay particular attention to information systems as a manager.

Listening to Lin talk about information systems, you could briefly mistake him for someone promoted from CIO to CEO. In fact, Lin holds a PhD in biochemical engineering from MIT and is an expert on red yeast rice (紅麴); nothing in his education or career involved IT.

"To push others, I must first push myself," Lin says. How hard it is to drive IT-led reform in a state-owned enterprise in the middle of its transformation is difficult for outsiders to imagine. As president, he could take it easy and simply delegate everything, but he positions himself as an "all-round problem solver": whenever a problem arises, he jumps in to solve it. "I always have to jump in and play along with everyone," because a cross-departmental project like this simply cannot move forward unless the general manager gets involved.

It is hard, but he keeps learning, and eagerly. That is the attitude of a CEO who wants to remake a company with information systems. (Text: 辜雅蕾)
Thursday, March 29, 2012
A mindset worth learning from: TTL President Lin Tsan-feng (林讚峰)
Tuesday, March 27, 2012
Internet Small Computer System Interface (iSCSI)
In computing, iSCSI (/aɪˈskʌzi/ eye-SKUZ-ee) is an abbreviation of Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval. The protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a Storage Area Network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally attached disks. Unlike traditional Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure.
Functionality
iSCSI uses TCP/IP (typically TCP ports 860 and 3260). In essence, iSCSI simply allows two hosts to negotiate and then exchange SCSI commands using IP networks. By doing this iSCSI takes a popular high-performance local storage bus and emulates it over wide-area networks, creating a storage area network (SAN). Unlike some SAN protocols, iSCSI requires no dedicated cabling; it can be run over existing switching and IP infrastructure. As a result, iSCSI is often seen as a low-cost alternative to Fibre Channel, which requires dedicated infrastructure except in its FCoE (Fibre Channel over Ethernet) form. However, the performance of an iSCSI SAN deployment can be severely degraded if not operated on a dedicated network or subnet (LAN or VLAN).
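As a deliberately minimal illustration of the point that iSCSI needs nothing but ordinary TCP/IP, the following Python sketch simply opens a TCP connection to a target portal on the IANA-registered port 3260. The address is a placeholder; a real initiator (such as open-iscsi) would go on to perform an iSCSI login and exchange SCSI CDBs inside iSCSI PDUs.

```python
import socket

# Placeholder target portal; 3260 is the IANA-registered iSCSI target port.
TARGET_PORTAL = ("192.0.2.10", 3260)

def portal_reachable(portal, timeout=3.0):
    """Return True if a TCP connection to the iSCSI target portal succeeds."""
    try:
        with socket.create_connection(portal, timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("portal reachable:", portal_reachable(TARGET_PORTAL))
```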
Although iSCSI can communicate with arbitrary types of SCSI devices, system administrators almost always use it to allow server computers (such as database servers) to access disk volumes on storage arrays. iSCSI SANs often have one of two objectives:
- Storage consolidation: Organizations move disparate storage resources from servers around their network to central locations, often in data centers; this allows for more efficiency in the allocation of storage. In a SAN environment, a server can be allocated a new disk volume without any change to hardware or cabling.
- Disaster recovery: Organizations mirror storage resources from one data center to a remote data center, which can serve as a hot standby in the event of a prolonged outage. In particular, iSCSI SANs allow entire disk arrays to be migrated across a WAN with minimal configuration changes, in effect making storage "routable" in the same manner as network traffic.
Network booting
For general data storage on an already-booted computer, any type of generic network interface may be used to access iSCSI devices. However, a generic consumer-grade network interface is not able to boot a diskless computer from a remote iSCSI data source. Instead it is commonplace for a server to load its initial operating system from a TFTP server or local boot device, and then use iSCSI for data storage once booting from the local device has finished.
A separate DHCP server may be configured to assist interfaces equipped with network boot capability in booting over iSCSI. In this case the network interface looks for a DHCP server offering a PXE or bootp boot image. This is used to kick off the iSCSI remote boot process, using the booting network interface's MAC address to direct the computer to the correct iSCSI boot target.
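The DHCP server typically points the booting interface at the target with a root-path string in the format defined by RFC 4173. The small Python helper below, with a purely hypothetical portal address and IQN, sketches how such a string is assembled:

```python
def iscsi_root_path(server, target_iqn, protocol="6", port="3260", lun="0"):
    """Assemble an RFC 4173-style root-path string:
    iscsi:<servername>:<protocol>:<port>:<LUN>:<targetname>
    Protocol 6 is TCP; port 3260 and LUN 0 are the usual defaults.
    """
    return f"iscsi:{server}:{protocol}:{port}:{lun}:{target_iqn}"

# Hypothetical portal address and IQN, for illustration only.
print(iscsi_root_path("192.0.2.10", "iqn.2012-03.com.example:boot-disk"))
# iscsi:192.0.2.10:6:3260:0:iqn.2012-03.com.example:boot-disk
```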
Most Intel Ethernet controllers for servers support iSCSI boot.[1]
Network switch: functions and types of network switches
Network switch
From Wikipedia, the free encyclopedia
A network switch or switching hub is a computer networking device that connects network segments or network devices. The term commonly refers to a multi-port network bridge that processes and routes data at the data link layer (layer 2) of the OSI model. Switches that additionally process data at the network layer (layer 3) and above are often referred to as layer-3 switches or multilayer switches.
A switch is a telecommunication device which receives a message from any device connected to it and then transmits the message only to the device for which the message was meant. This makes the switch a more intelligent device than a hub (which receives a message and then transmits it to all the other devices on its network). The network switch plays an integral part in most modern Ethernet local area networks (LANs). Mid-to-large sized LANs contain a number of linked managed switches. Small office/home office (SOHO) applications typically use a single switch, or an all-purpose converged device such as a residential gateway to access small office/home broadband services such as DSL or cable internet. In most of these cases, the end-user device contains a router and components that interface to the particular physical broadband technology. User devices may also include a telephone interface for VoIP.
Function
An Ethernet switch operates at the data link layer of the OSI model to create a separate collision domain for each switch port. With 4 computers (e.g., A, B, C, and D) on 4 switch ports, A and B can transfer data back and forth, while C and D also do so simultaneously, and the two conversations will not interfere with one another. In the case of a hub, they would all share the bandwidth and run in half duplex, resulting in collisions, which would then necessitate retransmissions. Using a switch is called microsegmentation. This allows computers to have dedicated bandwidth on point-to-point connections to the network and therefore to run in full duplex without collisions.
Role of switches in networks
Switches may operate at one or more layers of the OSI model, including data link and network. A device that operates simultaneously at more than one of these layers is known as a multilayer switch.
In switches intended for commercial use, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, ATM, ITU-T G.hn and 802.11. This connectivity can be at any of the layers mentioned. While layer-2 functionality is adequate for bandwidth-shifting within one technology, interconnecting technologies such as Ethernet and token ring is easier at layer 3.
Devices that interconnect at layer 3 are traditionally called routers, so layer-3 switches can also be regarded as (relatively primitive) routers.
In some service provider and other environments where there is a need for a great deal of analysis of network performance and security, switches may be connected between WAN routers as places for analytic modules. Some vendors provide firewall,[2][3] network intrusion detection,[4] and performance analysis modules that can plug into switch ports. Some of these functions may be on combined modules.[5]
In other cases, the switch is used to create a mirror image of data that can go to an external device. Since most switch port mirroring provides only one mirrored stream, network hubs can be useful for fanning out data to several read-only analyzers, such as intrusion detection systems and packet sniffers.
Layer-specific functionality
Main article: Multilayer switch
While switches may learn about topologies at many layers, and forward at one or more layers, they do tend to have common features. Other than for high-performance applications, modern commercial switches use primarily Ethernet interfaces.
At any layer, a modern switch may implement power over Ethernet (PoE), which avoids the need for attached devices, such as a VoIP phone or wireless access point, to have a separate power supply. Since switches can have redundant power circuits connected to uninterruptible power supplies, the connected device can continue operating even when regular office power fails.
Layer 1 hubs versus higher-layer switches
A network hub, or repeater, is a simple network device. Hubs do not manage any of the traffic that comes through them. Any packet entering a port is broadcast out or "repeated" on every other port, except for the port of entry. Since every packet is repeated on every other port, packet collisions affect the entire network, limiting its capacity.
There are specialized applications where a hub can be useful, such as copying traffic to multiple network sensors. High-end switches have a feature that does the same thing, called port mirroring.
By the early 2000s, there was little price difference between a hub and a low-end switch.[6]
Layer 2
A network bridge, operating at the data link layer, may interconnect a small number of devices in a home or the office. This is a trivial case of bridging, in which the bridge learns the MAC address of each connected device.
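The learning behaviour just described fits in a few lines of code. This is a toy sketch, not a faithful bridge implementation (no ageing of table entries, no VLANs): each source address is remembered against the port it arrived on, and frames to unknown destinations are flooded.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    src: str   # source MAC address
    dst: str   # destination MAC address

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports       # e.g. [1, 2, 3, 4]
        self.mac_table = {}      # learned MAC address -> port

    def handle(self, frame, in_port):
        # Learn: remember which port this source address lives on.
        self.mac_table[frame.src] = in_port
        out = self.mac_table.get(frame.dst)
        if out is None:
            # Unknown destination: flood to every port except the ingress one.
            return [p for p in self.ports if p != in_port]
        if out == in_port:
            return []    # destination is on the ingress segment: filter
        return [out]     # forward to the single learned port

bridge = LearningBridge([1, 2, 3, 4])
print(bridge.handle(Frame("aa:01", "aa:02"), in_port=1))  # flood: [2, 3, 4]
print(bridge.handle(Frame("aa:02", "aa:01"), in_port=2))  # learned: [1]
```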
Single bridges also can provide extremely high performance in specialized applications such as storage area networks.
Classic bridges may also interconnect using a spanning tree protocol that disables links so that the resulting local area network is a tree without loops. In contrast to routers, spanning tree bridges must have topologies with only one active path between two points. The older IEEE 802.1D spanning tree protocol could be quite slow, with forwarding stopping for 30 seconds while the spanning tree would reconverge. A Rapid Spanning Tree Protocol was introduced as IEEE 802.1w, but the newest edition of IEEE 802.1D adopts the 802.1w extensions as the base standard.
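To see what a spanning tree protocol accomplishes, the toy sketch below computes one loop-free set of active links for a small bridged topology, using a breadth-first search from an arbitrary root. Real 802.1D/802.1w instead elects a root bridge by exchanging BPDUs and reconverges on failure; none of that is modelled here.

```python
from collections import deque

# Redundant topology: A-B, B-C, C-A form a loop; C-D is a stub link.
links = {("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")}

def spanning_tree(links, root="A"):
    """Return the set of links kept active by a BFS spanning tree."""
    neighbours = {}
    for u, v in links:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        u = queue.popleft()
        for v in sorted(neighbours[u]):
            if v not in seen:
                seen.add(v)
                active.add(frozenset((u, v)))
                queue.append(v)
    return active

active = spanning_tree(links)
blocked = {frozenset(l) for l in links} - active
print("blocked:", blocked)   # one link of the A-B-C loop is disabled
```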
The IETF is specifying the TRILL protocol, which is the application of link-state routing technology to the layer-2 bridging problem. Devices which implement TRILL, called RBridges, combine the best features of both routers and bridges.
While "layer 2 switch" remains more of a marketing term than a technical term, the products that were introduced as "switches" tended to use microsegmentation and full duplex to prevent collisions among devices connected to Ethernet. By using an internal forwarding plane much faster than any interface, they give the impression of simultaneous paths among multiple devices.
Once a bridge learns the topology through a spanning tree protocol, it forwards data link layer frames using a layer 2 forwarding method. There are four forwarding methods a bridge can use, of which the second through fourth were performance-increasing methods when used on "switch" products with the same input and output port bandwidths (a toy comparison of the four follows the list):
- Store and forward: The switch buffers and verifies each frame before forwarding it.
- Cut through: The switch reads only up to the frame's hardware address before starting to forward it. Cut-through switches have to fall back to store and forward if the outgoing port is busy at the time the packet arrives. There is no error checking with this method.
- Fragment free: A method that attempts to retain the benefits of both store and forward and cut through. Fragment free checks the first 64 bytes of the frame, where addressing information is stored. According to Ethernet specifications, collisions should be detected during the first 64 bytes of the frame, so frames that are in error because of a collision will not be forwarded. This way the frame will always reach its intended destination. Error checking of the actual data in the packet is left for the end device.
- Adaptive switching: A method of automatically selecting between the other three modes.
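The sketch below contrasts the four methods by how much of an Ethernet frame each examines before forwarding begins, and includes a toy adaptive policy that falls back to store and forward when the observed CRC error rate rises. The error threshold is an arbitrary illustrative value.

```python
DEST_MAC_BYTES = 6      # cut-through forwards after the destination address
COLLISION_WINDOW = 64   # fragment-free waits out the Ethernet collision window

def bytes_inspected(method, frame_len):
    """How much of a frame each method reads before forwarding starts."""
    if method == "store and forward":
        return frame_len                 # whole frame buffered and verified
    if method == "cut through":
        return DEST_MAC_BYTES
    if method == "fragment free":
        return COLLISION_WINDOW
    raise ValueError(method)

class AdaptiveSwitch:
    """Adaptive switching: run cut-through until CRC errors exceed a
    threshold, then fall back to store and forward."""
    def __init__(self, error_threshold=0.01):
        self.frames = 0
        self.errors = 0
        self.threshold = error_threshold

    def record(self, had_crc_error):
        self.frames += 1
        self.errors += int(had_crc_error)

    def method(self):
        if self.frames and self.errors / self.frames > self.threshold:
            return "store and forward"
        return "cut through"

sw = AdaptiveSwitch()
for _ in range(100):
    sw.record(had_crc_error=False)
print(sw.method(), bytes_inspected(sw.method(), frame_len=1518))
```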
While there are specialized applications, such as storage area networks, where the input and output interfaces are the same bandwidth, this is not always the case in general LAN applications. In LANs, a switch used for end-user access typically concentrates lower-bandwidth access ports into a higher-bandwidth uplink.
Layer 3
Within the confines of the Ethernet physical layer, a layer-3 switch can perform some or all of the functions normally performed by a router. The most common layer-3 capability is awareness of IP multicast through IGMP snooping. With this awareness, a layer-3 switch can increase efficiency by delivering the traffic of a multicast group only to ports where the attached device has signaled that it wants to listen to that group.
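A hedged sketch of the snooping table this describes: IGMP membership reports add a port to a group, and multicast frames are then delivered only to the member ports. Real snooping also handles queriers, membership timeouts, and router ports, all omitted here.

```python
from collections import defaultdict

class IgmpSnooper:
    def __init__(self):
        self.members = defaultdict(set)   # multicast group -> member ports

    def on_report(self, group, port):     # host on `port` joined `group`
        self.members[group].add(port)

    def on_leave(self, group, port):      # host on `port` left `group`
        self.members[group].discard(port)

    def egress_ports(self, group):
        """Ports that traffic addressed to `group` should be delivered to."""
        return self.members.get(group, set())

snooper = IgmpSnooper()
snooper.on_report("239.1.1.1", port=3)
snooper.on_report("239.1.1.1", port=7)
snooper.on_leave("239.1.1.1", port=3)
print(snooper.egress_ports("239.1.1.1"))   # {7}
```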
Layer 4
While the exact meaning of the term layer-4 switch is vendor-dependent, it almost always starts with a capability for network address translation, but then adds some type of load distribution based on TCP sessions.[7]
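As an illustration of session-based distribution, the sketch below pins each TCP session (identified here simply by the client address and port) to one back-end server; a real layer-4 switch would also rewrite addresses (NAT) on every packet of the session. The server addresses are placeholders.

```python
import hashlib

# Placeholder back-end servers.
SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
sessions = {}   # (client_ip, client_port) -> chosen server

def server_for(client_ip, client_port):
    """Pin every packet of one TCP session to the same back-end server."""
    key = (client_ip, client_port)
    if key not in sessions:   # first packet of a new session
        digest = hashlib.sha256(f"{client_ip}:{client_port}".encode()).digest()
        sessions[key] = SERVERS[digest[0] % len(SERVERS)]
    # A real layer-4 switch would now NAT the packet's destination address
    # to the chosen server, and the reply's source address back again.
    return sessions[key]

print(server_for("203.0.113.9", 51324))
print(server_for("203.0.113.9", 51324))   # same session -> same server
```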
Layer 7
Layer-7 switches may distribute loads based on the Uniform Resource Locator (URL) or by some installation-specific technique to recognize application-level transactions. A layer-7 switch may include a web cache and participate in a content delivery network.[8]
Layer 7 Switching & Load Balancing
posted on Tuesday, August 12, 2008 4:44 AM
Modern load balancers (application delivery controllers) blend traditional load-balancing capabilities with advanced, application aware layer 7 switching to support the design of a highly scalable, optimized application delivery network. Here's the difference between the two technologies, and the benefits of combining the two into a single application delivery controller.
LOAD BALANCING
Load balancing is the process of balancing load (application requests) across a number of servers. The load balancer presents to the outside world a "virtual server" that accepts requests on behalf of a pool (also called a cluster or farm) of servers and distributes those requests across all servers based on a load-balancing algorithm. All servers in the pool must contain the same content.
Load balancers generally use one of several industry-standard algorithms to distribute requests. Some of the most common standard load balancing algorithms, sketched in code after the list, are:
- round-robin
- weighted round-robin
- least connections
- weighted least connections
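Here are the promised sketches of the four algorithms, in Python. The server names and weights are placeholders, and real load balancers layer health checks and connection tracking on top of this.

```python
import itertools

servers = ["web1", "web2", "web3"]            # placeholder server names
weights = {"web1": 3, "web2": 1, "web3": 1}   # web1 takes 3 of every 5 requests
active = {s: 0 for s in servers}              # currently open connections

# round-robin: hand out servers in a fixed rotation
_rr = itertools.cycle(servers)
def round_robin():
    return next(_rr)

# weighted round-robin: repeat each server in proportion to its weight
_wrr = itertools.cycle([s for s, w in weights.items() for _ in range(w)])
def weighted_round_robin():
    return next(_wrr)

# least connections: pick the server with the fewest open connections
def least_connections():
    server = min(active, key=active.get)
    active[server] += 1   # caller must decrement when the connection closes
    return server

# weighted least connections: fewest connections relative to capacity
def weighted_least_connections():
    server = min(active, key=lambda s: active[s] / weights[s])
    active[server] += 1
    return server

print([round_robin() for _ in range(4)])           # web1 web2 web3 web1
print([weighted_round_robin() for _ in range(5)])  # web1 web1 web1 web2 web3
```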
Load balancers are used to increase the capacity of a web site or application, to ensure availability through failover capabilities, and to improve application performance.
LAYER 7 SWITCHING
Layer 7 switching takes its name from the OSI model, indicating that the device switches requests based on layer 7 (application) data. Layer 7 switching is also known as "request switching", "application switching", and "content based routing".
A layer 7 switch presents to the outside world a "virtual server" that accepts requests on behalf of a number of servers and distributes those requests based on policies that use application data to determine which server should service which request. This allows for the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one server can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.
Unlike load balancing, layer 7 switching does not require that all servers in the pool (farm/cluster) have the same content. In fact, layer 7 switching expects that servers will have different content, thus the need to more deeply inspect requests before determining where they should be directed. Layer 7 switches are capable of directing requests based on URI, host, HTTP headers, and anything in the application message.
The latter capability is what gives layer 7 switches the ability to perform content based routing for ESBs and XML/SOAP services.
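A minimal sketch of such policy-based request switching: the pool is chosen from the request's URI path, with a Host-header rule thrown in to show header inspection. The pool names and rules are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical pools of tuned servers.
POOLS = {
    "images":  ["img1", "img2"],   # tuned to serve images
    "scripts": ["app1", "app2"],   # PHP/ASP execution
    "static":  ["web1"],           # HTML, CSS, JavaScript
}

def choose_pool(url, headers):
    """Pick a server pool from the request's URI and headers."""
    path = urlparse(url).path.lower()
    if headers.get("Host", "").startswith("api."):
        return "scripts"                             # header-based rule
    if path.endswith((".jpg", ".jpeg", ".gif", ".png")):
        return "images"
    if path.endswith((".php", ".asp")):
        return "scripts"
    return "static"

print(choose_pool("http://example.com/logo.gif", {"Host": "example.com"}))
# -> images
```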
LAYER 7 LOAD BALANCING
By combining load balancing with layer 7 switching, we arrive at layer 7 load balancing, a core capability of all modern load balancers (a.k.a. application delivery controllers).
Layer 7 load balancing combines the standard features of load balancing with layer 7 switching to provide failover and improved capacity for specific types of content. This allows the architect to design an application delivery network that is highly optimized to serve specific types of content but is also highly available.
Layer 7 load balancing allows for additional features offered by application delivery controllers to be applied based on content type, which further improves performance by executing only those policies that are applicable to the content. For example, data security in the form of data scrubbing is likely not necessary on JPG or GIF images, so it need only be applied to HTML and PHP.
Layer 7 load balancing also allows for increased efficiency of the application infrastructure. For example, only two highly tuned image servers may be required to meet application performance and user concurrency needs, while three or four optimized servers may be necessary to meet the same requirements for PHP or ASP scripting services. Being able to separate out content based on type, URI, or data allows for better allocation of physical resources in the application infrastructure.
How Does Layer 7 Load Balancing Work?
This post explains what is meant by layer 7 load balancing; an example of content switching using HTTP URL parsing is given later in the post. In the OSI model, layer 7 is the application layer, and a number of application protocols are used at this layer:
- Hypertext Transfer Protocol (HTTP) for web pages
- File Transfer Protocol (FTP) for file transfer
- Real Time Streaming Protocol (RTSP) for streaming media such as video
A load balancing switch can be used at layer 7 to load balance in a number of different ways. These include:
- HTTP header inspection
- HTTP URL parsing
- RTSP parsing
One of the main reasons layer 7 load balancing is used is that a company will want to ensure that different web applications run on servers that are configured and set up to maximise the efficiency of those servers, and also to minimise the cost of acquiring and running them.
Suppose that we are a web hosting company like Hostgator or Bluehost and we need to host many different types of website. These would include websites with just static pages, online shops with backend transaction processing, and even websites like YouTube that are dedicated to providing video streaming to the world. If we did not load balance the incoming requests from the web based on content type (or application type), then we would have to make sure that all of the servers we use were of the same technical specification and capable of handling our most complex applications. This would be expensive, and our financial director would have constant headaches, while us techies could gloat to our friends about all the massive servers we had. Our friends in turn would have data center envy :-). They would then give their financial director a headache from the constant requests to get bigger servers so they could gloat at us - and so on.
In reality what we would do is something like the following. First we would decide that we needed three different types of server, one for each application type:
- a low spec server configured to run static websites
- a higher spec server configured to run the online shops
- an even higher spec server configured for video streaming
Then we would place a content switch in front of the three different types of servers/applications. And we would configure the content switch to send the different types of request to the different types of servers. Easy really!
HTTP URL Parsing
Let's say that on our smaller servers we only run static websites that consist mainly of HTML web pages and GIF or JPEG images. On our medium-sized servers we run complex shopping sites that use application servers and database servers and so on. These are more dynamic websites and need to handle a high volume of transactions day and night (24/7).
Our content switch will need to inspect incoming requests and make sure to send them to the right set of servers. It can do this by looking at the file extensions in the URL. Extensions like .html, .gif, and .jpeg can go to the static website servers, and the more complex ones (such as ASP or Java) can go to the dynamic servers. A minimal sketch of this dispatch follows.
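This Python sketch uses placeholder pool addresses; the extension sets mirror the static/dynamic split described above.

```python
import os
from urllib.parse import urlparse

# Placeholder pools: low-spec static servers, higher-spec dynamic servers.
STATIC_POOL  = ["static1.example.internal", "static2.example.internal"]
DYNAMIC_POOL = ["shop1.example.internal", "shop2.example.internal"]

STATIC_EXT  = {".html", ".htm", ".gif", ".jpg", ".jpeg", ".css"}
DYNAMIC_EXT = {".asp", ".aspx", ".jsp", ".php"}

def pool_for(url):
    """Route by file extension parsed out of the request URL."""
    ext = os.path.splitext(urlparse(url).path)[1].lower()
    if ext in DYNAMIC_EXT:
        return DYNAMIC_POOL
    return STATIC_POOL   # default: treat unknown extensions as static

print(pool_for("http://shop.example.com/cart.asp"))    # dynamic pool
print(pool_for("http://www.example.com/index.html"))   # static pool
```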