The Power of IT: Survival Guide for the CIO




Best Practices




Automate your asset management



Automated asset management consists of electronically supported procurement, automated inventory, and a centralized data repository available to financial, administrative, and technical planners, system administrators, and the service desk. The data managed within the asset management system includes contract terms, hardware inventory, software inventory, accounting, maintenance records, change history, support history, and other technical and financial information.
A hardware inventory is a listing of all IT hardware in an organization, covering all network components such as clients, servers, peripherals, and network devices. For each discrete component (system units, add-in cards, peripherals, and monitors) it records the relevant identifications (serial numbers, asset tags, bar codes, etc.). The financial, physical, and technical history of each asset should also be maintained.
A software inventory is an up-to-date listing that contains detailed information about the client, network and server software installed within an organization. Inventory information should be stored in a central repository.
For hardware, these inventories should include acquisition records; moves, adds, and changes; service records; contract information; depreciation history; lease terms; and usage. For software, they should include license terms and conditions, date of acquisition, user name and location, installation details, maintenance agreements, and usage monitoring. Inventories should be updated and maintained automatically and should include history and other relevant data. This information is used for technical planning, support, and financial management.
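To make the idea concrete, the sketch below shows the kind of record such a central repository might hold. It is only an illustration: the class names, fields, and example values are assumptions, not a reference to any particular asset management product.

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class HardwareAsset:
    serial_number: str                      # identification of the discrete component
    asset_tag: str                          # internal tag or bar code
    model: str
    location: str
    user: str                               # user name or department
    acquisition_date: date
    lease_terms: str = ""                   # contract and lease information
    maintenance_records: List[str] = field(default_factory=list)
    change_history: List[str] = field(default_factory=list)   # moves, adds, changes

@dataclass
class SoftwareAsset:
    product: str
    version: str
    license_terms: str
    installed_on: str                       # asset tag of the host machine
    maintenance_agreement: str = ""

# The central repository simply keeps both inventories in one place.
repository = {
    "hardware": [HardwareAsset("SN-0001", "TAG-0042", "Example desktop", "HQ, 2nd floor",
                               "Finance", date(2003, 1, 15))],
    "software": [SoftwareAsset("Example office suite", "11.0", "per-seat licence", "TAG-0042")],
}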

Automate your systems management

Automated systems management proactively and reactively notifies system operators of failures, capacity issues, traffic issues, virus attacks, and other transient events. The tools monitor system status and performance indicators against thresholds, notify users, and dispatch trouble tickets. The result is better system performance, quicker resolution of problems, and fewer failures. The automated system should be fully integrated with processes and policies that provide manual intervention when needed, support remote and mobile users, and include policies for file/disk sharing and downloads.
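As a minimal sketch of this kind of threshold monitoring, the fragment below checks one capacity indicator (disk usage) and dispatches a trouble ticket when a threshold is exceeded. The threshold value and the ticket function are assumptions for illustration; a real tool would feed many more indicators into the service desk.

import shutil

DISK_USAGE_THRESHOLD = 0.90        # assumed threshold: warn when a filesystem is 90% full

def dispatch_trouble_ticket(summary: str) -> None:
    # Placeholder: a real system would create a ticket in the service desk tool
    # and notify the operator on duty.
    print(f"[TICKET] {summary}")

def check_disk_capacity(path: str = "/") -> None:
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction > DISK_USAGE_THRESHOLD:
        dispatch_trouble_ticket(f"Capacity issue: {path} is {used_fraction:.0%} full")

check_disk_capacity("/")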



Avoid Single Points of Failure



Redundancy is the provision of multiple interchangeable components that perform a single function, in order to cope with failures and errors. The idea is to avoid what are called Single Points of Failure (SPOFs). The proverbial chain is only as strong as its weakest link, and this is true for IT as well. Often, the failure of one single element can cause the non-availability of a complete system. If that element cannot be replaced very rapidly, it is a SPOF.
Redundancy can be achieved by installing two or even three computers to do the same job. There are several ways to do this: all of them can be active all the time, giving extra performance as well as extra availability (clustering); one can be active while the other simply monitors its activity, ready to take over if it fails (hot standby); or the spare can be kept turned off and only switched on when needed (cold standby). Redundancy also often allows repairs to be carried out with no loss of operational continuity (hot-swappable devices).
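The hot-standby variant can be illustrated with a small sketch: the spare watches the active node and takes over once several consecutive heartbeats fail. The address, interval, and take_over action below are assumptions for illustration only.

import socket
import time

ACTIVE_NODE = ("active.example.internal", 8080)   # hypothetical address of the active node
HEARTBEAT_INTERVAL = 5                            # seconds between checks
MISSED_BEFORE_FAILOVER = 3                        # tolerate transient glitches

def active_is_alive() -> bool:
    try:
        with socket.create_connection(ACTIVE_NODE, timeout=2):
            return True
    except OSError:
        return False

def take_over() -> None:
    # Placeholder: claim the service address, start the applications, alert the operators.
    print("Active node unreachable - standby taking over")

def standby_loop() -> None:
    missed = 0
    while True:
        missed = 0 if active_is_alive() else missed + 1
        if missed >= MISSED_BEFORE_FAILOVER:
            take_over()
            break
        time.sleep(HEARTBEAT_INTERVAL)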

Another common form of hardware redundancy is disk mirroring: writing duplicate data to more than one hard disk to protect against loss of data in the event of a device failure. This technique can be realized either in hardware (sharing a disk controller) or in software, and it is a common feature of RAID systems. When the technique is used with magnetic tape storage systems, it is called twinning.
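In its simplest software form, mirroring just means that every write is applied to two devices, so that either copy survives the failure of the other. The sketch below uses two ordinary files standing in for two disks; it illustrates the principle only, not any particular RAID implementation.

def mirrored_write(data: bytes, primary: str, mirror: str) -> None:
    # Apply the same write to both "disks" so either copy can serve a restore.
    for path in (primary, mirror):
        with open(path, "ab") as disk:
            disk.write(data)

mirrored_write(b"journal entry\n", "disk0.img", "disk1.img")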


The Internet has meant that more and more businesses are online 24 hours a day, seven days a week. This means that they can no longer afford to have their IT systems down because of failures or scheduled maintenance. The market has discovered this need, and almost every vendor now has High Availability (HA) products in its offering. These systems use redundancy at the component level, usually combined with a software solution, to guarantee 99.999% (“five nines”) availability; in practical terms this means a downtime of about five minutes per year!
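The arithmetic behind the five-nines figure is easy to check:

availability = 0.99999
minutes_per_year = 365.25 * 24 * 60
downtime_minutes = (1 - availability) * minutes_per_year
print(f"Allowed downtime: {downtime_minutes:.1f} minutes per year")   # about 5.3 minutes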
Many people stop thinking about redundancy at this point. However, if the main system and its backup are placed side by side, a power outage or a fire can affect both. Consequently, an entire building, a site, or even a town (earthquakes or other disasters) can be a SPOF! Such major disasters can be handled by locating the backup node(s) in a different building, at a certain distance from the main site.
Remember that disaster recovery depends on people as well. September 11 taught us that having good and geographically separated backup facilities is not sufficient. Qualified people must be able to get to the backup facility to manage fail-over operations. Disaster scenarios must be tested periodically, and backup data must be verified as correct and complete.

Centralise your client management



Client image control is the ability to create a client-specific configuration of applications, settings, and privileges on a server. These images can then be automatically downloaded to a specific address or set of addresses on the network, thereby configuring the clients initially and ultimately standardizing the maintenance of configurations. A client agent is used to synchronize the server and client images for change management.
Automated software distribution is the ability to install software on a client device without having to visit it physically. When this is integrated into a comprehensive change management process, it can significantly reduce the time and cost of software changes and enable a more frequent change cycle. It can be used to upgrade applications, operating systems, and utility software, as well as to distribute patches that troubleshoot software problems and to update data files.
Unattended power-up is the ability for a client computer to be powered up remotely over the network (from a powered-off or sleep state). This feature allows network and systems management tasks to be executed regardless of whether a system is powered on or off.
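A common mechanism for this is Wake-on-LAN: the management station broadcasts a "magic packet" consisting of six 0xFF bytes followed by the target's MAC address repeated sixteen times, and the network card powers the machine up. A minimal sketch, assuming a hypothetical client MAC address, is shown below.

import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    magic_packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet, (broadcast, port))

wake_on_lan("00:11:22:33:44:55")   # hypothetical MAC address of a client to wake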
Client hardware event management is the ability of networked components to identify abnormal operating and performance conditions and communicate them to a systems manager or operator who can respond with corrective measures. When integrated into management systems and support practices, this feature enables the support team to be proactive in service calls, minimize unplanned downtime, and prevent cascade effects of hardware failures, while also providing valuable trend information for service providers.

Consolidate and rationalize continuously

The idea is – of course – to benefit from economy of scale and reduced complexity through standardization. Some points to consider are:




  • Consolidation of computing power (bigger servers, clusters);

  • Consolidation of storage capacity (SAN and NAS);

  • Consolidation of networks (LAN and WAN);

  • Consolidation of geographically distributed data centers;

  • Standardization of desktops and other end-user equipment.

In this context it could be a good idea to consider deploying a thin client infrastructure. For many organizations this could be the solution with the lowest Total Cost of Ownership (TCO), not only because the hardware is cheaper but also because of:




  • Extended Lifetime - The currently accepted lifetime of a PC is two years, although it is often depreciated over three years. Thin clients can be used until they die, which can be longer than for a PC because they have no moving parts;

  • Lower Bandwidth requirements - The heaviest bandwidth user is a worker who performs the same tasks repetitively, and in a thin client configuration even these users typically need only about 20 kbit/s. This reduces the need for a high-speed LAN and enables teleworking over dial-up connections. With thin clients, bandwidth requirements for backup are also reduced, as data is stored centrally (a rough sizing sketch follows after this list);

  • Lower Power consumption - The power consumption of a thin client is typically 14% of that of a PC, which leads to a significant saving in operating costs;

  • Simpler Licensing - With applications installed centrally, licensing is simpler to manage: access can be granted from the server on a "need to have" basis, without creating packages, visiting local workstations, or installing licenses "just in case";

  • Better Backup - Having all data centrally creates an economy-of-scale advantage by allowing more efficient backup media to be used;

  • Physical security - PCs are prime targets for theft; thin clients are not. Of course, the servers themselves have to be protected adequately;

  • Better Support – Higher quality and consistency of deployment, repair and replacement.
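As a rough illustration of how the bandwidth and power figures above translate into numbers for a site, the sketch below sizes the terminal traffic and the annual energy saving for a hypothetical population of users. The user count, PC wattage, usage hours, and electricity price are assumptions chosen only to show the calculation.

users = 200                           # assumed number of thin-client users
per_user_kbit_s = 20                  # bandwidth figure quoted in the list above
pc_watts = 100                        # assumed average PC power draw
thin_client_watts = 0.14 * pc_watts   # 14% figure quoted in the list above
hours_per_year = 8 * 220              # assumed office hours per year
price_per_kwh = 0.15                  # assumed electricity price, $/kWh

lan_mbit_s = users * per_user_kbit_s / 1000
energy_saving_kwh = users * (pc_watts - thin_client_watts) * hours_per_year / 1000
print(f"Aggregate terminal traffic: about {lan_mbit_s:.1f} Mbit/s")
print(f"Annual energy saving: about {energy_saving_kwh:,.0f} kWh "
      f"(roughly ${energy_saving_kwh * price_per_kwh:,.0f})")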



Keep control over your architecture

Because the infrastructure has an influence on the strategic capabilities of the organization, outsourcing control over its architecture is comparable to outsourcing a core business activity. We will see later in this book why this is a bad idea.



Keep your infrastructure scalable



Scalability is the ability to increase performance and capacity, logically and physically, to meet reasonable growth and change over time. A scalable infrastructure enables the rollout of homogeneous platforms across users and departments with different processing requirements, while providing technical staff with a common platform to support.
To reduce costs, many infrastructure planners design the infrastructure to satisfy minimum short-term business requirements. However, this strategy can negatively impact the organization's long-term ability to respond to change. Infrastructure design should therefore take the future into account, encompassing not only the initial costs of a solution but also the cost of change.
Computing Hardware – The truth is that there are no clear-cut engineering rules to determine the needs, and consequently the level of investment required, in computing hardware. One can only observe a posteriori whether the processor power of a server is sufficient to support several users with a given application. Indeed, the MIPS (millions of instructions per second) or megaflops (millions of floating-point operations per second) ratings of a processor are theoretical measures, and it is difficult to project these figures onto your daily working environment. Because of this, scalability, which factors out these uncertainties, is particularly important when choosing new computing hardware.
Storage - The required amount of storage of course depends on the applications you run. Audio, video, and graphics applications produce particularly high volumes of data. In addition, the operating system and application programs themselves require a lot of disk space. As a result, a hard disk that originally seemed large can quickly fill up. The conclusion is that storage should be planned generously, since more space is usually required than originally calculated. One should also consider that performance suffers when the capacity limits are reached. The constant decrease in hard disk prices, however, is a reason for not purchasing much more than is initially needed: buying a smaller disk with little reserve capacity and later upgrading to a larger one can prove more economical than investing in a larger disk from the start, but only if the time between the initial purchase and the upgrade is long enough.
If your storage needs are large enough, a Storage Area Network could be a good option.
Network - A good network design must answer both the LAN and WAN needs of its users. I challenge anybody to make an accurate calculation of the number of megabits per second required for a given operational environment. The powerful network analysis tools available on the market today can only give you an image of the past, and it is rules of thumb that are applied to decide whether a network has enough bandwidth or not.
Many LAN applications depend on lots of surplus bandwidth. This is especially true for Ethernet, which begins to show performance degradation once about 20% of the theoretical capacity is exceeded.
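A rule-of-thumb check of this kind can be written down in a few lines. The link capacity, the measured traffic figure, and the use of the 20% threshold quoted above are assumptions for the sake of illustration.

LINK_CAPACITY_MBIT = 100           # e.g. a Fast Ethernet segment
RULE_OF_THUMB_UTILISATION = 0.20   # saturation point quoted above for shared Ethernet

def needs_more_bandwidth(measured_mbit: float) -> bool:
    # Compare the observed load with a fraction of the theoretical capacity.
    return measured_mbit > RULE_OF_THUMB_UTILISATION * LINK_CAPACITY_MBIT

print(needs_more_bandwidth(28.0))   # True: 28 Mbit/s exceeds the 20 Mbit/s threshold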
WANs tend to operate with tight bandwidth margins. Probably the best advice here is the same - plan for expansion, but in a different way: plan so that you can upgrade your WAN service without changing your LAN configuration. Dial-up SLIP or PPP is fine for one or two computers; once you have half a dozen computers in use, shift to a router configuration, even if it is still using PPP. Static IP address assignment is probably the best option, as are well-maintained inverse (reverse DNS) name server entries.
Cabling - The biggest start-up cost for a network is the labour needed to install it. So do not just install two-pair cable; install eight-pair and leave six pairs unused. Do not just install one Ethernet cable; install two or three, and maybe run some fibre alongside them. Seriously consider a Structured Cabling System (SCS); the initial investment will rapidly pay off. The term Structured Cabling System refers to all the cabling and components installed in a logical, hierarchical way. It is designed to be as independent as possible of the network that uses it, so that the network can be modified with a minimum of rework to the cabling.
The advantages of SCSs are:


  • Consistency - An SCS means the same cabling system for everything: telephony, LAN, etc.;

  • Support for multi-vendor equipment - A standards-based cable plant will support your applications and hardware even after you change or mix & match vendors;

  • Simplify moves/adds/changes - Need to move a connection from one room to another? Add a modem connection to the office? Add a two-line phone, modem, and fax? Share a file server or colour printer? An SCS should be designed to cope with all of this without any problem;

  • Simplify troubleshooting - Even cables that were installed correctly can fail. With a structured wiring system, problems are less likely to bring down the entire network, and they are easier to isolate and easier to fix;

  • Support for future applications - If you install Category 6, your cable plant will support future applications like multimedia, video conferencing, and who knows what else, with little or no upgrade effort.



Printing also deserves a strategy

Few terms are used as frequently in the IT domain as Total Cost of Ownership (TCO) and Return on Investment (ROI); many IT strategies are actually based on reducing TCO or improving ROI by all possible means. However, few IT strategies take into account the possibilities for optimising TCO and ROI in the area of the physical production of documents.


According to IDC, up to ten percent of an organization's turnover can go to the production, management, and distribution of documents. In some organizations, printing alone accounts for three percent of revenues, says Gartner, and the total cost of printing is between ten and twenty percent of the IT budget. On top of that, about half of all help desk calls have to do with printing. At the same time, other studies indicate that 70 percent of organizations have no idea what they spend on document management, while paper output is still growing by about twenty percent every year. Even if these figures are exaggerated, they are significant enough to warrant a closer look.
It may be an overstatement to say that no attention at all is paid to the cost of printers and consumables, but the TCO analysis often stops at the number of printers and toner cartridges. This is not surprising when you consider the price of these consumables; the Financial Times calculated that toner costs about seven times more per litre than the finest champagne, and it is nowhere near as good to drink!
Many suppliers are convinced that the best way to save money on printing is to invest in the newest technologies, which allow for centralized management. Having the printers send their usage and maintenance information directly to the supporting organization can dramatically reduce downtime and the number of help desk calls, and it gives detailed insight into how a printer is used and hence into the possibilities for improvement.

It should be clear by now that a good printing strategy is based on more than the Cost per Page (CPP), which only takes into account the purchase and maintenance costs of printers and accessories. A simple example is the extreme centralization of printing devices. Of course, it is possible to save money by centralizing all print jobs, but too much centralization reduces the efficiency of the employees, so you have to take into account the distance between the users and their printers.


On the other hand, one has to decide which types of documents really have to be produced on paper. Maybe it is more efficient to scan incoming documents, such as invoices, and to treat them electronically further down the chain. A good archiving infrastructure will also help users get rid of the habit of printing every document for filing purposes only.

Reconsider your storage strategy

While backup is designed for recovery of active data and systems, the process of archiving is meant to preserve semi-active or inactive information in order to mitigate the risk of litigation or to ensure regulatory compliance. The access requirements for archived information can span years or even decades.


An interesting fact is that most of the data on your systems is probably "inactive", meaning that it is not updated anymore and that the access frequency is very low. Keeping these data on fast, high-end disks can be very expensive in many respects.
Imagine you have 1 TB on high-end storage. At a price of $0.20 per MB this costs you some $200,000. Migrating the 80% of "inactive" data to low-end disk (@ $0.02 per MB) saves you about $144,000. Moving these data to tape (@ $0.005 per MB) saves you about $156,000.
Making backups also costs in terms of media. A 1 TB full backup to tape @ $0.005 per MB costs $5,000 in tapes. Suppose you make a full backup every two weeks and keep the tapes for one year; you will then have an annual tape cost of $130,000. This can be reduced to $26,000 by migrating the 80% of "inactive" data in our example.
Migrating "inactive" data to off-line or near-line storage can also be a solution for your backup window. If a full backup would take you 10 hours you can reduce this to 2 hours by migrating 80% of your unused data. By the same occasion, the time it takes to restore in case of a catastrophe will be reduced proportionally.
It is clear that these considerations can give you a strong business case when planning to invest in storage infrastructure and tools.

Rely on open standards, but don’t be stubborn

Many organizations, especially the larger ones, have a wide variety of IT systems, both on the hardware side and in the software field. There are many perfectly good reasons for this:




  • There have been mergers with other organizations in the past;

  • There have been internal reorganizations;

  • Technology has evolved;

  • Different managers had different ideas;

  • There have been commercial opportunities.

The result of this can be that several segments of the infrastructure are incompatible or cannot interoperate. These problems can be avoided by using an open architecture.


Open architectures use off-the-shelf components and conform to established standards [49]. An open architecture allows a system to be connected easily to devices and software made by other manufacturers. Its main advantage is that anyone can design add-on products for it, which in turn generates a wide variety of products on the market. A system with a closed architecture, on the other hand, is one whose design is proprietary (owned by a company) and whose specifications have not been divulged in a way that would allow other companies to duplicate the product, making it difficult to connect the system to other systems. Manufacturers often use this approach to bind their customers to their products, thus ensuring a continuous revenue flow.

Sometimes, it is not possible to use an open architecture, because there is a monopoly of a certain manufacturer or the technology is new and there are no established standards yet. In the first case it is best not to be stubborn and conform to the de facto standard (if you can’t beat them, join ‘em). In the second case it is better to wait until it becomes clear which choices will be made (and they will be made, if the technology is there to stay).



Standardize as much as you can



Platform standardization is the standardization of specific system models and operating system platforms for servers, client computers, and peripherals. It limits the operating systems, client computer models, mobile computer models, server models, printer models, and network communication devices that may be purchased. Standards are set for specific user types and designate the specific models and operating system platforms that can be bought.
Vendor standardization limits the number of vendors that an organization purchases from. By standardizing on fewer vendors, an organization can gain purchasing leverage and reduce incompatibility issues, support issues, vendor liaison requirements, testing of new technology, and administrative costs of vendor management. While it may limit the available selection of technology and features, it enables larger discounts with volume purchasing.

Use the right tools

"When the only tool you have is a hammer, every problem looks like a nail."


(Unknown)
It is a normal human reaction to defend what one knows best, and IT people are no different. Somebody who has worked in a mainframe environment for the last ten years will try to solve everything with a mainframe. The person who was educated at university in a UNIX world will look for UNIX solutions, and the programmer who only has experience in a PC environment will always first try to write a PC program. Be aware that this can end in religious wars between the adherents of the different schools. They will always fall back on their own arguments and refuse to listen to those of the other party. These discussions are of course completely pointless, so the IT Manager should not allow them to take place. He has to select the best-suited technology and require his collaborators to use it, even if it initially goes against their opinion.
In a somewhat different context, but equally applicable here, this idea is best described by Stephen Covey's word-picture [12]:
Suppose you were to come upon someone in the woods working feverishly to saw down a tree.

"What are you doing?" you ask.

"Can't you see?" comes the impatient reply. "I'm sawing down this tree."

"You look exhausted!" you exclaim. "How long have you been at it?"

"Over five hours," he returns, "and I'm beat! This is hard work."

"Well why don't you take a break for a few minutes and sharpen that saw?" you inquire. "I'm sure it would go a lot faster."

"I don't have time to sharpen the saw," the man says emphatically. "I'm too busy sawing!"
Sharpening the saw is about learning new things and using the right tools.

