Challenges of the Virtual Environment

Businesses are teeing up to new challenges brought on by an increasingly virtual environment. Telecommuting has increased the number of remote access users who need to access applications with specific business configurations. The pervasive use of the Internet provides an easy, nearly universal avenue of connectivity, although connections are sometimes slow. The use of hand-held computing has exploded, but questions remain as to what kind of applications can be used.

For a business facing these types of challenges, the hole in one can be found in thin-client technology. The leader in this technology is Citrix, whose main product is MetaFrame. MetaFrame runs over Microsoft's Windows 2000 with Terminal Services and provides fast, consistent access to business applications. With Citrix MetaFrame, the reach of business applications can be extended over an enterprise network and the public Internet.

What Defines a Mainframe?

Mainframe computers are considered to be a notch below supercomputers and a step above minicomputers in the hierarchy of processing. In many ways, mainframes are considerably more powerful than supercomputers because they can support more simultaneous programs. Supercomputers are considered faster, however, because they can execute a single process faster than a typical mainframe. Depending on how a company wants to market a system, the same machine that could serve as a mainframe for one company could be a minicomputer at another. Today, the largest mainframe manufacturers are Unisys and (surprise, surprise) IBM.

Mainframes work on the model of centralized computing. Although a mainframe may be no faster than a desktop computer in raw speed, mainframes use peripheral channels (individual PCs in their own right) to handle Input/Output (I/O) processes. This frees up considerable processing power. Mainframes can have multiple ports into high-speed memory caches and separate machines to coordinate I/O operations between the channels. The bus speed on a mainframe is typically much higher than that of a desktop, and mainframes generally employ hardware with considerable error-checking and correction capabilities. The mean time between failures for a mainframe computer is 20 years, much greater than that of PCs.

Mean Time Between Failures (MTBF) is a phrase often used in the computing world. MTBF is the amount of time a system will run before suffering a critical failure of some kind that requires maintenance. Because each component in a PC can have a separate MTBF, the MTBF is calculated using the weakest component. Obviously, when buying a PC you want to look for the best MTBF numbers. Cheap parts often mean a lower MTBF.
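As a rough, hypothetical illustration of the "weakest component" rule just described (the component names and hour figures below are invented, not from the text), a system's MTBF can be estimated from per-component figures like this:

    # Sketch of the "weakest component" MTBF estimate described above.
    # All component names and hour figures are hypothetical examples.
    component_mtbf_hours = {
        "power_supply": 100_000,
        "hard_disk": 50_000,     # the cheapest part, with the lowest MTBF
        "motherboard": 200_000,
        "memory": 300_000,
    }

    # Per the text, the overall figure is driven by the weakest component.
    weakest = min(component_mtbf_hours, key=component_mtbf_hours.get)
    system_mtbf_hours = component_mtbf_hours[weakest]

    print(f"Weakest component: {weakest} ({system_mtbf_hours:,} hours)")
    print(f"Estimated system MTBF: {system_mtbf_hours / (24 * 365):.1f} years")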

All of these factors free up the CPU to do what it should be doing—pure calculation. With Symmetric Multiprocessing (SMP), today's mainframes are capable of handling thousands of remote terminals. Figure 1.1 shows a typical mainframe arrangement.

Benefits of the Mainframe Model

As you can see in Figure 1.1, the mainframe model supports not only desktop PCs, but also remote terminals. Traditionally called dumb terminals because they contained no independent processing capabilities, mainframe terminals today are actually considered "smart" because of their built-in screen display instruction sets. Terminals rely on the central mainframe for all processing requirements and are used only for input/output. The advantages to using terminals are considerable. First, terminals are relatively cheap when compared to a PC. Second, with only minimal components, terminals are very easy to maintain. In addition, terminals present the user with the same screen no matter when or where they log on, which cuts down on user confusion and application training costs.

The centralized architecture of a mainframe is another key benefit of this model. Once upon a time, mainframes were considered to be vast, complicated machines that required dedicated programmers to run. Today's client/server networking models can be far more complex than any mainframe system. Deciding between different operating systems, protocols, network topologies, and wiring schemes can give a network manager a serious headache. By comparison, mainframe computing is fairly straightforward in its design and in many cases is far easier to implement. Five years ago, word was that mainframes were going the way of the dinosaur. Today, with over two trillion dollars of mainframe applications in place, that prediction seems to have been a bit hasty.

Centralized computing with mainframes is considered not only the past, but also possibly the future of network architecture. As organizations undergo more downsizing and shift towards a central, scalable solution for their employees, a mainframe environment looks more and more appealing. The initial price tag may put many companies off, but for those that can afford it, the total cost of ownership (TCO) could be considerably less than that of a distributed computing environment. The future of mainframes is still uncertain, but it looks like they will be around for quite some time.

Figure 1.1 The mainframe computing environment (a mainframe with a front-end processor and storage drives serving both directly attached terminals and PCs connected through a hub)

History and Benefits of Distributed Computing

Distributed computing is a buzzword often heard when discussing today's client/server architecture. It is the most common network environment today, and continues to expand with the Internet. We'll look at distributed computing's origins in this section, and consider where it might be headed.

The Workstation

As we mentioned before, distributed computing was made possible when DEC developed the minicomputer. Capable of performing timesharing operations, the minicomputer allowed many users to use the same machine via remote terminals, but each had a separate virtual environment. Minicomputers were popular, but considerably slower than their mainframe counterparts. As a result, to scale a minicomputer, system administrators were forced to buy more and more of them. This trend in buying led to cheaper and cheaper computers, which in turn eventually made the personal computer a possibility people were willing to accept. Thus, the reality of the workstation was born.

Although the workstation was originally conceived by Xerox Corporation's Palo Alto Research Center (PARC) in 1970, it would be some time before workstations became inexpensive and reliable enough to see mainstream use. PARC went on to design such common tools as the mouse, window-based computing, the first Ethernet system, and the first distributed file-and-print servers. All of these inventions made workstations a reasonable alternative to time-sharing minicomputers. Since the main cost of a computer is the design and manufacturing process, the more units you build, the cheaper they are to sell. The idea of the local area network (Ethernet) coupled with PARC's Xerox Distributed File Server (XDFS) meant that workstations were now capable of duplicating the tasks of terminals for a much lower price tag than the mainframe system. Unfortunately for Xerox, they ignored almost every invention developed by the PARC group and ended up letting Steve Jobs and Apple borrow the technology.

The most dominant player in distributed computing, however, is Microsoft. Using technology they borrowed (some may argue "stole") from Apple, Microsoft launched the Windows line of graphical user interface (GUI) products that turned the workstation into a much more valuable tool. Using most of the ideas PARC had developed (the mouse, Ethernet, distributed file sharing), Microsoft gave everyone from the home user to the network manager a platform that was easy to understand and could be rapidly and efficiently used by almost everyone. Apple may have been the first to give the world a point-and-click interface, but Microsoft was the company that led it into the 1990s. All of these features enabled Microsoft to develop a real distributed computing environment.

Enter Distributed Computing

Distributed computing has come a long way since that first local area network (LAN). Today, almost every organization employs some type of distributed computing. The most commonly used system is client/server architecture, where the client (workstation) requests information and services from a remote server. Servers can be high-speed desktops, microcomputers, minicomputers, or even mainframe machines. Typically connected by a LAN, the client/server model has become increasingly complex over the last few years. To support the client/server model, a wide array of operating systems have been developed, which may or may not interact well with other systems. UNIX, Windows, Novell, and Banyan Vines are several of the operating systems that are able to communicate with each other, although not always efficiently.

However, the advantages to the client/server model can be considerable. Since each machine is capable of performing its own processing, applications for the client/server model tend to vary based on the original design. Some applications will use the server as little more than a file-sharing device. Others will actually run processes at both the client and server levels, dividing the work as is most time-effective. A true client/server application is designed to provide the same quality of service as a mainframe or minicomputer would provide. Client/server operations can be either two- or three-tiered, as described in the following sections.

Two-Tiered Computing

In two-tiered computing, an applications server (such as a database) performs the server-side portion of the processing, such as record searching or generation. A client software piece will be used to perform the access, editing, and manipulation processes. Figure 1.2 shows a typical two-tiered client/server solution. Most distributed networks today are two-tiered client/server models.
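As a minimal sketch of this division of labor (not from the original text; Python's built-in sqlite3 module stands in for a remote database server), the server tier performs the record search while the client piece handles access and manipulation of the results:

    import sqlite3

    # Two-tiered sketch: sqlite3 stands in for the database server tier.
    # In a real deployment the database would run on a separate server.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, region TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)",
                     [("Acme", "West"), ("Globex", "East"), ("Initech", "West")])

    # Server-side portion: the database engine performs the record search.
    rows = conn.execute(
        "SELECT name FROM customers WHERE region = ?", ("West",)).fetchall()

    # Client-side portion: access, editing, and manipulation happen locally.
    display_names = sorted(name.upper() for (name,) in rows)
    print(display_names)   # ['ACME', 'INITECH']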

Three-Tiered Computing

Three-tiered computing is used in situations where the processing power required to execute an application will be insufficient on some or all existing workstations. In three-tiered computing, server-side processing duties are still performed by the database server. Many of the processing duties that would normally be performed by the workstation are instead handled by an applications processing server, and the client is typically responsible only for screen updates, keystrokes, and other visual changes. This greatly reduces the load on client machines and can allow older machines to still utilize newer applications. Figure 1.3 shows a typical three-tiered client/server solution.

Figure 1.2 Two-tiered computing solution (the client PC requests data, and the database server returns the requested information)

Figure 1.3 Three-tiered computing solution (the client PC wants to run a database query; the applications server requests the database file, the database server returns the file, and the applications server processes the query and returns the output to the client)
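To make the flow in Figure 1.3 concrete, here is a small hypothetical sketch (added for illustration; the tier functions and sample data are invented) in which the client only submits a query and displays the result, while the applications server does the processing against data fetched from the database tier:

    # Hypothetical three-tiered sketch following the flow in Figure 1.3.
    # Each tier is modelled as a plain function; in practice these would be
    # separate machines communicating over the network.

    def database_server(table_name):
        """Database tier: returns the raw records for the requested table."""
        tables = {"orders": [120, 340, 75, 990]}   # made-up sample data
        return tables[table_name]

    def applications_server(query):
        """Middle tier: fetches data from the database tier and processes it."""
        records = database_server(query["table"])
        if query["operation"] == "total":
            return sum(records)
        raise ValueError("unsupported operation")

    def thin_client():
        """Client tier: only submits the query and displays the finished output."""
        result = applications_server({"table": "orders", "operation": "total"})
        print(f"Order total: {result}")

    thin_client()   # -> Order total: 1525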

Windows 2000 with Terminal Services and Citrix MetaFrame can be considered either two-tiered or three-tiered computing, depending on the network design. Although there are some differences between the methods used, both Terminal Services and MetaFrame use a client PC and an applications server.

Distributed Computing and the Internet

Recently, a new distributed-computing model has emerged: the Internet, which is one giant distributed-computing environment. Client PCs connect to servers that pass requests to the appropriate remote servers, which execute the commands given and return the output back to the client. The Internet was originally devised by the military to link its research and engineering sites across the United States with a centralized computer system. Called the Advanced Research Projects Agency Network (ARPAnet), the system was put into place in 1971 and had 19 operational nodes. By 1977, a new network had connected radio packet networks, Satellite Networks (SATNET), and ARPAnet together to demonstrate the possibility of mobile computing. Called the Internet, the network was christened when a user sent a message from a van on the San Francisco Bayshore Freeway over 94,000 miles via satellite, landline, and radio waves back to the University of Southern California campus.

In 1990, MCI created a gateway between separate networks to allow their MCIMail program to send e-mail messages to users on either system. Hailed as the first commercial use of the Internet, MCIMail was a precursor for the rapid expansion of Internet services that would explode across the United States. Now, a large portion of the world is able to surf the Internet, send e-mail to their friends, and participate in live chats with other users. Another growing demand on the Internet is the need to use distributed computing to run applications remotely. Thin-client programs, which are capable of connecting to remote application servers across an Internet connection, are becoming more and more common for organizations that need to make resources available to users outside their local network. We'll talk about thin clients later in the chapter; for now it's enough to know that Citrix is the major supplier of thin-client technology and Web connectivity today.

Benefits of Distributed Computing

Distributed computing can be an excellent fit for many organizations. With the client/server model, the hardware requirements for the servers are far less than would be required for a mainframe. This translates into reduced initial cost. Since each workstation has its own processing power, it can work offline should the server portion be unavailable. And through the use of multiple servers, LANs, wide area networks (WANs), and other services such as the Internet, distributed computing systems can reach around the world. It is not uncommon these days for companies to have employees who access the corporate system from their laptops regardless of where they are located, even on airplanes.

Distributed computing also helps to ensure that there is no one central point of failure. If information is replicated across many servers, then one server out of the group going offline will not prevent access to that information. Careful management of data replication can guarantee that all but the most catastrophic of failures will not render the system inoperable. Redundant links provide fault-tolerant solutions for critical information systems. This is one of the key reasons that the military initially adopted the distributed computing platform.
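As a simple, hypothetical sketch of that replication idea (the server names and file are invented), a client can fall back to another replica when its first choice is offline:

    # Failover sketch: the same file is replicated on several servers, so one
    # server going offline does not prevent access to the information.
    replicas = {
        "server-a": None,                       # offline
        "server-b": {"report.doc": "contents"}, # holds a replicated copy
        "server-c": {"report.doc": "contents"},
    }

    def fetch(filename):
        for server, store in replicas.items():
            if store is not None and filename in store:
                return server, store[filename]
        raise RuntimeError("all replicas unavailable")

    server, data = fetch("report.doc")
    print(f"Served from {server}")   # -> Served from server-b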

Finally, distributed computing allows the use of older machines to perform more complex processes than they might be capable of otherwise. With some distributed computing programs, clients as old as a 386 computer could access and use resources on your Windows 2000 servers as though they were local PCs with up-to-date hardware. That type of access can appear seamless to the end user. If developers only had to write software for one operating system platform, they could avoid having to test the program on all the other platforms available. All this adds up to cost savings for the consumer and potential time savings for a developer.

Windows 2000 with Terminal Services and Citrix MetaFrame combine the qualities of both the distributed computing and mainframe models.

Meeting the Business Requirements of Both Models

Organizations need to take a hard look at what their requirements will be before implementing either the mainframe or distributed computing model. A wrong decision early in the process can create a nightmare of management details. Mainframe computing is more expensive in the initial cost outlay. Distributed computing requires more maintenance over the long run. Mainframe computing centralizes all of the applications processing. Distributed computing does exactly what it says—it distributes it! The reason to choose one model over the other is a decision each organization has to make individually. With the addition of thin-client computing to the mix, a network administrator can be expected to pull all of his or her hair out before a system is implemented. Table 1.1 gives some general considerations to use when deciding between the different computing models.

Table 1.1 Considerations for Choosing a Computing Model

If you need: An environment with a variety of platforms available to the end user
Then consider using: Distributed computing. Each end user will have a workstation with its own processing capabilities and operating system. This gives users more control over their working environment.

If you need: A homogeneous environment where users are presented with a standard view
Then consider using: Mainframe computing. Dumb terminals allow administrators to present a controlled, standard environment for each user regardless of machine location.

If you need: Lower cost outlays in the early stages
Then consider using: Distributed computing. Individual PCs and computers will cost far less than a mainframe system. Keep in mind that future maintenance costs may outweigh that savings.

If you need: Easy and cost-efficient expansion
Then consider using: Mainframe computing. Once the mainframe system has been implemented, adding new terminals is a simple process compared with installing and configuring a new PC for each user.

If you need: Excellent availability of software packages for a variety of business applications
Then consider using: Distributed computing. The vast majority of applications being released are for desktop computing, and those software packages are often less expensive even at an enterprise level than similar mainframe packages.

If you need: An excellent Mean Time Between Failures (MTBF)
Then consider using: Mainframe computing. The typical mainframe incorporates more error-checking hardware than most PCs or servers do. This gives them a very good service record, which means lower maintenance costs over the life of the equipment. In addition, the ability to predict hardware failures before they occur helps to keep mainframe systems from developing the same problems that smaller servers frequently have.

The Main Differences Between Remote Control and Remote Node

There are two types of remote computing in today's network environments, and choosing which to deploy is a matter of determining what your needs really are. Remote node software is what is typically known as remote access. It is generally implemented with a client PC dialing in to connect to some type of remote access server. On the other side, remote control software gives a remote client PC control over a local PC's desktop. Users at either machine will see the same desktop. In this section we'll take a look at the two different methods of remote computing, and consider the benefits and drawbacks of each method.

Remote Control

Remote control software has been in use for several years. From smaller packages like PCAnywhere to larger, enterprise-wide packages like SMS, remote control software gives a user or administrator the ability to control a remote machine and thus the ability to perform a variety of functions. With remote control, keystrokes are transmitted from the remote machine to the local machine over whatever network connection has been established. The local machine in turn sends back screen updates to the remote PC. Processing and file transfer typically take place at the local level, which helps reduce the bandwidth requirements for the remote PC. Figure 1.4 shows an example of a remote control session.

Figure 1.4 Remote control session (the remote PC sends keystrokes over the remote connection to the local client, which returns screen updates; LAN data flows between the local client and the local server)
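The hypothetical event loop below (an illustration added here, not taken from any actual remote control product) mirrors the flow in Figure 1.4: the remote side forwards keystrokes, and the local machine applies them and sends back the updated screen.

    # Remote control sketch: keystrokes flow from the remote PC to the local
    # machine; the local machine does the processing and returns screen updates.

    def local_machine(keystroke, screen):
        """Local side: applies the keystroke and returns the updated screen."""
        if keystroke == "BACKSPACE":
            return screen[:-1]
        return screen + keystroke

    def remote_control_session(keystrokes):
        screen = ""
        for key in keystrokes:                     # remote side sends keystrokes
            screen = local_machine(key, screen)    # local side does the work
        return screen                              # remote side only displays it

    print(remote_control_session(["d", "i", "r", "BACKSPACE", "r"]))  # -> dir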
