Test-Driven JavaScript Development - P17



Next up, Listing 13.32 makes sure the url property is set on the poller. In order to make this assertion we need a reference to the poller object, so the method will need to return it.

Listing 13.32 Expecting the url property to be set

"test should set url property on poller object":

function () { var poller = ajax.poll("/url");

assertSame("/url", poller.url);

}

Making this test pass requires two additional lines, as in Listing 13.33.

Listing 13.33 Setting the URL

function poll(url, options) {
    var poller = Object.create(ajax.poller);
    poller.url = url;
    poller.start();

    return poller;
}

The remaining tests will simply check that the headers, callbacks, and interval are set properly on the poller. Doing so closely resembles what we just did with the underlying poller interface, so I'll leave writing the tests as an exercise.

Listing 13.34 shows the final version of ajax.poll.

Listing 13.34 Final version of ajax.poll

function poll(url, options) {
    var poller = Object.create(ajax.poller);
    poller.url = url;
    options = options || {};
    poller.headers = options.headers;
    poller.success = options.success;
    poller.failure = options.failure;
    poller.complete = options.complete;
    poller.interval = options.interval;
    poller.start();

    return poller;
}

ajax.poll = poll;
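For reference, a hypothetical call to ajax.poll might look like the following; the endpoint and option values are made up for illustration and are not from the book:

    // Hypothetical usage of ajax.poll; endpoint and handlers are illustrative only
    var poller = ajax.poll("/comet", {
        interval: 1000,
        headers: { "X-Requested-With": "XMLHttpRequest" },
        success: function () { /* handle fresh data */ },
        failure: function () { /* handle errors */ }
    });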


13.2 Comet

Polling will definitely help move an application in the general direction of "live" by making a more continuous data stream from the server to the client possible. However, this simple model has two major drawbacks:

• Polling too infrequently yields high latency.
• Polling too frequently yields too much server load, which may be unnecessary if few requests actually bring back data.

In systems requiring very low latency, such as instant messaging, polling to keep a constant data flow could easily mean hammering servers frequently enough to make the constant requests a scalability issue. When the traditional polling strategy becomes a problem, we need to consider alternative options.

Comet, Ajax Push, and Reverse Ajax are all umbrella terms for various ways to implement web applications such that the server is effectively able to push data to the client at any given time. The straightforward polling mechanism we just built is possibly the simplest way to do this (if it can be defined as a Comet implementation at all), but as we have just seen, it yields high latency or poor scalability.

There are a multitude of ways to implement live data streams, and shortly we will take a shot at one of them. Before we dive back into code, I want to quickly discuss a few of the options.

13.2.1 Forever Frames

One technique that works without even requiring the XMLHttpRequest object is so-called "forever frames." A hidden iframe is used to request a resource from the server. This request never finishes, and the server uses it to push script tags to the page whenever new events occur. Because HTML documents are loaded and parsed incrementally, new script blocks will be executed when the browser receives them, even if the whole page hasn't loaded yet. Usually the script tag ends with a call to a globally defined function that will receive data, possibly implemented as JSON-P ("JSON with padding").

The iframe solution has a few problems. The biggest one is lack of error handling. Because the connection is not controlled by code, there is little we can do if something goes wrong. Another issue that can be worked around is browser loading indicators. Because the frame never finishes loading, some browsers will (rightfully so) indicate to the user that the page is still loading. This is usually not a desirable feature, seeing as the data stream should be a background process the user doesn't need to consider.

The forever frame approach effectively allows for true streaming of data and only uses a single connection.
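A minimal client-side sketch of the technique might look like the following; the function and handler names are made up, and the server side (which keeps the response open and streams script blocks into the frame) is not shown:

    // Sketch: create a hidden "forever frame" that the server streams <script> blocks into
    function foreverFrame(url) {
        var frame = document.createElement("iframe");
        frame.style.display = "none";
        frame.src = url; // the server never finishes this response
        document.body.appendChild(frame);

        return frame;
    }

    // Each streamed script block would end with a call to a globally defined handler, e.g.:
    window.handleServerEvent = function (data) {
        // react to pushed data
    };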

13.2.2 Streaming XMLHttpRequest

Similar streaming to that of the forever frames is possible using the XMLHttpRequest object. By keeping the connection open and flushing whenever new data is available, the server can push a multipart response to the client, which enables it to receive chunks of data several times over the same connection. Not all browsers support the required multipart responses, meaning that this approach cannot be easily implemented in a cross-browser manner.
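As a rough illustration only (not the book's implementation, and not true multipart parsing), a raw XMLHttpRequest can be read incrementally on browsers that expose partial responses:

    // Sketch: consume a streamed response chunk by chunk; "/stream" is a made-up endpoint
    var xhr = new XMLHttpRequest();
    var received = 0;

    xhr.onreadystatechange = function () {
        // readyState 3 fires repeatedly while data arrives, in browsers that support it
        if (xhr.readyState >= 3 && xhr.responseText.length > received) {
            var chunk = xhr.responseText.substring(received);
            received = xhr.responseText.length;
            // handle chunk
        }
    };

    xhr.open("GET", "/stream", true);
    xhr.send(null);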

13.2.3 HTML5

HTML5 provides a couple of new ways to improve server-client communication. One alternative is the new element, eventsource, which can be used to listen to server-side events rather effortlessly. The element is provided with a src attribute and an onmessage event handler. Browser support is still scarce.

Another important API in the HTML5 specification is the WebSocket API. Once widely supported, any solution using separate connections to fetch and update data will be mostly superfluous. Web sockets offer a full-duplex communications channel, which can be held open for as long as required and allows true streaming of data between client and server with proper error handling.
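In browsers that do support it, the WebSocket API looks roughly like the following; the endpoint and message contents are made up for illustration:

    // Sketch of the standard WebSocket API (endpoint and messages are illustrative)
    var socket = new WebSocket("ws://example.com/updates");

    socket.onopen = function () {
        socket.send("subscribe:chatMessage");
    };

    socket.onmessage = function (event) {
        // event.data holds the message payload from the server
    };

    socket.onerror = function () {
        // the connection failed or was dropped
    };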

13.3 Long Polling XMLHttpRequest

Our Comet implementation will use XMLHttpRequest long polling. Long polling is an improved polling mechanism not very different from the one we have already implemented. In long polling the client makes a request and the server keeps the connection open until it has new data, at which point it returns the data and closes the connection. The client then immediately opens a new connection and waits for more data. This model vastly improves communication in those cases in which the client needs data as soon as they're available, yet data does not appear too often. If new data appear very often, the long polling method performs like regular polling, and could possibly be subject to the same failing, in which clients poll too intensively.

Implementing the client side of long polling is easy. Whether we are using regular or long polling is decided by the behavior of the server, where the implementation is less trivial, at least with traditional threaded servers. For these, such as Apache, long polling does not work well. The one-thread-per-connection model does not scale with long polling, because every client keeps a near-constant connection. Evented server architecture is much more apt to deal with these situations, and allows minimal overhead. We'll take a closer look at the server side in Chapter 14, Server-Side JavaScript with Node.js.

13.3.1 Implementing Long Polling Support

We will use what we have learned to add long polling support to our poller without requiring a long timeout between requests. The goal of long polling is low latency, and as such we would like to eliminate the timeout, at least in its current state. However, because frequent events may cause the client to make too frequent requests, we need a way to throttle requests in the extreme cases.

The solution is to modify the way we use the timeout. Rather than timing out the desired amount of milliseconds between requests, we will count elapsed time from each started request and make sure requests are never fired too close to each other.

13.3.1.1 Stubbing Date

To test this feature we will need to fake the Date constructor. As with measuring performance, we're going to use a new Date() to keep track of elapsed time. To fake this in tests, we will use a simple helper. The helper accepts a single date object, and overrides the Date constructor. The next time the constructor is used, the fake object is returned and the native constructor is restored. The helper lives in lib/stub.js and can be seen in Listing 13.35.

Listing 13.35 Stubbing the Date constructor for fixed output

(function (global) {
    var NativeDate = global.Date;

    global.stubDateConstructor = function (fakeDate) {
        global.Date = function () {
            global.Date = NativeDate;

            return fakeDate;
        };
    };
}(this));


This helper contains enough logic that it should not be simply dropped into the project without tests. Testing the helper is left as an exercise.
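As a rough idea of what such a test could look like (a sketch only, assuming the same TestCase/assert helpers used throughout this chapter):

    TestCase("StubDateConstructorTest", {
        "test should return fake date and restore Date": function () {
            var fakeDate = new Date();
            var nativeDate = Date;

            stubDateConstructor(fakeDate);

            assertSame(fakeDate, new Date()); // first use yields the fake
            assertSame(nativeDate, Date);     // native constructor is restored
        }
    });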

13.3.1.2 Testing with Stubbed Dates

Now that we have a way of faking time, we can formulate the test that expects new requests to be made immediately if the minimum interval has passed since the last request was issued. Listing 13.36 shows the test.

Listing 13.36 Expecting long-running request to immediately re-connect upon completion

TestCase("PollerTest", {
    setUp: function () {
        /* ... */
        this.ajaxRequest = ajax.request;
        /* ... */
    },

    tearDown: function () {
        ajax.request = this.ajaxRequest;
        /* ... */
    },

    /* ... */

    "test should re-request immediately after long request": function () {
        this.poller.interval = 500;
        this.poller.start();
        var ahead = new Date().getTime() + 600;
        stubDateConstructor(new Date(ahead));
        ajax.request = stubFn();
        this.xhr.complete();

        assert(ajax.request.called);
    }
});

The test sets up the poller interval to 500ms, and proceeds to simulate a request lasting for 600ms. It does this by making new Date return an object 600ms into the future, and then uses this.xhr.complete() to complete the fake request. Once this happens, the minimum interval has elapsed since the previous request started and so we expect a new request to have fired immediately. The test fails, and Listing 13.37 shows how to pass it.

Listing 13.37 Using the interval as minimum interval between started requests

function start() {
    /* ... */
    var requestStart = new Date().getTime();

    ajax.request(this.url, {
        complete: function () {
            var elapsed = new Date().getTime() - requestStart;
            var remaining = interval - elapsed;

            setTimeout(function () {
                poller.start();
            }, Math.max(0, remaining));

            /* ... */
        },

        /* ... */
    });
}

Running the tests, somewhat surprisingly, reveals that the test still fails. The clue is the setTimeout call. Note that even if the required interval is 0, we make the next request through setTimeout, which never executes synchronously.

One benefit of this approach is that we avoid deep call stacks. Using an asynchronous call to schedule the next request means that the current request call exits immediately, and we avoid making new requests recursively. However, this cleverness is also what is causing us trouble. The test assumes that the new request is scheduled immediately, which it isn't. We need to "touch" the clock inside the test in order to have it fire queued timers that are ready to run. Listing 13.38 shows the updated test.

Listing 13.38 Touching the clock to fire ready timers

"test should re-request immediately after long request":

function () {

this.poller.interval = 500;

this.poller.start();

var ahead = new Date().getTime() + 600;

stubDateConstructor(new Date(ahead));

Trang 7

ajax.request = stubFn();

this.xhr.complete();

Clock.tick(0);

assert(ajax.request.called);

}

And that's it. The poller now supports long polling with an optional minimal interval between new requests to the server. The poller could be further extended to support another option to set a minimum grace period between requests, regardless of the time any given request takes. This would increase latency, but could help a stressed system.
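A rough sketch of how such an option could be wired into the complete callback from Listing 13.37 follows; the minimumInterval option name is invented here and is not part of the book's implementation:

    // Hypothetical extension: enforce a minimum grace period between requests,
    // no matter how long the previous request took (minimumInterval is made up)
    function scheduleNext(poller, requestStart) {
        var elapsed = new Date().getTime() - requestStart;
        var remaining = poller.interval - elapsed;
        var grace = poller.minimumInterval || 0;

        setTimeout(function () {
            poller.start();
        }, Math.max(grace, remaining));
    }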

13.3.2 Avoiding Cache Issues

One possible challenge with the current implementation of the poller is that of caching. Polling is typically used when we need to stream fresh data off the server, and having the browser cache responses is likely to cause trouble. Caching can be controlled from the server via response headers, but sometimes we don't control the server implementation. In the interest of making the poller as generally useful as possible, we will extend it to add some random fuzz to the URL, which effectively avoids caching.

To test the cache buster, we simply expect the open method of the transport to be called with the URL including a timestamp, as seen in Listing 13.39.

Listing 13.39 Expecting poller to add cache buster to URL

"test should add cache buster to URL": function () { var date = new Date();

var ts = date.getTime();

stubDateConstructor(date);

this.poller.url = "/url";

this.poller.start();

assertEquals("/url?" + ts, this.xhr.open.args[1]);

}

To pass this test, Listing 13.40 simply adds the date it is already recording to the URL when making a request.


Listing 13.40 Adding a cache buster

function start() {
    /* ... */
    var requestStart = new Date().getTime();
    /* ... */

    ajax.request(this.url + "?" + requestStart, {
        /* ... */
    });
}

Although the cache buster test passes, the test from Listing 13.11 now fails because it is expecting the unmodified URL to be used. The URL is now being tested in a dedicated test, and the URL comparison in the original test can be removed.

As we discussed in the previous chapter, adding query parameters to arbitrary URLs such as here will break if the URL already includes query parameters. Testing for such a URL and updating the implementation is left as an exercise.
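One possible shape of that fix, sketched here with an invented helper name (the book leaves the real implementation to the reader):

    // Hypothetical helper: append a cache buster to a URL that may already
    // contain a query string
    function addCacheBuster(url, timestamp) {
        var separator = url.indexOf("?") >= 0 ? "&" : "?";

        return url + separator + timestamp;
    }

    // addCacheBuster("/url", 123);     // "/url?123"
    // addCacheBuster("/url?a=b", 123); // "/url?a=b&123"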

13.3.3 Feature Tests

As we did with the request interface, we will guard the poller with feature detection, making sure we don't define the interface if it cannot be successfully used. Listing 13.41 shows the required tests.

Listing 13.41 Poller feature tests

(function () {
    if (typeof tddjs == "undefined") {
        return;
    }

    var ajax = tddjs.namespace("ajax");

    if (!ajax.request || !Object.create) {
        return;
    }

    /* ... */
}());


13.4 The Comet Client

Although long polling offers good latency and near-constant connections, it also comes with limitations. The most serious limitation is the number of concurrent HTTP connections to any given host in most browsers. Older browsers ship with a maximum of 2 concurrent connections by default (even though it can be changed by the user), whereas newer browsers can default to as many as 8. In any case, the connection limit is important. If you deploy an interface that uses long polling and a user opens the interface in two tabs, he will wait indefinitely for the third tab: no HTML, images, or CSS can be downloaded at all, because the poller is currently using the 2 available connections. Add the fact that XMLHttpRequest cannot be used for cross-domain requests, and you have a potential problem on your hands.

This means that long polling should be used consciously. It also means that keeping more than a single long polling connection in a single page is not a viable approach. To reliably handle data from multiple sources, we need to pipe all messages from the server through the same connection, and use a client that can help delegate the data.

In this section we will implement a client that acts as a proxy for the server. It will poll a given URL for data and allow JavaScript objects to observe different topics. Whenever data arrive from the server, the client extracts messages by topic and notifies the respective observers. This way, we can limit ourselves to a single connection, yet still receive messages relating to a wide range of topics.

The client will use the observable object developed in Chapter 11, The Observer Pattern, to handle observers, and the ajax.poll interface we just implemented to handle the server connection. In other words, the client is a thin piece of glue to simplify working with server-side events.

13.4.1 Messaging Format

For this example we will keep the messaging format used between the server and the client very simple. We want client-side objects to be able to observe a single topic, much like the observable objects did, and be called with a single object as argument every time new data is available. The simplest solution to this problem seems to be to send JSON data from the server. Each response sends back an object whose property names are topics, and their values are arrays of data related to that topic. Listing 13.42 shows an example response from the server.


Listing 13.42 Typical JSON response from server

{
    "chatMessage": [{
        "id": "38912",
        "from": "chris",
        "to": "",
        "body": "Some text ",
        "sent_at": "2010-02-21T21:23:43.687Z"
    }, {
        "id": "38913",
        "from": "lebowski",
        "to": "",
        "body": "More text ",
        "sent_at": "2010-02-21T21:23:47.970Z"
    }],

    "stock": { /* ... */ },

    /* ... */
}

Observers could possibly be interested in new stock prices, so they would show their interest through client.observe("stock", fn);. Others may be more interested in the chat messages coming through. I'm not sure what kind of site would provide both stock tickers and real-time chat on the same page, but surely in this crazy Web 2.0 day and age, such a site exists. The point being, the data from the server may be of a diverse nature because a single connection is used for all streaming needs.

The client will provide a consistent interface by doing two things. First, it allows observers to observe a single topic rather than the entire feed. Second, it will call each observer once per message on that topic. This means that in the above example, observers to the "chatMessage" topic will be called twice, once for each chat message.

The client interface will look and behave exactly like the observables developed in Chapter 11, The Observer Pattern. This way code using the client does not need to be aware of the fact that data is fetched from and sent to a server. Furthermore, having two identical interfaces means that we can use a regular observable in tests for code using the client without having to stub XMLHttpRequest to avoid going to the server in tests.
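As a hypothetical preview of how such a client might be used (the cometClient object, its connect method, and the URL are assumptions based on the description above, not code from this excerpt):

    // Hypothetical usage of the Comet client; names are assumptions for illustration
    var client = Object.create(ajax.cometClient);
    client.url = "/comet";

    client.observe("chatMessage", function (message) {
        // called once per chat message in each server response
        console.log(message.from + ": " + message.body);
    });

    client.observe("stock", function (quote) {
        // called once per stock update
    });

    client.connect(); // assumed to start the underlying long-polling connection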
