Thinking in Java 4th Edition, part 7

The File class

Before getting into the classes that actually read and write data to streams, we'll look at a library utility that assists you with file directory issues.


To run the program, you type a command line of either:

java RandomBounds lower

or

java RandomBounds upper

In both cases, you are forced to break out of the program manually, so it would appear that Math.random( ) never produces either 0.0 or 1.0. But this is where such an experiment can be deceiving. If you consider that there are about 2^62 different double fractions between 0 and 1, the likelihood of reaching any one value experimentally might exceed the lifetime of one computer, or even one experimenter. It turns out that 0.0 is included in the output of Math.random( ). Or, in math lingo, the range is [0,1). Thus, you must be careful to analyze your experiments and to understand their limitations.

Choosing between Sets

Depending on the behavior you desire, you can choose a TreeSet, a HashSet, or a LinkedHashSet. The following test program gives an indication of the performance trade-off between these implementations:

//: containers/SetPerformance.java
// Demonstrates performance differences in Sets.
// {Args: 100 5000} Small to keep build testing short
// Uses the Test, TestParam and Tester framework defined
// earlier in this chapter.
import java.util.*;

public class SetPerformance {
  static List<Test<Set<Integer>>> tests =
    new ArrayList<Test<Set<Integer>>>();
  static {
    tests.add(new Test<Set<Integer>>("add") {
      int test(Set<Integer> set, TestParam tp) {
        int loops = tp.loops;
        int size = tp.size;
        for(int i = 0; i < loops; i++) {
          set.clear();
          for(int j = 0; j < size; j++)
            set.add(j);
        }
        return loops * size;
      }
    });
    tests.add(new Test<Set<Integer>>("contains") {
      int test(Set<Integer> set, TestParam tp) {
        int loops = tp.loops;
        int span = tp.size * 2;
        for(int i = 0; i < loops; i++)
          for(int j = 0; j < span; j++)
            set.contains(j);
        return loops * span;
      }
    });
    tests.add(new Test<Set<Integer>>("iterate") {
      int test(Set<Integer> set, TestParam tp) {
        int loops = tp.loops * 10;
        for(int i = 0; i < loops; i++) {
          Iterator<Integer> it = set.iterator();
          while(it.hasNext())
            it.next();
        }
        return loops * set.size();
      }
    });
  }
  public static void main(String[] args) {
    if(args.length > 0)
      Tester.defaultParams = TestParam.array(args);
    Tester.run(new TreeSet<Integer>(), tests);
    Tester.run(new HashSet<Integer>(), tests);
    Tester.run(new LinkedHashSet<Integer>(), tests);
  }
} /* (Execute to see output) *///:~

TreeSet exists because it maintains its elements in sorted order, so you use it only when you need a sorted Set. Because of the internal structure necessary to support sorting, and because iteration is something you're more likely to do, iteration is usually faster with a TreeSet than a HashSet.

Note that LinkedHashSet is more expensive for insertions than HashSet; this is because of the extra cost of maintaining the linked list along with the hashed container.

Exercise 34: (1) Modify SetPerformance.java so that the Sets hold String objects instead of Integers. Use a Generator from the Arrays chapter to create test values.


Choosing between Maps

This program gives an indication of the trade-off between Map implementations:

//: containers/MapPerformance.java
// Demonstrates performance differences in Maps.
// {Args: 100 5000} Small to keep build testing short
// Uses the Test, TestParam and Tester framework defined
// earlier in this chapter.
import java.util.*;

public class MapPerformance {
  static List<Test<Map<Integer,Integer>>> tests =
    new ArrayList<Test<Map<Integer,Integer>>>();
  static {
    tests.add(new Test<Map<Integer,Integer>>("put") {
      int test(Map<Integer,Integer> map, TestParam tp) {
        int loops = tp.loops;
        int size = tp.size;
        for(int i = 0; i < loops; i++) {
          map.clear();
          for(int j = 0; j < size; j++)
            map.put(j, j);
        }
        return loops * size;
      }
    });
    tests.add(new Test<Map<Integer,Integer>>("get") {
      int test(Map<Integer,Integer> map, TestParam tp) {
        int loops = tp.loops;
        int span = tp.size * 2;
        for(int i = 0; i < loops; i++)
          for(int j = 0; j < span; j++)
            map.get(j);
        return loops * span;
      }
    });
    tests.add(new Test<Map<Integer,Integer>>("iterate") {
      int test(Map<Integer,Integer> map, TestParam tp) {
        int loops = tp.loops * 10;
        for(int i = 0; i < loops; i++) {
          Iterator it = map.entrySet().iterator();
          while(it.hasNext())
            it.next();
        }
        return loops * map.size();
      }
    });
  }
  public static void main(String[] args) {
    if(args.length > 0)
      Tester.defaultParams = TestParam.array(args);
    Tester.run(new TreeMap<Integer,Integer>(), tests);
    Tester.run(new HashMap<Integer,Integer>(), tests);
    Tester.run(new LinkedHashMap<Integer,Integer>(), tests);
    Tester.run(
      new IdentityHashMap<Integer,Integer>(), tests);
    Tester.run(new WeakHashMap<Integer,Integer>(), tests);
    Tester.run(new Hashtable<Integer,Integer>(), tests);
  }
} /* (Execute to see output) *///:~


Hashtable performance is roughly the same as that of HashMap. Since HashMap is intended to replace Hashtable, and thus uses the same underlying storage and lookup mechanism (which you will learn about later), this is not too surprising.

A TreeMap is generally slower than a HashMap. As with TreeSet, a TreeMap is a way to create an ordered list. The behavior of a tree is such that it's always in order and doesn't have to be specially sorted. Once you fill a TreeMap, you can call keySet( ) to get a Set view of the keys, then toArray( ) to produce an array of those keys. You can then use the static method Arrays.binarySearch( ) to rapidly find objects in your sorted array. Of course, this only makes sense if the behavior of a HashMap is unacceptable, since HashMap is designed to rapidly find keys. Also, you can easily create a HashMap from a TreeMap with a single object creation or call to putAll( ). In the end, when you're using a Map, your first choice should be HashMap, and only if you need a constantly sorted Map will you need a TreeMap.
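The keySet( )-to-binarySearch( ) sequence described above can be sketched like this (a minimal standalone example; the class and method names are mine, not from the book):

```java
import java.util.*;

public class SortedKeyLookup {
    // A TreeMap's keySet() iterates in sorted order, so the array
    // produced by toArray() is already sorted and can be handed
    // straight to Arrays.binarySearch() without an extra sort().
    public static int indexOf(TreeMap<String,Integer> tm, String key) {
        String[] keys = tm.keySet().toArray(new String[0]);
        return Arrays.binarySearch(keys, key);
    }
    // Small sample map for demonstration (insertion order is
    // deliberately unsorted; the TreeMap sorts the keys itself):
    public static TreeMap<String,Integer> sample() {
        TreeMap<String,Integer> tm = new TreeMap<String,Integer>();
        tm.put("cherry", 3);
        tm.put("apple", 1);
        tm.put("banana", 2);
        return tm;
    }
    public static void main(String[] args) {
        // Sorted keys are [apple, banana, cherry]:
        System.out.println(indexOf(sample(), "banana")); // prints 1
    }
}
```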

LinkedHashMap tends to be slower than HashMap for insertions because it maintains the linked list (to preserve insertion order) in addition to the hashed data structure. Because of this list, iteration is faster.


IdentityHashMap has different performance because it uses == rather than equals( ) for comparisons. WeakHashMap is described later in this chapter.
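To make the == vs. equals( ) difference concrete, here is a small sketch (the class name and helper method are invented for illustration, not from the book):

```java
import java.util.*;

public class IdentityDemo {
    // Insert two equal-but-distinct String keys into a HashMap and
    // an IdentityHashMap, and report the resulting sizes.
    public static int[] sizes() {
        String k1 = new String("key");
        String k2 = new String("key"); // k1.equals(k2), but k1 != k2
        Map<String,Integer> hm = new HashMap<String,Integer>();
        Map<String,Integer> ihm = new IdentityHashMap<String,Integer>();
        hm.put(k1, 1);
        hm.put(k2, 2);   // equals()-based: overwrites the first entry
        ihm.put(k1, 1);
        ihm.put(k2, 2);  // ==-based: two distinct entries
        return new int[]{ hm.size(), ihm.size() };
    }
    public static void main(String[] args) {
        System.out.println(Arrays.toString(sizes())); // prints [1, 2]
    }
}
```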

Exercise 35: (1) Modify MapPerformance.java to include tests of SlowMap.

Exercise 36: (5) Modify SlowMap so that, instead of two ArrayLists, it holds a single ArrayList of MapEntry objects. Verify that the modified version works correctly. Using MapPerformance.java, test the speed of your new Map. Now change the put( ) method so that it performs a sort( ) after each pair is entered, and modify get( ) to use Collections.binarySearch( ) to look up the key. Compare the performance of the new version with the old ones.

Exercise 37: (2) Modify SimpleHashMap to use ArrayLists instead of LinkedLists. Modify MapPerformance.java to compare the performance of the two implementations.

HashMap performance factors

It's possible to hand-tune a HashMap to increase its performance for your particular application. So that you can understand performance issues when tuning a HashMap, some terminology is necessary:

Capacity: The number of buckets in the table.

Initial capacity: The number of buckets when the table is created. HashMap and HashSet have constructors that allow you to specify the initial capacity.

Size: The number of entries currently in the table.

Load factor: Size/capacity. A load factor of 0 is an empty table, 0.5 is a half-full table, etc. A lightly loaded table will have few collisions and so is optimal for insertions and lookups (but will slow down the process of traversing with an iterator). HashMap and HashSet have constructors that allow you to specify the load factor, which means that when this load factor is reached, the container will automatically increase the capacity (the number of buckets) by roughly doubling it and will redistribute the existing objects into the new set of buckets (this is called rehashing).

The default load factor used by HashMap is 0.75 (it doesn't rehash until the table is three-fourths full). This seems to be a good trade-off between time and space costs. A higher load factor decreases the space required by the table but increases the lookup cost, which is important because lookup is what you do most of the time (including both get( ) and put( )).

If you know that you’ll be storing many entries in a HashMap, creating it with an

appropriately large initial capacity will prevent the overhead of automatic rehashing.11
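As a rough sketch of the sizing arithmetic (the helper name is mine, not a library method): to stay under the default 0.75 load factor while holding n entries, the initial capacity must be at least n/0.75, so the map never rehashes while it is being filled.

```java
import java.util.*;

public class CapacityDemo {
    // Smallest initial capacity such that expectedEntries entries
    // never push the map past the default 0.75 load factor.
    public static int initialCapacity(int expectedEntries) {
        return (int)(expectedEntries / 0.75f) + 1;
    }
    public static void main(String[] args) {
        int expected = 100000;
        // Pre-sized: no automatic rehashing happens during this loop.
        Map<Integer,Integer> map =
            new HashMap<Integer,Integer>(initialCapacity(expected));
        for(int i = 0; i < expected; i++)
            map.put(i, i);
        System.out.println(map.size()); // prints 100000
    }
}
```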

Exercise 38: (3) Look up the HashMap class in the JDK documentation. Create a HashMap, fill it with elements, and determine the load factor. Test the lookup speed with this map, then attempt to increase the speed by making a new HashMap with a larger initial capacity and copying the old map into the new one; then run your lookup speed test again on the new map.


11 In a private message, Joshua Bloch wrote: "I believe that we erred by allowing implementation details (such as hash table size and load factor) into our APIs. The client should perhaps tell us the maximum expected size of a collection, and we should take it from there. Clients can easily do more harm than good by choosing values for these parameters. As an extreme example, consider Vector's capacityIncrement. No one should ever set this, and we shouldn't have provided it. If you set it to any nonzero value, the asymptotic cost of a sequence of appends goes from linear to quadratic. In other words, it destroys your performance. Over time, we're beginning to wise up about this sort of thing. If you look at IdentityHashMap, you'll see that it has no low-level tuning parameters."


Exercise 39: (6) Add a private rehash( ) method to SimpleHashMap that is invoked when the load factor exceeds 0.75. During rehashing, double the number of buckets, then search for the first prime number greater than that to determine the new number of buckets.

checkedSortedMap(SortedMap<K,V>, Class<K> keyType, Class<V> valueType), checkedSortedSet(SortedSet<T>, Class<T> type): Produces a dynamically type-safe view of a Collection, or a specific subtype of Collection. Use this when it's not possible to use the statically checked version. These were shown in the Generics chapter under the heading "Dynamic type safety."

max(Collection), min(Collection): Produces the maximum or minimum element in the argument using the natural comparison method of the objects in the Collection.

max(Collection, Comparator), min(Collection, Comparator): Produces the maximum or minimum element in the Collection using the Comparator.

indexOfSubList(List source, List target): Produces the starting index of the first place where target appears inside source, or -1 if none occurs.

lastIndexOfSubList(List source, List target): Produces the starting index of the last place where target appears inside source, or -1 if none occurs.

replaceAll(List<T>, T oldVal, T newVal): Replaces all oldVal with newVal.

reverse(List): Reverses all the elements in place.

reverseOrder( ), reverseOrder(Comparator<T>): Returns a Comparator that reverses the natural ordering of a collection of objects that implement Comparable<T>. The second version reverses the order of the supplied Comparator.

rotate(List, int distance): Moves all elements forward by distance, taking the ones off the end and placing them at the beginning.

shuffle(List), shuffle(List, Random): Randomly permutes the specified list. The first form provides its own randomization source, or you may provide your own with the second form.

sort(List<T>), sort(List<T>, Comparator<? super T> c): Sorts the List<T> using its natural ordering. The second form allows you to provide a Comparator for sorting.

copy(List<? super T> dest, List<? extends T> src): Copies elements from src to dest.

swap(List, int i, int j): Swaps elements at locations i and j in the List. Probably faster than what you'd write by hand.

fill(List<? super T>, T x): Replaces all the elements of the list with x.

nCopies(int n, T x): Returns an immutable List<T> of size n whose references all point to x.

disjoint(Collection, Collection): Returns true if the two collections have no elements in common.

frequency(Collection, Object x): Returns the number of elements in the Collection equal to x.

emptyList( ), emptyMap( ), emptySet( ): Returns an immutable empty List, Map, or Set. These are generic, so the resulting Collection will be parameterized to the desired type.

singleton(T x), singletonList(T x), singletonMap(K key, V value): Produces an immutable Set<T>, List<T>, or Map<K,V> containing a single entry based on the given argument(s).

list(Enumeration<T> e): Produces an ArrayList<T> containing the elements in the order in which they are returned by the (old-style) Enumeration (predecessor to the Iterator). For converting from legacy code.

enumeration(Collection<T>): Produces an old-style Enumeration<T> for the argument.

Note that min( ) and max( ) work with Collection objects, not with Lists, so you don't need to worry about whether the Collection should be sorted or not. (As mentioned earlier, you do need to sort( ) a List or an array before performing a binarySearch( ).)

Here's an example showing the basic use of most of the utilities in the above table:


//: containers/Utilities.java

// Simple demonstrations of the Collections utilities

import java.util.*;

import static net.mindview.util.Print.*;

public class Utilities {

static List<String> list = Arrays.asList(

"one Two three Four five six one".split(" "));

public static void main(String[] args) {
    // (Most of main is elided in this excerpt; it exercises the
    // utilities above, producing the output below, and ends by)
    // Converting an old-style Vector
    // to a List via an Enumeration:
    // ...
  }
} /* Output:
[one, Two, three, Four, five, six, one]

‘list’ disjoint (Four)?: false

max: three

min: Four


max w/ comparator: Two

min w/ comparator: five

indexOfSubList: 3

lastIndexOfSubList: 3

replaceAll: [Yo, Two, three, Four, five, six, Yo]

reverse: [Yo, six, five, Four, three, Two, Yo]

rotate: [three, Two, Yo, Yo, six, five, Four]

copy: [in, the, matrix, Yo, six, five, Four]

swap: [Four, the, matrix, Yo, six, five, in]

shuffled: [six, matrix, the, Four, Yo, five, in]

fill: [pop, pop, pop, pop, pop, pop, pop]

frequency of ‘pop’: 7

dups: [snap, snap, snap]

‘list’ disjoint ‘dups’?: true

arrayList: [snap, snap, snap]

*///:~

The output explains the behavior of each utility method. Note the difference in min( ) and max( ) with the String.CASE_INSENSITIVE_ORDER Comparator, because of capitalization.

Sorting and searching Lists

Utilities to perform sorting and searching for Lists have the same names and signatures as those for sorting arrays of objects, but are static methods of Collections instead of Arrays. Here's an example that uses the list data from Utilities.java:

//: containers/ListSortSearch.java

// Sorting and searching Lists with Collections utilities

import java.util.*;

import static net.mindview.util.Print.*;

public class ListSortSearch {
  public static void main(String[] args) {
    List<String> list =
      new ArrayList<String>(Utilities.list);
    list.addAll(Utilities.list);
    print(list);
    Collections.shuffle(list, new Random(47));
    print("Shuffled: " + list);
    // Use a ListIterator to trim off the last elements:
    ListIterator<String> it = list.listIterator(10);
    while(it.hasNext()) {
      it.next();
      it.remove();
    }
    print("Trimmed: " + list);
    Collections.sort(list);
    print("Sorted: " + list);
    String key = list.get(7);
    int index = Collections.binarySearch(list, key);
    print("Location of " + key + " is " + index +
      ", list.get(" + index + ") = " + list.get(index));
    Collections.sort(list, String.CASE_INSENSITIVE_ORDER);
    print("Case-insensitive sorted: " + list);
    key = list.get(7);
    index = Collections.binarySearch(list,
      key, String.CASE_INSENSITIVE_ORDER);
    print("Location of " + key + " is " + index +
      ", list.get(" + index + ") = " + list.get(index));
  }
} /* Output: (excerpt)
Location of six is 7, list.get(7) = six
Case-insensitive sorted: [five, five, Four, one, one, six, six, three, three, Two]
Location of three is 7, list.get(7) = three
*///:~

Just as when searching and sorting with arrays, if you sort using a Comparator, you must binarySearch( ) using the same Comparator.
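A minimal sketch of that rule (hypothetical class name): the same Comparator is passed to both sort( ) and binarySearch( ); mixing orderings would make the search result undefined.

```java
import java.util.*;

public class ComparatorSearch {
    // Sort and search with the SAME Comparator, as the text requires.
    public static int find(List<String> list, String key) {
        Collections.sort(list, String.CASE_INSENSITIVE_ORDER);
        return Collections.binarySearch(
            list, key, String.CASE_INSENSITIVE_ORDER);
    }
    public static void main(String[] args) {
        List<String> words =
            new ArrayList<String>(Arrays.asList("pear", "Apple", "fig"));
        // Case-insensitive order is [Apple, fig, pear], and the
        // case-insensitive search matches "apple" to "Apple":
        System.out.println(find(words, "apple")); // prints 0
    }
}
```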

This program also demonstrates the shuffle( ) method in Collections, which randomizes the order of a List. A ListIterator is created at a particular location in the shuffled list, and used to remove the elements from that location until the end of the list.

Exercise 40: (5) Create a class containing two String objects and make it Comparable so that the comparison only cares about the first String. Fill an array and an ArrayList with objects of your class, using the RandomGenerator generator. Demonstrate that sorting works properly. Now make a Comparator that only cares about the second String, and demonstrate that sorting works properly. Also perform a binary search using your Comparator.

Exercise 41: (3) Modify the class in the previous exercise so that it will work with HashSets and as a key in HashMaps.

Exercise 42: (2) Modify Exercise 40 so that an alphabetic sort is used.

Making a Collection or Map unmodifiable

Often it is convenient to create a read-only version of a Collection or Map. The Collections class allows you to do this by passing the original container into a method that hands back a read-only version. There are a number of variations on this method, for Collections (if you can't treat a Collection as a more specific type), Lists, Sets, and Maps. This example shows the proper way to build read-only versions of each:

//: containers/ReadOnly.java

// Using the Collections.unmodifiable methods

import java.util.*;

import net.mindview.util.*;

import static net.mindview.util.Print.*;

public class ReadOnly {
  static Collection<String> data =
    new ArrayList<String>(Countries.names(6));
  // ... (the methods demonstrating each "unmodifiable" variant
  // are elided in this excerpt)
} /* Output:
[BULGARIA, BURKINA FASO, BOTSWANA, BENIN, ANGOLA, ALGERIA]
{BULGARIA=Sofia, BURKINA FASO=Ouagadougou, BOTSWANA=Gaberone,
BENIN=Porto-Novo, ANGOLA=Luanda, ALGERIA=Algiers}
*///:~

Calling the "unmodifiable" method for a particular type does not cause compile-time checking, but once the transformation has occurred, any calls to methods that modify the contents of a particular container will produce an UnsupportedOperationException.

In each case, you must fill the container with meaningful data before you make it read-only. Once it is loaded, the best approach is to replace the existing reference with the reference that is produced by the "unmodifiable" call. That way, you don't run the risk of accidentally trying to change the contents once you've made it unmodifiable. On the other hand, this tool also allows you to keep a modifiable container as private within a class and to return a read-only reference to that container from a method call. So, you can change it from within the class, but everyone else can only read it.
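The private-container idiom from the last sentence can be sketched like this (Roster is a made-up class, not from the book):

```java
import java.util.*;

public class Roster {
    // The real, modifiable container stays private:
    private List<String> names = new ArrayList<String>();
    // The class itself can mutate it freely:
    public void add(String name) { names.add(name); }
    // Callers only ever receive a read-only view:
    public List<String> names() {
        return Collections.unmodifiableList(names);
    }
    public static void main(String[] args) {
        Roster r = new Roster();
        r.add("Alice");
        try {
            r.names().add("Bob"); // modifying the view fails
        } catch(UnsupportedOperationException e) {
            System.out.println("read-only view: " + e);
        }
    }
}
```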

Synchronizing a Collection or Map

The synchronized keyword is an important part of the subject of multithreading, a more complicated topic that will not be introduced until the Concurrency chapter. Here, I shall note only that the Collections class contains a way to automatically synchronize an entire container. The syntax is similar to the "unmodifiable" methods:

//: containers/Synchronization.java

// Using the Collections.synchronized methods

import java.util.*;

public class Synchronization {
  public static void main(String[] args) {
    Collection<String> c =
      Collections.synchronizedCollection(
        new ArrayList<String>());
    List<String> list = Collections.synchronizedList(
      new ArrayList<String>());
    Set<String> s = Collections.synchronizedSet(
      new HashSet<String>());
    Set<String> ss = Collections.synchronizedSortedSet(
      new TreeSet<String>());
    Map<String,String> m = Collections.synchronizedMap(
      new HashMap<String,String>());
    Map<String,String> sm =
      Collections.synchronizedSortedMap(
        new TreeMap<String,String>());
  }
} ///:~

The Java containers also have a mechanism to prevent more than one process from modifying the contents of a container. The problem occurs if you're in the middle of iterating through a container, and then some other process steps in and inserts, removes, or changes an object in that container. Maybe you've already passed that element in the container, maybe it's ahead of you, maybe the size of the container shrinks after you call size( ); there are many scenarios for disaster. The Java containers library uses a fail-fast mechanism that looks for any changes to the container other than the ones your process is personally responsible for. If it detects that someone else is modifying the container, it immediately produces a ConcurrentModificationException. This is the "fail-fast" aspect: it doesn't try to detect a problem later on using a more complex algorithm.

It's quite easy to see the fail-fast mechanism in operation. All you must do is create an iterator and then add something to the collection that the iterator is pointing to, like this:

//: containers/FailFast.java

// Demonstrates the "fail-fast" behavior

import java.util.*;

public class FailFast {
  public static void main(String[] args) {
    Collection<String> c = new ArrayList<String>();
    Iterator<String> it = c.iterator();
    c.add("An object");
    // The iterator is now invalidated; using it throws the exception:
    try {
      String s = it.next();
    } catch(ConcurrentModificationException e) {
      System.out.println(e);
    }
  }
} /* Output:
java.util.ConcurrentModificationException
*///:~

The exception happens because something is placed in the container after the iterator is acquired from the container. The possibility that two parts of the program might modify the same container produces an uncertain state, so the exception notifies you that you should change your code; in this case, acquire the iterator after you have added all the elements to the container.
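A sketch of the corrected ordering (hypothetical class name): all modifications happen first, and the iterator is acquired only afterward, so no ConcurrentModificationException occurs.

```java
import java.util.*;

public class FailFastFixed {
    // Add everything first, then iterate: the iterator sees a
    // container that is no longer being modified.
    public static int count() {
        Collection<String> c = new ArrayList<String>();
        c.add("one");
        c.add("two");                       // all modifications done...
        Iterator<String> it = c.iterator(); // ...before the iterator exists
        int n = 0;
        while(it.hasNext()) {
            it.next();
            n++;
        }
        return n;
    }
    public static void main(String[] args) {
        System.out.println(count()); // prints 2
    }
}
```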


ConcurrentHashMap, CopyOnWriteArrayList, and CopyOnWriteArraySet use techniques that avoid ConcurrentModificationExceptions.

Holding references

The java.lang.ref library contains a set of classes that allow greater flexibility in garbage collection. These classes are especially useful when you have large objects that may cause memory exhaustion. There are three classes inherited from the abstract class Reference: SoftReference, WeakReference, and PhantomReference. Each of these provides a different level of indirection for the garbage collector if the object in question is only reachable through one of these Reference objects.

If an object is reachable, it means that somewhere in your program the object can be found. This could mean that you have an ordinary reference on the stack that goes right to the object, but you might also have a reference to an object that has a reference to the object in question; there can be many intermediate links. If an object is reachable, the garbage collector cannot release it because it's still in use by your program. If an object isn't reachable, there's no way for your program to use it, so it's safe to garbage collect that object.

You use Reference objects when you want to continue to hold on to a reference to that object (you want to reach that object), but you also want to allow the garbage collector to release that object. Thus, you have a way to use the object, but if memory exhaustion is imminent, you allow that object to be released.

You accomplish this by using a Reference object as an intermediary (a proxy) between you and the ordinary reference. In addition, there must be no ordinary references to the object (ones that are not wrapped inside Reference objects). If the garbage collector discovers that an object is reachable through an ordinary reference, it will not release that object.

In the order of SoftReference, WeakReference, and PhantomReference, each one is "weaker" than the last and corresponds to a different level of reachability. Soft references are for implementing memory-sensitive caches. Weak references are for implementing "canonicalizing mappings" (where instances of objects can be simultaneously used in multiple places in a program, to save storage) that do not prevent their keys (or values) from being reclaimed. Phantom references are for scheduling pre-mortem cleanup actions in a more flexible way than is possible with the Java finalization mechanism.

With SoftReferences and WeakReferences, you have a choice about whether to place them on a ReferenceQueue (the device used for pre-mortem cleanup actions), but a PhantomReference can only be built on a ReferenceQueue. Here's a simple demonstration:

//: containers/References.java
// Demonstrates Reference objects
import java.lang.ref.*;
import java.util.*;

class VeryBig {
  private static final int SIZE = 10000;
  private long[] la = new long[SIZE];
  private String ident;
  public VeryBig(String id) { ident = id; }
  public String toString() { return ident; }
  protected void finalize() {
    System.out.println("Finalizing " + ident);
  }
}

public class References {
  private static ReferenceQueue<VeryBig> rq =
    new ReferenceQueue<VeryBig>();
  public static void checkQueue() {
    Reference<? extends VeryBig> inq = rq.poll();
    if(inq != null)
      System.out.println("In queue: " + inq.get());
  }
  public static void main(String[] args) {
    int size = 10;
    // Or, choose size via the command line:
    if(args.length > 0)
      size = new Integer(args[0]);
    LinkedList<SoftReference<VeryBig>> sa =
      new LinkedList<SoftReference<VeryBig>>();
    for(int i = 0; i < size; i++) {
      sa.add(new SoftReference<VeryBig>(
        new VeryBig("Soft " + i), rq));
      System.out.println("Just created: " + sa.getLast());
      checkQueue();
    }
    LinkedList<WeakReference<VeryBig>> wa =
      new LinkedList<WeakReference<VeryBig>>();
    for(int i = 0; i < size; i++) {
      wa.add(new WeakReference<VeryBig>(
        new VeryBig("Weak " + i), rq));
      System.out.println("Just created: " + wa.getLast());
      checkQueue();
    }
    System.gc();
    LinkedList<PhantomReference<VeryBig>> pa =
      new LinkedList<PhantomReference<VeryBig>>();
    for(int i = 0; i < size; i++) {
      pa.add(new PhantomReference<VeryBig>(
        new VeryBig("Phantom " + i), rq));
      System.out.println("Just created: " + pa.getLast());
      checkQueue();
    }
  }
} /* (Execute to see output) *///:~

When you run this program (you'll want to redirect the output into a text file so that you can view it in pages), you'll see that the objects are garbage collected, even though you still have access to them through the Reference object (to get the actual object reference, you use get( )). You'll also see that the ReferenceQueue always produces a Reference containing a null object. To make use of this, you can inherit from a particular Reference class and add more useful methods to the new class.

The WeakHashMap

The containers library has a special Map to hold weak references: the WeakHashMap. This class is designed to make the creation of canonicalized mappings easier. In such a mapping, you are saving storage by creating only one instance of a particular value. When the program needs that value, it looks up the existing object in the mapping and uses that (rather than creating one from scratch). The mapping may make the values as part of its initialization, but it's more likely that the values are made on demand.


Since this is a storage-saving technique, it's very convenient that the WeakHashMap allows the garbage collector to automatically clean up the keys and values. You don't have to do anything special to the keys and values you want to place in the WeakHashMap; these are automatically wrapped in WeakReferences by the map. The trigger to allow cleanup is that the key is no longer in use, as demonstrated here:

//: containers/CanonicalMapping.java

// Demonstrates WeakHashMap

import java.util.*;

class Element {

private String ident;

public Element(String id) { ident = id; }

public String toString() { return ident; }

public int hashCode() { return ident.hashCode(); }

public boolean equals(Object r) {
    return r instanceof Element &&
      ident.equals(((Element)r).ident);
  }
}

class Key extends Element {

public Key(String id) { super(id); }

}

class Value extends Element {

public Value(String id) { super(id); }

}

public class CanonicalMapping {

public static void main(String[] args) {

int size = 1000;

// Or, choose size via the command line:

if(args.length > 0)

size = new Integer(args[0]);

Key[] keys = new Key[size];

WeakHashMap<Key,Value> map =

new WeakHashMap<Key,Value>();

for(int i = 0; i < size; i++) {
      Key k = new Key(Integer.toString(i));
      Value v = new Value(Integer.toString(i));
      if(i % 3 == 0)
        keys[i] = k; // Save as "real" references
      map.put(k, v);
    }
    System.gc();
  }
} /* (Execute to see output) *///:~

The Key class must have a hashCode( ) and an equals( ), since it is being used as a key in a hashed data structure. The subject of hashCode( ) was described earlier in this chapter.

When you run the program, you'll see that the garbage collector skips every third key, because an ordinary reference to that key has also been placed in the keys array, and thus those objects cannot be garbage collected.


Java 1.0/1.1 containers

Unfortunately, a lot of code was written using the Java 1.0/1.1 containers, and even new code is sometimes written using these classes. So although you should never use the old containers when writing new code, you'll still need to be aware of them. However, the old containers were quite limited, so there's not that much to say about them, and since they are anachronistic, I will try to refrain from overemphasizing some of their hideous design decisions.

Vector & Enumeration

The only self-expanding sequence in Java 1.0/1.1 was the Vector, so it saw a lot of use. Its flaws are too numerous to describe here (see the 1st edition of this book, available as a free download from www.MindView.net). Basically, you can think of it as an ArrayList with long, awkward method names. In the revised Java container library, Vector was adapted so that it could work as a Collection and a List. This turns out to be a bit perverse, as it may confuse some people into thinking that Vector has gotten better, when it is actually included only to support older Java code.

The Java 1.0/1.1 version of the iterator chose to invent a new name, "enumeration," instead of using a term that everyone was already familiar with ("iterator"). The Enumeration interface is smaller than Iterator, with only two methods, and it uses longer method names: boolean hasMoreElements( ) produces true if this enumeration contains more elements, and Object nextElement( ) returns the next element of this enumeration if there are any more (otherwise it throws an exception).

Enumeration is only an interface, not an implementation, and even new libraries sometimes still use the old Enumeration, which is unfortunate but generally harmless. Even though you should always use Iterator when you can in your own code, you must be prepared for libraries that want to hand you an Enumeration.

In addition, you can produce an Enumeration for any Collection by using the Collections.enumeration( ) method, as seen in this example:

//: containers/Enumerations.java

// Java 1.0/1.1 Vector and Enumeration

import java.util.*;

import net.mindview.util.*;

public class Enumerations {
  public static void main(String[] args) {
    Vector<String> v =
      new Vector<String>(Countries.names(10));
    Enumeration<String> e = v.elements();
    while(e.hasMoreElements())
      System.out.print(e.nextElement() + ", ");
    // Produce an Enumeration from a Collection:
    e = Collections.enumeration(new ArrayList<String>());
  }
} /* (Execute to see output) *///:~

The last line creates an ArrayList and uses enumeration( ) to adapt an Enumeration from the ArrayList Iterator. Thus, if you have old code that wants an Enumeration, you can still use the new containers.

Stack

java.util.Stack is inherited from Vector, so it has all of the characteristics and behaviors of a Vector plus some extra Stack behaviors. It's difficult to know whether the designers consciously thought that this was an especially useful way of doing things, or whether it was just a naive design; in any event it was clearly not reviewed before it was rushed into distribution, so this bad design is still hanging around (but you shouldn't use it).

Here's a simple demonstration of Stack that pushes each String representation of an enum. It also shows how you can just as easily use a LinkedList as a stack, or the Stack class created in the Holding Your Objects chapter:

//: containers/Stacks.java

// Demonstration of Stack Class

import java.util.*;

import static net.mindview.util.Print.*;

enum Month { JANUARY, FEBRUARY, MARCH, APRIL, MAY, JUNE,

JULY, AUGUST, SEPTEMBER, OCTOBER, NOVEMBER }

public class Stacks {

public static void main(String[] args) {

Stack<String> stack = new Stack<String>();

for(Month m : Month.values())

stack.push(m.toString());

print("stack = " + stack);

// Treating a stack as a Vector:

stack.addElement("The last line");

print("element 5 = " + stack.elementAt(5));

print("popping elements:");

while(!stack.empty())

printnb(stack.pop() + " ");

// Using a LinkedList as a Stack:
    LinkedList<String> lstack = new LinkedList<String>();
    for(Month m : Month.values())
      lstack.addFirst(m.toString());
    print("lstack = " + lstack);
    while(!lstack.isEmpty())
      printnb(lstack.removeFirst() + " ");
    // Using the Stack class from
    // the Holding Your Objects Chapter:
    net.mindview.util.Stack<String> stack2 =
      new net.mindview.util.Stack<String>();
    for(Month m : Month.values())
      stack2.push(m.toString());
    print("stack2 = " + stack2);
    while(!stack2.empty())
      printnb(stack2.pop() + " ");
  }
} /* Output:


stack = [JANUARY, FEBRUARY, MARCH, APRIL, MAY, JUNE, JULY, AUGUST,

SEPTEMBER, OCTOBER, NOVEMBER]

element 5 = JUNE

popping elements:

The last line NOVEMBER OCTOBER SEPTEMBER AUGUST JULY JUNE MAY APRIL MARCH FEBRUARY JANUARY lstack = [NOVEMBER, OCTOBER, SEPTEMBER, AUGUST, JULY, JUNE, MAY, APRIL, MARCH, FEBRUARY, JANUARY]

NOVEMBER OCTOBER SEPTEMBER AUGUST JULY JUNE MAY APRIL MARCH FEBRUARY JANUARY stack2 = [NOVEMBER, OCTOBER, SEPTEMBER, AUGUST, JULY, JUNE, MAY, APRIL, MARCH, FEBRUARY, JANUARY]

NOVEMBER OCTOBER SEPTEMBER AUGUST JULY JUNE MAY APRIL MARCH FEBRUARY JANUARY

*///:~

A String representation is generated from the Month enum constants, inserted into the Stack with push( ), and later fetched from the top of the stack with a pop( ). To make a point, Vector operations are also performed on the Stack object. This is possible because, by virtue of inheritance, a Stack is a Vector. Thus, all operations that can be performed on a Vector can also be performed on a Stack, such as elementAt( ).

As mentioned earlier, you should use a LinkedList when you want stack behavior, or the net.mindview.util.Stack class created from the LinkedList class.

BitSet

A BitSet is used if you want to efficiently store a lot of on-off information. It's efficient only from the standpoint of size; if you're looking for efficient access, it is slightly slower than using an array of some native type.

In addition, the minimum size of the BitSet is that of a long: 64 bits. This implies that if you're storing anything smaller, like 8 bits, a BitSet will be wasteful; you're better off creating your own class, or just an array, to hold your flags if size is an issue. (This will only be the case if you're creating a lot of objects containing lists of on-off information, and should only be decided based on profiling and other metrics. If you make this decision because you just think something is too big, you will end up creating needless complexity and wasting a lot of time.)

A normal container expands as you add more elements, and the BitSet does this as well. The following example shows how the BitSet works:

//: containers/Bits.java
// Demonstration of BitSet (condensed to a runnable sketch;
// the original listing also exercises short and int patterns).
import java.util.*;

public class Bits {
  public static void printBitSet(BitSet b) {
    System.out.println("bits: " + b + ", size: " + b.size());
  }
  public static void main(String[] args) {
    Random rand = new Random(47);
    // Take the LSB of nextInt():
    byte bt = (byte)rand.nextInt();
    BitSet bb = new BitSet();
    for(int i = 7; i >= 0; i--)
      if(((1 << i) & bt) != 0)
        bb.set(i);
    printBitSet(bb); // Fits within the minimum 64 bits
    // Test bitsets >= 64 bits:
    BitSet b127 = new BitSet();
    b127.set(127);
    printBitSet(b127); // Expanded as necessary
  }
} ///:~

The random number generator is used to create a random byte, short, and int, and each one is transformed into a corresponding bit pattern in a BitSet. This works fine because a BitSet is 64 bits, so none of these cause it to increase in size. Then larger BitSets are created. You can see that the BitSet is expanded as necessary.

An EnumSet (see the Enumerated Types chapter) is usually a better choice than a BitSet if you have a fixed set of flags that you can name, because the EnumSet allows you to manipulate the names rather than numerical bit locations, and thus reduces errors. EnumSet also prevents you from accidentally adding new flag locations, which could cause some serious, difficult-to-find bugs. The only reasons to use a BitSet instead of an EnumSet are if you don’t know how many flags you will need until run time, if it is unreasonable to assign names to the flags, or if you need one of the special operations in BitSet (see the JDK documentation for BitSet and EnumSet).
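To make the contrast concrete, here is a minimal sketch; the Flag enum and its values are invented for illustration. Where a BitSet would force you to track numeric bit positions, an EnumSet lets you manipulate named flags directly:

```java
import java.util.EnumSet;

public class EnumSetFlags {
  // Named flags instead of numeric bit positions:
  enum Flag { DEBUG, VERBOSE, LOGGING }

  public static EnumSet<Flag> defaults() {
    return EnumSet.of(Flag.DEBUG, Flag.LOGGING);
  }

  public static void main(String[] args) {
    EnumSet<Flag> flags = defaults();
    System.out.println(flags.contains(Flag.DEBUG));  // true
    flags.remove(Flag.DEBUG);
    // complementOf() names the flags that are currently off:
    System.out.println(EnumSet.complementOf(flags)); // [DEBUG, VERBOSE]
  }
}
```

A typo like `set(3)` versus `set(4)` in BitSet code compiles silently; misspelling a Flag name here is a compile error, which is the error reduction the text describes.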

Summary

The containers library is arguably the most important library for an object-oriented language. Most programming will use containers more than any other library components. Some languages (Python, for example) even include the fundamental container components (lists, maps, and sets) as built-ins.

As you saw in the Holding Your Objects chapter, it’s possible to do a number of very interesting things using containers, without much effort. However, at some point you’re forced to know more about containers in order to use them properly—in particular, you must know enough about hashing operations to write your own hashCode( ) method (and you must know when it is necessary), and you must know enough about the various container implementations that you can choose the appropriate one for your needs. This chapter covered these concepts and discussed additional useful details about the container library. At this point you should be reasonably well prepared to use the Java containers in your everyday programming tasks.

The design of a containers library is difficult (this is true of most library design problems). In C++, the container classes covered the bases with many different classes. This was better than what was available prior to the C++ container classes (nothing), but it didn’t translate well into Java. At the other extreme, I’ve seen a containers library that consists of a single class, "container," which acts like both a linear sequence and an associative array at the same time. The Java container library strikes a balance: the full functionality that you expect from a mature container library, but easier to learn and use than the C++ container classes and other similar container libraries. The result can seem a bit odd in places. Unlike some of the decisions made in the early Java libraries, these oddities were not accidents, but carefully considered decisions based on trade-offs in complexity.

Solutions to selected exercises can be found in the electronic document The Thinking in Java Annotated Solution Guide, available for sale from www.MindView.net.

 


I/O

Creating a good input/output (I/O) system is one of the more difficult tasks for a language designer. This is evidenced by the number of different approaches.

The challenge seems to be in covering all possibilities. Not only are there different sources and sinks of I/O that you want to communicate with (files, the console, network connections, etc.), but you need to talk to them in a wide variety of ways (sequential, random-access, buffered, binary, character, by lines, by words, etc.). The Java library designers attacked this problem by creating lots of classes. In fact, there are so many classes for Java’s I/O system that it can be intimidating at first (ironically, the Java I/O design actually prevents an explosion of classes). There was also a significant change in the I/O library after Java 1.0, when the original byte-oriented library was supplemented with char-oriented, Unicode-based I/O classes. The nio classes (for "new I/O," a name we’ll still be using years from now even though they were introduced in JDK 1.4 and so are already "old") were added for improved performance and functionality. As a result, there are a fair number of classes to learn before you understand enough of Java’s I/O picture that you can use it properly. In addition, it’s rather important to understand the evolution of the I/O library, even if your first reaction is "Don’t bother me with history, just show me how to use it!" The problem is that without the historical perspective, you will rapidly become confused with some of the classes and when you should and shouldn’t use them. This chapter will give you an introduction to the variety of I/O classes in the standard Java library and how to use them.

The File class

Before getting into the classes that actually read and write data to streams, we’ll look at a library utility that assists you with file directory issues. The File class has a deceiving name; you might think it refers to a file, but it doesn’t. In fact, "FilePath" would have been a better name for the class. It can represent either the name of a particular file or the names of a set of files in a directory. If it’s a set of files, you can ask for that set using the list( ) method, which returns an array of String. It makes sense to return an array rather than one of the flexible container classes, because the number of elements is fixed, and if you want a different directory listing, you just create a different File object. This section shows an example of the use of this class, including the associated FilenameFilter interface.

A directory lister

Suppose you’d like to see a directory listing. The File object can be used in two ways. If you call list( ) with no arguments, you’ll get the full list that the File object contains. However, if you want a restricted list—for example, if you want all of the files with an extension of .java—then you use a "directory filter," which is a class that tells how to select the File objects for display. Here’s the example. Note that the result has been effortlessly sorted (alphabetically) using the java.util.Arrays.sort( ) method and the String.CASE_INSENSITIVE_ORDER Comparator:


//: io/DirList.java
// Display a directory listing using regular expressions.
import java.util.regex.*;
import java.io.*;
import java.util.*;

public class DirList {
  public static void main(String[] args) {
    File path = new File(".");
    String[] list;
    if(args.length == 0)
      list = path.list();
    else
      list = path.list(new DirFilter(args[0]));
    Arrays.sort(list, String.CASE_INSENSITIVE_ORDER);
    for(String dirItem : list)
      System.out.println(dirItem);
  }
}

class DirFilter implements FilenameFilter {
  private Pattern pattern;
  public DirFilter(String regex) {
    pattern = Pattern.compile(regex);
  }
  public boolean accept(File dir, String name) {
    return pattern.matcher(name).matches();
  }
} ///:~

The DirFilter class implements the interface FilenameFilter. Notice how simple the FilenameFilter interface is:

public interface FilenameFilter {
  boolean accept(File dir, String name);
}

DirFilter’s sole reason for existence is to provide the accept( ) method to the list( ) method so that list( ) can "call back" accept( ) to determine which file names should be included in the list. Thus, this structure is often referred to as a callback. More specifically, this is an example of the Strategy design pattern, because list( ) implements basic functionality, and you provide the Strategy in the form of a FilenameFilter in order to complete the algorithm necessary for list( ) to provide its service. Because list( ) takes a FilenameFilter object as its argument, it means that you can pass an object of any class that implements FilenameFilter to choose (even at run time) how the list( ) method will behave. The purpose of a Strategy is to provide flexibility in the behavior of code.

The accept( ) method must accept a File object representing the directory that a particular file is found in, and a String containing the name of that file. Remember that the list( ) method is calling accept( ) for each of the file names in the directory object to see which one should be included; this is indicated by the boolean result returned by accept( ). accept( ) uses a regular expression matcher object to see if the regular expression regex matches the name of the file. Using accept( ), the list( ) method returns an array.


Anonymous inner classes

This example is ideal for rewriting using an anonymous inner class (described in Inner Classes). As a first cut, a method filter( ) is created that returns a reference to a FilenameFilter:

//: io/DirList2.java
import java.util.regex.*;
import java.io.*;
import java.util.*;

public class DirList2 {

public static FilenameFilter filter(final String regex) {

// Creation of anonymous inner class:

return new FilenameFilter() {

private Pattern pattern = Pattern.compile(regex);

public boolean accept(File dir, String name) {

return pattern.matcher(name).matches();

}

}; // End of anonymous inner class

}

public static void main(String[] args) {

File path = new File(".");
    String[] list;
    if(args.length == 0)
      list = path.list();
    else
      list = path.list(filter(args[0]));
    Arrays.sort(list, String.CASE_INSENSITIVE_ORDER);
    for(String dirItem : list)
      System.out.println(dirItem);
  }
} ///:~

Note that the argument to filter( ) must be final. This is required by the anonymous inner class so that it can use an object from outside its scope. This design is an improvement because the FilenameFilter class is now tightly bound to DirList2. However, you can take this approach one step further and define the anonymous inner class as an argument to list( ), in which case it’s even smaller:

//: io/DirList3.java
import java.util.regex.*;
import java.io.*;
import java.util.*;

public class DirList3 {

public static void main(final String[] args) {

File path = new File(".");

String[] list;

if(args.length == 0)

list = path.list();

else


list = path.list(new FilenameFilter() {
        private Pattern pattern = Pattern.compile(args[0]);
        public boolean accept(File dir, String name) {
          return pattern.matcher(name).matches();
        }
      });
    Arrays.sort(list, String.CASE_INSENSITIVE_ORDER);
    for(String dirItem : list)
      System.out.println(dirItem);
  }
} ///:~

Exercise 1: (3) Modify DirList.java (or one of its variants) so that the FilenameFilter opens and reads each file (using the net.mindview.util.TextFile utility) and accepts the file based on whether any of the trailing arguments on the command line exist in that file.

Exercise 2: (2) Create a class called SortedDirList with a constructor that takes a File object and builds a sorted directory list from the files at that File. Add to this class two overloaded list( ) methods: the first produces the whole list, and the second produces the subset of the list that matches its argument (which is a regular expression).

Exercise 3: (3) Modify DirList.java (or one of its variants) so that it sums up the file sizes of the selected files.

Directory utilities

A common task in programming is to perform operations on sets of files, either in the local directory or by walking a directory tree. It is useful to have a tool that will produce the set of File objects for you. The following utility class produces either the set of File objects in the local directory, or the File objects anywhere in a directory tree, that match a regular expression that you provide:

//: net/mindview/util/Directory.java

// Produce a sequence of File objects that match a

// regular expression in either a local directory,

// or by walking a directory tree

package net.mindview.util;

import java.util.regex.*;

import java.io.*;

import java.util.*;

public final class Directory {

public static File[]


local(File dir, final String regex) {

return dir.listFiles(new FilenameFilter() {

private Pattern pattern = Pattern.compile(regex);

public boolean accept(File dir, String name) {
        return pattern.matcher(name).matches();
      }
    });
  }

public static File[]

local(String path, final String regex) { // Overloaded

return local(new File(path), regex);

}

// A two-tuple for returning a pair of objects:

public static class TreeInfo implements Iterable<File> {

public List<File> files = new ArrayList<File>();

public List<File> dirs = new ArrayList<File>();

// The default iterable element is the file list:

public Iterator<File> iterator() {
      return files.iterator();
    }

public String toString() {

return "dirs: " + PPrint.pformat(dirs) +

"\n\nfiles: " + PPrint.pformat(files);

}

}

public static TreeInfo

walk(String start, String regex) { // Begin recursion

return recurseDirs(new File(start), regex);

}

public static TreeInfo

walk(File start, String regex) { // Overloaded

return recurseDirs(start, regex);

}

public static TreeInfo walk(File start) { // Everything

return recurseDirs(start, ".*");

}

public static TreeInfo walk(String start) {

return recurseDirs(new File(start), ".*");

}

static TreeInfo recurseDirs(File startDir, String regex){

TreeInfo result = new TreeInfo();

for(File item : startDir.listFiles()) {
      if(item.isDirectory()) {
        result.dirs.add(item);
        TreeInfo recursed = recurseDirs(item, regex);
        result.files.addAll(recursed.files);
        result.dirs.addAll(recursed.dirs);
      } else if(item.getName().matches(regex))
        result.files.add(item);
    }
    return result;
  }
  // Simple validation test:
  public static void main(String[] args) {
    if(args.length == 0)
      System.out.println(walk("."));
    else
      for(String arg : args)
        System.out.println(walk(arg));
  }
} ///:~

The local( ) method uses a variant of File.list( ) called listFiles( ) that produces an array of File. You can see that it also uses a FilenameFilter. If you need a List instead of an array, you can convert the result yourself using Arrays.asList( ).

The walk( ) method converts the name of the starting directory into a File object and calls recurseDirs( ), which performs a recursive directory walk, collecting more information with each recursion. To distinguish ordinary files from directories, the return value is effectively a "tuple" of objects—a List holding ordinary files, and another holding directories. The fields are intentionally made public here, because the point of TreeInfo is simply to collect the objects together—if you were just returning a List, you wouldn’t make it private, so just because you are returning a pair of objects, it doesn’t mean you need to make them private. Note that TreeInfo implements Iterable<File>, which produces the files, so that you have a "default iteration" over the file list, whereas you can specify directories by saying ".dirs".

The TreeInfo.toString( ) method uses a "pretty printer" class so that the output is easier to view. The default toString( ) methods for containers print all the elements for a container on a single line. For large collections this can become difficult to read, so you may want to use an alternate formatting. Here’s a tool that adds newlines and indents each element:

//: net/mindview/util/PPrint.java

// Pretty-printer for collections

package net.mindview.util;

import java.util.*;

public class PPrint {

public static String pformat(Collection<?> c) {
    if(c.size() == 0) return "[]";
    StringBuilder result = new StringBuilder("[");
    for(Object elem : c) {
      if(c.size() != 1)
        result.append("\n  ");
      result.append(elem);
    }
    if(c.size() != 1)
      result.append("\n");
    result.append("]");
    return result.toString();
  }
  public static void pprint(Collection<?> c) {
    System.out.println(pformat(c));
  }
  public static void pprint(Object[] c) {
    System.out.println(pformat(Arrays.asList(c)));
  }
} ///:~

The Directory utility is placed in the net.mindview.util package so that it is easily available. Here’s a sample of how you can use it:

//: io/DirectoryDemo.java

// Sample use of Directory utilities

import java.io.*;


import net.mindview.util.*;

import static net.mindview.util.Print.*;

public class DirectoryDemo {

public static void main(String[] args) {

// All directories:

PPrint.pprint(Directory.walk(".").dirs);

// All files beginning with ‘T’

for(File file : Directory.local(".", "T.*"))

print(file);

print(" -");

// All Java files beginning with ‘T’:

for(File file : Directory.walk(".", "T.*\\.java"))

print(file);

print("======================");

// Class files containing "Z" or "z":

for(File file : Directory.walk(".",".*[Zz].*\\.class"))

print(file);
  }
} /* (Execute to see output) *///:~

You may need to refresh your knowledge of regular expressions from the Strings chapter in order to understand the second arguments in local( ) and walk( ).

We can take this a step further and create a tool that will walk directories and process the

files within them according to a Strategy object (this is another example of the Strategy

design pattern):

//: net/mindview/util/ProcessFiles.java

package net.mindview.util;

import java.io.*;

public class ProcessFiles {

public interface Strategy {

void process(File file);

}

private Strategy strategy;

private String ext;

public ProcessFiles(Strategy strategy, String ext) {
    this.strategy = strategy;
    this.ext = ext;
  }
  public void start(String[] args) {
    try {
      if(args.length == 0)
        processDirectoryTree(new File("."));
      else
        for(String arg : args) {
          File fileArg = new File(arg);
          if(fileArg.isDirectory())
            processDirectoryTree(fileArg);
          else {
            // Allow user to leave off extension:
            if(!arg.endsWith("." + ext))
              arg += "." + ext;
            strategy.process(new File(arg).getCanonicalFile());
          }
        }
    } catch(IOException e) {
      throw new RuntimeException(e);
    }
  }
  public void
  processDirectoryTree(File root) throws IOException {

for(File file : Directory.walk(

root.getAbsolutePath(), ".*\\." + ext))

strategy.process(file.getCanonicalFile());

}

// Demonstration of how to use it:

public static void main(String[] args) {

new ProcessFiles(new ProcessFiles.Strategy() {

public void process(File file) {

System.out.println(file);

}

}, "java").start(args);

}

} /* (Execute to see output) *///:~

The Strategy interface is nested within ProcessFiles, so that if you want to implement it you must implement ProcessFiles.Strategy, which provides more context for the reader. ProcessFiles does all the work of finding the files that have a particular extension (the ext argument to the constructor), and when it finds a matching file, it simply hands it to the Strategy object (which is also an argument to the constructor).

If you don’t give it any arguments, ProcessFiles assumes that you want to traverse all the directories off of the current directory. You can also specify a particular file, with or without the extension (it will add the extension if necessary), or one or more directories.

In main( ) you see a basic example of how to use the tool; it prints the names of all the Java source files according to the command line that you provide.

Exercise 4: (2) Use Directory.walk( ) to sum the sizes of all files in a directory tree whose names match a particular regular expression.

Exercise 5: (1) Modify ProcessFiles.java so that it matches a regular expression rather than a fixed extension.

Checking for and creating directories

The File class is more than just a representation for an existing file or directory. You can also use a File object to create a new directory or an entire directory path if it doesn’t exist. You can also look at the characteristics of files (size, last modification date, read/write), see whether a File object represents a file or a directory, and delete a file. The following example shows some of the other methods available with the File class (see the JDK documentation from http://java.sun.com for the full set):


//: io/MakeDirectories.java

// Demonstrates the use of the File class to

// create directories and manipulate files

// {Args: MakeDirectoriesTest}

import java.io.*;

public class MakeDirectories {

private static void usage() {

System.err.println(

"Usage:MakeDirectories path1 \n" +

"Creates each path\n" +

"Usage:MakeDirectories -d path1 \n" +

"Deletes each path\n" +

"Usage:MakeDirectories -r path1 path2\n" +

"Renames from path1 to path2");

System.exit(1);

}

private static void fileData(File f) {

System.out.println(

"Absolute path: " + f.getAbsolutePath() +

"\n Can read: " + f.canRead() +

"\n Can write: " + f.canWrite() +
      "\n getName: " + f.getName() +
      "\n getPath: " + f.getPath() +
      "\n length: " + f.length() +
      "\n lastModified: " + f.lastModified());
  }
  public static void main(String[] args) {
    if(args.length < 1) usage();
    if(args[0].equals("-r")) {
      if(args.length != 3) usage();
      File
        old = new File(args[1]),
        rname = new File(args[2]);
      old.renameTo(rname);
      fileData(rname);
      return; // Exit main
    }
    boolean del = args[0].equals("-d");
    for(int i = del ? 1 : 0; i < args.length; i++) {
      File f = new File(args[i]);
      if(f.exists() && del) {
        System.out.println("deleting..." + f);
        f.delete();
      } else if(!f.exists() && !del) {
        f.mkdirs();
        System.out.println("created " + f);
      }
      fileData(f);
    }
  }
} /* Output:


Absolute path: d:\aaa-TIJ4\code\io\MakeDirectoriesTest
 Can read: true
 Can write: true
*///:~

In fileData( ) you can see various file investigation methods used to display information about the file or directory path.

The first method that’s exercised by main( ) is renameTo( ), which allows you to rename (or move) a file to an entirely new path represented by the argument, which is another File object. This also works with directories of any length.

If you experiment with the preceding program, you’ll find that you can make a directory path of any complexity, because mkdirs( ) will do all the work for you.
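As a small sketch of this (the directory names here are arbitrary), a single mkdirs( ) call creates every missing level of the path:

```java
import java.io.File;

public class MakeTree {
  // mkdirs() creates the target directory plus any missing parents:
  public static boolean makeTree(String path) {
    return new File(path).mkdirs();
  }
  public static void main(String[] args) {
    String path = "MakeTreeTest/a/b/c";
    System.out.println("created: " + makeTree(path));
    System.out.println("is directory: " + new File(path).isDirectory());
  }
}
```

Note that mkdirs( ) returns false if the path already exists, so the boolean result tells you whether anything was actually created, not whether the directory exists.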

Exercise 6: (5) Use ProcessFiles to find all the Java source-code files in a particular directory subtree that have been modified after a particular date.

Input and output

Programming language I/O libraries often use the abstraction of a stream, which represents any data source or sink as an object capable of producing or receiving pieces of data. The stream hides the details of what happens to the data inside the actual I/O device.

The Java library classes for I/O are divided by input and output, as you can see by looking at the class hierarchy in the JDK documentation. Through inheritance, everything derived from the InputStream or Reader classes has basic methods called read( ) for reading a single byte or an array of bytes. Likewise, everything derived from the OutputStream or Writer classes has basic methods called write( ) for writing a single byte or an array of bytes. However, you won’t generally use these methods; they exist so that other classes can use them—these other classes provide a more useful interface. Thus, you’ll rarely create your stream object by using a single class, but instead will layer multiple objects together to provide your desired functionality (this is the Decorator design pattern, as you shall see in this section). The fact that you create more than one object to produce a single stream is the primary reason that Java’s I/O library is confusing.
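As a small sketch of such layering (using a byte array as the underlying "device" so the example is self-contained), each decorator wraps the previous object while keeping the same basic stream interface:

```java
import java.io.*;

public class LayeredStreams {
  // Wrap a "core" byte sink/source in Buffered decorators; the
  // decorator keeps the InputStream/OutputStream interface intact.
  public static byte[] roundTrip(byte[] data) {
    try {
      ByteArrayOutputStream core = new ByteArrayOutputStream();
      OutputStream out = new BufferedOutputStream(core);
      out.write(data);
      out.flush(); // buffered bytes only reach 'core' after a flush
      InputStream in = new BufferedInputStream(
        new ByteArrayInputStream(core.toByteArray()));
      byte[] result = new byte[data.length];
      in.read(result);
      return result;
    } catch(IOException e) {
      throw new RuntimeException(e);
    }
  }
  public static void main(String[] args) {
    System.out.println(new String(roundTrip("decorated".getBytes())));
  }
}
```

Swapping the ByteArray classes for FileOutputStream/FileInputStream changes the destination without changing the layering, which is exactly the flexibility the Decorator pattern is buying here.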

It’s helpful to categorize the classes by their functionality. In Java 1.0, the library designers started by deciding that all classes that had anything to do with input would be inherited from InputStream, and all classes that were associated with output would be inherited from OutputStream.


As is the practice in this book, I will attempt to provide an overview of the classes, but assume that you will use the JDK documentation to determine all the details, such as the exhaustive list of methods of a particular class.

Types of InputStream

InputStream’s job is to represent classes that produce input from different sources. These sources can be:

1. An array of bytes
2. A String object
3. A file
4. A "pipe," which works like a physical pipe: You put things in at one end and they come out the other
5. A sequence of other streams, so you can collect them together into a single stream
6. Other sources, such as an Internet connection (This is covered in Thinking in Enterprise Java, available at www.MindView.net.)

Each of these has an associated subclass of InputStream. In addition, the FilterInputStream is also a type of InputStream, to provide a base class for "decorator" classes that attach attributes or useful interfaces to input streams. This is discussed later.

Table I/O-1. Types of InputStream

ByteArrayInputStream
  Function: Allows a buffer in memory to be used as an InputStream.
  Constructor arguments: The buffer from which to extract the bytes.
  How to use it: As a source of data: Connect it to a FilterInputStream object to provide a useful interface.

StringBufferInputStream
  Function: Converts a String into an InputStream.
  Constructor arguments: A String.
  How to use it: As a source of data: Connect it to a FilterInputStream object to provide a useful interface.

FileInputStream
  Function: For reading information from a file.
  Constructor arguments: A String representing the file name, or a File or FileDescriptor object.
  How to use it: As a source of data: Connect it to a FilterInputStream object to provide a useful interface.

PipedInputStream
  Function: Produces the data that’s being written to the associated PipedOutputStream. Implements the "piping" concept.
  Constructor arguments: PipedOutputStream.
  How to use it: As a source of data in multithreading: Connect it to a FilterInputStream object to provide a useful interface.

SequenceInputStream
  Function: Converts two or more InputStream objects into a single InputStream.
  Constructor arguments: Two InputStream objects or an Enumeration for a container of InputStream objects.
  How to use it: As a source of data: Connect it to a FilterInputStream object to provide a useful interface.

FilterInputStream
  Function: Abstract class that is an interface for decorators that provide useful functionality to the other InputStream classes. See Table I/O-3.
  Constructor arguments: See Table I/O-3.
  How to use it: See Table I/O-3.

Types of OutputStream

This category includes the classes that decide where your output will go: an array of bytes (but not a String—presumably, you can create one using the array of bytes), a file, or a "pipe."

In addition, the FilterOutputStream provides a base class for "decorator" classes that attach attributes or useful interfaces to output streams. This is discussed later.

Table I/O-2. Types of OutputStream

ByteArrayOutputStream
  Function: Creates a buffer in memory. All the data that you send to the stream is placed in this buffer.
  Constructor arguments: Optional initial size of the buffer.
  How to use it: To designate the destination of your data: Connect it to a FilterOutputStream object to provide a useful interface.

FileOutputStream
  Function: For sending information to a file.
  Constructor arguments: A String representing the file name, or a File or FileDescriptor object.
  How to use it: To designate the destination of your data: Connect it to a FilterOutputStream object to provide a useful interface.

PipedOutputStream
  Function: Any information you write to this automatically ends up as input for the associated PipedInputStream. Implements the "piping" concept.
  Constructor arguments: PipedInputStream.
  How to use it: To designate the destination of your data for multithreading: Connect it to a FilterOutputStream object to provide a useful interface.

FilterOutputStream
  Function: Abstract class that is an interface for decorators that provide useful functionality to the other OutputStream classes. See Table I/O-4.
  Constructor arguments: See Table I/O-4.
  How to use it: See Table I/O-4.

Adding attributes and useful interfaces

Decorators were introduced in the Generics chapter, on page 717. The Java I/O library requires many different combinations of features, and this is the justification for using the Decorator design pattern.1 The reason for the existence of the "filter" classes in the Java I/O library is that the abstract "filter" class is the base class for all the decorators. A decorator must have the same interface as the object it decorates, but the decorator can also extend the interface, which occurs in several of the "filter" classes.

There is a drawback to Decorator, however. Decorators give you much more flexibility while you’re writing a program (since you can easily mix and match attributes), but they add complexity to your code. The reason that the Java I/O library is awkward to use is that you must create many classes—the "core" I/O type plus all the decorators—in order to get the single I/O object that you want.

The classes that provide the decorator interface to control a particular InputStream or OutputStream are FilterInputStream and FilterOutputStream, which don’t have very intuitive names. FilterInputStream and FilterOutputStream are derived from the base classes of the I/O library, InputStream and OutputStream, which is a key requirement of the decorator (so that it provides the common interface to all the objects that are being decorated).

      

1. It’s not clear that this was a good design decision, especially compared to the simplicity of I/O libraries in other languages. But it’s the justification for the decision.


Reading from an InputStream with FilterInputStream

The FilterInputStream classes accomplish two significantly different things. DataInputStream allows you to read different types of primitive data as well as String objects. (All the methods start with "read," such as readByte( ), readFloat( ), etc.) This, along with its companion DataOutputStream, allows you to move primitive data from one place to another via a stream. These "places" are determined by the classes in Table I/O-1.
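As a small self-contained sketch of this pairing (using a byte array rather than a file as the "place"), the reads must mirror the writes exactly, in the same order:

```java
import java.io.*;

public class DataRoundTrip {
  // Write primitives portably with DataOutputStream, then read
  // them back with DataInputStream in exactly the same order:
  public static String roundTrip(int i, double d, String s) {
    try {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      DataOutputStream out = new DataOutputStream(bytes);
      out.writeInt(i);
      out.writeDouble(d);
      out.writeUTF(s);
      out.close();
      DataInputStream in = new DataInputStream(
        new ByteArrayInputStream(bytes.toByteArray()));
      return in.readInt() + " " + in.readDouble() + " " + in.readUTF();
    } catch(IOException e) {
      throw new RuntimeException(e);
    }
  }
  public static void main(String[] args) {
    System.out.println(roundTrip(47, 3.14, "portable"));
  }
}
```

Because the format is defined by the stream classes rather than the platform, the bytes written on one machine can be reconstructed on any other, which is what "portable fashion" means here.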

The remaining FilterInputStream classes modify the way an InputStream behaves internally: whether it’s buffered or unbuffered, whether it keeps track of the lines it’s reading (allowing you to ask for line numbers or set the line number), and whether you can push back a single character. The last two classes look a lot like support for building a compiler (they were probably added to support the experiment of "building a Java compiler in Java"), so you probably won’t use them in general programming.

You’ll need to buffer your input almost every time, regardless of the I/O device you’re connecting to, so it would have made more sense for the I/O library to have a special case (or simply a method call) for unbuffered input rather than buffered input.

Table I/O-3. Types of FilterInputStream

DataInputStream
  Function: Used in concert with DataOutputStream, so you can read primitives (int, char, long, etc.) from a stream in a portable fashion.
  Constructor arguments: InputStream.
  How to use it: Contains a full interface to allow you to read primitive types.

BufferedInputStream
  Function: Use this to prevent a physical read every time you want more data. You’re saying, "Use a buffer."
  Constructor arguments: InputStream, with optional buffer size.
  How to use it: This doesn’t provide an interface per se. It just adds buffering to the process. Attach an interface object.

LineNumberInputStream
  Function: Keeps track of line numbers in the input stream; you can call getLineNumber( ) and setLineNumber(int).
  Constructor arguments: InputStream.
  How to use it: This just adds line numbering, so you’ll probably attach an interface object.

PushbackInputStream
  Function: Has a one-byte pushback buffer so that you can push back the last character read.
  Constructor arguments: InputStream.
  How to use it: Generally used in the scanner for a compiler. You probably won’t use this.

Writing to an OutputStream with FilterOutputStream

The complement to DataInputStream is DataOutputStream, which formats each of the primitive types and String objects onto a stream in such a way that any DataInputStream, on any machine, can read them.

The two important methods in PrintStream are print( ) and println( ), which are overloaded to print all the various types. The difference between print( ) and println( ) is that the latter adds a newline when it’s done.

PrintStream can be problematic because it traps all IOExceptions (you must explicitly test the error status with checkError( ), which returns true if an error has occurred). Also, PrintStream doesn’t internationalize properly and doesn’t handle line breaks in a platform-independent way. These problems are solved with PrintWriter, described later.

BufferedOutputStream is a modifier and tells the stream to use buffering so you don’t get a physical write every time you write to the stream. You’ll probably always want to use this when doing output.

Table I/O-4. Types of FilterOutputStream

DataOutputStream
  Function: Used in concert with DataInputStream so you can write primitives (int, char, long, etc.) to a stream in a portable fashion.
  Constructor arguments: OutputStream.
  How to use it: Contains a full interface to allow you to write primitive types.

PrintStream
  Function: For producing formatted output. While DataOutputStream handles the storage of data, PrintStream handles display.
  Constructor arguments: OutputStream, with an optional boolean indicating that the buffer is flushed with every newline.
  How to use it: Should be the "final" wrapping for your OutputStream object. You’ll probably use this a lot.

BufferedOutputStream
  Function: Use this to prevent a physical write every time you send a piece of data. You’re saying, "Use a buffer." You can call flush( ) to flush the buffer.
  Constructor arguments: OutputStream, with optional buffer size.
  How to use it: This doesn’t provide an interface per se. It just adds buffering to the process. Attach an interface object.

Readers & Writers

Java 1.1 made significant modifications to the fundamental I/O stream library. When you see the Reader and Writer classes, your first thought (like mine) might be that these were meant to replace the InputStream and OutputStream classes. But that’s not the case. Although some aspects of the original streams library are deprecated (if you use them you will receive a warning from the compiler), the InputStream and OutputStream classes still provide valuable functionality in the form of byte-oriented I/O, whereas the Reader and Writer classes provide Unicode-compliant, character-based I/O. In addition:

1. Java 1.1 added new classes into the InputStream and OutputStream hierarchy, so it’s obvious those hierarchies weren’t being replaced.

2. There are times when you must use classes from the "byte" hierarchy in combination with classes in the "character" hierarchy. To accomplish this, there are "adapter" classes: InputStreamReader converts an InputStream to a Reader, and OutputStreamWriter converts an OutputStream to a Writer.
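A minimal sketch of the adapter at work (the input bytes come from a ByteArrayInputStream so the example is self-contained); the byte stream is adapted to a Reader, which a BufferedReader then decorates to gain readLine( ):

```java
import java.io.*;

public class AdapterDemo {
  // InputStreamReader adapts a byte-oriented InputStream into a
  // character-oriented Reader, which BufferedReader then decorates:
  public static String readLines(InputStream byteSource) {
    try {
      BufferedReader in = new BufferedReader(
        new InputStreamReader(byteSource));
      StringBuilder sb = new StringBuilder();
      String line;
      while((line = in.readLine()) != null)
        sb.append(line).append("|");
      return sb.toString();
    } catch(IOException e) {
      throw new RuntimeException(e);
    }
  }
  public static void main(String[] args) {
    System.out.println(readLines(
      new ByteArrayInputStream("adapted\nstream".getBytes())));
  }
}
```

Replacing the ByteArrayInputStream with System.in is the common way to read console input line by line with the pre-Scanner libraries.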

The most important reason for the Reader and Writer hierarchies is internationalization. The old I/O stream hierarchy supports only 8-bit byte streams and doesn’t handle the 16-bit Unicode characters well. Since Unicode is used for internationalization (and Java’s native char is 16-bit Unicode), the Reader and Writer hierarchies were added to support Unicode in all I/O operations. In addition, the new libraries are designed for faster operations than the old.

Sources and sinks of data

Almost all of the original Java I/O stream classes have corresponding Reader and Writer classes to provide native Unicode manipulation. However, there are some places where the byte-oriented InputStreams and OutputStreams are the correct solution; in particular, the java.util.zip libraries are byte-oriented rather than char-oriented. So the most sensible approach to take is to try to use the Reader and Writer classes whenever you can. You’ll discover the situations when you have to use the byte-oriented libraries because your code won’t compile.

Here is a table that shows the correspondence between the sources and sinks of information (that is, where the data physically comes from or goes to) in the two hierarchies.


Sources & sinks:

Java 1.0 class                         Corresponding Java 1.1 class
InputStream                            Reader (adapter: InputStreamReader)
OutputStream                           Writer (adapter: OutputStreamWriter)
FileInputStream                        FileReader
FileOutputStream                       FileWriter
StringBufferInputStream (deprecated)   StringReader
(no corresponding class)               StringWriter
ByteArrayInputStream                   CharArrayReader
ByteArrayOutputStream                  CharArrayWriter
PipedInputStream                       PipedReader
PipedOutputStream                      PipedWriter

In general, you’ll find that the interfaces for the two different hierarchies are similar, if not identical.
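As a small illustration of that similarity (the mirror( ) helper is invented for this sketch), the CharArray pair is used exactly the way the ByteArray pair is used, but moves chars instead of bytes:

```java
import java.io.*;

public class CharArrayMirror {
  // CharArrayWriter/CharArrayReader mirror the byte-oriented
  // ByteArrayOutputStream/ByteArrayInputStream pair almost
  // method-for-method, but move chars instead of bytes:
  public static String mirror(String s) {
    try {
      CharArrayWriter sink = new CharArrayWriter();
      sink.write(s);
      CharArrayReader source = new CharArrayReader(sink.toCharArray());
      StringBuilder result = new StringBuilder();
      int c;
      while((c = source.read()) != -1)
        result.append((char)c);
      return result.toString();
    } catch(IOException e) {
      throw new RuntimeException(e);
    }
  }
  public static void main(String[] args) {
    System.out.println(mirror("Unicode-safe"));
  }
}
```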

Modifying stream behavior

For InputStreams and OutputStreams, streams were adapted for particular needs using "decorator" subclasses of FilterInputStream and FilterOutputStream. The Reader and Writer class hierarchies continue the use of this idea—but not exactly.

In the following table, the correspondence is a rougher approximation than in the previous table The difference is because of the class organization; although

BufferedOutputStream is a subclass of FilterOutputStream, BufferedWriter is not a

subclass of FilterWriter (which, even though it is abstract, has no subclasses and so

appears to have been put in either as a placeholder or simply so you don’t wonder where it is) However, the interfaces to the classes are quite a close match

Filters:

Java 1.0 class         Corresponding Java 1.1 class
FilterInputStream      FilterReader
FilterOutputStream     FilterWriter (abstract class with no subclasses)
BufferedInputStream    BufferedReader (also has readLine( ))
BufferedOutputStream   BufferedWriter
DataInputStream        Use DataInputStream (except when you need to use readLine( ), when you should use a BufferedReader)

There's one direction that's quite clear: Whenever you want to use readLine( ), you shouldn't do it with a DataInputStream (this is met with a deprecation message at compile time), but instead use a BufferedReader. Other than this, DataInputStream is still a "preferred" member of the I/O library.

To make the transition to using a PrintWriter easier, it has constructors that take any OutputStream object as well as Writer objects. PrintWriter's formatting interface is virtually the same as PrintStream's.

In Java SE5, PrintWriter constructors were added to simplify the creation of files when writing output, as you shall see shortly.

One PrintWriter constructor also has an option to perform automatic flushing, which happens after every println( ) if the constructor flag is set.
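A short sketch of these two points (the class name and sample values are mine, not from the book): a PrintWriter can wrap any Writer and give it PrintStream-style formatting, and the two-argument OutputStream constructor enables autoflush:

```java
// Sketch: PrintWriter formatting over a Writer, plus the autoflush flag.
import java.io.*;

public class PrintWriterDemo {
  // PrintStream-style formatting, captured in a StringWriter:
  public static String format(String name, int value) {
    StringWriter sw = new StringWriter();
    PrintWriter out = new PrintWriter(sw);
    out.printf("%s = %d", name, value);
    out.flush(); // push buffered chars into the StringWriter
    return sw.toString();
  }
  public static void main(String[] args) {
    // Second argument true: flush automatically after every println():
    PrintWriter console = new PrintWriter(System.out, true);
    console.println(format("x", 42));
  }
}
```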

Unchanged classes

Some classes were left unchanged between Java 1.0 and Java 1.1.

Java 1.0 classes without corresponding Java 1.1 classes:
DataOutputStream
File
RandomAccessFile
SequenceInputStream

DataOutputStream, in particular, is used without change, so for storing and retrieving data in a transportable format, you use the InputStream and OutputStream hierarchies.
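The transportable-format idea can be sketched as follows (a byte array stands in for a file here; the class and method names are mine): DataOutputStream writes primitives in a fixed, platform-independent layout, and DataInputStream reads them back in the same order.

```java
// Sketch: round-tripping primitives with DataOutputStream/DataInputStream.
import java.io.*;

public class DataRoundTrip {
  public static byte[] write(double d, String s) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeDouble(d); // always 8 bytes, big-endian
    out.writeUTF(s);    // length-prefixed modified UTF-8
    out.close();
    return bytes.toByteArray();
  }
  public static String read(byte[] data) throws IOException {
    DataInputStream in = new DataInputStream(
        new ByteArrayInputStream(data));
    double d = in.readDouble(); // must read back in the same order
    String s = in.readUTF();
    in.close();
    return d + " " + s;
  }
  public static void main(String[] args) throws IOException {
    System.out.println(read(write(3.14, "pi"))); // prints: 3.14 pi
  }
}
```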


Off by itself:

RandomAccessFile

RandomAccessFile is used for files containing records of known size so that you can move from one record to another using seek( ), then read or change the records. The records don't have to be the same size; you just have to determine how big they are and where they are placed in the file.

At first it's a little bit hard to believe that RandomAccessFile is not part of the InputStream or OutputStream hierarchy. However, it has no association with those hierarchies other than that it happens to implement the DataInput and DataOutput interfaces (which are also implemented by DataInputStream and DataOutputStream). It doesn't even use any of the functionality of the existing InputStream or OutputStream classes; it's a completely separate class, written from scratch, with all of its own (mostly native) methods. The reason for this may be that RandomAccessFile has essentially different behavior than the other I/O types, since you can move forward and backward within a file. In any event, it stands alone, as a direct descendant of Object.

Essentially, a RandomAccessFile works like a DataInputStream pasted together with a DataOutputStream, along with the methods getFilePointer( ) to find out where you are in the file, seek( ) to move to a new point in the file, and length( ) to determine the maximum size of the file. In addition, the constructors require a second argument (identical to fopen( ) in C) indicating whether you are just randomly reading ("r") or reading and writing ("rw"). There's no support for write-only files, which could suggest that RandomAccessFile might have worked well if it were inherited from DataInputStream.

The seeking methods are available only in RandomAccessFile, which works for files only. BufferedInputStream does allow you to mark( ) a position (whose value is held in a single internal variable) and reset( ) to that position, but this is limited and not very useful.
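The limitation is easy to see in a sketch (the class name and data are mine; an in-memory stream stands in for a file): mark( ) remembers exactly one position, and reset( ) can only jump back to it.

```java
// Sketch: BufferedInputStream's single-position mark()/reset().
import java.io.*;

public class MarkResetDemo {
  // Read the first byte twice by marking, reading, and resetting:
  public static String peekTwice(byte[] data) throws IOException {
    BufferedInputStream in = new BufferedInputStream(
        new ByteArrayInputStream(data));
    in.mark(data.length); // remember this one position
    int first = in.read();
    in.reset();           // jump back to the mark
    int again = in.read();
    in.close();
    return "" + (char)first + (char)again;
  }
  public static void main(String[] args) throws IOException {
    System.out.println(peekTwice("AB".getBytes())); // prints: AA
  }
}
```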

Most, if not all, of the RandomAccessFile functionality is superseded as of JDK 1.4 with the nio memory-mapped files, which will be described later in this chapter.
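The record-oriented use described above can be sketched like this (a temporary file and the record layout are my own choices for illustration): fixed-size double records are written in "rw" mode, and seek( ) jumps straight to any record.

```java
// Sketch: fixed-size records in a RandomAccessFile, accessed via seek().
import java.io.*;

public class RandomAccessDemo {
  static final int RECORD_SIZE = 8; // one double per record

  public static void writeRecords(File f, double[] records)
  throws IOException {
    RandomAccessFile raf = new RandomAccessFile(f, "rw");
    for(double d : records)
      raf.writeDouble(d);
    raf.close();
  }
  public static double readRecord(File f, int index) throws IOException {
    RandomAccessFile raf = new RandomAccessFile(f, "r");
    raf.seek((long)index * RECORD_SIZE); // jump straight to the record
    double d = raf.readDouble();
    raf.close();
    return d;
  }
  public static void main(String[] args) throws IOException {
    File f = File.createTempFile("records", ".bin");
    f.deleteOnExit();
    writeRecords(f, new double[]{ 0.0, 1.5, 3.0, 4.5 });
    System.out.println(readRecord(f, 2)); // prints: 3.0
  }
}
```

Because "rw" allows both reading and writing, a record can also be overwritten in place by seeking to it and calling writeDouble( ).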

Typical uses of I/O streams

Although you can combine the I/O stream classes in many different ways, you'll probably just use a few combinations. The following examples can be used as a basic reference for typical I/O usage.

In these examples, exception handling will be simplified by passing exceptions out to the console, but this is appropriate only in small examples and utilities. In your own code you'll want to consider more sophisticated error-handling approaches.

Buffered input file

To open a file for character input, you use a FileReader with a String or a File object as the file name. For speed, you'll want that file to be buffered, so you give the resulting reference to the constructor for a BufferedReader. Since BufferedReader also provides the readLine( ) method, this is your final object and the interface you read from. When readLine( ) returns null, you're at the end of the file.

//: io/BufferedInputFile.java
import java.io.*;

public class BufferedInputFile {
  // Throw exceptions to console:
  public static String
  read(String filename) throws IOException {
    // Reading input by lines:
    BufferedReader in = new BufferedReader(
      new FileReader(filename));
    String s;
    StringBuilder sb = new StringBuilder();
    while((s = in.readLine()) != null)
      sb.append(s + "\n");
    in.close();
    return sb.toString();
  }
  public static void main(String[] args)
  throws IOException {
    System.out.print(read("BufferedInputFile.java"));
  }
} /* (Execute to see output) *///:~

The StringBuilder sb is used to accumulate the entire contents of the file (including newlines that must be added, since readLine( ) strips them off). Finally, close( ) is called to close the file.2

Exercise 7: (2) Open a text file so that you can read the file one line at a time. Read each line as a String and place that String object into a LinkedList. Print all of the lines in the LinkedList in reverse order.

Exercise 8: (1) Modify Exercise 7 so that the name of the file you read is provided as a command-line argument.

Exercise 9: (1) Modify Exercise 8 to force all the lines in the LinkedList to uppercase and send the results to System.out.

Exercise 10: (2) Modify Exercise 8 to take additional command-line arguments of words to find in the file. Print all lines in which any of the words match.

Exercise 11: (2) In the innerclasses/GreenhouseController.java example, GreenhouseController contains a hard-coded set of events. Change the program so that it reads the events and their relative times from a text file. ((Difficulty level 8): Use a Factory Method design pattern to build the events; see Thinking in Patterns (with Java) at www.MindView.net.)

Input from memory

Here, the String result from BufferedInputFile.read( ) is used to create a StringReader. Then read( ) is used to read each character one at a time and send it out to the console.

2 ... the Java designers originally envisioned it (that is to say, it's irreparably broken), so the only safe approach is to explicitly call close( ) for files.
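The idea can be sketched as follows (the class name and the literal String are mine; the book's version feeds the StringReader with the result of BufferedInputFile.read( )):

```java
// Sketch: reading a String from memory, one character at a time,
// through a StringReader.
import java.io.*;

public class MemoryInputDemo {
  public static String readAll(Reader in) throws IOException {
    StringBuilder sb = new StringBuilder();
    int c;
    while((c = in.read()) != -1) // read() returns an int; -1 means end
      sb.append((char)c);
    return sb.toString();
  }
  public static void main(String[] args) throws IOException {
    System.out.print(readAll(new StringReader("input from memory")));
  }
}
```

Note that read( ) returns the next character as an int, so the result must be cast back to char for it to print properly.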
