27.3 Chemical Sensor Arrays and Pattern Recognition


The nanostructured materials synthesized were subsequently incorporated as sensing elements in an array system. A sensor array system (SAS) consists of three functional components that operate serially on a sample containing the analyte: a sample handler, an array of polymer sensors, and a signal-processing system (Fig. 27.5). The output of the SAS can identify an environmental toxin or biological molecule, estimate its concentration, or determine the characteristic properties of the compound present in air or other samples.

Fundamental to the SAS is the idea that each sensor in the array has a different sensitivity. For example, Compound No. 1 may produce a high response in one sensor and lower responses in the others, whereas Compound No. 2 may produce high readings in sensors other than the one that exhibits a significant response to Compound No. 1. What is important is that the pattern of response across the sensors is distinct for different compounds.


Figure 27.4 Schematic representation of metallic nanoparticles. [Reaction scheme; chemical structures not reproduced. Recoverable labels: pyromellitic dianhydride (PMDA) + 4,4'-oxydianiline (ODA) in organic solvents → poly(amic acid); TEA/MeOH in THF → polymer salt; electrodeposition in PBS (pH = 6.5) containing a metallic salt → metallic polycarboxylate ion; 105 °C, 48 h → metallic–poly(amic acid) composite film.]


This allows the system to identify an unknown agent by matching its pattern of sensor responses against a reference database.

Thus, each sensor in the array has a unique response profile to the spectrum of chemicals under test. The pattern of response across all sensors in the array is used to identify and/or characterize the analyte.
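As a minimal illustration of this pattern-matching idea (the sensor responses and library entries below are invented, not measured data), an unknown response pattern can be assigned to the closest reference pattern in a library:

import numpy as np

# Hypothetical reference library of normalized response patterns
# (rows = compounds, columns = the four sensors in the array).
library = {
    "Compound 1": np.array([0.90, 0.15, 0.20, 0.10]),
    "Compound 2": np.array([0.10, 0.80, 0.70, 0.25]),
}

def identify(response, library):
    # Return the library compound whose pattern is closest (Euclidean distance).
    return min(library, key=lambda name: np.linalg.norm(response - library[name]))

unknown = np.array([0.85, 0.20, 0.25, 0.05])  # pattern measured across the array
print(identify(unknown, library))             # -> Compound 1

In practice the comparison is performed by a trained classifier rather than a simple distance rule, as described in Section 27.3.1.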

We have further demonstrated the effective use of our nanostructured polymer arrays coupled with machine learning for the detection and classification of organophosphate (OP) nerve agent simulants. For organophosphates and volatile organics, we showed a significant 168 percent improvement in specificity and a 40.5 percent improvement in positive predictive value using the s2000 kernels at 100 percent and 98 percent sensitivities when compared to a commercial system [32, 33]. OP molecules that dissociate at the surface of the polymer electrode cause the change in sensor resistance. This chemisorption-driven process can change the electrical resistance considerably and makes the sensor array sensitive to OP over a wide range of concentrations. Since the sensor completely regains its original resistance upon OP cycling, the OP diffusion appears to be confined to the surface layer.
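For reference, the sensitivity, specificity, and positive predictive value quoted above are computed from the counts of true and false positives and negatives; a minimal sketch follows (the counts are invented purely for illustration, not our experimental data):

# Sensitivity, specificity, and positive predictive value (PPV) from a binary
# confusion matrix; the counts below are invented purely for illustration.
tp, fn, tn, fp = 98, 2, 85, 15

sensitivity = tp / (tp + fn)  # fraction of true OP exposures that are detected
specificity = tn / (tn + fp)  # fraction of clean samples correctly rejected
ppv = tp / (tp + fp)          # fraction of alarms that are true detections

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  PPV={ppv:.2f}")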

27.3.1 Data Processing, Pattern Recognition, and Support Vector Machines

The task of a sensor array system is to identify the presence of toxic environmental chemicals (TECs) in a sample and perhaps to estimate their concentration. This is achieved by means of signal processing and pattern recognition (Fig. 27.5). These two steps may be subdivided into four sequential stages: preprocessing, feature extraction, classification, and decision-making.

Preprocessing compensates for sensor drift, compresses the transient response of the sensor array, and reduces sample-to-sample variations. Typical techniques include manipulation of the sensor baseline, normalization of response ranges across all sensors in the array, and compression of sensor transients.
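A minimal sketch of such preprocessing, assuming each measurement is a vector of steady-state sensor resistances and a clean-air baseline is available (the fractional-change and max-normalization choices are illustrative, not the exact procedure used in our work):

import numpy as np

def preprocess(raw, baseline):
    # raw: (n_samples, n_sensors) steady-state resistances;
    # baseline: (n_sensors,) clean-air resistances of the same sensors.
    rel = (raw - baseline) / baseline                    # baseline manipulation: fractional change removes additive drift
    norm = rel / np.abs(rel).max(axis=1, keepdims=True)  # per-sample normalization: largest sensor response scaled to 1
    return norm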

Figure 27.5 Schematic diagram of a standard multiarray sensor system: sample handler → sensor array → signal-processing system.


Feature extraction has two purposes: to reduce the dimensionality of the measurement space and to extract the information relevant for pattern recognition. Feature extraction is generally performed with classical principal component analysis (PCA) and linear discriminant analysis (LDA). Principal component analysis finds projections of maximum variance and is the most widely used linear feature extraction technique. However, it is not optimal for classification, since it ignores the identity (class label) of the analyte examples in the database.

Linear discriminant analysis, on the other hand, uses the class label of each example: its goal is to find projections that maximize the separation between examples of different TEC agents while keeping examples of the same agent close together. For example, PCA may favor a projection that contains high-variance random noise, whereas LDA may favor a projection that contains subtle, but possibly crucial, agent-discriminatory information. LDA is therefore more appropriate for classification purposes.
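A brief sketch of both projections using scikit-learn on synthetic data (the array size, number of classes, and data are purely illustrative):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))      # 60 illustrative measurements from an 8-sensor array
y = np.repeat([0, 1, 2], 20)      # three hypothetical TEC classes

# PCA: unsupervised, keeps the directions of maximum variance (ignores y).
X_pca = PCA(n_components=2).fit_transform(X)

# LDA: supervised, keeps the directions that best separate the labeled classes.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)   # (60, 2) (60, 2)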

Once the sensor signal is projected onto an appropriate low-dimensional space, the classification stage can be trained to identify the patterns that are representative of each class of compound. The classical methods for performing the classification task are K-nearest neighbors (KNN), Bayesian classifiers, and artificial neural networks (ANN). A KNN classifier finds the examples in the TEC database that are closest to the unidentified agent and assigns the class represented by the majority of those examples. A Bayesian classifier first builds a probability density function for each agent class in the low-dimensional space; when presented with an unidentified compound, it picks the class that maximizes the precompiled probability distribution. The classifier produces an estimate of the class of an unknown sample along with an estimate of the confidence placed on the class assignment. However, the inability to provide error-free detection, the requirement for large training data sets, and the inability to minimize the true risk have led to the development of other processing approaches, such as support vector machines (SVMs).
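A short sketch of the two classical classifiers, again on synthetic low-dimensional projections (all data and parameters are illustrative):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
# Synthetic 2-D projections of three hypothetical TEC classes with shifted means.
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(30, 2)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# KNN: assign the class held by the majority of the k nearest training examples.
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# Gaussian (naive) Bayes: estimate a per-class density, then pick the most probable class.
gnb = GaussianNB().fit(X_train, y_train)

print("KNN accuracy:  ", knn.score(X_test, y_test))
print("Bayes accuracy:", gnb.score(X_test, y_test))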

Support vector machines are a new and radically different type of classifier, or "learning machine", that uses a hypothesis space of linear functions in a high-dimensional feature space. SVMs are generally trained with learning algorithms originating from optimization theory that implement a learning bias derived from statistical learning theory. The use of SVMs for computational intelligence is a recent development, and has so far been essentially unexplored for the analytical monitoring of TECs. Several reviews provide extensive background on the mathematical foundations of SVMs ([32, 33] and cited references). In the context of classifying TECs, the objective of an SVM is to construct an "optimal hyperplane" as the decision surface such that the margin of separation between two different chemical substances is maximized. SVMs are based on the following fundamental ideas:

(i) Structural/Empirical Risk Minimization (SRM/ERM); (ii) the Vapnik–Chervonenkis (VC) dimension; (iii) the constrained optimization problem; and (iv) the SVM decision rule.
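For reference, the standard soft-margin formulation of the optimal hyperplane and the resulting SVM decision rule can be written as follows (textbook notation, not reproduced from [32, 33]); C is the regularization constant, the xi_i are slack variables, phi is the feature map, and K its associated kernel:

\[
\begin{aligned}
&\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}}\;\; \tfrac{1}{2}\lVert \mathbf{w}\rVert^{2} + C\sum_{i=1}^{N}\xi_{i}
\quad\text{subject to}\quad
y_{i}\bigl(\mathbf{w}\cdot\boldsymbol{\phi}(\mathbf{x}_{i}) + b\bigr) \ge 1 - \xi_{i},\;\; \xi_{i} \ge 0,\\[4pt]
&f(\mathbf{x}) = \operatorname{sign}\!\Bigl(\sum_{i=1}^{N}\alpha_{i}\,y_{i}\,K(\mathbf{x}_{i},\mathbf{x}) + b\Bigr),
\end{aligned}
\]

where the alpha_i are the Lagrange multipliers obtained from the dual of the constrained optimization problem and y_i in {-1, +1} labels the two chemical classes.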


Properly designed SVMs should perform well on untested data because of their ability to generalize and to scale up to more complex problems. The fact that the margin does not depend on the input dimensionality means the method is immune to the curse of dimensionality. SVMs have been successfully applied to a variety of classification problems, including text categorization, handwritten digit recognition, gene expression analysis, and simple chemical and mixture recognition.

We have studied the integration of our sensor network with SVMs for the detection of TECs. This approach reduced the number of false negative errors by 173 percent, while making no false positive errors, when compared to the baseline performance [32, 33]. The reader may recall that, with classical training methods, obtaining ever larger sets of valid training data would sometimes (with a great deal of training experience) produce a better performing neural network (NN). This restriction does not apply to the structural risk minimization (SRM) principle and is the fundamental difference between training NNs and training SVMs. Finally, because SVMs minimize the true risk, they provide a global minimum.
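A minimal, hypothetical sketch of the evaluation step on synthetic data (the kernel choice and parameters here are illustrative and are not the s2000 kernels of [32, 33]): train an SVM on labeled array responses and count the false negatives on held-out samples.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Synthetic binary problem: 1 = OP simulant present, 0 = clean sample (illustrative only).
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 8)), rng.normal(1.5, 1.0, size=(100, 8))])
y = np.repeat([0, 1], 100)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)  # margin-maximizing classifier
pred = clf.predict(X_test)

false_negatives = int(np.sum((y_test == 1) & (pred == 0)))
false_positives = int(np.sum((y_test == 0) & (pred == 1)))
print(f"false negatives: {false_negatives}, false positives: {false_positives}")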

27.3.2 Integration of Sensor Array with Chromatographic Systems

We have previously shown that combining ECP with conventional instrumental techniques (such as chromatography and FIA) provides a suitable approach to the development of sensitive and reproducible analytical signals [46, 47].

Results obtained by integrating conducting polymer sensors with chromatographic analysis and FIA revealed a significant improvement in sensor performance and in the overall analysis with respect to time and selectivity [46–49].

In Section 27.3, we saw how sensor arrays and SVMs were utilized in the detection and classification of organophosphate nerve agent simulants. However, sensor arrays cannot quantitatively detect mixtures of compounds, because an SAS analyzes a mixture of compounds as if it were a one-component solution. To overcome this limitation, we have coupled sensor arrays to a gas chromatograph (GC). In this scenario, the components of the analyte mixture are first separated on the GC column according to their relative retention times and then detected using the multiarray polymer sensors. This new system showed a significant improvement over existing multisensor array systems in the analysis of mixtures [48]. We have also demonstrated how the hyphenated combination of gas chromatography and conducting polymer sensor arrays can be used effectively in the separation and detection of mixtures of volatile organic compounds [49].
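A conceptual sketch of the coupled analysis (the peak-picking rule and the classifier are hypothetical placeholders): the GC step turns a mixture into a sequence of time-resolved peaks, and the array response within each peak is classified individually.

import numpy as np

def classify_gc_peaks(times, responses, classify, threshold=0.1):
    # times: (n_times,) retention-time axis of the chromatogram.
    # responses: (n_times, n_sensors) array responses recorded at the GC outlet.
    # classify: callable mapping one n_sensor response vector to a compound label.
    total = responses.sum(axis=1)              # summed signal across the whole array
    peaks = []
    for i in range(1, len(total) - 1):
        # Crude local-maximum peak picking; a real system would use proper peak detection.
        if total[i] > threshold and total[i] >= total[i - 1] and total[i] >= total[i + 1]:
            peaks.append((times[i], classify(responses[i])))
    return peaks                               # one (retention_time, label) pair per detected peak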

Other workers have also reported research findings on GC–sensor arrays. A good example is a prototype portable GC that combines a multi-adsorbent preconcentrator, a tandem-column separation stage, and a detection stage consisting of an integrated array of polymer-coated surface acoustic wave (SAW) devices [50–54]. In that report, the determination of vapor mixtures of common indoor air contaminants, including 2-propanol, 3-methylbutanol, and 1-octen-3-ol, was demonstrated.


The advantages of sensor arrays include the fact that they do not require the auxiliary gases and hazardous materials commonly needed by classical GC detectors, such as flammable hydrogen gas for the FID or the radioactive Ni-63 source in the ECD. Most sensor arrays are small, leading to small dead volumes and increased sensitivity. The small size also has inherent advantages for the miniaturization of chromatography–sensor array systems. Furthermore, sensor arrays can be used in conjunction with machine-learning programs.
