Contents
I. Computational Statistics
1. Computational Statistics: An Introduction
1.1 Computational Statistics and Data Analysis
1.2 The Emergence of a Field of Computational Statistics
1.2.1 Early Developments in Statistical Computing
1.2.2 Early Conferences and Formation of Learned Societies
1.2.3 The PC
1.2.4 The Cross Currents of Computational Statistics
1.2.5 Literature
1.3 Why This Handbook
1.3.1 Summary and Overview; Part II: Statistical Computing
1.3.2 Summary and Overview; Part III: Statistical Methodology
1.3.3 Summary and Overview; Part IV: Selected Applications
1.3.4 The Ehandbook
1.3.5 Future Handbooks in Computational Statistics
References
II. Statistical Computing
1. Basic Computational Algorithms
1.1 Computer Arithmetic
1.1.1 Integer Arithmetic
1.1.2 Floating Point Arithmetic
1.1.3 Cancellation
1.1.4 Accumulated Roundoff Error
1.1.5 Interval Arithmetic
1.2 Algorithms
1.2.1 Iterative Algorithms
1.2.2 Iterative Algorithms for Optimization and Nonlinear Equations
References
2. Random Number Generation
2.1 Introduction
2.2 Uniform Random Number Generators
2.2.1 Physical Devices
2.2.2 Generators Based on a Deterministic Recurrence
2.2.3 Quality Criteria
2.2.4 Statistical Testing
2.2.5 Cryptographically Strong Generators
2.3 Linear Recurrences Modulo m
2.3.1 The Multiple Recursive Generator
2.3.2 The Lattice Structure
2.3.3 MRG Implementation Techniques
2.3.4 Combined MRGs and LCGs
2.3.5 Jumping Ahead
2.3.6 Linear Recurrences With Carry
2.4 Generators Based on Recurrences Modulo 2
2.4.1 A General Framework
2.4.2 Measures of Uniformity
2.4.3 Lattice Structure in Spaces of Polynomials and Formal Series
2.4.4 The LFSR Generator
2.4.5 The GFSR and Twisted GFSR
2.4.6 Combined Linear Generators Over F2
2.5 Nonlinear RNGs
2.6 Examples of Statistical Tests
2.7 Available Software and Recommendations
2.8 Non-uniform Random Variate Generation
2.8.1 Inversion
2.8.2 The Alias Method
2.8.3 Kernel Density Estimation and Generation
2.8.4 The Rejection Method
2.8.5 Thinning for Point Processes with Time-varying Rates
2.8.6 The Ratio-of-Uniforms Method
2.8.7 Composition and Convolution
2.8.8 Other Special Techniques
References
3. Markov Chain Monte Carlo Technology
3.1 Introduction
3.1.1 Organization
3.2 Markov Chains
3.2.1 Definitions and Results
3.2.2 Computation of Numerical Accuracy and Inefficiency Factor
3.3 Metropolis-Hastings Algorithm
3.3.1 Convergence Results
3.3.2 Example
3.3.3 Multiple-Block M-H Algorithm
3.4 The Gibbs Sampling Algorithm
3.4.1 The Algorithm
3.4.2 Invariance of the Gibbs Markov Chain
3.4.3 Sufficient Conditions for Convergence
3.4.4 Example: Simulating a Truncated Multivariate Normal
3.5 MCMC Sampling with Latent Variables
3.6 Estimation of Density Ordinates
3.7 Sampler Performance and Diagnostics
3.8 Strategies for Improving Mixing
3.8.1 Choice of Blocking
3.8.2 Tuning the Proposal Density
3.8.3 Other Strategies
3.9 Concluding Remarks
References
4. Numerical Linear Algebra
4.1 Matrix Decompositions
4.1.1 Cholesky Decomposition
4.1.2 LU Decomposition
4.1.3 QR Decomposition
4.1.4 Singular Value Decomposition
4.1.5 Matrix Inversion
4.2 Direct Methods for Solving Linear Systems
4.2.1 Gauss-Jordan Elimination
4.2.2 Iterative Refinement
4.3 Iterative Methods for Solving Linear Systems
4.3.1 General Principle of Iterative Methods for Linear Systems
4.3.2 Jacobi Method
4.3.3 Gauss-Seidel Method
4.3.4 Successive Overrelaxation Method
4.3.5 Gradient Methods
4.4 Eigenvalues and Eigenvectors
4.4.1 Power Method
4.4.2 Jacobi Method
4.4.3 Givens and Householder Reductions
4.4.4 QR Method
4.4.5 LR Method
4.4.6 Inverse Iterations
4.5 Sparse Matrices
4.5.1 Storage Schemes for Sparse Matrices
4.5.2 Methods for Sparse Matrices
References
5. The EM Algorithm
5.1 Introduction
5.1.1 Maximum Likelihood Estimation
5.1.2 EM Algorithm: Incomplete-Data Structure
5.1.3 Overview of the Chapter
5.2 Basic Theory of the EM Algorithm
5.2.1 The E- and M-Steps
5.2.2 Generalized EM Algorithm
5.2.3 Convergence of the EM Algorithm
5.2.4 Rate of Convergence of the EM Algorithm
5.2.5 Properties of the EM Algorithm
5.3 Examples of the EM Algorithm
5.3.1 Example 1: Normal Mixtures
5.3.2 Example 2: Censored Failure-Time Data
5.3.3 Example 3: Nonapplicability of EM Algorithm
5.3.4 Starting Values for EM Algorithm
5.3.5 Provision of Standard Errors
5.4 Variations on the EM Algorithm
5.4.1 Complicated E-Step
5.4.2 Complicated M-Step
5.4.3 Speeding up Convergence
5.5 Miscellaneous Topics on the EM Algorithm
5.5.1 EM Algorithm for MAP Estimation
5.5.2 Iterative Simulation Algorithms
5.5.3 Further Applications of the EM Algorithm
References
6. Stochastic Optimization
6.1 Introduction
6.1.1 General Background
6.1.2 Formal Problem Statement
6.1.3 Contrast of Stochastic and Deterministic Optimization
6.1.4 Some Principles of Stochastic Optimization
6.2 Random Search
6.2.1 Some General Properties of Direct Random Search
6.2.2 Two Algorithms for Random Search
6.3 Stochastic Approximation
6.3.1 Introduction
6.3.2 Finite-Difference SA
6.3.3 Simultaneous Perturbation SA
6.4 Genetic Algorithms
6.4.1 Introduction
6.4.2 Chromosome Coding and the Basic GA Operations
6.4.3 The Core Genetic Algorithm
6.4.4 Some Implementation Aspects
6.4.5 Some Comments on the Theory for GAs
6.5 Concluding Remarks
References
7. Transforms in Statistics
7.1 Introduction
7.2 Fourier and Related Transforms
7.2.1 Discrete Fourier Transform
7.2.2 Windowed Fourier Transform
7.2.3 Hilbert Transform
7.2.4 Wigner-Ville Transforms
7.3 Wavelets and Other Multiscale Transforms
7.3.1 A Case Study
7.3.2 Continuous Wavelet Transform
7.3.3 Multiresolution Analysis
7.3.4 Haar Wavelets
7.3.5 Daubechies' Wavelets
7.4 Discrete Wavelet Transforms
7.4.1 The Cascade Algorithm
7.4.2 Matlab Implementation of Cascade Algorithm
7.5 Conclusion
References
8. Parallel Computing Techniques
8.1 Introduction
8.2 Basic Ideas
8.2.1 Memory Architectures of Parallel Computers
8.2.2 Costs for Parallel Computing
8.3 Parallel Computing Software
8.3.1 Process Forking
8.3.2 Threading
8.3.3 OpenMP
8.3.4 PVM
8.3.5 MPI
8.3.6 HPF
8.4 Parallel Computing in Statistics
8.4.1 Parallel Applications in Statistical Computing
8.4.2 Parallel Software for Statistics
References
9. Statistical Databases
9.1 Introduction
9.2 Fundamentals of Data Management
9.2.1 File Systems
9.2.2 Relational Database Systems (RDBS)
9.2.3 Data Warehouse Systems (DWS)
9.3 Architectures, Concepts and Operators
9.3.1 Architecture of a Database System for OLTP
9.3.2 Architecture of a Data Warehouse
9.3.3 Concepts (ROLAP, MOLAP, HOLAP, Cube Operators)
9.3.4 Summarizability and Normal Forms
9.3.5 Comparison of Terminologies
9.4 Access Methods
9.4.1 Views (Virtual Tables)
9.4.2 Tree-based Indexing
9.4.3 Bitmap Index Structures
9.5 Extraction, Transformation and Loading (ETL)
9.5.1 Extraction
9.5.2 Transformation
9.6 Metadata and XML
9.7 Privacy and Security
9.7.1 Preventing Disclosure of Confidential Information
9.7.2 Query Set Restriction
9.7.3 Data Perturbation
9.7.4 Disclosure Risk versus Data Utility
References
10. Interactive and Dynamic Graphics
10.1 Introduction
10.2 Early Developments and Software
10.3 Concepts of Interactive and Dynamic Graphics
10.3.1 Scatterplots and Scatterplot Matrices
10.3.2 Brushing and Linked Brushing/Linked Views
10.3.3 Focusing, Zooming, Panning, Slicing, Rescaling, and Reformatting
10.3.4 Rotations and Projections
10.3.5 Grand Tour
10.3.6 Parallel Coordinate Plots
10.3.7 Projection Pursuit and Projection Pursuit Guided Tours
10.3.8 Pixel or Image Grand Tours
10.3.9 Andrews Plots
10.3.10 Density Plots, Binning, and Brushing with Hue and Saturation
10.3.11 Interactive and Dynamic Graphics for Categorical Data
10.4 Graphical Software
10.4.1 REGARD, MANET, and Mondrian
10.4.2 HyperVision, ExplorN, and CrystalVision
10.4.3 Data Viewer, XGobi, and GGobi
10.4.4 Other Graphical Software
10.5 Interactive 3D Graphics
10.5.1 Anaglyphs
10.5.2 Virtual Reality
10.6 Applications in Geography, Medicine, and Environmental Sciences
10.6.1 Geographic Brushing and Exploratory Spatial Data Analysis
10.6.2 Interactive Micromaps
10.6.3 Conditioned Choropleth Maps
10.7 Outlook
10.7.1 Limitations of Graphics
10.7.2 Future Developments
References
11. The Grammar of Graphics
11.1 Introduction
11.1.1 Architecture
11.2 Variables
11.2.1 Variable
11.2.2 Varset
11.2.3 Converting a Table of Data to a Varset
11.3 Algebra
11.3.1 Operators
11.3.2 Rules
11.3.3 SQL Equivalences
11.3.4 Related Algebras
11.3.5 Algebra XML
11.4 Scales
11.4.1 Axiomatic Measurement
11.4.2 Unit Measurement
11.4.3 Transformations
11.5 Statistics
11.6 Geometry
11.7 Coordinates
11.8 Aesthetics
11.9 Layout
11.9.1 Projection
11.9.2 Sets of Functions
11.9.3 Recursive Partitioning
11.10 Analytics
11.10.1 Statistical Model Equivalents
11.10.2 Subset Model Fitting
11.10.3 Lack of Fit
11.10.4 Scalability
11.10.5 An Example
11.11 Software
11.12 Conclusion
References
12. Statistical User Interfaces
12.1 Introduction
12.2 The Golden Rules and the ISO Norm 9241
12.3 Development of Statistical User Interfaces
12.3.1 Graphical User Interfaces
12.3.2 Toolbars
12.3.3 Menus
12.3.4 Forms and Dialog Boxes
12.3.5 Windows
12.3.6 Response Times
12.3.7 Catching the User's Attention
12.3.8 Command Line Interfaces and Programming Languages
12.3.9 Error Messages
12.3.10 Help System
12.4 Outlook
References
13. Object Oriented Computing
13.1 Introduction
13.1.1 First Approach to Objects
13.1.2 Note on Unified Modelling Language
13.2 Objects and Encapsulation
13.2.1 Benefits of Encapsulation
13.2.2 Objects and Messages
13.2.3 Class
13.2.4 Object Composition
13.2.5 Access Control
13.3 Short Introduction to the UML
13.4 Inheritance
13.4.1 Base Class and Derived Class
13.4.2 Generalization and Specialization
13.4.3 Using Base Class as Common Interface
13.4.4 Inheritance, or Composition?
13.4.5 Multiple Inheritance
13.5 Polymorphism
13.5.1 Early and Late Binding
13.5.2 Implementation of the Late Binding
13.5.3 Abstract Class
13.5.4 Interfaces
13.5.5 Interfaces in C++
13.6 More about Inheritance
13.6.1 Substitution Principle
13.6.2 Substitution Principle Revised
13.6.3 Inheritance and Encapsulation
13.7 Structure of the Object Oriented Program
13.8 Conclusion
References
III. Statistical Methodology
1. Model Selection
1.1 Introduction
1.2 Basic Concepts - Trade-Offs
1.2.1 Remarks
1.3 AIC, BIC, Cp and Their Variations
1.4 Cross-Validation and Generalized Cross-Validation
1.5 Bayes Factor
1.6 Impact of Heteroscedasticity and Correlation
1.7 Discussion
References
2. Bootstrap and Resampling
2.1 Introduction
2.2 Bootstrap as a Data Analytical Tool
2.3 Resampling Tests and Confidence Intervals
2.4 Bootstrap for Dependent Data
2.4.1 The Subsampling
2.4.2 The Block Bootstrap
2.4.3 The Sieve Bootstrap
2.4.4 The Nonparametric Autoregressive Bootstrap
2.4.5 The Regression-type Bootstrap, the Wild Bootstrap and the Local Bootstrap
2.4.6 The Markov Bootstrap
2.4.7 The Frequency Domain Bootstrap
References
3. Design and Analysis of Monte Carlo Experiments
3.1 Introduction
3.2 Simulation Techniques in Computational Statistics
3.3 Black-Box Metamodels of Simulation Models
3.4 Designs for Linear Regression Models
3.4.1 Simple Regression Models for Simulations with a Single Factor
3.4.2 Simple Regression Models for Simulation Models with Multiple Factors
3.4.3 Fractional Factorial Designs and Other Incomplete Designs
3.4.4 Designs for Simulations with Too Many Factors
3.5 Kriging
3.5.1 Kriging Basics
3.5.2 Designs for Kriging
3.6 Conclusions
References
4. Multivariate Density Estimation and Visualization
4.1 Introduction
4.2 Visualization
4.2.1 Data Visualization
4.3 Density Estimation Algorithms and Theory
4.3.1 A High-Level View of Density Theory
4.3.2 The Theory of Histograms
4.3.3 ASH and Kernel Estimators
4.3.4 Kernel and Other Estimators
4.4 Visualization of Trivariate Functionals
4.5 Conclusions
References
5. Smoothing: Local Regression Techniques
5.1 Smoothing
5.2 Linear Smoothing
5.2.1 Kernel Smoothers
5.2.2 Local Regression
5.2.3 Penalized Least Squares (Smoothing Splines)
5.2.4 Regression Splines
5.2.5 Orthogonal Series
5.3 Statistical Properties of Linear Smoothers
5.3.1 Bias
5.3.2 Variance
5.3.3 Degrees of Freedom
5.4 Statistics for Linear Smoothers: Bandwidth Selection and Inference
5.4.1 Choosing Smoothing Parameters
5.4.2 Normal-based Inference
5.4.3 Bootstrapping
5.5 Multivariate Smoothers
5.5.1 Two Predictor Variables
5.5.2 Likelihood Smoothing
5.5.3 Extensions of Local Likelihood
References
6. Dimension Reduction Methods
6.1 Introduction
6.2 Linear Reduction of High-dimensional Data
6.2.1 Principal Component Analysis
6.2.2 Projection Pursuit
6.3 Nonlinear Reduction of High-dimensional Data
6.3.1 Generalized Principal Component Analysis
6.3.2 Algebraic Curve and Surface Fitting
6.3.3 Principal Curves
6.4 Linear Reduction of Explanatory Variables
6.4.1 Sliced Inverse Regression
6.4.2 Sliced Inverse Regression Model
6.4.3 SIR Model and Non-Normality
6.4.4 SIRpp Algorithm
6.4.5 Numerical Examples
6.5 Concluding Remarks
References
7. Generalized Linear Models
7.1 Introduction
7.2 Model Characteristics
7.2.1 Exponential Family
7.2.2 Link Function
7.3 Estimation
7.3.1 Properties of the Exponential Family
7.3.2 Maximum-Likelihood and Deviance Minimization
7.3.3 Iteratively Reweighted Least Squares Algorithm
7.3.4 Remarks on the Algorithm
7.3.5 Model Inference
7.4 Practical Aspects
7.5 Complements and Extensions
7.5.1 Weighted Regression
7.5.2 Overdispersion
7.5.3 Quasi- or Pseudo-Likelihood
7.5.4 Multinomial Responses
7.5.5 Contingency Tables
7.5.6 Survival Analysis
7.5.7 Clustered Data
7.5.8 Semiparametric Generalized Linear Models
References
8. (Non) Linear Regression Modeling
8.1 Linear Regression Modeling
8.1.1 Fitting of Linear Regression
8.1.2 Multicollinearity
8.1.3 Variable Selection
8.1.4 Principal Components Regression
8.1.5 Shrinkage Estimators
8.1.6 Ridge Regression
8.1.7 Continuum Regression
8.1.8 Lasso
8.1.9 Partial Least Squares
8.1.10 Comparison of the Methods
8.2 Nonlinear Regression Modeling
8.2.1 Fitting of Nonlinear Regression
8.2.2 Statistical Inference
8.2.3 Ill-conditioned Nonlinear System
References
9. Robust Statistics
9.1 Robust Statistics; Examples and Introduction
9.1.1 Two Examples
9.1.2 General Philosophy
9.1.3 Functional Approach
9.2 Location and Scale in R
9.2.1 Location, Scale and Equivariance
9.2.2 Existence and Uniqueness
9.2.3 M-estimators
9.2.4 Bias and Breakdown
9.2.5 Confidence Intervals and Differentiability
9.2.6 Efficiency and Bias
9.2.7 Outliers in R
9.3 Location and Scale in R^k
9.3.1 Equivariance and Metrics
9.3.2 M-estimators of Location and Scale
9.3.3 Bias and Breakdown
9.3.4 High Breakdown Location and Scale Functionals in R^k
9.3.5 Outliers in R^k
9.4 Linear Regression
9.4.1 Equivariance and Metrics
9.4.2 M-estimators for Regression
9.4.3 Bias and Breakdown
9.4.4 High Breakdown Regression Functionals
9.4.5 Outliers
9.5 Analysis of Variance
9.5.1 One-way Table
9.5.2 Two-way Table
References
10. Semiparametric Models
10.1 Introduction
10.2 Semiparametric Models for Conditional Mean Functions
10.2.1 Single Index Models
10.2.2 Partially Linear Models
10.2.3 Nonparametric Additive Models
10.2.4 Transformation Models
10.3 The Proportional Hazards Model with Unobserved Heterogeneity
10.4 A Binary Response Model
References
11. Bayesian Computational Methods
11.1 Introduction
11.2 Bayesian Computational Challenges
11.2.1 Bayesian Point Estimation
11.2.2 Testing Hypotheses
11.2.3 Model Choice
11.3 Monte Carlo Methods
11.3.1 Preamble: Monte Carlo Importance Sampling
11.3.2 First Illustrations
11.3.3 Approximations of the Bayes Factor
11.4 Markov Chain Monte Carlo Methods
11.4.1 Metropolis-Hastings as Universal Simulator
11.4.2 Gibbs Sampling and Latent Variable Models
11.4.3 Reversible Jump Algorithms for Variable Dimension Models
11.5 More Monte Carlo Methods
11.5.1 Adaptivity for MCMC Algorithms
11.5.2 Population Monte Carlo
11.6 Conclusion
References
12. Computational Methods in Survival Analysis
12.1 Introduction
12.1.1 Nonparametric Model
12.1.2 Parametric Models
12.2 Estimation of Shape or Power Parameter
12.3 Regression Models
12.3.1 The Score Test
12.3.2 Evaluation of Estimators in the Cox Model
12.3.3 Approximation of Partial Likelihood
12.4 Multiple Failures and Counting Processes
12.4.1 Intensity Function
12.4.2 Multiple Counting Processes
12.4.3 Power Law Model
12.4.4 Models Suitable for Conditional Estimation
References
13. Data and Knowledge Mining
13.1 Data Dredging and Knowledge Discovery
13.2 Knowledge Discovery in Databases
13.3 Supervised and Unsupervised Learning
13.4 Data Mining Tasks
13.4.1 Description and Summarization
13.4.2 Descriptive Modeling
13.4.3 Predictive Modeling
13.4.4 Discovering Patterns and Rules
13.4.5 Retrieving Similar Objects
13.5 Data Mining Computational Methods
13.5.1 Numerical Data Mining
13.5.2 Visual Data Mining
References
14. Recursive Partitioning and Tree-based Methods
14.1 Introduction
14.2 Basic Classification Trees
14.2.1 Tree Growing and Recursive Partitioning
14.2.2 Tree Pruning and Cost Complexity
14.3 Computational Issues
14.3.1 Splits Based on an Ordinal Predictor
14.3.2 Splits Based on a Nominal Predictor
14.3.3 Missing Values
14.4 Interpretation
14.5 Survival Trees
14.5.1 Maximizing Difference Between Nodes
14.5.2 Use of Likelihood Functions
14.5.3 A Straightforward Extension
14.5.4 Other Developments
14.6 Tree-based Methods for Multiple Correlated Outcomes
14.7 Remarks
References
15. Support Vector Machines
15.1 Introduction
15.2 Learning from Examples
15.2.1 General Setting of Statistical Learning
15.2.2 Desirable Properties for Induction Principles
15.2.3 Structural Risk Minimization
15.3 Linear SVM: Learning Theory in Practice
15.3.1 Linear Separation Planes
15.3.2 Canonical Hyperplanes and Margins
15.4 Non-linear SVM
15.4.1 The Kernel Trick
15.4.2 Feature Spaces
15.4.3 Properties of Kernels
15.5 Implementation of SVM
15.5.1 Basic Formulations
15.5.2 Decomposition
15.5.3 Incremental Support Vector Optimization
15.6 Extensions of SVM
15.6.1 Regression
15.6.2 One-Class Classification
15.7 Applications
15.7.1 Optical Character Recognition (OCR)
15.7.2 Text Categorization and Text Mining
15.7.3 Active Learning in Drug Design
15.7.4 Other Applications
15.8 Summary and Outlook
References
16. Bagging, Boosting and Ensemble Methods
16.1 An Introduction to Ensemble Methods
16.2 Bagging and Related Methods
16.2.1 Bagging
16.2.2 Unstable Estimators with Hard Decision Indicator
16.2.3 Subagging
16.2.4 Bagging More "Smooth" Base Procedures and Bragging
16.2.5 Bragging
16.2.6 Out-of-Bag Error Estimation
16.2.7 Disadvantages
16.2.8 Other References
16.3 Boosting
16.3.1 Boosting as Functional Gradient Descent
16.3.2 The Generic Boosting Algorithm
16.3.3 Small Step Size
16.3.4 The Bias-variance Trade-off for L2Boost
16.3.5 L2Boost with Smoothing Spline Base Procedure for One-dimensional Curve Estimation
16.3.6 L2Boost for Additive and Interaction Regression Models
16.3.7 Linear Modeling
16.3.8 Boosting Trees
16.3.9 Boosting and L1-penalized Methods (Lasso)
16.3.10 Other References
References
IV. Selected Applications
1. Computationally Intensive Value at Risk Calculations
1.1 Introduction
1.2 Stable Distributions
1.2.1 Characteristic Function Representation
1.2.2 Computation of Stable Density and Distribution Functions
1.2.3 Simulation of α-stable Variables
1.2.4 Estimation of Parameters
1.2.5 Are Asset Returns α-stable?
1.2.6 Truncated Stable Distributions
1.3 Hyperbolic Distributions
1.3.1 Simulation of Generalized Hyperbolic Variables
1.3.2 Estimation of Parameters
1.3.3 Are Asset Returns NIG Distributed?
1.4 Value at Risk, Portfolios and Heavy Tails
References
2. Econometrics
2.1 Introduction
2.2 Limited Dependent Variable Models
2.2.1 Multinomial Multiperiod Probit
2.2.2 Multivariate Probit
2.2.3 Mixed Multinomial Logit
2.3 Stochastic Volatility and Duration Models
2.3.1 Canonical SV Model
2.3.2 Estimation
2.3.3 Application
2.3.4 Extensions of the Canonical SV Model
2.3.5 Stochastic Duration and Intensity Models
2.4 Finite Mixture Models
2.4.1 Inference and Identification
2.4.2 Examples
References
3. Statistical and Computational Geometry of Biomolecular Structure
3.1 Introduction
3.2 Statistical Geometry of Molecular Systems
3.3 Tetrahedrality of Delaunay Simplices as a Structural Descriptor in Water
3.4 Spatial and Compositional Three-dimensional Patterns in Proteins
3.5 Protein Structure Comparison and Classification
3.6 Conclusions
References
4. Functional Magnetic Resonance Imaging
4.1 Introduction: Overview and Purpose of fMRI
4.2 Background
4.2.1 Magnetic Resonance (MR)
4.2.2 Magnetic Resonance Imaging (MRI)
4.2.3 Functional MRI
4.3 fMRI Data
4.3.1 Design of an fMRI Experiment
4.3.2 Data Collection
4.3.3 Sources of Bias and Variance in the Data
4.4 Modeling and Analysis
4.4.1 An Ideal Model
4.4.2 A Practical Approach
4.5 Computational Issues
4.5.1 Software Packages for fMRI Data Analysis
4.5.2 Other Computational Issues
4.6 Conclusions
References
5. Network Intrusion Detection
5.1 Introduction
5.2 Basic TCP/IP
5.3 Passive Sensing of Denial of Service Attacks
5.4 Streaming Data
5.5 Visualization
5.6 Profiling and Anomaly Detection
5.7 Discussion
References
Index