Processing Near-Surface
Seismic-Reflection Data:
A Primer

Gregory S. Baker

Course Notes Series


Roger A. Young, Series Editor

Society of Exploration Geophysicists



These course notes are published without the normal SEG peer reviews.
They have not been examined for accuracy and clarity. Questions or
comments by the reader should be referred directly to the author.

ISBN 0-931830-48-6 (Series)


ISBN 1-56080-090-9 (Volume)

Society of Exploration Geophysicists


P. O. Box 702740
Tulsa, Oklahoma 74170-2740

© 1999 by Gregory S. Baker


All rights reserved. This book and CD-ROM or portions hereof
may not be reproduced in any form without permission in
writing from the author.

Published in 1999

Printed in the United States of America


Contents
Preface ...................................................................................................................iv

1 Introduction .............................................................................................................1

1.1 Organization......................................................................................................1
1.2 Near-Surface Reflection Seismology Background................................................2
1.3 Processing Objectives........................................................................................3
1.4 Demonstration Data from Kansas and England.....................................................5
1.4.1 Processing Goals ...................................................................................5
1.4.2 Geometry and Equipment .........................................................................7

2 Processing Steps and Associated Pitfalls ...............................................9


2.1 Processing Flowcharts........................................................................................9
2.1.1 General Shallow-Seismic Processing Flowchart ............................................9
2.1.2 Kansas Data Processing Flowchart ..........................................................10
2.1.3 England Data Processing Flowchart..........................................................11
2.2 Preprocessing.................................................................................................12
2.2.1 Data Input and Seismic Data Formats........................................................12
2.2.2 Computing Time Considerations ...............................................................14
2.2.3 Defining Geometry and Trace Headers ......................................................16
2.2.4 First Examination of the Data: Identifying Signal and Noise ............................17
2.3 Improving the Signal-to-Noise Ratio (S/N)...........................................................23
2.3.1 Killing Noisy Traces ...............................................................................24
2.3.2 Muting Coherent Noise ...........................................................................25
2.3.3 Elevation and Refraction Statics Corrections..............................................31
2.3.4 Filtering...............................................................................................36
2.3.5 Scaling ...............................................................................................42
2.4 Velocity...........................................................................................................45
2.4.1 Common-Midpoint (CMP) Sorting ..............................................................45
2.4.2 Velocity–A First Pass ............................................................................47
2.4.3 Velocity Analysis ..................................................................................48
2.4.4 Moveout Corrections .............................................................................57
2.4.5 Residual Statics Corrections ...................................................................60
2.5 Stacking and Verification...................................................................................63
2.5.1 Stacking .............................................................................................63
2.5.2 Analyzing Stack with Shot/CMP Gathers ...................................................64
2.5.3 Muting Coherent and Incoherent Noise ......................................................65
2.6 Displaying Seismic Data....................................................................................66
2.6.1 Muting Low-Fold Data.............................................................................66
2.6.2 Converting from Time to Depth .................................................................68
2.6.3 Displaying ...........................................................................................69

3 Discussion .............................................................................................................71
3.1 Applications of Migration...................................................................................71
3.2 Critical Processing Steps..................................................................................73
3.3 Avoiding Pitfalls................................................................................................74

Appendices ................................................................................................................75
A. References.................................................................................................................75
B. Data included on CD-ROM............................................................................................77

Preface

The idea for this primer stemmed from the observation that more than ever before,
people in academia and industry are using shallow-reflection seismology as a
noninvasive tool for determining the physical properties and geometry of the upper
100 meters of the subsurface. Traditionally, workers using seismic methods for
academic or industrial purposes have been familiar with the use of the techniques
for hydrocarbon or crustal exploration. Advances in seismic methods have come
largely from “deep” exploration seismology because the drive underlying that
industry is profit maximization. Seismic techniques involved with imaging from
hundreds to thousands of meters below the surface of the earth, therefore, have had
substantial monetary support to obtain the best data possible.

Major use of shallow-seismic techniques today, however, is in the environmental
industry (for site characterization, contaminant detection, etc.), which is driven by
minimizing cost. Therefore, if the availability of better data reveals a need for more
expensive and extensive site cleanup, improving seismic data quality may be
counterproductive (to the industry) and may not receive a high priority for funding and
quality control.

The goal of this primer is to provide a basic near-surface seismic-reflection
processing guide for workers who have not had industry- or academic-supported
training or guidance but wish to maintain the integrity of seismic imaging as a tool for
near-surface exploration. This primer will focus on processing two small data sets
using standard common-midpoint (CMP) processing and will include many
significant processing pitfalls encountered in previous work.

G. S. Baker
May 1999

1 INTRODUCTION
1.1 Organization

This document can either be printed or followed as an Adobe Acrobat PDF file (included on the
CD-ROM). As a PDF file, each item on the Contents page is linked to its corresponding
location, and the user can return to Contents by clicking the Page Number at the bottom-
center of every page. Additionally, all references in this tutorial are linked to the appropriate
location in Appendix A (the Reference list).

The main portion of this tutorial is Section 2, Processing Steps and Associated Pitfalls, which
focuses on individual processing procedures, each of which is broken into three parts:

i. The Process

• The basics of the process are discussed briefly with graphical examples when
appropriate.

ii. Applying the Process

• The procedure is applied to the Kansas or England example data with emphasis
on the most important parameters.

iii. Pitfalls Associated with the Process

• Potential processing pitfalls associated with the procedure are explained, using
examples where applicable.

The seismic data examples in this Primer were processed and displayed using Seismic
Processing Workshop (Parallel Geoscience Corporation) on a Macintosh G3 computer.

1.2 Near-Surface Seismic-Reflection
Data Processing
The first published examples of seismic reflections detected shallower than 50 m are from
Pakiser and others at the U.S. Geological Survey (Pakiser and Mabey, 1954; Pakiser et al.,
1954; Pakiser and Warrick, 1956; Warrick and Winslow, 1960). Digital seismographs were not
yet available, and the method was not strongly pursued until the late 1970s and early 1980s.
A classic paper by Hunter et al. (1984) developed the optimum-window technique, first used
for constant-offset surveying but now adopted in typical common-midpoint (CMP) processing.
At that time, however, the high cost of seismographs, computers, and processing systems
limited the seismic investigation of the shallow subsurface.

The cost-per-channel in 1982 US dollars for a 12-bit seismograph (~66 dB of dynamic range)
was about $5000. Today, the cost-per-channel in 1999 US dollars for a 24-bit seismograph
(>100 dB of dynamic range) is about $1000. Thus, in the past 17 years the cost of
seismographs has come down about an order of magnitude, factoring in the importance of
dynamic range. A user can be fully equipped with a 48-channel, 24-bit seismograph, cables,
and 40-Hz geophones for around US $70,000. And now, for less than US $5,000, one can
purchase a high-performance microcomputer, laser printer, and seismic-data processing
software.

Shallow-reflection seismic surveying is now fairly commonplace in academia (for example, see
Special Section - Shallow Seismic Reflection Papers, 1998, Geophysics; Near Surface
Geophysics Special Issue, 1997, The Leading Edge) and in industry, as noted by the
significant amount of use by the environmental industry.

Processing shallow-seismic-reflection data is different from the seismic data processing done in
hydrocarbon exploration (e.g., Miller, 1992; Black et al., 1994; Baker et al., 1998) and can be
difficult for workers with little or no signal-processing background. Processing shallow-seismic
reflection data is most precarious when nonreflective coherent events are generated on final
stacked sections and can be misinterpreted as reflections. The goal of this tutorial is to attempt
to reduce the number of misinterpretations by avoiding pitfalls and to advocate the adoption of
a conservative approach to processing. Specifically, the processor should attempt to eliminate
(or avoid generating) coherent noise events, even at the expense of a final image of poorer
quality.

1.3 Processing Objectives
The first and possibly most important step in processing shallow-reflection data is to determine
objectives: Is the shallow reflection survey being performed to image a specific structure or
stratigraphic relationship, or is it being performed to determine some overall generality about the
physical properties of the subsurface? It is important to examine the objectives early and
keep them in mind throughout processing.

Typically, the clarity and resolution of the final product are of the highest importance.
Processes that tend to generate artifacts while increasing coherency such as f-k filtering, trace
mixing, or migration are appropriate for goal-oriented processing. However, these processes
can generate significant coherent events that have no geologic basis in the geometry of the
subsurface. Therefore, before performing any of the aforementioned processes, the seismic-
data processor must be absolutely certain that the resulting coherent events can be correlated
to reflections on the original shot or CMP gathers. A coherent event in a final stacked section
that cannot be corroborated by supporting evidence from minimally processed shot or CMP
gathers must be assumed to be artificial. The processor should adopt the precept that when
reflection hyperbolas are not seen in at least part of the raw data (or with minimal frequency
filtering or scaling), real corresponding reflections will not exist even “after sophisticated
processing.” One must remember that sophisticated processing techniques are derived from
the hydrocarbon exploration industry, whose goal is enhancing existing reflections, not
revealing reflections where they are not initially visible.

An example of how processing procedures can create coherent events with no substantiation
in the subsurface is shown in Figures 1.1 and 1.2. The seismic reflection section in Fig. 1.1
was generated by applying a processing flow created for use on an unrelated data set. Figure
1.2 is a correctly processed section of the seismic reflection data used in Fig. 1.1. Note that the
section in Fig. 1.1 has no apparent correlation with the section in Fig. 1.2, even though the
same seismic data were used for each.

[Figures 1.1 and 1.2: two stacked sections of the same data, about 30 traces each,
0-0.2 s. Annotations on Fig. 1.1: "Angular unconformity?", "Bedding dips to left?",
"Faults?". Annotations on Fig. 1.2: "No unconformity", "Subhorizontal bedding",
"No faulting".]

Figure 1.1 Seismic data processed using processing steps from a different seismic
data flow. The section looks quite good and appears to be interpretable although it
has no connection with the real subsurface.

Figure 1.2 Correctly processed seismic section of the data in Fig. 1.1. Reflections
are discrete and subhorizontal.

By examining Figure 1.1, which is made up of coherent events that have no basis in the
subsurface, the importance of carefully identifying reflections throughout the entire process is
clear. The processes used to generate Figure 1.1 were from a processing flow containing f-k
filtering and trace mixing, both of which increased the coherency of the existing artifacts to the
point of generating a bogus seismic section.

1.4 Demonstration Data from England and Kansas
1.4.1 Processing Goals

Figure 1.4 is a final stacked section of the Kansas seismic data that will be used in this primer
and which is included on the CD-ROM.

[Figure 1.4: stacked depth section; offset along profile 5-54 m, depth 0-40 m.]

Figure 1.4 Final stacked depth section of the Kansas data.

The goal of collecting the data was to determine the depth to, and geometry of, bedrock.
The bedrock reflection at ~10 m was created by the interface between Holocene fluvial
deposits and Paleozoic basement rock (inferred from nearby borehole information). The
reflection is distinct, coherent, and shows the variability in the bedrock topography. This data
set was collected to determine whether there was a steep-sided Pleistocene glacial meltwater
channel carved into the bedrock, which was later buried by Holocene sediments. On the basis
of these data, it was determined that a buried meltwater channel did not exist in this portion of
the flood plain.

[Figure 1.5: stacked time section; offset along profile 6-55 m, two-way traveltime
0-0.2 s.]

Figure 1.5 Final stacked section of the data from England used in this primer.

Figure 1.5 is a final stacked section of the England seismic data that will also be used in this
primer and is also included on the CD-ROM. The England data were collected in an alluvial
valley (Thames River), and the goal was to image bedrock stratigraphy and examine possible
structure and geometry. Establishing the depth to the reflections was not of primary interest.

1.4.2 Geometry and Equipment

Kansas Demonstration Data


The Kansas data were collected using a BISON 24096 96-channel, 24-bit seismograph with
Mark Products L40A 100-Hz geophones. The source was a .30-06 rifle fired 25 cm downhole
into a prepunched hole. The profile was on a slight incline (3.0-m elevation change over
100 m). The geometry of the Kansas experiment was fairly simple: 96 single geophones were
deployed along the profile at 0.5-m spacing. The source began 1 m off the end of the line, and
then was triggered at 1-m increments through the line until 53 m (Figure 1.6a). At that point, the
first 48 channels were leapfrogged to the end of the line. The source continued through the rest
of the line at 1-m increments until it was 1 m off the end (Figure 1.6b). Thus, the data consist of
74 shot gathers with 96 traces in each. The data were recorded for 500 ms with a sample
interval of 0.25 ms. Additional information can be found in Appendix B.

[Figure 1.6a: schematic; receiver locations 1-96 (triangles) with source locations
1-53 (X) along the line.]

Figure 1.6a The geometry of the Kansas data prior to leapfrogging.

[Figure 1.6b: schematic; receiver locations 49-144 (triangles) with source locations
54-74 (X) along the line.]

Figure 1.6b The geometry of the Kansas data after leapfrogging.

England Demonstration Data
The England demonstration data set was collected using a Geometrics 2401 24-channel, 15-bit
seismograph with 100-Hz geophones. The source was a Buffalo gun (described in Pullan and
MacAulay, 1987). The profile was in alluvial surface deposits on a nearly horizontal surface.
The geometry of the England experiment was also simple: single geophones were deployed
along the profile at 2.5-m spacings, and the source was 12.5 m off-end. This geometry was
preserved by rolling the source and receivers along the profile for 20 shots at 2.5-m increments
(Fig. 1.7). Thus, the data consist of 20 shot gathers with 24 traces in each. The data were
recorded for 250 ms, with a sample interval of 0.5 ms. Additional information can be found in
Appendix B.

[Figure 1.7: schematic; source (X) 12.5 m off-end of receiver locations 1-24
(triangles) spanning 60 m; source and receivers roll along the profile.]

Figure 1.7 The geometry of the England data. Locations are 2.5 m apart.

2 PROCESSING STEPS AND ASSOCIATED PITFALLS

2.1 Processing Flowcharts

Processing flows are as widely variable as seismic data. In order to optimize any final
stacked section, the flow must be tailored to the specific problems of the data, such as specific
noise problems, a low signal-to-noise (S/N) ratio, etc. Below is a generalized processing flow
followed by the specific flows used for the demonstration data.

2.1.1 General Shallow-Seismic Processing Flowchart

Raw Data
  -> Input Raw Data, Define Geometry, First Examination of Data
Preprocessed Data
  -> Kill Noisy Traces, Mute Coherent Noise, Statics Corrections, Filtering & Scaling
S/N Enhanced Data
  -> CMP Sort, Velocity Analysis, NMO Correction, Residual Statics, Stack
Stacked Data
  -> Confirm Reflections, Attenuate Coherent Noise
Final Seismic Section

Figure 2.1 General processing flow for shallow reflection data.

2.1.2 Kansas Data Processing Flowchart

WAKE1001
  -> SEG-2 Input -> RAW DATA
  -> Truncate Unused Data -> TRUNCATED
  -> Geometry Definition (land observer notes, source survey notes, survey notes) -> HEADERS
  -> Offset Sort -> OFFSET SORTED
  -> Kill Noisy Traces -> Timing Shifts (static time picks) -> PREPROC
  -> Amplitude Equalization -> EQUALIZED
  -> Elevation Static Shifts (elevation info) -> SHOTwSTATICS
  -> F-K Spectrum (FKSpectrum01) -> F-K Mute Definition -> Apply F-K Filter -> SHOTFK
  -> CMP Sort -> CMPSORTED
  -> Mute Definition -> Apply Mute -> MUTED
  -> NMO Correction (smoothed velocity function) -> NMO
  -> Apply NMO-Stretch Mute (stretch mute definition) -> NMOMUTED
  -> Frequency Filtering -> CMP Stack -> STACK01
  -> Remove Low-Fold -> STACKFINAL
2.1.3 England Data Processing Flowchart
england2.sgy
  -> SEG-Y Input -> englandRAW
  -> Geometry Definition (land observer notes, survey notes, source survey notes) -> england_01
  -> Kill Traces -> england_preproc
  -> Refraction Statics (velocity function, time picks) -> Refraction Statics Shifts
  -> Apply Static Shifts (with Elevation Statics) -> englandSTATICS
  -> Mute Refractions (mute definition) -> england_muted
  -> Constant Velocity Stacks -> englandCVSs -> Velocity Function
  -> NMO Correction (velocity function) -> CMP Sort -> englandNMOed
  -> CMP Stack -> england_teststack (horizon guide for CMP Stack Optimization Statics)
  -> Residual Statics Shifts -> Correct Residual Statics -> englandNMOresid (QC Static Shifts)
  -> CMP Stack -> england_stack
  -> Mute Early Noise (mute definition) -> englandstack01
  -> Time-Variant Butterworth -> Automatic Gain Control -> englandstack02
  -> Remove Low-Fold Data -> EnglandFINALstack

2.2 Preprocessing
2.2.1 Data Input and Seismic Data Formats

i. The Process

Most seismographs will record seismic data in SEG-Y (Barry et al., 1975) or SEG-2 (Pullan,
1990) seismic data formats. These are standard magnetic tape formats defined by the
Society of Exploration Geophysicists (SEG). The two differ in the size and format
of the trace headers and in the storage of the trace-amplitude data.
Seismic-data processing software packages, for philosophical or computational reasons, will
normally convert from SEG standard formats to their own internal format for data processing
and manipulation. Thus, the first step in a processing flow for shallow seismic data typically
includes converting from SEG standard format to the specific format of the program.

ii. Applying the Process

This conversion process includes four important parameters. The first, data format, requires
knowledge of the seismograph used to collect the data. The processor must know whether
the seismograph records in (1) integer, (2) floating-point, or (3) compound format. Both
example data sets were recorded in integer format.

The three other important parameters, trace length, sample interval, and the number of
channels, can be retrieved easily from good observer’s notes. For the Kansas data, the trace
length was 0.5 s (500 ms), the sample interval was 0.25 ms, and 96 channels were recorded.
The England data trace length was 0.25 s (250 ms), the sample interval was 0.5 ms, and 24
channels were recorded per record.
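
For readers who script parts of this step themselves, the sketch below (Python) shows why these parameters matter when unpacking a flat stream of integer samples. The file name, 16-bit sample size, and trace-major layout are assumptions for illustration, not a description of the SEG formats.

import numpy as np

# Hypothetical raw file and layout, for illustration only; real SEG-Y/SEG-2
# readers parse the headers instead of assuming a flat sample stream.
n_channels = 96                      # from the observer's notes
trace_length_s = 0.5                 # 500 ms
sample_interval_s = 0.25e-3          # 0.25 ms
n_samples = int(round(trace_length_s / sample_interval_s))   # 2000

data = np.fromfile("shot0001.dat", dtype=np.int16)   # integer-format samples
gather = data.reshape(n_channels, n_samples)         # wrong values here scramble traces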

iii. Pitfalls Associated with the Process

Some seismic processing packages will perform a data format conversion by retrieving all of
the important parameters from the trace headers without requiring user input. However, if the
trace headers were not stored correctly in the field by the seismograph, or if the processing
package does not automatically read the important parameters from the trace headers, the
information must be provided by the data processor.

Typically, a process in which the wrong data format is specified will not proceed to
completion. However, a process with an incorrect trace length, sample interval, or number of
channels will in most cases proceed to completion. When the processor specifies a trace length
that is too long or too many channels, a stable processing package will insert zeroed data or
blank traces, and the mistake will be obvious when first examining the seismic data.
Conversely, the conversion process may proceed without error when the record length or
number of channels specified is too small, and some information will be lost.

A correct sampling interval is also critical. Even when an improper sampling interval does not
cause the processing package to abort, all of the depth and velocity information gleaned from
the seismic data during processing will be in error. The best way to confirm that the sampling
interval is correct is to identify the air wave and make sure its velocity is ~335 m/s.

The air wave is the energy that travels directly from the source to the receivers above ground;
this energy is the audible sound generated by the source. Air wave is identified by three
main criteria. First, air wave can be observed traveling across the seismogram without static
time shifts from near-surface velocity changes. Thus, the air wave is usually the most
coherent phase on a seismogram. Second, attenuation (or its inverse measure, the quality factor Q) is
typically different for air than it is for consolidated or unconsolidated material. Because Q is
much higher for air, high frequencies are not attenuated as strongly as other seismic phases.
Thus, air wave usually contains higher frequencies than other phases. The third way to
identify air wave is to calculate the velocity of the apparent air-wave phase on one or more
field files. It should be in the 330 to 340 m/s range.

However, when the sampling rate is not entered correctly, the air-wave phase will yield a
much different velocity. Thus, the best way to check the sampling interval is to use the first
and second criteria described previously to identify air wave, and then check to see whether
the velocity is as expected.
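
This check is easy to script. A minimal sketch in Python, with illustrative pick values:

# Two picks (offset in m, traveltime in s) of the suspected air-wave phase;
# the values here are illustrative.
x1, t1 = 10.0, 0.030
x2, t2 = 40.0, 0.119

v_apparent = (x2 - x1) / (t2 - t1)            # ~337 m/s
print(f"apparent velocity: {v_apparent:.0f} m/s")
if 330.0 <= v_apparent <= 340.0:
    print("consistent with the air wave; the sample interval looks correct")
else:
    print("re-check the sample interval entered during format conversion")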

2.2.2 Computing Time Considerations

i. The Process

One positive attribute of processing shallow seismic reflection data is that the total size of the
data set is typically small relative to exploration-scale data sets. Because of this, processors
have the ability to perform several steps of processing at one sitting. Indeed, a processing
flow that could take several days using a high-performance computer for exploration-scale
data may be performed in hours or minutes on a desktop computer when dealing with
shallow-seismic data.

ii. Applying the Process

For the Kansas data, 500 ms of data were collected. Collecting too much data in the field is
preferable to collecting too little, but in this case a cursory look at the example data showed
that the target bedrock reflection arrived at ~40-80 ms and that no visible reflections existed
below it (any deeper ones would have been auxiliary to the goal). Thus, we can truncate the
total record length of the data from 500 ms to 200 ms. This will decrease computing time by
almost 60%. As a result, a process that normally would take 10 minutes to execute will now
take only 4 minutes. Over the course of processing a more sizable data set, such time
savings could amount to days.

Additionally, the processor might re-sample the seismic data to a larger time-sample interval to
reduce size. An important limiting factor when re-sampling data is its frequency content. When
seismic data are recorded using a sample interval of 0.125 ms, the maximum frequency that
will be accurately recorded--the Nyquist frequency--is

    Nyquist frequency = 1 / (2 x sample interval in seconds),

or 4000 Hz. When re-sampling data, the processor should determine the maximum frequency
content of the data. A typical convention is that there should be at least 8 points per
wavelength at the highest frequency. Thus, if the maximum reflection frequency of the data
were 200 Hz, the largest acceptable sample interval would be

    maximum sample interval = 1 / (8 x maximum frequency in Hz),

or 0.625 ms. Thus the data could (see Pitfalls, below) be re-sampled from 0.125 to 0.625 ms,
cutting processing time by 80% (i.e., one new sample for every five old samples). The
resampling process includes anti-aliasing filtering to prevent the high frequencies from aliasing
to lower ones.
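
Both formulas, and an anti-aliased resampling, can be sketched in a few lines of Python; the trace here is a stand-in array, and scipy.signal.decimate applies an anti-alias filter before discarding samples:

import numpy as np
from scipy.signal import decimate

dt = 0.125e-3                        # recorded sample interval (s)
f_nyquist = 1.0 / (2.0 * dt)         # 4000 Hz
f_max = 200.0                        # highest reflection frequency observed (Hz)
dt_max = 1.0 / (8.0 * f_max)         # 0.625 ms: 8 points per wavelength at f_max

factor = int(round(dt_max / dt))     # 5: one new sample for every five old ones
trace = np.random.randn(4000)        # stand-in for one recorded trace
resampled = decimate(trace, factor)  # low-pass filters, then downsamples

print(f_nyquist, dt_max * 1e3, resampled.size)   # 4000.0 Hz, 0.625 ms, 800 samples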

iii. Pitfalls Associated with the Process


Resampling is a simple process when the Nyquist frequency is kept in mind. Upon first
examining the data, the highest frequency content of the reflections should be determined.
Then, using the Nyquist concept, a minimum sampling frequency for resampling can be
chosen.

[Figure 2.4: two shot-gather panels, source-to-receiver offset 10-60 m, time 0-0.2 s,
labeled "Before Resampling" and "After Resampling".]

Figure 2.4 An example field file from the England data before and after improper resampling.

First examination of the England data showed that the maximum frequency content of the
reflections was ~300 Hz. In Figure 2.4, an example field file was resampled from the original
sampling interval of 0.25 ms to 2 ms. The resampled data (Figure 2.4, right panel) shows
degraded reflections because the Nyquist frequency for the new sample interval is 250 Hz;
therefore, much of the reflection information was lost.

2.2.3 Defining Geometry and Trace Headers

i. The Process

Defining the geometry and correctly filling the trace headers is an important, and often tedious,
process. When the geometry and the trace headers are correctly defined and set, spatial and
temporal processing steps become possible. When trace headers are not correctly set, the
processing steps will not be properly executed.

ii. Applying the Process

Correctly filling the trace headers involves inserting into the proper locations information from
the observers’ notes (OB notes) written in the field during the collection of the seismic data.
Typically, a seismic data-processing software package will use a specific processing step to
create the headers.
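
As an illustration of what the geometry step computes (packages do this internally), here is a minimal Python sketch for an off-end spread like the England layout; the coordinates and bin convention are assumptions of the sketch:

import numpy as np

n_traces = 24
rcv_spacing = 2.5                    # m between geophones
near_offset = 12.5                   # m, source off the end of the spread

src_x = 0.0
rcv_x = src_x + near_offset + rcv_spacing * np.arange(n_traces)
offset = rcv_x - src_x               # source-to-receiver offset header value
cmp_x = 0.5 * (src_x + rcv_x)        # midpoint coordinate for each trace
cmp_bin = np.rint(cmp_x / (rcv_spacing / 2.0)).astype(int)   # CMP bin number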

iii. Pitfalls Associated with the Process

Correct trace headers are important because when the headers are incorrect, the processing is
likely to be incorrect. Spatial processes, such as f-k filtering and velocity determination, will
be incorrect when the spatial relationships among the traces are not correct. Additionally, CMP
sorting will be incorrect when the headers are not correct.

2.2.4 First Examination of the Data: Identifying Signal & Noise

i. The Process

After the trace headers have been correctly entered, the processor should always take the
time for a detailed first examination of the data to identify specific problems, obvious
reflections, and coherent noise. This sounds easy, but correctly identifying reflections (signal)
from the onset of data processing is not always straightforward, and misidentification will lead
to an incorrect seismic section.

ii. Applying the Process

Kansas Data

[Figure 2.5: shot gather, traces 1-96, time 0-0.2 s; annotated phases: two
reflections, air wave, ground roll, and three noisy traces.]

Figure 2.5 A first examination of the Kansas data, with some phases identified.

A first look at a typical shot gather (unprocessed) from the Kansas data (Fig. 2.5) shows
several distinct features. First, noisy traces are evident (see Section 2.3.1). The second
prominent feature is the high-amplitude ground roll. Ground roll, which in vertical-component P-
wave seismic data is typically composed of Rayleigh waves, is identified by two main
characteristics. First, ground roll has a slow phase velocity (steep slope). Wave-equation
physics constrains the propagation velocity of Rayleigh waves as being slower than the
direct S-wave, which in turn must be slower than direct P-waves. The propagation velocity
of ground roll for a Poisson’s ratio of 1/4 is 54% of the P-wave velocity for a homogeneous,
isotropic medium.
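
That ratio can be verified with the widely used Viktorov approximation for the Rayleigh-wave velocity; the short Python sketch below is an added illustration, not part of the original derivation:

import math

nu = 0.25                                                     # Poisson's ratio
vp_over_vs = math.sqrt(2.0 * (1.0 - nu) / (1.0 - 2.0 * nu))   # = sqrt(3)
vr_over_vs = (0.862 + 1.14 * nu) / (1.0 + nu)                 # Viktorov approximation
print(f"V_R / V_P = {vr_over_vs / vp_over_vs:.2f}")           # ~0.53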

The second characteristic of ground roll is that it is dispersive (i.e., shingled or ringy). Ground
roll propagates along the surface, and the depth of material affected is directly dependent on
the frequency of the ground roll. The high-frequency component of the ground roll interacts
with the very-near-surface material, whereas lower-frequency ground roll interacts with deeper
material as well as with shallow material. Therefore, ground roll will be dispersive when the
near-surface velocity structure is variable with depth (typically increasing with depth)
because different frequencies of ground roll will travel with varying velocities, depending on
the particular average velocity being sampled.

The third characteristic of ground roll is that it typically has a lower dominant frequency than
near-surface refractions or reflections. Ground roll has a different frequency-dependent rate of
attenuation than S-waves or P-waves. Therefore, for a given propagating distance, the high-
frequency component of ground roll is attenuated much faster than the P-wave reflections or
refractions and is recorded with a lower frequency content.

The final two important features to identify are coherent noise and reflections. These will be
discussed in the Pitfalls section.

England Data

[Figure 2.6: shot gather, traces 1-24, time 0-0.2 s; annotated phases: a strong
refraction, several possible reflections, the air wave, and a noisy trace.]

Figure 2.6 A first examination of the England data.

A first look at a typical unprocessed shot gather from the England data (Figure 2.6) shows
features similar to the Kansas data (noisy traces and strong reflections), but it also shows a
very strong refracted arrival and air wave. The air wave is a typical problem in shallow
reflection data (see Section 2.3.2) and is identified because its velocity will always be 330 to
340 m/s (with variations due to elevation, air pressure, temperature, and wind). Because of
the differences between the Kansas and England data, special considerations during
processing will be necessary. The most critical step for both, however, is correctly identifying
the reflections.

When first examining data, the initial step is to identify the main features (described for both
example data sets above). The next step is to examine the data using various filters and
gains to get a sense of features that might not be obvious on the raw data and to determine
the frequency content of the signal (which will be useful when resampling; see Section 2.2.2).

Following are several panels of the same field file from the Kansas data with various filters and
gains applied, demonstrating the importance of this step.

[Figure 2.7: six panels of one field file, traces 1-96; panels: raw data, AGC gain,
gain + low-pass filter, gain + medium-pass filter, gain + high-pass filter, and
gain + very-high-pass filter.]

Figure 2.7 Various filters and gains applied to a single field file from the Kansas data.
The top-left panel is the same raw, unprocessed data shown in Fig. 2.5. The top-right panel
is unfiltered data with an AGC gain applied. The remaining panels have the same AGC gain
applied, but with different band-pass filters. Details of the newly observed features are
shown in Fig. 2.8. Note the frequency content of the noisy traces.

[Figure 2.8: filtered and gained field file, traces 1-96, time 0-0.2 s; labeled phases:
refraction, reflection, air wave, direct wave, first multiple reflection, ground roll.]

Figure 2.8 Field file from the Kansas data with detailed identification of phases after filtering
and gaining. The field file and processing are identical to Fig. 2.7, right-center panel. The
source pulse in this data appears as a doublet (i.e., two positive peaks per phase), and the first
peak is picked for interpretation. This is most evident on the direct wave, reflection, and
refraction, and with reversed polarity in the first multiple reflection.

iii. Pitfalls Associated with the Process

When identifying reflections, the processor must always remember that other forms of
coherent noise such as aliased ground roll or air wave, diffraction energy, or random coherency
may all look like reflection events. There are several checks to increase confidence in an
apparent reflection event:

1) Reflections should be visible on several records without much processing. If the
processor identifies a reflection-like event on only one shot gather and cannot find it on other
shot gathers, it should be discounted. Often a noise event at the time of recording may
generate an apparent reflection; it should be discounted, but not forgotten. Remember that a
48-trace shot gather will have one contributing trace in 48 CMP gathers. If the apparent
reflection has a high enough amplitude (or is incorrectly enhanced by processing), it may stack
and show up on 48 different traces on the final seismic section!

2) A true reflection should remain visible over a band of frequencies. Always use several
frequency filters with slight variations in pass-band frequencies on a questionable reflection.
If the apparent reflection is a product of aliasing, it will noticeably change its appearance for
different frequency ranges.

3) Reflections should be hyperbolic, and this can be checked directly by fitting a hyperbolic
curve through the event or by picking three points on the event and calculating the fit (a sketch
of the three-point fit follows below). However, reflection events will not be truly hyperbolic when
they are generated by an undulating surface, when they travel through a medium with strong
lateral velocity variations, or when severe elevation statics problems exist. Therefore, deviations
from hyperbolic moveout can be observed. But remember, a non-hyperbolic reflection event from
one of the aforementioned causes should also be visible on adjacent shot gathers.
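
A minimal least-squares version of the three-point check might look like the Python sketch below (the picks are illustrative); a large misfit flags a non-hyperbolic event:

import numpy as np

# Three (offset m, time s) picks along a questionable event; values illustrative.
picks = np.array([(5.0, 0.052), (20.0, 0.061), (35.0, 0.079)])
x, t = picks[:, 0], picks[:, 1]

# For a hyperbola, t^2 = t0^2 + x^2 / v^2, which is linear in x^2.
A = np.column_stack([np.ones_like(x), x ** 2])
sol, _, _, _ = np.linalg.lstsq(A, t ** 2, rcond=None)
t0 = np.sqrt(sol[0])
v = 1.0 / np.sqrt(sol[1])
rms_misfit = np.sqrt(np.mean((A @ sol - t ** 2) ** 2))
print(f"t0 = {t0 * 1e3:.1f} ms, v = {v:.0f} m/s, misfit = {rms_misfit:.2e}")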

The most common error during the initial examination of the data is misinterpreting refractions as
reflections. When this is done, the processor will typically process the data to enhance what
is believed to be a reflection. Thus, correct segregation of reflections and refractions from the
onset is perhaps the most critical process in all of shallow seismic data processing.

2.3 Improving the Signal-to-Noise Ratio (S/N)
The goal of seismic processing is to enhance the signal-to-noise ratio (S/N) of the data. Three
ways to improve S/N are:

1) Attenuating noise information in a domain in which signal and noise can be
separated. Muting is a way of attenuating noise that has different traveltime and offset
positions than reflections in the time-offset (t-x) domain. Frequency-wavenumber filtering is a
way of attenuating noise that has a different spatial relationship (slope) than reflections; it is
performed in the f-k domain. Frequency filtering is a way of attenuating noise that has a
different frequency content than the reflections and is done in the amplitude-frequency (or
frequency) domain. Each of these techniques assumes that the S/N of the selection of data
being muted is significantly lower than that of the remaining information.

2) Correcting for spatial or temporal shifts of the traces. Spatial shifts in the data are
caused when the conditions in the subsurface violate the layered-earth assumption. These
spatial shifts can be corrected using migration when sufficient velocity information about the
region is known (see Section 3.1). Additionally, lateral velocity variations in the region above
the water table (the weathered zone) create temporal shifts in the shot gathers such that a
hyperbolic reflection event is distorted. Several correction techniques exist to compensate for
this effect. However, seismic processing for shallow data typically is used to retain
information from the weathered zone because it is within the region of interest. One type of
temporal static that needs to be corrected in shallow processing is due to source and receiver
elevation differences. Elevation statics are used to correct for temporal shifts caused by
deviations from the datum plane of the source and receivers during the recording process.

3) Stacking. Theoretically, S/N increases as the square-root of the fold of the seismic data.
This is based on the assumption that reflection information is embedded in random noise.
Thus, during stacking, the signal will increase in amplitude by a factor equal to the fold due to
constructive interference, and the random noise will sum to random noise with only slightly
higher amplitude. The higher the fold of the seismic data, the higher the S/N. However, this
assumption is typically violated by the addition of nonrandom (coherent) noise to the seismic
data, in which case the S/N ratio will not increase as rapidly as the square-root of the fold and,
in some cases in which the coherent noise is not properly removed, S/N will not increase at all
or will decrease with increasing fold. Stacking is covered in Section 2.5.
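
The square-root-of-fold behavior for purely random noise is easy to demonstrate numerically; this Python sketch stacks synthetic noisy copies of a stand-in wavelet:

import numpy as np

rng = np.random.default_rng(0)
n_samples, fold = 500, 24
wavelet = np.sin(2.0 * np.pi * np.arange(n_samples) / 50.0)      # stand-in signal
traces = wavelet + rng.normal(0.0, 2.0, size=(fold, n_samples))  # fold noisy copies

def snr(trace):
    noise = trace - wavelet
    return np.sqrt(np.mean(wavelet ** 2) / np.mean(noise ** 2))

stacked = traces.mean(axis=0)
print(f"single trace S/N ~ {snr(traces[0]):.2f}")
print(f"stacked S/N      ~ {snr(stacked):.2f}  (expected gain ~ {np.sqrt(fold):.1f}x)")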

2.3.1 Killing Noisy Traces

i. The Process

Simple but important, killing noisy traces should be one of the first processes applied to the
data (see Pitfalls, below). The process of “killing” refers to setting to zero all of the amplitude
values in a specified trace.

ii. Applying the Process

The noisy traces seen in Figures 2.5 and 2.6 could be selected and muted one at a time, but in
most cases a noisy trace will be due to a bad connection or bad geophone at a particular
receiver location that was not identified in the field. In this case, most processing packages
allow for all of the traces from a particular receiver location to be zeroed quickly and easily.
This was true for the England and Kansas data.
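
In script form, killing every trace recorded at a set of bad receiver stations is a masked assignment; the array shapes and bad-station list below are illustrative:

import numpy as np

gather = np.random.randn(96, 2000)        # stand-in (traces x samples) shot gather
receiver_station = np.arange(1, 97)       # station number of each trace's receiver
bad_stations = [13, 47, 81]               # stations noted as noisy; illustrative

mask = np.isin(receiver_station, bad_stations)
gather[mask, :] = 0.0                     # "kill": zero every sample of those traces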

iii. Pitfalls Associated with the Process

Noisy traces must be killed for two reasons. First, even when a noisy trace appears to
contain some reflection information, it still has a lower S/N than the rest of the data and will
therefore only serve to decrease S/N of the final section. Removing any trace with a lower
S/N is almost always better than assuming that important information will be lost if the trace is
removed.

The second and most important reason noisy traces should be killed is more subtle. Some
noisy traces can contain data “spikes” in which a single sample has the maximum amplitude
and the adjacent samples are much smaller. This creates two problems: First, the spike will
appear to have infinite frequency and may cause frequency-related processes to behave
badly. When frequency filtering is applied, the spike will be convolved with the filter operator
and appear as a high-amplitude wavelet with the same frequency characteristics as the filter
operator. Second, because the amplitude of the spike is anomalously high, it will not “stack
out” under the assumption that it is random noise. Thus, if any process is applied that
produces spatial effects on the data (trace mixing, f-k filtering, migration, etc.), the single spike
will contaminate much more of the data; it may even appear as a continuous coherent event
on a stacked section.

2.3.2 Muting Coherent Noise

i. The Process

A method for increasing S/N is to remove noise that has a different location than the signal in
the t–x (or shot) domain. Specifically, properly muting refractions, air wave, and ground roll all
increase S/N. For data in which an air-wave phase is dominant, a processor might consider
spatially (f-k) filtering the data to remove the linear air-wave event. However, air wave is
typically a high-frequency (often 1 kHz or more), broadband noise form and is usually
aliased (Figure 2.9, below); thus, f-k filtering the air wave is likely to degrade the data (by
enhancing the aliased air wave) rather than improve the data. If the aliased air wave shown
in Fig. 2.9 is not removed successfully by some other means, it will stack constructively during
the stacking procedure and generate coherent noise on the final stacked section. The best
alternative is to surgically mute the air wave (see Applying the Process).

When muting in any domain (i.e., t-x, f-k, etc.), the edges of the muted region should be
tapered. A taper is used so that the data are not set to zero abruptly but rather are reduced
gradually. The size of the taper must be large enough to minimize processing
artifacts that occur at the edge of the muted region but small enough not to obscure signal.
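
A minimal tapered top mute might be sketched as follows in Python; the mute time and taper length are illustrative parameters, not recommended values:

import numpy as np

def top_mute(trace, mute_end, dt, taper_len=0.008):
    """Zero the trace above mute_end (s), with a raised-cosine taper of taper_len (s)."""
    out = trace.copy()
    n_mute = int(mute_end / dt)
    n_taper = int(taper_len / dt)
    out[:n_mute] = 0.0
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_taper) / n_taper))
    n_end = min(n_mute + n_taper, out.size)
    out[n_mute:n_end] *= ramp[: n_end - n_mute]
    return out

trace = np.random.randn(800)                          # stand-in trace, 0.5-ms samples
muted = top_mute(trace, mute_end=0.040, dt=0.5e-3)    # mute first 40 ms, 8-ms taper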

[Figure 2.9: shot-gather detail; labeled events: true air wave (steep slope),
refraction, reflection, and the apparent (aliased) air-wave velocity.]

Figure 2.9 Example seismic data showing aliasing of air wave. The true velocity of the
air wave is fairly slow (steep slope), but the aliasing of the air wave yields events with an
apparent velocity closer to that of the reflection (aliased slope).

The key to muting is removing the portion of data in which S/N is much lower than in the
rest of the data. For example, Figure 2.10 shows that the removal of information within the
noise cone, where S/N is low, can significantly enhance S/N of the data, even if the mute
region represents a significant portion of the data volume.

[Figure 2.10: two shot-gather panels, source-to-receiver offset -228 to 88 m, time
0-0.1 s; mute regions labeled "refraction mute" and "noise cone mute".]

Figure 2.10 An example from Baker et al., 1998 of shallow seismic data in which all of the
information within the noise cone is degraded by air wave of the same frequency content as the
reflections and thus was muted. Additionally, refractions were muted.

The result of muting such a large portion of the data can be surprising (Figure 2.11). Note that
although some reflection information was included in the muted region, S/N of the muted region
was too low to contribute any important information. Thus, following a conservative approach
to avoid contaminating the final stacked section by coherent noise, the processor could
attempt to mute all regions with low S/N, even if it includes a significant portion of the data.

[Figure 2.11: three stacked-section panels, distance along the seismic profile
48-144 m, time 0-0.10 s; panels: processed, processed plus noise-cone mute, and
data contained in the noise-cone mute.]

Figure 2.11 The results of the severe noise-cone mute shown in Figure 2.10. Note that
some signal is contained in the muted portion (bottom panel of the stacks) but is not of
sufficiently high S/N to be worth keeping (from Baker et al., 1998).

ii. Applying the Process

England Data

The dominant coherent noise in the England data is composed of air wave and refractions.
The England data did not contain significant ground roll, and reflection information with good
S/N was observed within the noise cone.

[Figure 2.12: two shot-gather panels, traces 1-24, time 0-0.2 s, labeled "Before
Mute" and "After Mute".]

Figure 2.12 A preprocessed shot gather from the England data before and after
muting the air wave and refractions. The mute taper length is 8 ms. The two noisy traces
(2 & 17) were also muted. The data are displayed with AGC (40-ms window) and a band-
pass frequency filter (250-300 Hz with 12 dB/octave slopes). Note that a portion of the
reflection at ~35 ms was muted at farther offsets. However, that portion of the reflection
interferes with the first-arriving refraction and thus has a distorted shape that would
degrade the stacking quality.

Kansas Data

Muting coherent noise within the Kansas data was accomplished with only one top-mute per
record. As previously mentioned, air wave propagates at a velocity of ~335 m/s. At the
Kansas site, the near-surface unconsolidated material had a P-wave propagation velocity
slower than the air wave. The reflection energy of interest, therefore, occurs below the air
wave (examine Fig. 2.12 as a comparison). Thus, the coherent noise to be muted consisted
of refractions, direct wave, and air wave, and is located above the reflection of interest. Figure
2.13 shows a preprocessed common-midpoint gather before muting, during the mute-picking
process, and after muting.

[Figure 2.13: three panels of one CMP gather, traces 1-40, time 0-0.1 s; panels:
preprocessed, mute pick, muted record.]

Figure 2.13 A single preprocessed CMP-sorted gather from the Kansas data, with
mute picking shown and applied. The mute taper was 8 ms.

iii. Pitfalls Associated with the Process

The pitfalls associated with muting in the t-x domain generally come from failure to mute
coherent noise either properly or entirely. Applying the mute process itself is straightforward.

Comparing the top and middle panels of Fig. 2.11 shows the effect of failing to remove
coherent noise completely in an attempt to keep all signal. Following is an example of the
England data, in which the refractions and air wave were not muted, demonstrating the
creation of coherent artifacts.

[Figure 2.14: stacked section, offset 0-74 m, time 0-0.2 s; labeled artifacts:
low-frequency stacked refractions, high-frequency stacked aliased air wave, and
high-frequency stacked air wave.]

Figure 2.14 The England data processed without muting air wave or refractions.
The stacked, aliased air wave is moveout related and observed on low-fold CMP gathers.

Figure 2.14 shows the significant effects of not muting the coherent energy (compare with the
muted result, Fig. 1.5). Refractions stack to form coherent events. One hint that refractions
are being stacked is that frequency does not decrease with depth (i.e., low-frequency events
are seen earlier than higher-frequency reflections), contrary to what one would expect with
normal frequency-dependent attenuation. Also, note the presence of coherent and incoherent
air-wave noise.

2.3.3 Elevation and Refraction Statics Corrections

i. The Process

Long-wavelength variation refers to variability with a wavelength typically greater than the
spread length (i.e., the distance covered by one “spread” of geophones). Elevation and
refraction statics corrections are time shifts applied to traces to adjust for long-wavelength
changes in elevation, or in thickness and propagation velocity of the weathering zone at
different sources and receivers. The weathering zone, however, is typically the region above
the water table, and it is sometimes the region of interest in shallow-reflection seismology.
Correcting for the weathering zone in shallow seismic data therefore often defeats the purpose
of determining the structure of very-near-surface stratigraphy, as is the case in the Kansas
data. The purpose of using the England data, however, was to examine subweathering
stratigraphy; thus, refraction statics were used. Short-wavelength variations, whose
wavelength variability is less than the spread length, may be corrected for by using residual
statics corrections (see Section 2.4.5).

ii. Applying the Process

Elevation Statics. The topography at the site in England is minimal; thus, no elevation
corrections were made. However, the Kansas data have an elevation change of 3.0 m from
one end to the other. The standard method for correcting for elevation changes is to perform
datum statics, whereby each source and receiver is shifted in time to a flat datum
(Figure 2.15). Two possible positions for the datum are common: First, the datum could be
located at the lowest portion of the profile. This works well because the velocity of the
material above the datum can be measured directly by calculating the velocity of the direct-
arriving energy. The second method locates the datum at the top of the highest location in the
profile. This method is used when reflection events might be observed between the highest
location and the lowest location on the profile. However, calculating the optimum velocity to
use for the “empty material” above the lower elevations to the upper datum is difficult and
often requires a known, flat, horizontal reflection event in the subsurface to constrain the
values. Alternatively, a sloping datum can be used to minimize the amount of shifting
necessary along a profile having a relatively uniform sloping trend (Treadway et al., 1988).

[Figure 2.15: schematic of a sloping ground surface with an upper datum and a
lower datum; arrows show shifts of sources and receivers to each datum.]

Figure 2.15 Statics corrections are calculated to move the apparent positions of the sources
and receivers to either the high datum (left) or the low datum (right). The arrows represent the
apparent shift in depth (positive or negative) of the source/receiver positions from the ground
surface to the datum.

For the Kansas data, the datum elevation was chosen as a low datum because no reflections
were evident above the datum level (at 275 m/s, about 10 ms). The velocity above the
datum is calculated by measuring the velocity of the direct-arriving P-wave energy
(Figure 2.16). The velocity of the direct arriving P-wave energy remained very consistent
along the profile at ~275 m/s. If the velocity of the very-near-surface layer varied over the
length of the line (confirmed by observing changes in the slope of the direct arrival), an
algorithm to correct for the elevation statics using a variable velocity could be applied, or
refraction statics could be used (see below).
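
The arithmetic of a low-datum elevation static is simple: each station's shift is its height above the datum divided by the near-surface velocity. A Python sketch using the ~275 m/s direct-arrival velocity with illustrative elevations:

v_weathering = 275.0                 # m/s, from the direct-arrival slope
datum_elev = 100.0                   # m; low datum (elevations are illustrative)
elevations = [100.0, 100.5, 101.1, 101.8, 102.4]   # station elevations (m)

# Remove the traveltime spent between the datum and the surface; the sign
# convention (negative = shift traces earlier) is an assumption of this sketch.
statics_ms = [-(z - datum_elev) / v_weathering * 1e3 for z in elevations]
print([round(s, 2) for s in statics_ms])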

[Figure 2.16: portion of a shot gather, source-to-receiver offset 0-37 m, time
0-0.2 s, with the direct arrival picked.]

Figure 2.16 A portion of a shot gather from the Kansas data, with an example of
a direct-arrival pick. The direct-arriving phase is measured to determine the velocity
used for elevation statics.

Refraction Statics. When the target of a near-surface seismic reflection survey is below the
weathering zone, long-wavelength variations of the thickness and velocity of the weathering
zone create static shifts that may degrade the final stacked section. However, the static shifts
can be partially corrected computationally by applying refraction statics corrections. In the
simplest method, the assumption underlying this process is that any deviations of the first-
arriving refractions from a straight line are caused by variations above the refracting interface,
not by undulations in the interface itself. If the assumption is valid, source and receiver static
shifts can be calculated from time picks of the first-arriving refraction. Shown in Figure 2.17 are
the time picks of the first-arriving refraction that were used in the calculation of the static shifts
during the refraction statics process.
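
A minimal sketch of that simplest approach (hypothetical names; a production implementation
would accumulate residuals over all shots and decompose them into surface-consistent source
and receiver terms):

```python
import numpy as np

def refraction_residual_statics(offsets, picks):
    """Deviations of first-break picks from a straight-line refractor.

    offsets : source-to-receiver offsets (m) for one shot gather
    picks   : first-arrival pick times (s) at those offsets
    Under the flat-refractor assumption, the residuals are caused by
    variations above the refractor and can be removed as static shifts.
    """
    slope, intercept = np.polyfit(offsets, picks, 1)  # least-squares line
    residuals = picks - (slope * offsets + intercept)
    return residuals  # positive residual -> trace arrives late
```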

Figure 2.17 A portion of a shot gather (shot gather 19) from the England data with the
refraction time picks shown. Note the deviation from a straight line in the refraction at the
far offsets.

The far end of the England data profile was collected over a very-near-surface anomaly,
which can be observed in the far offsets of the last five shot gathers (Figure 2.18). Note in
Figure 2.18 that the static shifts caused by the very-near-surface variations affect the
reflections as well, and without correction they would degrade the final stacked section.
After refraction statics corrections are applied, such deviations are minimized.

Figure 2.18 The last five shot gathers (16-20) of the England data before (top, raw) and after
(bottom) refraction statics. Note the reduction of the time shifts in the refractions and reflections
at the far offsets (e.g., circled regions).

iii. Pitfalls Associated with the Process

As previously mentioned, when a low datum is used to calculate statics corrections, it is
possible that reflection information will be shifted off the seismic section. The main problem
with elevation corrections, however, is choosing the correct average velocity from the surface
to the datum. If the velocity is incorrect, overcorrections or undercorrections will cause false
structure in the final stacked section (Figure 2.19).

Figure 2.19 A portion of the England data showing the effects of undercorrection (left) and
overcorrection (right) of elevation statics due to incorrect velocity values. Typically, a plot of the
elevation should be shown along with a stacked section to ensure that a viewer can locate
apparent structure that may be due to incorrect elevation statics corrections.

When the velocity values used for elevation statics corrections are incorrect, the apparent
structure will be either proportional (overcorrection) or inversely proportional (undercorrection)
to the elevation change across the profile.

2.3.4 Filtering
i. The Process
We have already discussed one type of filtering: muting in the t-x domain. Two other
types of filters are used commonly: frequency-wavenumber (f-k) filtering and frequency
filtering. Frequency-wavenumber filtering is a method in which linear coherent noise can be
attenuated as long as the signal has a different spatial orientation (slope) than the noise.
Filtering in the f-k domain can be useful in some instances (see Applying the Process) in
which ground roll or other coherent noise cannot be attenuated without severely degrading
signal quality; otherwise, f-k filtering is not recommended (see Pitfalls).

The other type of filtering, frequency filtering, is the mainstay of seismic processing.
Frequency filtering is used to attenuate noise at frequencies other than the dominant frequency
of the signal. When used with shallow-reflection data, this type of filtering is most often used
to attenuate ground roll (Figure 2.20).

Figure 2.20 Unfiltered (left) and frequency-filtered (right) shot gathers (top) with associated
frequency spectra (bottom) normalized to the maximum amplitude. The data on the right were
band-pass filtered from 290 to 420 Hz with 12 dB/octave rolloff.

In Figure 2.20, ground roll (the low-frequency, high-amplitude events at greater times than the
refractions) has been attenuated by the frequency filter. Also, the filtered shot gather (right)
now shows an observable air wave (the high-frequency, steeply dipping event cutting across the
reflections), which cannot be removed by frequency filtering. For shallow-reflection data, the air
wave--which in exploration seismology is usually removed with a high-cut filter--typically has a
frequency content that overlaps the signal and cannot be attenuated (also see Fig. 2.12).

Another way in which the frequency filtering of shallow data differs from deeper seismic
processing is that inspection of the frequency spectrum is not an important tool for
choosing filter parameters. Note that in Fig. 2.20 the dominant frequency of the reflections in
the filtered data could not have been determined by looking at the frequency spectrum of the
raw data. Trial and error is the best method for determining frequency filters for shallow-reflection
data; i.e., use the filter that best attenuates noise and brings out reflections. When using
frequency filters on high-frequency shallow-reflection data, remember that the maximum and
minimum frequencies of the signal have not been found until mostly noise is seen; the
processor should therefore first test a wide range of filters to determine the entire frequency
content of the signal.
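
A minimal band-pass sketch of that trial-and-error scan (hypothetical array names; SciPy
assumed):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(traces, f_lo, f_hi, dt, order=2):
    """Butterworth band-pass applied trace by trace.

    A 2nd-order Butterworth rolls off at ~12 dB/octave, the gentle slope
    recommended later in this section (see Figure 2.26); the forward-backward
    application here is zero phase (which steepens the effective slope).
    traces : 2D array (n_traces, n_samples); dt in seconds.
    """
    nyq = 0.5 / dt
    sos = butter(order, [f_lo / nyq, f_hi / nyq], btype="band", output="sos")
    return sosfiltfilt(sos, traces, axis=-1)

# Trial-and-error scan over wide-ranging pass bands, as recommended above:
# for lo, hi in [(50, 150), (100, 250), (200, 400), (290, 420)]:
#     display(bandpass(gather, lo, hi, dt=0.0005))
```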

ii. Applying the Process


No f-k filtering was applied to the England data. There was minimal unaliased coherent noise
on either the unstacked or the stacked data, and frequency-wavenumber filtering was not seen
as a beneficial process because of the artifacts it can generate.

Figure 2.21 Frequency spectra of the raw (left) and processed (right) Kansas (top) and
England (bottom) data.

Frequency filtering was applied to both data sets (Figure 2.21), although the results were not
as dramatic as those seen in the example in Figure 2.20. The dominant frequency of signal in
the Kansas data was ~100 Hz, and the dominant frequency of signal in the England data was
~210 Hz. Notice in Figure 2.21 that the result of processing, including frequency filtering, was
to enhance the portion of the frequency spectrum containing signal for each data set.

The Kansas data was amenable to f-k filtering for three reasons: First, S/N was fairly high for
the reflection of interest, so the f-k filtering parameters could be controlled well. Second, there
was very little existing noise with the same spatial orientation (slope) as the reflection, so the
noise would not be significantly enhanced by f-k filtering. And third, the ground roll was
coherent and not very dispersive and could therefore be f-k filtered with a fairly gentle taper.
The separation of the phases in the f-k domain is shown in Fig. 2.22, the picking of the
f-k mute region is shown in Fig. 2.23, and the filtered records are shown in Fig. 2.24.
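
A minimal sketch of such a fan filter (hypothetical names; the linear taper in Hz echoes the
20-Hz taper noted in Fig. 2.23):

```python
import numpy as np

def fk_fan_filter(gather, dt, dx, v_cut, taper_hz=20.0):
    """Mute f-k energy with apparent velocity slower than v_cut.

    The pass/reject boundary |f| = v_cut * |k| is kept symmetric about the
    zero-wavenumber axis, as required to avoid artifacts (see Fig. 2.23).
    gather : 2D array (n_traces, n_samples); dx is the receiver spacing (m).
    """
    nx, nt = gather.shape
    spec = np.fft.fft2(gather)                  # to the f-k domain
    k = np.fft.fftfreq(nx, d=dx)[:, None]       # wavenumber (cycles/m)
    f = np.fft.fftfreq(nt, d=dt)[None, :]       # frequency (Hz)
    # Distance (in Hz) above the fan boundary; ramp from 0 to 1 over taper_hz.
    edge = np.abs(f) - v_cut * np.abs(k)
    mask = np.clip(edge / taper_hz, 0.0, 1.0)   # 0 = reject, 1 = pass
    return np.real(np.fft.ifft2(spec * mask))
```

Slow, steeply sloping events such as ground roll fall below the fan boundary and are muted;
near-vertical reflections pass. Because the mask depends only on |f| and |k|, it is symmetric
about the zero-wavenumber axis by construction.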

Figure 2.22 An f-k spectrum (top) and an interpreted spectrum (bottom) for the Kansas data
shot gather shown in Fig. 2.8. Interpreted phases include the air wave, aliased air wave, direct
wave, reflection, ground roll, and a lower-order ground-roll mode. Coherent information sloping
down from left to right plots with a negative wavenumber (left side of the spectrum). Notice that
aliased air wave has an apparent opposite slope (i.e., a positive wavenumber). The segregation
based on slope allows f-k filtering to separate noise from signal.

Figure 2.23 A portion of the Kansas data f-k spectrum in Fig. 2.22 with the f-k-mute
region shaded. The mute region should be symmetrical about the zero-wavenumber
axis, regardless of the data, to avoid f-k artifacts. The mute taper was 20 Hz.

Figure 2.24 Kansas shot gather before (top) and after (bottom) f-k filtering.
The reflection (red dashed hyperbola) is greatly enhanced, and ground roll is
attenuated with minimal artifacts.

iii. Pitfalls Associated with the Process

Frequency-wavenumber filtering is a process that can generate linear coherent noise, which
can be easily misinterpreted. Figure 2.25 shows the results of improperly applied f-k filtering
on a stacked section of the England data. The critical points in f-k filtering are to pick a mute
region that is symmetrical about the zero-wavenumber axis and to taper the edges of the
muted region gently.

Figure 2.25 Partially processed England stack without f-k filtering (top) and two poorly f-k
filtered stacks (bottom left and right). Two distinctive characteristics suggest improper f-k filtering:
a “wormy” appearance (bottom left), and broken coherent events all of which have the same positive
or negative slope (~15°; circled, bottom right).

The artifacts seen in the bottom panels of Figure 2.25 must be avoided. Notice in both cases
that linear coherent noise has been generated: In the bottom left panel of Figure 2.25, coherent
reflection events have been “smeared” into positions where they were not imaged, and in the
bottom right panel, dipping coherent events now appear in place of originally flat-lying
reflections. Thus, it is critical that f-k filtering be used properly, because it is one of the
processing steps most likely to generate artifacts that can lead to improper interpretation.

Frequency filtering also has pitfalls. The most obvious pitfall would be filtering (removing)
signal instead of noise and degrading S/N. Because the trial-and-error method is the best
way to find the optimum filter for shallow reflection data, it is important to try all ranges of filters
and not to assume a priori that the reflection information in your data will be in a particular
frequency range.

An additional pitfall lies in designing band-pass filters in which the rolloff on either side of the
pass band is too narrow (i.e., the slopes are too steep). This will generate ringing (the Gibbs
effect) and will greatly distort the data (Figure 2.26). Slopes should typically not exceed
12 dB/octave.

Figure 2.26 Partially processed England data (left) with gentle (12 dB/octave) tapers on the
band-pass filter, and the same data with the same pass band but with steep tapers
(21 dB/octave) (right). To avoid the Gibbs effect, filter slopes should not be steeper than
12 dB/octave.

2.3.5 Scaling

i. The Process
Scaling is the process whereby amplitudes of data are increased (or at least normalized) for
display or processing purposes. Two factors affect the amplitudes of reflection data and
associated noise: the amplitude of a spherical wavefront is inversely proportional to its
distance from the source, and higher frequencies are attenuated faster than lower frequencies.
Through the combination of these two effects, reflections (which typically have higher
frequency and lower amplitude than ground roll and refractions) are attenuated with distance
from the source. Gain corrections are used, therefore, to enhance the visible appearance of
reflections, with a higher gain correction being needed for deeper reflections. Three main
types of gain correction are constant gain, automatic gain control (AGC), and spherical
divergence gain.

ii. Applying the Process

Both the England and Kansas data needed gain corrections to make reflections visible. Because
no amplitude analysis was to be performed, no spherical-divergence correction was applied.
Automatic-gain-control scaling was used in both cases (England data shown in Figure 2.27).

Figure 2.27 A portion of an England shot gather (after muting) without gain (left) and with
AGC (right) using a 40-ms window length. Note the reflections when AGC is applied.

iii. Pitfalls Associated with the Process

When any gain is applied, amplitude information is altered. When true-amplitude
analysis is to be performed, AGC must not be applied, because AGC is not a reversible linear
process; i.e., it is not possible to extract true-amplitude information from data to which AGC
has been applied. In all cases, too little or too much gain is a problem, but mainly during
display of the data. The main pitfall associated with scaling comes from AGC.

The critical (and often only) parameter specified in the AGC process is the length of the AGC
window. This length is the time-band or window within which the amplitudes are normalized.
One factor associated with window length is that, within the window, the highest amplitude
information most strongly affects normalization. Thus, if the AGC window is too big (i.e., close
to the record length), very little change in the data will be observed (most display features on
processing packages already normalize each trace to its maximum amplitude). If the AGC
window is too small (i.e., approaching the dominant period of the data), everything will be
changed, and the data will be very noisy (Figure 2.28).
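
A minimal sliding-window AGC sketch (hypothetical names; RMS normalization is assumed
here, one common variant):

```python
import numpy as np

def agc(trace, dt, window_s=0.040):
    """Automatic gain control: divide each sample by the RMS amplitude
    in a window centered on it (40 ms here, as in Fig. 2.27)."""
    n = max(1, int(round(window_s / dt)))
    power = np.convolve(trace ** 2, np.ones(n) / n, mode="same")
    return trace / np.maximum(np.sqrt(power), 1e-12)  # avoid divide-by-zero

# Re-running with window_s near the record length (e.g., 0.150 s) or near
# the dominant period (e.g., 0.005 s) reproduces the under- and over-gained
# behavior shown in Figure 2.28.
```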

Figure 2.28 The same England shot gather as in Figure 2.27, with a 150-ms (left) and a 5-ms
(right) AGC window. Annotations mark noise at the mute taper and late-arriving reflections that
are not properly gained. An AGC window of 40 ms is shown in Fig. 2.27.

Using too long an AGC window (Figure 2.28, left) is not a large problem, except that the
processor will have poorly gained data. Using an AGC window that is too short causes
signal to be overwhelmed by the enhanced noise. Additionally, very-small-amplitude
processing artifacts at the tapered edges of mutes will be disproportionately amplified, as
shown in Figure 2.28.

Another problem with AGC scaling is that quiet zones, approximately equal to the length of
the AGC window, can be generated by the AGC process both above and below high
amplitude events. This is easily checked by varying the length of the AGC window and
observing whether the size of the quiet zone changes. When the quiet zone varies with
AGC window length, the AGC needs to be adjusted to minimize the effect, or a time-varying
AGC window needs to be used. If the quiet zone remains, it should be addressed in the
verbal description of the data. There are many instances in which an AGC quiet zone has
been misinterpreted as a low-reflectivity zone, when in fact the subsurface geology has no
relationship to the AGC quiet zone (Figure 2.29).

Figure 2.29 Four identical portions of stacked data, each with an AGC applied using a
different window length (20, 40, 60, and 80 ms). The high-amplitude event at the center of each
section creates AGC “shadows” or “quiet zones” in the incoherent noise above and below the
event (bracketed by the purple lines). The quiet zones, generated solely by the AGC process,
change in size depending on AGC window length.

2.4 Velocity

2.4.1 Common-Midpoint (CMP) Sorting


i. The Process
The concept of the common midpoint (CMP) is the basis for CMP processing. In the past,
when seismographs with few channels and computational limitations were common,
processing and manipulating shot gathers (in which the seismogram consists of all traces with
a common source) to obtain an image of the subsurface was often successful because the
strata being targeted were horizontal. With the increasing use of reflection seismology and
advances in recording technology, however, spatial-processing algorithms applied to shot
gathers began to break down when strata were not horizontal or when nonlayered objects were
to be imaged. As a consequence, CMP processing was developed to take advantage of the
new technology. The basis
for success is that CMP gathers ideally image the same point in the subsurface, whereas shot
gathers represent multiple image points on the subsurface (Figure 2.30).
Figure 2.30 Graphical depiction of how a simple re-sorting process can generate
CMP gathers from shot gathers (1st, 2nd, 3rd shot, etc.). Note that the paths included in the
CMP gather represent only a subset of the information in the shot gathers.

ii. Applying the Process

When the geometry-definition processing step has correctly generated the trace headers (the
only possible pitfall), re-sorting traces from common-shot to CMP gathers is a simple
reshuffling process. It is also possible to sort into common-receiver or common-offset gathers.
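
A minimal header-driven sort (hypothetical header fields 'sx' and 'gx' for the source and
geophone x-coordinates):

```python
import numpy as np

def sort_to_cmp(headers, traces):
    """Re-sort shot-ordered traces into CMP gathers using trace headers.

    headers : structured array with 'sx' and 'gx' fields (m)
    traces  : 2D array (n_traces, n_samples), shot-ordered
    Returns a dict mapping each midpoint to its gather; the number of
    traces per gather is the fold.
    """
    midpoints = 0.5 * (headers["sx"] + headers["gx"])
    gathers = {}
    for mp in np.unique(midpoints):
        idx = np.where(midpoints == mp)[0]
        gathers[mp] = traces[idx]
    return gathers
```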

Figure 2.32 Three representative shot gathers (top) and CMP gathers (bottom) from the partially
processed Kansas data. The shot gathers and CMP gathers have only several traces in common;
however, similar phases can be identified on both. Except in cases of unusual geometry, CMP
gathers will have fewer traces per record than shot gathers, typically by about a factor of two. Note
that the fold of the Kansas data (fold is the number of traces per CMP gather) is variable along the
profile. Variable fold is related to the way in which shot gathers map into CMP gathers, and it
occurs in most data collected as common-shotpoint data.

2.4.2 Velocity–A First Pass

i. The Process
At a point early in the processing flow, perhaps after preprocessing when first examining the
data, it is important to get an idea of the stacking velocities of the reflections in the data.
These velocities will be used later to correct the moveout of the hyperbolic reflections (see
Section 2.4.4). The first pass is important because it not only allows the processor to
become more familiar with the data before more intensive processing begins, but it also gives
the processor some reference points from which to start velocity analysis later.

ii. Applying the Process


A simple velocity determination can be applied if the processor has access to a processing
package that allows the fitting of hyperbolas to the data. In Figure 2.33, hyperbolas are fit to
shot gathers of the England data. Recall that fitting hyperbolas to shot gathers will only yield
a correct stacking velocity when the subsurface is horizontally stratified and isotropic. If not, it
will only be a starting point for later detailed velocity analysis. The other features to identify
during a first pass are multiples, which are arrivals of energy that have reflected several times
in the subsurface. The simplest method, which can be used during the first pass and may
lead to a modified processing strategy, is to identify whether any reflections have twice the
zero-offset intercept time (t0) of an earlier reflection and the same moveout velocity. A possible
candidate might be the first and second reflections in Figure 2.33 (0.36 x 2 = 0.72 ≈ 0.77),
except that the velocities are ~20% different. When the velocities are within ~5%, a multiple
may be indicated and should be attenuated, either by predictive deconvolution (Sheriff and
Geldart, 1995) or by choosing a stacking velocity that will not constructively stack the multiple.
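
That screening test is easy to encode (a sketch; the tolerance values are illustrative
assumptions):

```python
def looks_like_multiple(t0_a, v_a, t0_b, v_b, t_tol=0.10, v_tol=0.05):
    """Flag event B as a possible first multiple of event A: zero-offset
    time roughly doubled and stacking velocity within ~5% (see above)."""
    time_doubles = abs(t0_b - 2.0 * t0_a) <= t_tol * 2.0 * t0_a
    velocity_close = abs(v_b - v_a) <= v_tol * v_a
    return time_doubles and velocity_close

# The Fig. 2.33 picks: t0 roughly doubles (0.36 * 2 = 0.72 ≈ 0.77), but the
# velocities differ by ~20%, so no multiple is indicated:
# looks_like_multiple(0.36, 1524, 0.77, 1941)  ->  False
```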

t0 (s)    Velocity (m/s)
0.36      1524
0.77      1941
1.21      2225

Figure 2.33 Preprocessed England shot gather without (left) and with (right) a preliminary velocity
interpretation of the prominent reflections (picks tabulated above). This could be accomplished
equally well on CMP gathers.

2.4.3 Velocity Analysis
i. The Process
If identifying reflections is the most important process, then velocity analysis is the second
most important process. Reflections will not be correctly expressed on the final stacked
section without correct velocity analysis. Velocity analysis is the process by which the
stacking velocities of reflections are determined. The velocity-analysis procedure is
essentially a forward-modeling procedure in which trial-and-error is the key to success.

There are many methods of velocity analysis, such as using semblance velocity plots,
sequentially picking hyperbolas on CMP gathers, using constant-velocity stacks (CVS), etc.
Perhaps the easiest method to use with shallow-reflection data is CVS plots. Because
shallow-seismic data are typically at least an order of magnitude smaller in data volume than
exploration-scale seismic data, it becomes possible and fairly efficient to use constant-velocity
stacks of the entire data set. Using the CVS method on the entire data set allows for
faster picking of velocities for maximum coherence on entire reflections rather than piecemeal
velocity picking using other methods.
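
A minimal CVS loop (a sketch; it reuses the hypothetical nmo_correct and stack_gather
helpers sketched in Sections 2.4.4 and 2.5.1 below):

```python
import numpy as np

def constant_velocity_stacks(cmp_gathers, velocities, dt):
    """Build one stacked section per trial NMO velocity (CVS panels).

    cmp_gathers : list of (offsets, traces) pairs, one per CMP location
    velocities  : trial stacking velocities (m/s), e.g., 10-m/s steps
    """
    panels = {}
    for v in velocities:
        columns = [stack_gather(nmo_correct(traces, offsets, v, dt))
                   for offsets, traces in cmp_gathers]
        panels[v] = np.column_stack(columns)  # one stacked section at v
    return panels

# e.g., panels = constant_velocity_stacks(gathers, np.arange(1400, 3601, 10), dt)
```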

ii. Applying the Process


England Data
Velocity analysis on the England data was conducted using CVS plots of the entire data set.
Two things are important when generating CVS plots: First, the maximum and minimum
boundaries of the NMO velocities should be sufficient to include the velocities picked on the
first pass of velocity analysis (Section 2.4.2), with generous overlap, typically +/- 30%. Because
shallow seismic surveys may include data from very-low-velocity near-surface material
(<200 m/s) and higher-velocity bedrock (>1000 m/s) within the same section, it may be
necessary to perform two CVS analyses to cover the range properly. For the England data,
however, this was not necessary. The second important consideration when generating CVS
plots is ensuring that the velocity step-size between sequential CVS plots is small enough
(see Pitfalls). For some near-surface data, velocity increments as small as 5-10 m/s are
necessary to get the best stack.

Shown in Figure 2.34 are selected CVS panels from velocity analysis on the England data.
In exploration-scale seismic data processing, the velocity picks can be chosen at the time of
the reflection of interest. With shallow data, however, the velocity picks should be made to
bracket the reflection with the NMO velocity of interest (see Fig. 2.35). The stacking velocities
of sequential reflections on a single gather typically vary by more than 5% (e.g., Fig. 2.33);
thus, because of automatic linear interpolation between velocity picks, a stacked wavelet may
appear different than it did on the CVS plot when the NMO velocities for the top and the bottom
of the reflected wavelet are sufficiently disparate.

Figure 2.34 Several CVS panels (1400-3600 m/s) from the England data. The top two rows are
incremented by 100 m/s, and the bottom row is incremented by 200 m/s, from left to right. Each
reflection becomes most coherent at a different velocity, as expected. For the England data, CVS
panels were generated and velocities picked in increments of 10 m/s (not shown).

Figure 2.35 Preliminary stacked section of the England data showing the timing of the
velocity picks from CVS velocity analysis. Notice that the main reflections are bracketed
rather than picked directly.

The England data did not contain significant lateral velocity variations (Figure 2.36); as a
result, smoothing was unnecessary, and picking velocities on the CVS plots at every fifth
CMP gather was sufficient. It is important to output NMO-corrected CMP gathers after CVS
velocity analysis, before stacking, to make certain that coherent stacked events are being
generated by visible hyperbolas on CMP gathers. Velocity analysis using CVS plots is a
time-saving method of analysis but requires this important quality-control step.

Figure 2.36 Stacking velocity field for the England data (1200-3800 m/s color scale).

Kansas Data

Velocity analysis of the Kansas data was performed by fitting hyperbolas directly to the
reflection of interest on CMP gathers. The Kansas data was analyzed at every fifth CMP
because of lateral velocity variations. Several of the analyzed CMP gathers are shown in
Figure 2.37.

Figure 2.37 Analyzed CMP gathers from the Kansas data (CMP locations 32-102), with
every 20th CMP gather shown.

Using this direct method of velocity analysis works best and is more efficient when only a few
reflections are analyzed. With the Kansas data, the direct picking method was chosen because
there is only one reflection. However, picking hyperbolas directly typically creates greater
lateral variability in velocity picks (see Fig. 2.38) than if CVS or semblance methods are used.
Therefore, lateral smoothing is necessary so that apparent discontinuities are not created by
large NMO-velocity gradients. The Kansas velocity field was smoothed by two iterations of
a three-point running average (Figure 2.39). Figure 2.40 shows the resultant smoothed
velocity field.
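
A sketch of that smoothing step (hypothetical names; endpoint padding by repetition is an
assumption):

```python
import numpy as np

def smooth_picks(v_picks, iterations=2):
    """Two passes of a three-point running average over the NMO-velocity
    picks, as applied to the Kansas velocity field (Fig. 2.39).
    Endpoints are padded by repetition so the profile length is unchanged.
    """
    v = np.asarray(v_picks, dtype=float)
    for _ in range(iterations):
        padded = np.concatenate(([v[0]], v, [v[-1]]))
        v = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return v
```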

Figure 2.38 A portion of the velocity field for the Kansas data prior to smoothing
(330-400 m/s color scale). The vertical black lines represent the locations of the analyzed
CMP gathers. Note the short-wavelength variations due to picking variability.

Figure 2.39 Graphical representation of the smoothing process performed on the Kansas
data, showing the raw picks, first iteration, and second iteration of NMO velocity
(300-400 m/s) versus CMP number. The raw CMP hyperbola picks were smoothed using
two iterations of a three-point running average.
Figure 2.40 The velocity field for the Kansas data after smoothing (340-390 m/s color scale).
The vertical black lines represent the locations of the analyzed CMP gathers. Note that the
short-wavelength variations due to picking variability have been smoothed.

iii. Pitfalls Associated with the Process

There are two main pitfalls in velocity analysis. The first involves performing velocity
analysis without corroborating coherent stacked events with unstacked reflections. This pitfall
involves the improper identification of reflections as well as quality-control shortcomings (see
Sections 2.2.4 and 2.5.2).

The second pitfall in velocity analysis pertains specifically to shallow reflection data and
relates to NMO velocities. As mentioned elsewhere, P-wave velocity within the upper 100 m
of the subsurface can vary from as low as 130 m/s to as high as 5000 m/s. Normal-moveout
corrections associated with large velocity gradients at the near surface may cause distortion
effects (Miller and Xia, 1998).

Additionally, near-surface data are very sensitive to small changes in the NMO velocity. To
demonstrate this point, an example model (Fig. 2.41) is used to generate synthetic data
(Fig. 2.42). Using the correct NMO and stacking velocity of 200 m/s, an accurate image of the
model is produced (Fig. 2.43). However, when a velocity function with NMO velocity
variations of only +/- 6 m/s is used (Fig. 2.44) to generate the stacked section, an apparent
undulating reflection event results (Fig. 2.45). This example shows how apparent subsurface
structure can be generated by small variations in NMO velocity when no true structure exists.
Avoiding this pitfall in shallow-seismic velocity analysis, therefore, involves making sure that
lateral velocity gradients in very-low-velocity data result from careful identification of the
true NMO velocity and are not a function of CVS or semblance plots generated with velocity
intervals that are too large.

Figure 2.41 A velocity model (a 200-m/s P-wave layer over a 500-m/s layer, with the
interface at 8 m depth) used to generate synthetic data for demonstrating the effects of small
changes in NMO velocity (see Figs. 2.42-2.45).

Figure 2.42 Synthetic shot gather (left) and interpretation (right) generated from the velocity
model in Fig. 2.41. Labeled phases include the P-wave reflection, first multiple, direct P-wave,
first refraction, dispersive ground roll, mode conversions, and a model artifact.

Figure 2.43 A synthetic stacked section using the data in Fig. 2.42 with a constant NMO
velocity of 200 m/s across the profile.

Figure 2.44 An NMO velocity function with variations of +/- 6 m/s (194-206 m/s) across
the profile.

Figure 2.45 A synthetic stacked section using the velocity function in Fig. 2.44 instead of the
correct constant NMO velocity of 200 m/s. Note the significant variability that is generated by
only small variations in NMO velocity.

2.4.4 Moveout Corrections

i. The Process
Moveout, or step-out, refers to the traveltime difference between a reflected pulse recorded as a
function of shot-to-receiver distance and the true, vertical temporal location of the reflector at an
offset distance of zero. Normal moveout (NMO) is the traveltime difference between the
recorded traveltime for a reflection of a source-receiver pair with some separation and the
traveltime from the same reflector if the source and receiver were at the same spatial position.
Normal-moveout corrections are applied to data using the velocity field generated during
velocity analysis.

Another type of moveout, dip moveout (DMO), is the traveltime difference caused when
recorded reflected events are generated by dipping reflectors not directly beneath the midpoint
of a source and receiver pair. Dipping reflectors will generate both NMO (if the source and
receiver are not coincident in space) and DMO. Both NMO and DMO corrections require
accurate velocity information in order to be applied correctly. NMO and DMO effects must be
corrected for, if they exist, to obtain a true spatial and temporal image of the subsurface.

NMO Corrections

Normal-moveout corrections are applied to CMP gathers to flatten hyperbolic reflection events
for stacking. The basic assumption for determining NMO corrections is that user-measured
stacking velocities are equal to the NMO velocities. This assumption is valid when structural
dips or lateral velocity changes are small. The degree to which the assumption is valid
determines the success of the CMP processing without more advanced procedures (see
Section 3.1). The NMO velocity is the velocity at which hyperbolic reflection events will be
correctly flattened. The NMO correction is applied to get the zero-offset traveltime to the
reflection (t0):

$t_0^2 = t_x^2 - x^2/V_{rms}^2$,   (1)

where tx is the recorded traveltime to the reflection with the source and receiver at a horizontal
offset x, and Vrms is the root-mean-square velocity to the interface. For the first reflecting
interface, Equation (1) is simplified because Vrms is equal to the velocity of the surface layer,
which can be measured from the direct-arriving energy on a CMP or walkaway seismogram
(see Section 2.3.3). For later reflection events, Vrms can be approximated from stacking
velocities on CMP gathers. Most seismic processing packages will apply the NMO correction
algorithm given the velocity function determined from velocity analysis.
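
A minimal NMO-correction sketch based on equation (1), including the stretch mute discussed
under Applying the Process below (hypothetical names; sample interpolation details vary by
package):

```python
import numpy as np

def nmo_correct(traces, offsets, v_rms, dt, stretch_limit=0.30):
    """Flatten reflections using equation (1): for each output time t0,
    the input sample is read at tx = sqrt(t0^2 + x^2/Vrms^2). Samples
    whose NMO stretch (tx - t0)/t0 exceeds stretch_limit are muted
    (the 30% rule of thumb discussed in this section).
    """
    n_traces, n_samples = traces.shape
    t0 = np.arange(n_samples) * dt
    out = np.zeros_like(traces, dtype=float)
    for i, x in enumerate(offsets):
        tx = np.sqrt(t0 ** 2 + (x / v_rms) ** 2)
        out[i] = np.interp(tx, t0, traces[i], right=0.0)
        stretch = (tx - t0) / np.maximum(t0, dt)   # guard against t0 = 0
        out[i, stretch > stretch_limit] = 0.0      # NMO-stretch mute
    return out
```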

There are several more sophisticated processing steps that can correct nonhyperbolic
moveout caused by either anisotropy or strong lateral velocity gradients (Yilmaz, 1987;
Sheriff and Geldart, 1995). In most instances, however, anisotropy effects will not completely
preclude a fairly good final stacked section, and the negative effects of strong lateral velocity
gradients can be overcome by increased sampling of the subsurface (i.e., smaller station
spacing; Baker et al., 1999).

DMO Corrections

Corrections for DMO are necessary when reflecting layers have significant dip. Dip moveout
corrections are made in place of prestack migration. However, the DMO correction is negligible
for small dip angles (< 10°) and is often not required in near-surface seismic data (see Section
3.1 for additional discussion).

ii. Applying the Process


When NMO corrections are applied, the amount of correction is related to depth (or time),
NMO velocity, and source-to-receiver offset. NMO corrections are unstable in two cases:
when the velocity gradient is very large (Miller and Xia, 1998) or at large source-to-receiver
offsets, where NMO stretch becomes an important issue (Miller, 1992).

Reflection information can be distorted by differential NMO between the top and the bottom of
the wavelet (Fig. 2.46). This NMO-stretch phenomenon is ubiquitous in near-surface CMP
reflection imaging and can only be avoided by using other processing methods (see Section
3.1). However, an NMO-stretch mute can be applied during the NMO-correction process to
remove the frequency-distorted reflection information that would otherwise degrade the final
stacked section.

Figure 2.46 A synthetic CMP gather without (left) and with (right) an NMO correction.
Note that the near-offset wavelets after NMO retain a frequency similar to that of the
uncorrected data, whereas the far-offset wavelets undergo NMO stretch and decrease in
frequency by about a factor of two.

Both the Kansas and England data have enough lateral coverage, and the strata dip
gently enough, that DMO is unnecessary. However, both data sets have NMO-stretch problems
at far offsets, and an NMO-stretch mute was applied. A good rule of thumb is to reject
information that has stretch greater than 30% during NMO corrections. Miller (1998)
suggested NMO-stretch mutes as low as 10-15% when the high-frequency content of the
reflections is more important than increased S/N. The England and Kansas data had 30%
NMO-stretch mutes applied. This is in contrast to the common hydrocarbon-exploration
practice of muting at 50% stretch.

iii. Pitfalls Associated with the Process

The main pitfall associated with NMO corrections is the NMO-stretch mute. When the mute is
not severe enough, distorted low-frequency reflection information will be stacked into the final
seismic section. Conversely, too severe an NMO-stretch mute can attenuate reflection
information such that stacking will not increase S/N. Although a rule of thumb is suggested
(30%), a trial-and-error analysis of the exact limit of the stretch mute should be performed for
every data set.

2.4.5 Residual Statics Corrections

i. The Process
Elevation statics and refraction statics are used to account for long-wavelength variability
where the wavelength of variation is greater than or equal to the receiver spread length
(discussed in Section 2.3.3). Residual statics corrections, conversely, are used to adjust
NMO-corrected data to account for near-surface variability with wavelengths less than the
spread length. Typically, this variability is due to localized short-wavelength velocity
variations, and because the variations may be less than a station spacing, it is impossible to
reconstruct the actual geometry of the variations (i.e., they cannot be imaged using the
existing experiment parameters). Therefore, the residual-statics-correction process is applied
to NMO-corrected CMP gathers to adjust the traveltime deviations and improve the coherence
of existing reflections. This process smooths the traveltime deviations using small time shifts
within a user-specified limit.

The basic assumption of the correction process is that near-surface variations can be
adjusted for by shifting entire traces, not samples within traces. Additionally, residual statics
corrections can be made while remaining surface consistent–a process that assumes time
delays depend on the surface position of the sources and receivers and not on the path of
energy in the subsurface.
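
A minimal whole-trace shift estimate by cross-correlation with a pilot trace (a sketch under
the assumptions above; the +/- 4-ms limit is illustrative, and the circular roll used here is
adequate only away from trace edges):

```python
import numpy as np

def residual_static(trace, pilot, dt, max_shift_s=0.004):
    """Estimate a whole-trace residual static by cross-correlating the
    NMO-corrected trace with a pilot (e.g., the current CMP stack),
    limited to a small user-specified shift (+/- 4 ms here).
    A positive result means the trace arrives late relative to the pilot.
    """
    max_lag = int(round(max_shift_s / dt))
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.dot(np.roll(trace, -lag), pilot) for lag in lags]
    return lags[int(np.argmax(xc))] * dt
```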

Both residual statics corrections and surface-consistent residual statics corrections, however,
assume a near-vertical angle of incidence for the downgoing and upgoing energy at the
sources and receivers. In exploration-scale seismic data, this condition is typically true
because the velocity of the near-surface material is much slower than the velocity of the
material of interest (Yilmaz, 1987). With near-surface seismic reflection data, this condition of
near-vertical angles of incidence at the surface is not always true because (1) the low-
velocity near-surface is typically the volume being imaged, and (2) the spread-length to
reflection-depth ratio is greater. Thus, the downgoing and upgoing raypaths at the sources
and receivers are not vertical, and the process assumptions are invalid. Residual statics
corrections of both types therefore should be made with care; often they will not be as
robust as is observed in exploration-scale data.

When residual statics corrections are applied, the final stacked section can usually be further
improved by performing velocity analysis again after the residual statics are applied. And the
entire procedure–velocity analysis, NMO correction, residual statics corrections–can be
iterated until improvement in the data becomes negligible (see Fig. 2.1).

ii. Applying the Process

After the first pass of velocity analysis and NMO corrections were made on the England data,
the reflection events were observed to have residual short-wavelength variations. Therefore,
residual statics corrections were calculated and applied (Figure 2.47). Two iterations of
velocity analysis, NMO correction, and residual statics were performed. Further iterations did
not produce significant improvement in the data. However, applying the two iterations of
residual statics corrections improved the final stacked section (Figure 2.48).

Figure 2.47 Representative NMO-corrected England CMP gathers before (top) and
after (bottom) application of residual statics corrections and second-iteration velocity
analysis.

Figure 2.48 Stacked England data before (left) and after (right) two iterations of velocity
analysis and residual statics corrections. Note the significant improvement of the reflection
at ~215 ms, which was barely visible prior to the process.

iii. Pitfalls Associated with the Process

Residual statics corrections should only be used on data with coherent reflections and a fairly
high S/N ratio; otherwise, application of the process may generate coherent artifacts on a final
stacked section. As with most other processes, the corrected CMP gathers and stacked
section should be compared to CMP gathers prior to correction, and coherent events should
be corroborated.

2.5 Stacking and Verification
2.5.1 Stacking
i. The Process

The process of stacking is straightforward: traces within NMO-corrected CMP gathers are
summed vertically (stacked), and the resulting traces are positioned at the CMP locations.
Figure 2.49 shows a CMP gather from the England data, the resultant stacked trace, and its
final position in the stacked section.
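
A minimal stacking sketch (hypothetical names; normalizing by the live-trace count is one
common convention, so partially muted gathers are not penalized):

```python
import numpy as np

def stack_gather(nmo_corrected):
    """Sum NMO-corrected traces vertically and normalize by the number
    of live (nonzero) traces contributing at each time sample.
    nmo_corrected : 2D array (n_traces, n_samples)
    """
    live = np.count_nonzero(nmo_corrected, axis=0)
    total = nmo_corrected.sum(axis=0)
    return total / np.maximum(live, 1)
```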

Figure 2.49 An NMO-corrected CMP gather (location 13.0, left), its stacked trace (center),
and the position of the stacked trace in the final England stacked section (right). Note that
without the adjacent traces, it would be very difficult to identify reflected energy on a single
trace.

2.5.2 Analyzing Stack with Shot/CMP Gathers

i. The Process
This step is not only an intermediate step in the velocity-analysis process, but also a critical
final step before the interpretation or publication of data. This is the stage in which original
shot gathers, processed shot gathers, CMP gathers, NMO-corrected gathers, and the stacked
section must be analyzed in an integrated way to ensure that the only coherent events on the
stacked section are reflections.

ii. Applying the Process

Corroboration of reflections and coherent events on the stacked section of the England data
was demonstrated previously (Figure 2.49). Figure 2.50 shows a processed shot gather, an
NMO-corrected CMP gather, a stacked trace, and the Kansas stacked section.
This corroboration was performed on both data sets, at multiple locations along the profiles.

Figure 2.50 A processed shot gather (record 14), an NMO-corrected CMP gather
(location 27.0), the stacked trace, and the Kansas stacked section. Similar displays were
used all along the profile to corroborate coherent energy on the stacked section with
reflections observed on the shot gathers.

2.5.3 Muting Coherent and Incoherent Noise

i. The Process
After the major steps of the processing flow and the final analysis of coherent events have
been completed, uncorroborated coherent and incoherent noise must be muted on the post-
stack seismic data. Post-stack muting is typically used to remove noise created during
processing that has not been corroborated with the shot gathers.

ii. Applying the Process

A post-stack mute was applied to the final section of the England data (see Section 2.1.3).
This mute removed minor distorted coherent events at the edge of the section and incoherent
artifacts from previous filtering, scaling, and stacking (Figure 2.51).

Figure 2.51 England stacked section before (left) and after (right) muting noise. The muted
portions (arrows) are most likely artifacts and noise from previous processing steps.

iii. Pitfalls Associated with the Process


The pitfall to avoid is failure to mute uncorroborated coherent or incoherent noise, either of
which may be misinterpreted by those not familiar with the processing routine. As a result, the
final seismic data also may be misinterpreted.

2.6 Displaying Seismic Data
2.6.1 Muting Low-Fold Data
i. The Process
Fold refers to the multiplicity of common-midpoint data. Theoretically, S/N will increase by
(fold)^(1/2) if all stacked nonreflection energy is incoherent. Although the assumption that S/N
increases with increasing fold is not always true, a strong relationship typically exists
between fold and S/N.
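
Fold is easy to compute from the midpoints used during CMP sorting (a sketch, reusing the
hypothetical midpoint array from Section 2.4.1):

```python
import numpy as np

def fold_per_cmp(midpoints):
    """Count traces per CMP location (the fold), given the midpoint
    coordinate of every trace in the line."""
    locations, counts = np.unique(midpoints, return_counts=True)
    return locations, counts

# Expected S/N gain from stacking incoherent noise is sqrt(fold):
# a fold of 16 gives roughly a 4x amplitude advantage for signal.
```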

Shown in Figure 2.52 are the Kansas and England data with the associated fold diagrams.
Clearly, as fold decreases at the edges of the profile, the S/N of the reflection events
decreases. Muting the low-fold data at the edges of seismic sections will prevent
misinterpretations, such as apparent horizon truncations or pinch-outs, by people not familiar
with seismic processing.

Additionally, fold will typically be variable in near-surface CMP data because time and
equipment constraints often lead to field geometries which are untraditional by deeper
exploration standards. For instance, the leapfrogging technique used on the Kansas data
(see Section 1.4.2) created the uneven fold observed in Figure 2.52. If fold is not consistent
along the main portion of a seismic profile, a fold diagram should be included in the final
presentation to allow consideration of possible fold-related features on the section.

Figure 2.52 Stacked sections of the Kansas (left) and England (right) data with fold diagrams.
Low-fold regions at the edges of the sections correlate with low-S/N portions of the data. Note
that the variable fold on the Kansas data does not correlate with any feature on the stacked
section, so the geometry of the reflection event can be considered unrelated to fold.

ii. Applying the Process

Determining the fold cutoff for the final stacked section depends on the fold variability, the S/N
of the data, and the purpose of the data. The final sections of the Kansas and England data
are shown below with their associated fold diagrams (Figure 2.53).

Figure 2.53 Stacked sections of the Kansas (left) and England (right) data with
low-fold data removed.

iii. Pitfalls Associated with the Process

The two related pitfalls associated with removing low-fold data are removing too much or too
little data. As mentioned previously, however, the amount to be removed is related to several
factors, and no rule of thumb exists; it is possible for high-quality data to have no data
removed at all.

2.6.2 Converting from Time to Depth

i. The Process
Often, one goal of collecting seismic reflection data is to determine the depth to a reflector or
reflectors in the subsurface. When structural dips and lateral velocity gradients are small,
seismic time sections can be converted to depth sections using the velocity function calculated
during velocity analysis. The critical step is to convert stacking velocities to interval velocities
using Dix’s equation (Dix, 1955). However, the degree of accuracy of the resulting interval
velocities is directly affected by the accuracy of the stacking velocities and the amount of
structural dip and lateral velocity variation (see Pitfalls). When interval velocities have been
calculated, depth sections can be created by mapping the traveltime data sample-by-sample.
At this point, external data such as refraction, borehole, or outcrop information can be used to
better constrain the depth section, and interval velocities can be adjusted accordingly.
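
A minimal Dix-equation sketch (hypothetical names): interval velocities from stacking
velocities, then depths from the two-way interval times:

```python
import numpy as np

def dix_interval_velocities(t0, v_stack):
    """Dix (1955): convert stacking (~rms) velocities to interval
    velocities between successive zero-offset reflection times.

    t0      : zero-offset times of the picked reflectors (s), increasing
    v_stack : stacking velocities to those reflectors (m/s)
    """
    t = np.concatenate(([0.0], np.asarray(t0, dtype=float)))
    v = np.concatenate(([v_stack[0]], np.asarray(v_stack, dtype=float)))
    num = v[1:] ** 2 * t[1:] - v[:-1] ** 2 * t[:-1]
    return np.sqrt(num / (t[1:] - t[:-1]))

def reflector_depths(t0, v_stack):
    """Cumulative depth from two-way interval times: z = v_int * dt / 2."""
    v_int = dix_interval_velocities(t0, v_stack)
    dt_layers = np.diff(np.concatenate(([0.0], np.asarray(t0, dtype=float))))
    return np.cumsum(v_int * dt_layers / 2.0)

# Roughly, a reflection at ~0.05 s with a ~380 m/s stacking velocity maps
# to a depth of about 380 * 0.05 / 2 ≈ 9.5 m.
```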

ii. Applying the Process

Depth conversion for the Kansas data was straightforward, as only one reflected interface
was observed, and the interval velocity information could be calculated from the direct arrivals.
Shown in Figure 2.54 is the depth-converted section of the Kansas data, without vertical
exaggeration. The depth-to-bedrock was constrained by nearby outcrop information.

Figure 2.54 Kansas data depth section with no vertical exaggeration, showing the depth
and geometry of the alluvium/bedrock contact.

iii. Pitfalls Associated with the Process

When using Dix’s equation to create a depth section by converting stacking velocities to
interval velocities, errors will result when the aforementioned assumptions are invalid. If lateral
velocity gradients or steep dips are expected, depth migration is the appropriate recourse;
however, a very accurate velocity model is required (see Section 3.1).

2.6.3 Displaying Seismic Data

i. The Process
The methods used to display near-surface seismic data are the same as in other exploration-
scale processing, and the same critical points apply: sections should be labeled as time or
depth sections; depth sections should be displayed without vertical exaggeration, if possible;
and the type of display should reflect the purpose of the section. The five main types of
display are wiggle trace, variable area, variable density, variable area wiggle trace, and
variable density wiggle trace. Additionally, color or grayscale may be used for each type. Up
to this point, all figures in the primer have been variable area wiggle trace displays.

ii. Applying the Process

Shown in Figures 2.55 and 2.56 are several methods of display and aspect ratios for the
England and Kansas data. Coherency and detail are opposing factors in most cases.

Figure 2.55 Several types of displays and aspect ratios for the England data: (a)
variable area wiggle trace; (b) variable density; (c) variable area wiggle trace; (d) color
variable density; (e) variable area; and (f) wiggle trace. There are no processing
differences among the displays.

Figure 2.56 Several types of displays and aspect ratios for the Kansas data: (a) variable area wiggle
trace; (b) variable density; (c) wiggle trace; (d) variable area wiggle trace; and (e) color variable density.
There are no processing differences among the displays.

3 DISCUSSION

3.1 Applications of Migration
Migration is useful when lateral velocity variations or reflection dip angles are severe, or when
there is a large amount of diffracted energy. The migration process was developed for use on
hydrocarbon-exploration data in which the number of channels per record, the number of
traces per stack, and the S/N are high. Rarely are these conditions satisfied for near-surface
data. The size of the data set is important because migration is an inversion process, and it
generates numerical artifacts at the edges of data. The S/N ratio is likewise important for the
migration process because noise can generate substantial artifacts upon inversion. Black and
others (1994) considered the use of migration on near-surface data and concluded that
because reflection depths are typically small compared with spread length and propagation
velocities are low, the usefulness of migration is limited. For completeness, however, a brief
overview of migration and references for further information follow.

When the subsurface is composed of flat-lying homogeneous layers, stacked CMP gathers
that have been NMO-corrected to represent zero-offset data will present a true temporal
image of the subsurface. If, however, the subsurface consists of substantial lateral velocity
variations or steeply dipping interfaces, information recorded from a surface receiver will not
represent the structure directly beneath the receiver. Additionally, diffractions may occur on
recorded data. Diffractions appear as hyperbolic events resulting when wave energy
impinges on sharp objects or penetrates areas restricted by geometrical optics, and may
obscure reflections and disrupt reflection continuity.

Migration is an inversion process that can be applied to CMP data, either before or after
stacking, that ideally will reassign reflection data to its true (zero-offset) position. Thus, the
apparent angle of dipping reflections seen on unmigrated stacked data will be adjusted to the
true dip angle of the reflector in the subsurface. Additionally, data can be converted correctly
from time to depth, even when there are substantial lateral velocity variations. A summary of
the geometric configuration involved in migration is given in Sheriff and Geldart (1995).
Migration can also increase S/N of the data by removing or reducing the effect of diffractions, if
they exist, by collapsing their energy to a single point.

Migration works best when recorded events do not interfere (either constructively or
destructively) with one another. Short-period noise may be smeared into arc-shaped artifacts
(migration smiles) on the migrated data; these usually appear near the edges of the data,
where the S/N is reduced.

Current migration methods fall into two broad categories: diffraction-stack migration and
transformation migration. Each migration method is based on slightly different assumptions
concerning the data to be migrated. All migration algorithms, however, require accurate velocity
information, which can be obtained by the methods previously described (see Section 2.4.2).

Diffraction-stack migration techniques are based on work by Hagedoorn (1954). This method
treats each reflector as a sequence of closely spaced diffracting points (Sheriff and Geldart,
1995). The most commonly used form of diffraction-stack migration is Kirchhoff migration,
which uses an integral solution to the wave equation (Schneider, 1978). This method adjusts
amplitude values for obliquity and divergence before relocating them from their apparent
positions. Similar migration methods include wave-front smearing and common-tangent
migration (Sheriff, 1978).
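
A bare-bones sketch of the diffraction-stack idea is given below; it assumes a constant
velocity and omits the obliquity and divergence weighting (and the anti-alias protection) that a
production Kirchhoff implementation would apply, so it illustrates the geometry only. The
function name and all parameters are illustrative.

    import numpy as np

    def diffraction_stack(data, dt, dx, v):
        """data: (n_traces, n_samples) zero-offset section; constant velocity v."""
        n_x, n_t = data.shape
        image = np.zeros_like(data)
        x = np.arange(n_x) * dx
        for ix in range(n_x):                  # output image trace
            for it in range(n_t):              # output two-way time it*dt
                z = 0.5 * v * it * dt          # depth of the candidate scatterer
                # two-way time from each surface position to that scatterer
                t = (2.0 / v) * np.sqrt(z**2 + (x - x[ix]) ** 2)
                idx = np.round(t / dt).astype(int)     # nearest-sample lookup
                valid = idx < n_t
                image[ix, it] = data[np.arange(n_x)[valid], idx[valid]].sum()
        return image                           # O(n_x*n_x*n_t); fine for small sections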

Transformation migration techniques include frequency-wavenumber migration (Robinson,
1983), Fourier-transform or Stolt migration (Stolt, 1978), phase-shift or Gazdag migration
(Gazdag, 1978; Gazdag and Sguazzero, 1984a, 1984b), and Radon-transform or slant-stack
migration (Hubral, 1980). All of these techniques transform the data from the offset-time (x-t)
domain to another domain before migration.
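
As one concrete illustration of the transformation approach, the sketch below outlines a
constant-velocity Stolt migration: transform the section to the frequency-wavenumber domain,
remap each output frequency onto the dispersion relation, scale by the Jacobian of the
mapping, and transform back. A production implementation treats interpolation, lateral velocity
variation, and sign conventions far more carefully; the function name, the simple linear
interpolation, and all parameters here are illustrative.

    import numpy as np

    def stolt_migrate(data, dt, dx, v):
        """data: (n_samples, n_traces) zero-offset section; constant velocity v."""
        n_t, n_x = data.shape
        D = np.fft.fft2(data)                        # (omega, kx) spectrum
        w = 2.0 * np.pi * np.fft.fftfreq(n_t, dt)    # input temporal frequency
        kx = 2.0 * np.pi * np.fft.fftfreq(n_x, dx)   # horizontal wavenumber
        kt = w.copy()                                # output (migrated-time) frequency
        c = v / 2.0                                  # exploding-reflector velocity
        order = np.argsort(w)                        # np.interp needs an ascending axis
        Dm = np.zeros_like(D)
        for j in range(n_x):
            # frequency each output sample must be read from: w^2 = kt^2 + (c*kx)^2
            w_src = np.sign(kt) * np.sqrt(kt**2 + (c * kx[j]) ** 2)
            re = np.interp(w_src, w[order], D[order, j].real, left=0.0, right=0.0)
            im = np.interp(w_src, w[order], D[order, j].imag, left=0.0, right=0.0)
            with np.errstate(divide="ignore", invalid="ignore"):
                jac = np.where(w_src != 0.0, kt / w_src, 0.0)   # |d(w)/d(kt)|
            Dm[:, j] = jac * (re + 1j * im)
        return np.fft.ifft2(Dm).real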

For migration to be successful, two factors are important: 1) the data set must be sufficiently
large to minimize the influence of the numerical artifacts that occur at the edges of the data,
and 2) time and resources must be available for the rigorous work of obtaining accurate
velocity profiles and for the iterative quality control that confirms the correctness of the
results.

3.2 Critical Processing Steps
All of the processing steps described in this primer can either improve data quality or create
the appearance of coherent noise; thus, all are important. Several steps, however, deserve
particular attention:

1) Initial Muting of Coherent Noise: This step requires that the processor have a good
understanding of what constitutes signal and what constitutes noise. The more noise that can
be removed early on, the better the results of subsequent processing steps (a minimal mute
sketch follows this list).

2) Velocity Analysis: The velocity analysis, and the velocity field derived from it, are critical to
producing a quality final stacked section. A velocity field that fails to capture velocity
gradients or to honor signal coherency can, when applied, attenuate genuine reflection
events or create coherent noise. Additionally, the accuracy of the velocity function directly
controls the quality of a depth-converted section (a Dix-equation sketch follows this list).

3) Confirmation: This is the most important step in shallow-seismic data processing. It is
relatively easy to generate coherent events on a stacked section that have no relation to the
subsurface. Misinterpretation of poorly processed shallow-seismic data not only creates
immediate problems directly related to the seismic data, but also reinforces the already
strong “black-box” view of seismic imaging.
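
As an illustration of step 1, the following is a minimal sketch of a velocity-defined noise-cone
mute in the spirit of Baker and others (1998). The bounding velocities below are placeholders;
in practice they must come from inspecting the actual records.

    import numpy as np

    def mute_noise_cone(gather, offsets, dt, v_fast=330.0, v_slow=150.0):
        """Zero every sample inside the noise cone bounded by apparent velocities
        v_fast (e.g., the air wave) and v_slow (the slow tail of the ground roll).
        gather: (n_traces, n_samples); offsets in meters; dt in seconds."""
        out = gather.copy()
        t = np.arange(gather.shape[1]) * dt
        for i, x in enumerate(np.abs(offsets)):
            inside = (t >= x / v_fast) & (t <= x / v_slow)
            out[i, inside] = 0.0               # hard mute; a taper is gentler
        return out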
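
And as an illustration of why the velocity function controls depth conversion, here is a minimal
sketch of the Dix (1955) equation, which converts picked stacking (approximately RMS)
velocities to interval velocities; the picks below are illustrative only.

    import numpy as np

    t = np.array([0.020, 0.045, 0.080])        # two-way times of picks (s)
    v_rms = np.array([400.0, 550.0, 700.0])    # picked velocities (m/s)

    # Dix equation: interval velocity between successive picks.
    num = v_rms[1:] ** 2 * t[1:] - v_rms[:-1] ** 2 * t[:-1]
    v_int = np.sqrt(num / (t[1:] - t[:-1]))
    print(v_int)                               # approximately [645, 855] m/s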

3.3 Avoiding Pitfalls
Various processing pitfalls have been detailed in this primer. However, each processing flow
has unique pitfalls that can lower the S/N of the data or create coherent noise that may be
misinterpreted. A working knowledge of the pitfalls associated with the individual processing
steps is therefore critical to becoming a good shallow-reflection seismic-data processor.
Steeples and Miller (1998) have written an excellent paper specifically detailing pitfalls
associated with near-surface reflection seismology.

APPENDICES
A. References

Baker, G.S., Steeples, D.W., and Drake, M., 1998, Muting the noise cone in near-surface
reflection data: An example from southeastern Kansas: Geophysics, vol 63, 1332-1338.

Baker, G.S., Schmeissner, C., Steeples, D.W., and Plumb, R.G., 1999a, Seismic reflections
from depths of less than two meters: Geophys. Res. Lett., vol 26, no. 2, 279-282.

Barry, K.M., Cavers, D.A., and Kneale, C.W., 1975, Recommended standards for digital tape
formats: Geophysics, vol 40, 344-352.

Black, R., Steeples, D.W., and Miller, R.D., 1994, Migration of shallow reflection data:
Geophysics, vol 59, 402-410.

Dix, C.H., 1955, Seismic velocities from surface measurements: Geophysics, vol 20, 68-86.

Gazdag, J., 1978, Wave equation migration with the phase-shift method: Geophysics, vol 43,
1342-1351.

Gazdag, J., and Sguazzero, P., 1984a, Migration of seismic data: Proceedings of the IEEE,
vol 72, no. 10, 1302-1315.

Gazdag, J., and Sguazzero, P., 1984b, Migration of seismic data by phase-shift plus
interpolation: Geophysics, vol 49, 124-131.

Hagedoorn, J.G., 1954, A process of seismic reflection interpretation: Geophysical
Prospecting, vol 2, 85-127.

Hubral, P., 1980, Slant stack migration: Geophysical Prospecting, vol 25, 728-745.

Hunter, J.A., Pullan, S.E., Burns, R.A., Gagne, R.M., and Good, R.S., 1984, Shallow seismic
reflection mapping of the overburden-bedrock interface with the engineering
seismograph-Some simple techniques: Geophysics, vol 49, 1381-1385.

Miller, R.D., 1992, Normal moveout stretch mute on shallow-reflection data: Geophysics, vol
57, 1502-1507.

Miller, R.D., and Xia, J., 1998, Large near-surface velocity gradients on shallow seismic
reflection data: Geophysics, vol 63, 1348-1356.

Pakiser, L.C., and Mabey, D.R., 1954, Mapping shallow horizons with reflection seismograph:
Science, vol 119, 740.
Pakiser, L.C., Mabey, D.R., and Warrick, R.E., 1954, Mapping shallow horizons with reflection
seismograph: AAPG Bull., vol 38, 2382-2394.

Pakiser, L.C., and Warrick, R.E., 1956, A preliminary evaluation of the shallow reflection
seismograph: Geophysics, vol 21, 388-405.

Pullan, S.E., 1990, Recommended standard for seismic (/radar) data files in the personal
computer environment: Geophysics, vol 55, 1260-1271.

Pullan, S.E., and MacAulay, H.A., 1987, An in-hole shotgun source for engineering seismic
surveys: Geophysics, vol 52, 985-996.

Robinson, E.A., 1983, Migration of geophysical data: International Human Resources
Development Corp., Boston, MA.

Schneider, W.A., 1978, Integral formulation for migration in two and three dimensions:
Geophysics, vol 43, 961-980.

Sheriff, R.E., 1978, A first course in geophysical exploration and interpretation: International
Human Resources Development Corp., Boston, MA.

Sheriff, R.E., and Geldart, L.P., 1995, Exploration Seismology, 2nd Edition: Cambridge
University Press, New York, 592 p.

Steeples, D.W. and Miller, R.D., 1998, Avoiding pitfalls in shallow seismic reflection surveys:
Geophysics, vol 63, 1213-1224.

Stolt, R.H., 1978, Migration by Fourier transform: Geophysics, vol 43, 23-48.

Treadway, J.A., Steeples, D.W., and Miller, R.D., 1988, Shallow seismic study of a fault
scarp near Borah Peak, Idaho: J. Geophys. Res., vol 93, 6325-6337.

Warrick, R.E., and Winslow, J.D., 1960, Application of seismic methods to a groundwater
problem in northeastern Ohio: Geophysics, vol 25, 505-519.

B. Data included on CD-ROM

Five main components are included on the CD-ROM:

1) A file [READ.ME] that is an ASCII text file containing information about the contents of the
CD-ROM.

2) A folder [Acrobat] that contains Adobe Acrobat 3.0 Reader (freeware) information and
installers for Macintosh, IBM-compatible (Windows), and UNIX platforms. Acrobat can be
used to navigate interactively through the digital version of this document.

3) A file [Primer.pdf] that contains this document in Adobe Acrobat 3.0 PDF format. Each item
on the Contents page is linked to the appropriate location within the document. Also, the
page number at the bottom-center of each page is linked back to the Contents page.

4) A folder [ENGLAND] that contains a READ.ME file with details, raw SEGY data, raw
SEGY data with correct header information, and a final stacked section (SEGY).

5) A folder [KANSAS] that contains a READ.ME file with details, raw SEG2 data, raw SEGY
data with correct header information, and a final stacked section (SEGY) (a minimal sketch
for reading these files follows this list).
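
Any SEG-Y reader can load the data in the [ENGLAND] and [KANSAS] folders. A minimal
sketch, assuming the ObsPy library (not included on the CD-ROM) and a placeholder file
name:

    from obspy import read

    stream = read("ENGLAND/final_stack.sgy", format="SEGY")   # placeholder path
    print(stream)               # one Trace object per seismic trace
    data = stream[0].data       # samples of the first trace as a NumPy array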

Processing Near-Surface Seismic-Reflection Data: A Primer
Supplementary material
Supplementary material for this book can be accessed at
http://link.aip.org/link/mm/doi=10.1190/1.9781560802020.supp&filename=261E_sup.zip.