Two-point spectral correlation

Before proceeding with the creation of the filter bank, we need to calculate the two-point spectral correlation. We first define a whitening window with which the data will be whitened before being transformed to the frequency domain. The length of the whitening window is set to twice the length of a segment in samples, that is, twice the psd_segment_length value multiplied by sample_rate.
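
As a minimal sketch of that definition (assuming psd_segment_length is given in seconds and the sampling rate is available as ts_data.sample_rate, as elsewhere in this section), the window length in samples could be written as:

# Length of the whitening window in samples (sketch following the description above)
seg_len = int(2 * psd_segment_length * ts_data.sample_rate)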

Two types of whitening window can be used: a Hann window or a Tukey window. The default type is a Tukey window whose flat portion is controlled by window_fraction, the fraction of a segment's length that is also used for PSD estimation. The Tukey window can be created using the lal.CreateTukeyREAL8Window function:

# Create whitening window
if wtype == 'hann': window = lal.CreateHannREAL8Window(seg_len)
elif wtype == 'tukey': window = lal.CreateTukeyREAL8Window(seg_len, window_fraction)
else: raise ValueError("Can't handle window type %s" % wtype)

Once the window is built, an FFT plan is created, with which a forward transform of the data will be performed during the spectral correlation. The FFT plan has a length equal to that of the whitening window and is created using the lal.CreateForwardREAL8FFTPlan function as follows:

# Create FFT plan
fft_plan = lal.CreateForwardREAL8FFTPlan(len(window.data.data), 1)

We can finally compute and return the two-point spectral correlation function of the whitened frequency series from the window and the FFT plan using the lal.REAL8WindowTwoPointSpectralCorrelation function:

# Perform two point spectral correlation
spec_corr = lal.REAL8WindowTwoPointSpectralCorrelation(window, fft_plan)

Computing the filter bank

The filter bank consists of a series of band-pass filters, each filter being associated with a specific channel. The length of an individual filter is determined by the channel bandwidth, band, and the frequency resolution of the PSD, fd_psd.delta_f (that is, the inverse of a segment's length): each filter spans twice the channel bandwidth, plus one additional element to account for the zero frequency bin:

# Determine length of individual filters
filter_length = int(2*band/fd_psd.delta_f)+1
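
For instance, with an assumed channel bandwidth of band = 1 Hz and 64-second segments (so that fd_psd.delta_f = 1/64 Hz), each filter would contain 2*1/(1/64) + 1 = 129 elements.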

We initialise two arrays: the first, lal_filters, stores each channel's filter as a frequency series along with its metadata (see the COMPLEX16FrequencySeries structure), while the second, fdb, stores the corresponding filter's spectrum:

# Initialise array to store filter's frequency series and metadata
lal_filters = []
# Initialise array to store filter's time series
fdb = []

In order to produce the filter bank, a frequency domain channel filter function is created for each channel using the lalburst.CreateExcessPowerFilter function. Each filter corresponds to a frequency band equal to the bandwidth of a single channel. The filter is nominally a Hann window twice the channel's width, centred on the channel's centre frequency. This makes a sum across channels equivalent to constructing a Tukey window spanning the same frequency band. This trick is one of the ingredients that allows us to accomplish a multi-resolution tiling using a single frequency channel projection. Prior to normalisation, the filter is divided by the square root of the PSD frequency series in order to de-emphasize frequency bins with high noise content; this is also called "over-whitening". The number of samples in the window is odd, being one more than the number of frequency bins in twice the channel width; this is what allows the Hann windows to superimpose into a Tukey window.

# Loop over each channel
for i in range(nchans-1):
    # Define the central frequency of the channel from the beginning of the frequency band
    channel_flow = fmin + i*band + 0.5*band
    # Create excess power filter
    lal_filter = lalburst.CreateExcessPowerFilter(channel_flow, band, lal_psd, spec_corr)
    # Append filter to array
    lal_filters.append(lal_filter)
    # Append filter's spectrum to another array
    fdb.append(Spectrum.from_lal(lal_filter))
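
As a side check of the claim above that odd-length Hann windows overlapped by half their width superimpose into a flat, Tukey-like response, the following minimal standalone snippet (with an arbitrary window length, not part of the pipeline) can be run:

# Standalone check: symmetric Hann windows of odd length, offset by half
# their width, sum to unity over the overlap region
import numpy
n = 65                         # arbitrary odd number of samples
w = numpy.hanning(n)           # symmetric Hann window
half = (n - 1) / 2             # 50% overlap in samples
print w[half:] + w[:half + 1]  # every element equals 1.0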

The suitability of the two-point spectral correlation used for the creation of each filter can be checked via the inner product function, lalburst.ExcessPowerFilterInnerProduct:

# Testing spectral correlation on filter
print lalburst.ExcessPowerFilterInnerProduct(lal_filter, lal_filter, spec_corr, None)

The filter's data, the frequency separation between two samples, and the first and last frequencies of any given channel's filter can be displayed as follows:

# Print data of first channel's filter
print lal_filter.data.data
# Print frequency separation between 2 values in the first channel's filter
print lal_filter.deltaF
# Print first frequency of the first channel's filter
print lal_filter.f0
# Print last frequency of the first channel's filter (equal to twice the channel's bandwidth)
print lal_filter.f0+(len(lal_filter.data.data)-1)*lal_filter.deltaF

Inner products calculation

As mentioned in Section 2.2.7 of Brady et al. (2007), the final quantity recorded for each trigger event is the root-sum-squared strain, or \(h_\mathrm{rss}\). In order to measure that quantity, four inner products between filters are required. The inner products can be calculated using the lalburst.ExcessPowerFilterInnerProduct function, which takes two arbitrary channel filters, the two-point spectral correlation and the PSD frequency series as input parameters. The first set of inner products corresponds to the whitened and unwhitened (PSD subtracted from filter) inner products of the input filters with themselves; these are necessary to compute the normalisations:

# Compute whitened inner products of input filters with themselves
white_filter_ip = numpy.array([lalburst.ExcessPowerFilterInnerProduct(f, f, spec_corr, None) for f in lal_filters])
# Compute unwhitened inner products of input filters with themselves
unwhite_filter_ip = numpy.array([lalburst.ExcessPowerFilterInnerProduct(f, f, spec_corr, lal_psd) for f in lal_filters])

The second set of inner products corresponds to the whitened and unwhitened (PSD subtracted from filter) inner products between adjacent input filters. Element i of the returned array is the inner product between filter i and its neighbour in the array, assumed to be the adjacent filter in frequency. These two inner products are needed for the unwhitened mean square sum (hrss):

# Compute whitened filter inner products between adjacent input filters
white_ss_ip = numpy.array([lalburst.ExcessPowerFilterInnerProduct(f1, f2, spec_corr, None) for f1, f2 in zip(lal_filters[:-1], lal_filters[1:])])
# Compute unwhitened filter inner products between adjacent input filters
unwhite_ss_ip = numpy.array([lalburst.ExcessPowerFilterInnerProduct(f1, f2, spec_corr, lal_psd) for f1, f2 in zip(lal_filters[:-1], lal_filters[1:])])

Initialise event list and determine stride boundaries

First of all, we create a table, similar to the ones made by the LIGO Scientific Collaboration (LSC), in which all the information will be stored. Such tables are commonly known as lsctables. A pre-defined LSC table can be constructed using the New function from the glue.ligolw.lsctables module. We use the SnglBurstTable table type for the data to be stored and define all the columns we wish to record:

# Create event list for single burst table
event_list = lsctables.New(lsctables.SnglBurstTable,
                           ['start_time','start_time_ns','peak_time','peak_time_ns',
                            'duration','bandwidth','central_freq','chisq_dof',
                            'confidence','snr','amplitude','channel','ifo',
                            'process_id','event_id','search','stop_time','stop_time_ns'])

Loop over analysis blocks

The Excess Power method can lead to moderately large computational requirements, and it has been found that the computational efficiency of this implementation can be improved by considering blocks of data that are much longer than the longest signal duration. The entire time series is therefore split into separate blocks. We use the length (in samples) of the segments used for PSD estimation to define the duration of each block. The first and last indices of the first block are initialised as follows:

# Define the first and last indices of the first block
t_idx_min, t_idx_max = 0, seg_len

We then loop over each block of data until reaching the last one, which should cover up to the last sample of the data chunk being analysed:

# Loop over each block of data
while t_idx_max <= len(ts_data):

In the next subsections, we describe how each block is analysed. Once a block has been processed, we move to the next one by incrementing the block's first and last indices by the effective stride, that is, the segment length (here equivalent to the block length) in samples reduced by the factor (1 - window_fraction), so that consecutive blocks overlap by a fraction window_fraction of their length:

t_idx_min += int(seg_len * (1 - window_fraction))
t_idx_max += int(seg_len * (1 - window_fraction))

Computing block’s frequency series

The first calculation to be done within the block loop is to identify the first and last timestamps of the given block, which can easily be determined from the first timestamp of the entire data chunk, the sampling rate, and the first and last indices of the block:

# Define first and last timestamps of the block
start_time = ts_data.start_time + t_idx_min/float(ts_data.sample_rate)
end_time   = ts_data.start_time + t_idx_max/float(ts_data.sample_rate)

Note

In case an impulse response test is being performed, the time series data, which is originally created as a series of zero-magnitude elements, is modified such that the bin in the middle of the block has a non-zero magnitude:

# Debug for impulse response
for i in range(t_idx_min, t_idx_max):
    ts_data[i] = 1000. if i == (t_idx_max - t_idx_min)/2 else 0.

A temporary time series for the block, tmp_ts_data, is created using the pycbc.types.timeseries.TimeSeries class: only the data covered by the block, multiplied by the whitening window, are stored, and the start epoch in the metadata is redefined to the start timestamp of the block. The duration of the time series is equal to the duration of a block and the time separation between samples is the inverse of the sampling rate:

# Build a windowed time series for the block
tmp_ts_data = types.TimeSeries(ts_data[t_idx_min:t_idx_max]*window.data.data,
                               delta_t=1./ts_data.sample_rate,epoch=start_time)

The block time series is then converted into a frequency series, fs_data, which can easily be achieved using the pycbc.types.timeseries.TimeSeries.to_frequencyseries method. The returned frequency series has a maximum frequency equal to the Nyquist limit (half the sampling rate) and a frequency separation between bins equal to the inverse of the duration of a block:

# Convert times series to frequency series
fs_data = tmp_ts_data.to_frequencyseries()

We note that the variance of the block frequency series can be computed as follows:

print "|- Frequency series data has variance: %s" % fs_data.data.std()**2

Finally, the frequency data are whitened by the overall PSD of the entire analysed data chunk:

# Whitening (FIXME: Whiten the filters, not the data)
fs_data.data /= numpy.sqrt(fd_psd) / numpy.sqrt(2 * fd_psd.delta_f)

Computing time-frequency map

In order to map the time-frequency space and explore tiles with different bandwidths and durations, the excess power algorithm constructs "virtual" wide bandwidth channels by summing the samples from multiple channels and correcting for the overlap between adjacent channel filters. Section 2.2.1 of Brady et al. (2007) gives a clear overview of how this process works.

Filtered time series data are created for each channel by applying the channel's filter, extracted from the filter bank, to the block's frequency series and converting the result back to the time domain. In order for the channel's filter to be appropriately applied to the entire block's frequency series, that is, applied over the frequency range where the channel is located, one needs to build a filter frequency series that has the same length as the block's frequency series, which is equal to the length of the PSD. The series is initialised with zero magnitude in all bins:

# Initialise 2D zero array
tmp_filter_bank = numpy.zeros(len(fd_psd), dtype=numpy.complex128)

We then create a 2D time-frequency map, tf_map, that will store, for each channel, the corresponding filtered time series, whose length is equal to the length of a segment in samples:

# Initialise 2D zero array for time-frequency map
tf_map = numpy.zeros((nchans, seg_len), dtype=numpy.complex128)

We then loop over all the frequency channels and first re-initialise the filter frequency series, tmp_filter_bank, by setting all bins to zero:

# Loop over all the channels
for i in range(nchans):
    # Reset filter bank series
    tmp_filter_bank *= 0.0

While in the loop, we determine the first and last frequency bins of each channel. Then, for the bins within the channel's frequency range, we set their values from the magnitudes of the original channel's filter extracted from the filter bank:

# Index of starting frequency
f1 = int(filter_bank[i].f0/fd_psd.delta_f)
# Index of last frequency bin
f2 = int((filter_bank[i].f0 + 2*band)/fd_psd.delta_f)+1
# (FIXME: Why is there a factor of 2 here?)
tmp_filter_bank[f1:f2] = filter_bank[i].data.data * 2

The resulting block filter for a given channel is then formatted into a proper frequency series using the pycbc.types.frequencyseries.FrequencySeries class, where the frequency separation between bins is included as metadata. That frequency series will be used as a template waveform to filter the actual data from the channel:

# Define the template to filter the frequency series with
template = types.FrequencySeries(tmp_filter_bank, delta_f=fd_psd.delta_f, copy=False)

Finally, we use the pycbc.filter.matchedfilter.matched_filter_core function to filter the channel from the block's frequency series. This returns both a time series containing the complex signal-to-noise ratio matched filtered against the data, and a frequency series containing the correlation vector:

# Create filtered series
filtered_series = filter.matched_filter_core(template,fs_data,h_norm=None,psd=None,
                                             low_frequency_cutoff=filter_bank[i].f0,
                                             high_frequency_cutoff=filter_bank[i].f0+2*band)

The matched filter is the optimal linear filter for maximizing the signal to noise ratio (SNR) in the presence of additive stochastic noise. The filtered time series is stored in the time-frequency map and can be used to produce a spectrogram of the segment of data being analysed:

# Include filtered series in the map
tf_map[i,:] = filtered_series[0].numpy()

Loop over virtual wide bandwidth channels

In order to perform a multi-resolution search, tiles of many different bandwidths and durations will be scanned. Tiles of different bandwidths are explored by summing multiple channels, thereby creating "virtual" wide bandwidth channels whose total bandwidth is the sum of the summed channels' bandwidths. The summation of channels is done by looping through the total number of channels in steps of powers of 2. Indeed, the excess power algorithm assumes the data to be sampled to a power of 2; it is therefore necessary to create virtual channels whose total bandwidth is also a power of 2 so that each time-frequency map can cover the entire parameter space:

# Loop through powers of 2 up to number of channels
for nc_sum in range(0, int(math.log(nchans, 2)))[::-1]:

Tile creation

For a given virtual channel, the loop variable nc_sum defined above corresponds in fact to the base-2 logarithm of the number of narrow band channels being summed; the actual number can be recovered as follows:

# Calculate total number of summed channels
nc_sum = 2**nc_sum

When summing multiple channels into a single virtual channel, one is essentially undersampling the data. The undersampling rate, us_rate, can be computed from the full bandwidth of the virtual channel and the time separation between samples of the original time series. Indeed, one over twice the full bandwidth of the virtual channel gives the sample spacing in the time domain which, divided by the sample spacing of the actual time series data, gives the undersampling rate of the virtual channel:

# Compute full bandwidth of virtual channel
df = band * nc_sum
# Compute minimal signal's duration in virtual channel
dt = 1.0 / (2 * df)
# Compute under sampling rate
us_rate = int(round(dt / ts_data.delta_t))
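
As an illustration with assumed parameter values, for a channel bandwidth band = 1 Hz, nc_sum = 8 summed channels, and an original sample spacing ts_data.delta_t = 1/4096 s, the virtual channel spans df = 8 Hz, the new sample spacing is dt = 1/16 s, and the undersampling rate is us_rate = (1/16)/(1/4096) = 256.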

Before resampling the filtered time series of each channel, we first define a clipping region in the data that can be used to remove window corruption; this region is non-zero only if the window_fraction variable is set to a non-zero value:

# Clip the boundaries to remove window corruption
clip_samples = int(psd_segment_length * window_fraction * ts_data.sample_rate / 2)

Given a specific virtual channel bandwidth, a new time-frequency map of the channels is built from the initial block's time-frequency map, tf_map. The length in frequency space (number of rows) remains equal to the total number of narrow band channels. However, the filtered time series are resampled according to the undersampling rate so that the time spacing between samples matches the shortest signal resolvable in the virtual channel, that is, the inverse of twice the full bandwidth of the virtual channel:

# Undersample narrow band channel's time series
# Apply clipping condition because [0:-0] does not give the full array
tf_map_temp = tf_map[:,clip_samples:-clip_samples:us_rate] \
              if clip_samples > 0 else tf_map[:,::us_rate]

We then initialise a final time-frequency map that will store the actual tiles' time series. The number of tiles depends on the full bandwidth of the explored virtual channel and is given by the total number of narrow band channels, nchans, divided by the number of channels summed into a virtual channel, nc_sum:

# Initialise final tile time-frequency map
tiles = numpy.zeros((nchans/nc_sum,tf_map_temp.shape[1]))
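
For instance, assuming nchans = 256 narrow band channels and nc_sum = 64 summed channels per virtual channel, the tiles array would contain 256/64 = 4 rows, one for each virtual wide bandwidth channel.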

Once the tile’s time-frequency map is created, one can then loop over each tile and process the inner channels:

# Loop over tile index
for i in xrange(len(tiles)):

Summing narrow band channels

For a given tile, a number of calculations are performed in order to obtain the appropriate time series. The first is to sum the filtered time series of all the narrow band channels constituting the tile, that is, all the channels with indices between nc_sum*i and nc_sum*(i+1) where i is the tile index. For instance, for the first tile (i=0) composed of nc_sum = 64 channels, we sum the narrow band channels from index 0 to 63. The absolute value of each sample is kept and stored:

# Sum all inner narrow band channels
ts_tile = numpy.absolute(tf_map_temp[nc_sum*i:nc_sum*(i+1)].sum(axis=0))

Tile’s normalisation

The next calculation to be performed within the same loop is to compute the whitened inner product, with itself, of the filter associated with the last channel of each tile. These inner products are necessary for the \(\mu^2\) normalisation. The magnitude of each filter's inner product is then multiplied by the total number of narrow band channels that compose the tile:

# Define index of last narrow band channel for given tile
n = (i+1)*nc_sum - 1
n = n-1 if n==len(lal_filters) else n
# Compute whitened inner product of the filter with itself
mu_sq = nc_sum*lalburst.ExcessPowerFilterInnerProduct(lal_filters[n], lal_filters[n], spec_corr, None)

Warning

We note that the last tile has one narrow band channel fewer than the others. This is because the filters in the filter bank are twice the bandwidth of a channel, so the frequency range of the last channel would extend beyond the maximum frequency of the filter bank. Consequently, we add the condition n = n-1 if n==len(lal_filters) else n, which checks whether the tile is the final one and, if so, subtracts one from the index n of the last channel present in the tile.

In order to properly normalise the tile, we also need to compute and sum the whitened inner products between adjacent filters for all the narrow band channels present within the tile:

# Loop over the inner narrow band channels
for k in xrange(0, nc_sum-1):
    # Compute whitened filter inner products between adjacent input filters
    mu_sq += 2*lalburst.ExcessPowerFilterInnerProduct(lal_filters[n-k],lal_filters[n-1-k],spec_corr,None)

The last calculation is the normalisation of the tile’s time series which can easily be done as follows:

# Normalise tile's time series
tiles[i] = ts_tile.real**2 / mu_sq

Loop over multiple tile durations

For any given tile of bandwidth \(B\), a real-valued signal can be represented without loss of information as a discrete real-valued time series with a sample rate equal to \(2B\), which corresponds to a time-domain spacing of \(\Delta t=1/(2B)\). If one wants to explore tiles with a signal duration equal to \(T\), the total number of real-valued samples sufficient to encode all the information contained in such a signal of bandwidth \(B\) and duration \(T\) can therefore be determined as follows:

\[j = \frac{T}{\Delta t} = \frac{T}{\frac{1}{2B}} = 2BT\]
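
For example, a signal of bandwidth \(B = 4\) Hz and duration \(T = 4\) s is fully described by \(j = 2 \times 4 \times 4 = 32\) real-valued samples.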

We call \(j\) the total number of degrees of freedom; one can refer to Section 2.2.5 of Brady et al. (2007) for a deeper description of this variable. Therefore, by changing \(j\), one can explore multiple tile durations for any given bandwidth. To start the exploration, we calculate the maximum number of degrees of freedom, max_dof, from the maximum duration we want to explore in the tile. The maximum duration can be manually set using the max_duration argument, whose default value is None. If no value is provided, the maximum number of degrees of freedom is set to 32:

# Define the maximum number of degrees of freedom
max_dof = 32 if max_duration==None else int(max_duration / dt)

The exploration of multiple tile durations is then performed by looping over numbers of degrees of freedom spaced as powers of 2:

# Loop through multiple degrees of freedom
for j in [2**l for l in xrange(0, int(math.log(max_dof, 2)))]:

Moving average filtering

Given the number of degrees of freedom, j, and the sample spacing, dt, of each individual tile, one can determine the duration of the expected signal as follows:

# Duration is fixed by the NDOF and bandwidth
duration = j * dt

A moving average, or running mean, filter is then applied to the data in order to smooth out short-term fluctuations and highlight longer-term signals for any given tile. The smoothing is done by summing a number of samples equal to the number of degrees of freedom. However, it is important to remember that consecutive samples cannot simply be summed as they are not independent of each other. Indeed, independent samples are separated by a time interval equal to the e-folding time, over which the correlation between samples has decreased by a factor of e.

It has been demonstrated by Leith (1973) that the number of degrees of freedom is only half the number of e-folding times in the data. This means, for example, that in a set of 32 consecutive samples, only every other sample is independent, leaving only 16 independent samples. In order to sum a set of independent samples whose size equals the number of degrees of freedom, one therefore needs to construct a filter such that only every other sample is summed while the number of samples included still equals the number of degrees of freedom. This can be built as follows:

sum_filter = numpy.array([1,0] * (j-1) + [1])

Filtering a data series of length N with a filter of length M yields N - M + 1 valid samples. The length of the filtered time series is therefore:

tlen = tiles.shape[1] - sum_filter.shape[0] + 1

Note

As an example, let's assume the data and filter arrays are equal to [1,2,3,4,5] and [1,0,1] respectively. The length of the filtered data will then be equal to 5-3+1=3, and the value of each sample can be calculated as follows:

  • sample 1 = 1 * 1 + 0 * 2 + 1 * 3 = 4

  • sample 2 = 1 * 2 + 0 * 3 + 1 * 4 = 6

  • sample 3 = 1 * 3 + 0 * 4 + 1 * 5 = 8
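
This toy example can be reproduced directly with the scipy.signal.fftconvolve function used further below (the input values are the ones from the example above):

from scipy.signal import fftconvolve
# Reproduce the worked example above
print fftconvolve([1, 2, 3, 4, 5], [1, 0, 1], 'valid')  # gives [ 4.  6.  8.] up to floating point precision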

The filtered time series for each tile will be stored in a dof_tiles array, which is initialised as follows:

dof_tiles = numpy.zeros((tiles.shape[0], tlen))

For each tile, we finally use the scipy.signal.fftconvolve function to do the filtering. Indeed, it has been found to be much faster to calculate the convolution via an FFT when the sample size of the original time series is greater than 100:

for f in range(tiles.shape[0]):
    # Sum and drop correlate tiles
    dof_tiles[f] = fftconvolve(tiles[f], sum_filter, 'valid')

Note

Below we show a little script that computes and compares two running means, one using a straightforward loop and one using the numpy.convolve function:

#!/usr/bin/env python
import numpy
values = numpy.random.uniform(0, 1, 100)
# Straight forward running mean
n = 10
mean = []
for i in range(len(values) - n + 1):
    mean.append(values[i:i+n].sum() / n)
# Convolution based running mean
mfilter = numpy.ones(n) / float(n)
mean_convolve = numpy.convolve(values, mfilter, "valid")
print mean, mean_convolve

Trigger finding

The energy of each tile in the time-frequency space is calculated and compared to a user-defined threshold value. After defining a tile false alarm probability threshold in Gaussian noise and using the number of degrees of freedom for each tile, one can define an energy threshold value above which a burst trigger can be identified by comparing the tile's energy in the time-frequency map to that threshold. A tile energy time-frequency map plot, similar to Figure 5 in Pustelny et al. (2013), can then be made to show the outlying tile energies present in the data.

In order to find any trigger in the data, we first need to set a false alarm probability threshold in Gaussian noise above which signal will be distinguished from the noise. Such a threshold can be determined by using the inverse survival function method from the scipy.stats.chi2 package.

threshold = scipy.stats.chi2.isf(tile_fap, j)
print "Threshold for this level: %f" % threshold
#if numpy.any(dof_tiles > threshold):
    #plot_spectrogram(dof_tiles.T)
    #import pdb; pdb.set_trace()

Once the threshold is set, one can run the trigger_list_from_map() function to find the triggers in the dof_tiles array and append them to the event list:

# Since we clip the data, the start time needs to be adjusted accordingly
window_offset_epoch = fs_data.epoch + psd_segment_length * window_fraction / 2
trigger_list_from_map(dof_tiles, event_list, threshold, window_offset_epoch, filter_bank[0].f0 + band/2, duration, current_band, df, dt, None)
# Loop over the event list and compute the amplitude of the newly found events
for event in event_list[::-1]:
    # Skip events whose amplitude has already been measured
    if event.amplitude != None:
        continue
    etime_min_idx = float(event.get_start()) - float(fs_data.epoch)
    etime_min_idx = int(etime_min_idx / tmp_ts_data.delta_t)
    etime_max_idx = float(event.get_start()) - float(fs_data.epoch) + event.duration
    etime_max_idx = int(etime_max_idx / tmp_ts_data.delta_t)
    # (band / 2) to account for sin^2 wings from finest filters
    flow_idx = int((event.central_freq - event.bandwidth / 2 - (band / 2) - flow) / band)
    fhigh_idx = int((event.central_freq + event.bandwidth / 2 + (band / 2) - flow) / band)
    # TODO: Check that the undersampling rate is always commensurate
    # with the indexing: that is to say that
    # mod(etime_min_idx, us_rate) == 0 always
    z_j_b = tf_map[flow_idx:fhigh_idx,etime_min_idx:etime_max_idx:us_rate]
    # FIXME: Deal with negative hrss^2 -- e.g. remove the event
    try:
        event.amplitude = measure_hrss(z_j_b, unwhite_filter_ip[flow_idx:fhigh_idx], unwhite_ss_ip[flow_idx:fhigh_idx-1], white_ss_ip[flow_idx:fhigh_idx-1], fd_psd.delta_f, tmp_ts_data.delta_t, len(filter_bank[0].data.data), event.chisq_dof)
    except ValueError:
        event.amplitude = 0

print "Total number of events: %d" % len(event_list)

Extracting GPS time range

We use the LIGOTimeGPS structure from the glue.lal package to store the starting and ending times of the dataset to nanosecond precision, synchronised to the Global Positioning System time reference. Once both times are defined, the range of values is stored as a semi-open interval using the segment module from the glue.segments package.

# Starting epoch relative to GPS starting epoch
start_time = LIGOTimeGPS(analysis_start_time or gps_start_time)
# Ending epoch relative to GPS ending epoch
end_time = LIGOTimeGPS(analysis_end_time or gps_end_time)
# Represent the range of values in the semi-open interval
inseg = segment(start_time,end_time)

Prepare output file for given time range

# Create the output XML document and its LIGO_LW container
xmldoc = ligolw.Document()
xmldoc.appendChild(ligolw.LIGO_LW())

# Register the process that produced the triggers
ifo = channel_name.split(":")[0]
proc_row = register_to_xmldoc(xmldoc, __program__, __dict__, ifos=[ifo],version=glue.git_version.id, cvs_repository=glue.git_version.branch, cvs_entry_time=glue.git_version.date)

# Figure out the data we actually analyzed
outseg = determine_output_segment(inseg, psd_segment_length, sample_rate, window_fraction)

# Append a search summary covering the input and output segments
ss = append_search_summary(xmldoc, proc_row, ifos=(station,), inseg=inseg, outseg=outseg)

# Fill in the process and channel information for each recorded event
for sb in event_list:
    sb.process_id = proc_row.process_id
    sb.search = proc_row.program
    #sb.ifo, sb.channel = channel_name.split(":")
    sb.ifo, sb.channel = station, setname

# Append the event list to the document and write it to disk
xmldoc.childNodes[0].appendChild(event_list)
fname = make_filename(station, inseg)

utils.write_filename(xmldoc, fname, gz=fname.endswith("gz"), verbose=True)

Fake trigger map generator

In order to test and/or debug some parts of the full analysis procedure, it may sometimes be more suitable to generate fake trigger maps, as they are faster to compute and allow us to determine whether an issue occurring at a later stage of the analysis originates from the excess power part or not. Chris Pankow wrote a fake trigger map generator that can be used for that exact purpose. We have included the script in GDAS both as an executable and as a module.