A brief introduction to Radio Interferometry

Part of the document: Structure in Galaxy Clusters Revealed Through Sunyaev-Zel'dovich Observations: A Multi-Aperture Synthesis Approach (pages 66 - 70)

Radio interferometers consist of several antennas whose output signals are brought together in a correlator. In the case of very long baseline interferometry (VLBI), these antennas can be placed thousands of kilometres apart, spanning whole continents. In this thesis, I will focus on connected-element arrays such as CARMA/SZA, JVLA and ALMA/ACA, whose maximum baselines, the physical distance between two antennas, range from less than ten metres to several kilometres, and whose operation is based on mechanical dishes rather than omni-directional antennas, as is the case for LOFAR.

The largest mechanically movable single-dish telescope is the Green Bank Telescope, with a 100 m × 110 m aperture, followed by the Effelsberg telescope, which measures 100 m in diameter. Building larger single-dish telescopes is unfeasible on engineering grounds, so one must turn to interferometers if one requires higher angular resolution measurements than those currently offered by single-dish telescopes at a given frequency.

Before delving into the full principle of aperture synthesis, one may start by examining a simple two-element system (Fig. 4.1). The subsequent discussion follows the approach of Taylor et al. (1999) and Thompson, Moran & Swenson (2001). Suppose that the astronomical source under investigation is so distant that the incoming signal can be described by the plane-wave approximation. In order to simplify the following discussion, I will neglect polarization, which would require a matrix notation. In addition, a homogeneous array will be considered. One may also imagine a situation in which the signal is not attenuated along its path or, in other words, that the space between the emission region and the observer is empty. Please note that this assumption will break down in reality due to absorption, emission and re-emission processes. These can have their origin in astrophysical sources or, indeed, in the Earth's atmosphere, through which the radiation must pass before reaching our telescopes.

In the two-element interferometer set-up, one can see from Fig. 4.1 that there will be a geometric path delay of τ_g = (B · s_0)/c between antenna ant1 and antenna ant2, separated by a baseline B, since the plane wave reaches ant1 before ant2. This delay is compensated for by an instrumental path delay τ_i in one of the antenna's signal paths, Fig. 4.1 (4), such that the source can be tracked. In order to introduce the concept of time-domain fringes, first consider the case where τ_i = 0, and let V_1(t) and V_2(t) denote the time-varying voltage outputs of ant1 and ant2 respectively. These voltages are then multiplied in a correlator, giving the cross-correlation in the form of a cosine fringe pattern

R_{12}(\tau_g) = \langle V_1(t)\, V_2(t) \rangle = \frac{V_1 V_2}{2} \cos(2\pi\nu\tau_g). \qquad (4.1)
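As a sanity check, the fringe relation of Eq. (4.1) can be reproduced numerically by time-averaging the product of two delayed sinusoidal voltages. The frequency, delay and sample counts below are illustrative values of my own choosing, not instrument parameters from this work.

```python
import numpy as np

# Correlate two sinusoidal voltages offset by a geometric delay tau_g and
# compare the time-averaged product with the analytic fringe
# (V1*V2/2) * cos(2*pi*nu*tau_g). All numbers are illustrative.
nu = 30e9                      # observing frequency [Hz] (assumed, cm-wave)
tau_g = 7.3e-12                # geometric delay [s] (arbitrary example)
V1, V2 = 1.0, 1.0              # voltage amplitudes

t = np.linspace(0.0, 1e-8, 200001)          # many cycles of the signal
v1 = V1 * np.cos(2 * np.pi * nu * t)
v2 = V2 * np.cos(2 * np.pi * nu * (t - tau_g))

R12_numeric = np.mean(v1 * v2)              # correlator output <v1(t) v2(t)>
R12_analytic = 0.5 * V1 * V2 * np.cos(2 * np.pi * nu * tau_g)

print(R12_numeric, R12_analytic)            # the two agree closely
```

The cross term oscillating at 2ν averages away over many cycles, leaving only the delay-dependent cosine.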

The astronomical fringe rate depends on the Earth's rotation and is thus known. This fringe rate differs from the contributions of the atmosphere, ground-spill or other terrestrial interference. The normalized reception pattern of a single dish, the primary beam A, consists of a main lobe and lower-amplitude side-lobes away from the pointing centre, s_0. For a monochromatic point source passing above the two-element interferometer as the Earth rotates, under the assumption that the effect of bandwidth


Figure 4.1: The workings of a radio interferometer: radiation from the tracking centre in direction s_0 is received at ant1 and ant2 with a time delay that is accounted for by an instrumental time delay added to the signal of one of the antennas (4) after initial amplification (1), frequency down-conversion through mixing with a local-oscillator signal (2) and additional phase corrections (3), as well as potential other phase corrections (4) to both antennas. The signals are then cross-correlated in a complex correlator (5). The projected baseline is shown in the uv-coordinate system defined by (u, v, w), whose normal is in the tracking-centre direction and thus parallel to the sky coordinate axis (n). The latter is orthogonal to the tangent plane in which (l, m) lie. (SZA image credit: CARMA; sketch made after example in Thompson, Moran & Swenson (2001))

can be neglected, one can write an expression for the corresponding visibilities V_{i,j}(u, v), exploiting the geometry in Fig. 4.1 and applying the small-angle approximation,

V_{i,j}(u, v) = \iint A(l, m)\, B_\nu(l, m)\, e^{-2\pi i (ul + vm)}\, \mathrm{d}l\, \mathrm{d}m, \qquad (4.2)

where A(l, m) is the antenna response and B_ν(l, m) is the source brightness distribution.

Since V_{i,j} is complex, we can therefore recover both the amplitude and the phase. This means that, if we had complete uv-coverage, one would recover the source brightness distribution through a two-dimensional Fourier transform, for a source small enough in extent that the small-angle approximation holds. Complete uv-coverage would, however, imply an infinite number of baselines, which is clearly not possible for interferometric arrays. Interferometric arrays are designed in such a way as to optimize the uv-coverage for a given number of telescopes, producing a characteristic sampling function whose inverse Fourier transform gives the synthesized beam. The sampling function, S(u, v), is zero at all points in the uv-plane except at the interferometrically sampled uv points, giving

I_{\mathrm{dirty}}(l, m) = \iint V(u, v)\, S(u, v)\, e^{2\pi i (ul + vm)}\, \mathrm{d}u\, \mathrm{d}v. \qquad (4.3)

The noise in the image plane for a homogeneous array made up of N antennas is

\sigma_s = \frac{2 k T_{\mathrm{sys}}}{\eta_c\, \eta_q\, A_{\mathrm{eff}}\, \sqrt{N(N-1)\, n_p\, \Delta\nu\, t_{\mathrm{int}}}}, \qquad (4.4)

with a correlator efficiency η_c, a quantization efficiency η_q, the number of polarizations n_p, the bandwidth per polarization Δν, the on-source integration time t_int and the system temperature T_sys, as well as an effective area A_eff, which itself is related to the rms surface accuracy through the Ruze formula and the geometrical dish area.
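Equation (4.4) is straightforward to evaluate in code. The function below is a minimal sketch of such a calculation; the efficiencies and the example array parameters (8 antennas, 20 m² effective area, 40 K system temperature) are illustrative placeholders, not specifications of any instrument discussed here.

```python
import numpy as np

# Hedged implementation of Eq. (4.4); all default efficiencies and the
# example numbers below are illustrative assumptions.
k_B = 1.380649e-23  # Boltzmann constant [J/K]

def image_noise(T_sys, A_eff, N, n_p, d_nu, t_int, eta_c=0.88, eta_q=0.96):
    """Point-source rms noise [Jy] for a homogeneous N-element array."""
    sigma = (2.0 * k_B * T_sys) / (
        eta_c * eta_q * A_eff * np.sqrt(N * (N - 1) * n_p * d_nu * t_int)
    )
    return sigma / 1e-26     # convert W m^-2 Hz^-1 to Jy

# Example: 8 antennas of 20 m^2 effective area, T_sys = 40 K,
# 8 GHz bandwidth, one polarization, one hour on source.
sigma_1h = image_noise(T_sys=40.0, A_eff=20.0, N=8, n_p=1,
                       d_nu=8e9, t_int=3600.0)
print(sigma_1h)              # rms noise of order a tenth of a mJy
```

Scaling as 1/√t_int, such a helper makes the sensitivity/observing-time trade-off in proposal planning immediate.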

’Aperture Synthesis Imaging Simulator’

In order to verify the proper workings of earlier versions of CASA's interferometric simulator (simdata in CASA v. 3.2 and below), I wrote an interferometric simulator code that can simulate the uv-coverage and synthesized beam, given a set of antenna positions.

In addition, the workings of CASA's natural and uniform weighting were examined with my simulator. The results from the two simulation packages were found to agree when set to the same observing conditions. This cross-check was important at the time, since several mock interferometric simulations were used to make predictions for ALMA, JVLA and CARMA/SZA proposals, for which accurate predictions of the trade-off between uv-coverage, observation time and required sensitivity were crucial.
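The core of such a simulator is the Earth-rotation synthesis of uv tracks. The following sketch computes the uv samples of a single baseline from its equatorial components using the standard transformation of Thompson, Moran & Swenson (2001); the baseline vector, declination and hour-angle range are made-up illustrative values, not the SZA configuration.

```python
import numpy as np

# Earth-rotation uv-synthesis for one baseline with equatorial components
# (Lx, Ly, Lz) in metres, following the standard (u, v) transformation.
def uv_track(L_xyz, dec, hour_angles, lam):
    """uv samples (in wavelengths) of one baseline over a range of hour angles."""
    Lx, Ly, Lz = L_xyz
    H = np.asarray(hour_angles)
    u = (np.sin(H) * Lx + np.cos(H) * Ly) / lam
    v = (-np.sin(dec) * np.cos(H) * Lx
         + np.sin(dec) * np.sin(H) * Ly
         + np.cos(dec) * Lz) / lam
    return u, v

lam = 0.01                                   # 30 GHz -> 1 cm wavelength
dec = np.deg2rad(30.0)                       # example source declination
H = np.linspace(-np.pi / 3, np.pi / 3, 121)  # +/- 4 h of hour angle
u, v = uv_track((20.0, 15.0, 5.0), dec, H, lam)
print(u.min(), u.max())  # the track sweeps an elliptical arc in the uv-plane
```

Looping this over all antenna pairs of an array, and discarding samples below the elevation limit, yields the full uv-coverage of an observation.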

My simulator had the advantage over earlier CASA simulators that flagging due to elevation limits could more easily be identified and that manual noise addition was more intuitive.

The simulator code was extensively used in the initial testing phase of my visibility fitting


Figure 4.2: Simulator comparisons. Top: a uv-coverage comparison of simulated SZA array observations via my simulator (green) and CASA (red), with the central uv-coverage shown as a zoomed-in image (top right). CASA exploits the symmetry in the uv-plane such that only half the visibilities are displayed. Bottom: weighting comparison between my simulator (green) and CASA (red) for natural (left) and uniform (right) weighting.

code (Chapter 6). In the latest CASA versions, the aforementioned difficulties have been rectified through the option to display flagged visibilities on top of the non-flagged data, as well as through the availability of separate 'simobserve' and 'simanalyze' tasks that separate the aperture synthesis and imaging steps. The first step in my simulator/CASA comparison lies in the examination of the uv-coverage for a given set of antenna positions and the subsequent Earth-rotation synthesis.

The SZA is taken as an example array, and the uv points from CASA and from my simulator agree very well (Fig. 4.2). In order for the visibilities to be transformed back to the image plane, they need to be gridded onto a regular grid before being subjected to the Fast Fourier Transform (FFT) algorithm, which is computationally much faster than a direct Fourier transform.
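The gridding step can be sketched as follows. This is a deliberately simple cell-summing (nearest-neighbour) scheme of my own construction; production imagers use convolutional gridding, but the principle of binning uv samples onto a regular grid before the FFT is the same. All sample values are made up.

```python
import numpy as np

def grid_visibilities(u, v, vis, n, du):
    """Accumulate complex visibilities onto an n x n grid with cell size du."""
    grid = np.zeros((n, n), dtype=complex)
    counts = np.zeros((n, n))
    iu = np.round(u / du).astype(int) % n    # wrap indices into the FFT grid
    iv = np.round(v / du).astype(int) % n
    for k in range(len(vis)):
        grid[iv[k], iu[k]] += vis[k]
        counts[iv[k], iu[k]] += 1
    return grid, counts

# Three illustrative samples; the first two fall into the same uv cell.
u = np.array([10.0, 10.4, -30.0])
v = np.array([0.0, 0.0, 5.0])
vis = np.array([1 + 0j, 1 + 0j, 0.5 + 0.2j])
grid, counts = grid_visibilities(u, v, vis, n=64, du=1.0)
dirty = np.fft.ifft2(grid)                   # FFT back to the image plane
print(counts.max())                          # prints 2.0
```

The `counts` array returned here is exactly the per-cell occupation number needed for the weighting schemes discussed next.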

Before inversion, the visibilities can be weighted taking into account factors such as tapering, system temperature and single-visibility accumulation time or, indeed, the density of weights in the uv-plane. Uniform weighting equalizes the weight across the sampled uv-plane by applying to each visibility a factor proportional to the inverse of the number of visibilities within a specified small sub-set of the uv-plane, such as a box or circular region. It thus maximizes the attainable angular resolution at the cost of decreased sensitivity. Natural weighting does not apply such a weighting factor and therefore retains the information on the intrinsic density variations in the uv-plane. As such, it results in maximum sensitivity but does not give as high an angular resolution as the uniform weighting case (Fig. 4.2).
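On a grid of per-cell visibility counts, the two schemes reduce to a one-line difference, as the toy example below illustrates (the count values are arbitrary):

```python
import numpy as np

# Toy per-cell occupation numbers of a gridded uv-plane (made-up values).
counts = np.array([[0, 4, 0],
                   [1, 0, 2],
                   [0, 1, 0]], dtype=float)

# Natural: gridded weight follows the intrinsic sample density.
natural = counts.copy()
# Uniform: every occupied cell receives the same total weight,
# i.e. each visibility is scaled by 1/counts in its cell.
uniform = np.where(counts > 0, 1.0, 0.0)

# Natural favours the densely sampled (typically short-baseline) cells and
# maximizes sensitivity; uniform flattens the density, sharpening the
# synthesized beam at the cost of sensitivity.
print(natural.sum(), uniform.sum())          # prints 8.0 4.0
```

The total weight drops under uniform weighting, which is the quantitative expression of its sensitivity penalty.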

Because incomplete sampling introduces sidelobes into the final dirty image, deconvolution algorithms such as CLEAN or MEM need to be applied to generate cleaned images. The CLEAN algorithm is introduced via its application to bolometer single-dish data in Section 4.3.5; its basic principles follow the interferometric CLEAN method, the most important difference lying in the interferometric clean beam's peak normalization to 1.
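The origin of these sidelobes can be made concrete by inverse-Fourier-transforming a sparse sampling function S(u, v), cf. Eq. (4.3): the synthesized beam of a handful of uv points is far from a clean Gaussian. The uv points below are arbitrary illustrative choices, not a real array layout.

```python
import numpy as np

# Build a sampling function S(u, v) from a few uv points and obtain the
# synthesized (dirty) beam as its inverse Fourier transform.
n = 64
S = np.zeros((n, n))

# A few illustrative uv samples plus their Hermitian mirrors, since the
# brightness distribution is real and hence V(-u, -v) = V*(u, v).
uv_points = [(5, 0), (0, 7), (9, 4), (3, 12)]
for (uu, vv) in uv_points:
    S[uu % n, vv % n] = 1.0
    S[-uu % n, -vv % n] = 1.0

dirty_beam = np.fft.fftshift(np.fft.ifft2(S).real)
dirty_beam /= dirty_beam.max()               # normalize the peak to 1

# Away from the main lobe, the sparse sampling leaves strong sidelobes,
# which is what CLEAN or MEM must deconvolve.
print(dirty_beam.max())                      # prints 1.0
```

With only eight sampled cells, the sidelobe pattern fills the whole image plane; denser uv-coverage suppresses it, which is the motivation for the array-design optimization discussed above.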
