The present work discusses the use of a weakly-supervised deep learning algorithm that reduces the cost of labelling pixel-level masks for complex radio galaxies with multiple components. The algorithm is trained on weak, class-level labels of radio galaxies to produce class activation maps (CAMs). The CAMs are further refined using an inter-pixel relations network (IRNet) to obtain instance segmentation masks over radio galaxies and the positions of their infrared hosts. We use data from the Australian Square Kilometre Array Pathfinder (ASKAP) telescope, specifically the Evolutionary Map of the Universe (EMU) Pilot Survey, which covered a sky area of 270 square degrees with an RMS sensitivity of 25–35 $\mu$Jy beam$^{-1}$. We demonstrate that weakly-supervised deep learning algorithms can achieve high accuracy in predicting pixel-level information, including masks for the extended radio emission encapsulating all galaxy components and the positions of the infrared host galaxies. We evaluate the performance of our method using mean Average Precision (mAP) across multiple classes at a standard intersection over union (IoU) threshold of 0.5. The model achieves a mAP$_{50}$ of 67.5% for radio masks and 76.8% for infrared host positions. The network architecture is available at https://github.com/Nikhel1/Gal-CAM
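As an illustration of the evaluation metric described above, the following sketch computes the IoU between two pixel masks and a single-class average precision at the 0.5 IoU threshold (mAP$_{50}$ is the mean of this quantity over classes). This is a generic, minimal implementation for intuition only; the function names are hypothetical and it is not the authors' evaluation code.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over union of two boolean pixel masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def average_precision_at_50(pred_masks, scores, gt_masks):
    """AP at IoU threshold 0.5 for one class: greedily match
    score-ranked predictions to unmatched ground-truth masks."""
    order = np.argsort(scores)[::-1]          # highest score first
    matched = set()
    tp = np.zeros(len(pred_masks))
    fp = np.zeros(len(pred_masks))
    for rank, i in enumerate(order):
        best_j, best_iou = -1, 0.5            # must clear the 0.5 threshold
        for j, gt in enumerate(gt_masks):
            if j in matched:
                continue
            v = iou(pred_masks[i], gt)
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j >= 0:
            matched.add(best_j)
            tp[rank] = 1
        else:
            fp[rank] = 1
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = np.concatenate([[0.0], tp_cum / max(len(gt_masks), 1)])
    precision = np.concatenate(
        [[1.0], tp_cum / np.maximum(tp_cum + fp_cum, 1e-9)])
    # AP as the area under the precision-recall curve (step integration)
    return float(np.sum(np.diff(recall) * precision[1:]))
```

A perfect prediction for a single ground-truth mask gives AP = 1, while a disjoint prediction gives AP = 0.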
The Australian SKA Pathfinder (ASKAP) radio telescope has carried out a survey of the entire Southern Sky at 887.5 MHz. The wide area, high angular resolution, and broad bandwidth provided by the low-band Rapid ASKAP Continuum Survey (RACS-low) allow the production of a next-generation rotation measure (RM) grid across the entire Southern Sky. Here we introduce this project as Spectra and Polarisation In Cutouts of Extragalactic sources from RACS (SPICE-RACS). In our first data release, we image 30 RACS-low fields in Stokes I, Q, and U at 25$^{\prime\prime}$ angular resolution, across 744–1032 MHz with 1 MHz spectral resolution. Using a bespoke, highly parallelised software pipeline, we are able to rapidly process wide-area spectro-polarimetric ASKAP observations. Notably, we use ‘postage stamp’ cutouts to assess the polarisation properties of 105912 radio components detected in total intensity. We find that our Stokes Q and U images have an rms noise of $\sim$80 $\mu$Jy PSF$^{-1}$, and our correction for instrumental polarisation leakage allows us to characterise components with $\gtrsim$1% polarisation fraction over most of the field of view. We produce a broadband polarised radio component catalogue that contains 5818 RM measurements over an area of $\sim$1300 deg$^{2}$, with an average error in RM of $1.6^{+1.1}_{-1.0}$ rad m$^{-2}$ and an average linear polarisation fraction of $3.4^{+3.0}_{-1.6}$%. We determine this subset of components using the conditions that the polarised signal-to-noise ratio is $>$8, the polarisation fraction is above our estimated polarised leakage, and the Stokes I spectrum has a reliable model. Our catalogue provides an areal density of $4\pm2$ RMs deg$^{-2}$, an increase of $\sim$4 times over the previous state-of-the-art (Taylor, Stil, & Sunstrum 2009, ApJ, 702, 1230). This means that, having used just 3% of the RACS-low sky area, we have produced the third-largest RM catalogue to date.
This catalogue has broad applications for studying astrophysical magnetic fields; notably revealing remarkable structure in the Galactic RM sky. We will explore this Galactic structure in a follow-up paper. We will also apply the techniques described here to produce an all-Southern-sky RM catalogue from RACS observations. Finally, we make our catalogue, spectra, images, and processing pipeline publicly available.
We present observations of the Mopra carbon monoxide (CO) survey of the Southern Galactic Plane, covering Galactic longitudes spanning $l = 250^{\circ}$ ($-110^{\circ}$) to $l = 355^{\circ}$ ($-5^{\circ}$), with a latitudinal coverage of at least $|b|<1^\circ$, totalling an area of $>$210 deg$^{2}$. These data have been taken at 0.6 arcmin spatial resolution and 0.1 km s$^{-1}$ spectral resolution, providing an unprecedented view of the molecular gas clouds of the Southern Galactic Plane in the 109–115 GHz $J = 1-0$ transitions of $^{12}$CO, $^{13}$CO, C$^{18}$O, and C$^{17}$O.
Asymmetric emission of gravitational waves during mergers of black holes (BHs) produces a recoil kick, which can set a newly formed BH on a bound orbit around the centre of its host galaxy, or even eject it completely. To study this population of recoiling BHs, we extract the properties of galaxies with merging BHs from the IllustrisTNG300 simulation and then employ both analytical and numerical techniques to model the unresolved process of BH recoil. This comparative analysis shows that, on cosmological scales, numerically modelled recoiling BHs have a higher escape probability and predict a greater number of offset active galactic nuclei (AGN). An escape probability of $>$40% is expected for BHs in 25% of merger remnants in numerical models, compared to 8% in analytical models. At the same time, the predicted fraction of offset AGN at separations ${>}5$ kpc changes from 58% for numerical models to 3% for analytical models. Since BH ejections in major merger remnants occur in non-virialised systems, static analytical models cannot provide an accurate description. We therefore argue that numerical models should be used to estimate the expected number density of escaped BHs and offset AGN.
We study the correlation between the non-thermal velocity dispersion ($\sigma_{nth}$) and the length scale (L) in the neutral interstellar medium (ISM) using a large number of Hi gas components taken from various published Hi surveys and previous Hi studies. We find that above a length scale of 0.40 pc there is a power-law relationship between $\sigma_{nth}$ and L, whereas below 0.40 pc the power law breaks and $\sigma_{nth}$ is not significantly correlated with L. Using the Markov chain Monte Carlo (MCMC) method, we find that for the dataset with L $>$ 0.40 pc the most probable values of the amplitude (A) and power-law index (p) are 1.14 and 0.55, respectively. The value of p suggests that the power law is steeper than the standard Kolmogorov law of turbulence, owing to the dominance of clouds in the cold neutral medium. This becomes clearer when we separate the clouds into two categories: those with L $>$ 0.40 pc and kinetic temperature ($T_{k}$) $<$ 250 K, which are in the cold neutral medium (CNM), and those with L $>$ 0.40 pc and $T_{k}$ between 250 and 5 000 K, which are in the thermally unstable phase (UNM). The most probable values of A and p are 1.14 and 0.67, respectively, in the CNM phase and 1.01 and 0.52, respectively, in the UNM phase. The larger number of data points in the UNM phase allows a more accurate estimate of A and p, since most of the clouds in this phase lie below 500 K. The value of p in the CNM phase, however, differs significantly from the Kolmogorov scaling, which can be attributed to a shock-dominated medium.
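The power-law relation $\sigma_{nth} = A\,L^{p}$ described above can be illustrated with a simple fit. The sketch below generates synthetic components using the quoted best-fit values and recovers A and p by least squares in log-log space; the paper itself uses MCMC, and the data here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic H I components: sigma_nth = A * L**p with lognormal scatter.
# A = 1.14 and p = 0.55 are the best-fit values quoted in the abstract;
# the data points themselves are illustrative, not from the surveys.
A_true, p_true = 1.14, 0.55
L = rng.uniform(0.4, 100.0, 500)                     # length scales in pc
sigma = A_true * L**p_true * rng.lognormal(0.0, 0.1, L.size)

# Linear fit in log-log space: log10(sigma) = log10(A) + p * log10(L)
p_fit, logA_fit = np.polyfit(np.log10(L), np.log10(sigma), 1)
A_fit = 10**logA_fit
print(A_fit, p_fit)  # close to the input values 1.14 and 0.55
```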
We present a method for identifying radio stellar sources using their proper motion. We demonstrate this method using the FIRST, VLASS, RACS-low, and RACS-mid radio surveys, and astrometric information from Gaia Data Release 3. We find eight stellar radio sources using this method, two of which have not previously been identified in the literature as radio stars. We determine that this method probes distances of $\sim$90 pc when we use FIRST and RACS-mid, and $\sim$250 pc when we use FIRST and VLASS. We investigate the time baselines required by current and future radio sky surveys to detect the eight sources we found, with the SKA (6.7 GHz) requiring $<$3 yr between observations to find all eight sources. We also identify nine previously known and 43 candidate variable radio stellar sources that are detected in FIRST (1.4 GHz) but not in RACS-mid (1.37 GHz). This shows that many stellar radio sources are variable, and that surveys with multiple epochs can detect a more complete sample of stellar radio sources.
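The core of the proper-motion method is propagating a Gaia position to the epoch of a radio survey and comparing positional offsets. The following is a minimal, flat-sky sketch with hypothetical function names, not the authors' pipeline.

```python
import numpy as np

def propagate(ra_deg, dec_deg, pmra_masyr, pmdec_masyr, dt_yr):
    """Propagate a position to another epoch (small-angle, flat-sky
    approximation; pmra is mu_alpha*, i.e. already includes cos(dec))."""
    ra = ra_deg + pmra_masyr * dt_yr / 3.6e6 / np.cos(np.radians(dec_deg))
    dec = dec_deg + pmdec_masyr * dt_yr / 3.6e6
    return ra, dec

def separation_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec (flat-sky, valid for small offsets)."""
    dra = (ra1 - ra2) * np.cos(np.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return np.hypot(dra, ddec) * 3600.0

# Toy example: a star with 1000 mas/yr of proper motion in declination
# moves 20 arcsec over a 20 yr baseline between two survey epochs,
# easily separable from a static background source.
ra0, dec0 = 150.0, -30.0
ra1, dec1 = propagate(ra0, dec0, 0.0, 1000.0, 20.0)
print(separation_arcsec(ra0, dec0, ra1, dec1))  # ~20 arcsec
```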
The advent of time-domain sky surveys has generated a vast amount of light-variation data, enabling astronomers to investigate variable stars with large-scale samples. However, this also poses new opportunities and challenges for time-domain research. In this paper, we focus on the classification of variable stars from the Catalina Surveys Data Release 2 and propose an imbalanced-learning classifier based on the Self-paced Ensemble (SPE) method. Compared with the work of Hosenie et al. (2020), our approach significantly enhances the classification recall of Blazhko RR Lyrae stars from 12% to 85%, mixed-mode RR Lyrae variables from 29% to 64%, detached binaries from 68% to 97%, and long-period variables (LPVs) from 87% to 99%. SPE performs well on most of the variable classes except RRab, RRc, and contact and semi-detached binaries. Moreover, the results suggest that SPE tends to target the minority classes of objects, while Random Forest is more effective at finding the majority classes. To balance the overall classification accuracy, we construct a Voting Classifier that combines the strengths of SPE and Random Forest. The results show that the Voting Classifier achieves a balanced performance across all classes with minimal loss of accuracy. In summary, the SPE algorithm and Voting Classifier are superior to traditional machine learning methods and can be effectively applied to the classification of periodic variable stars. This paper contributes to current research on imbalanced learning in astronomy and can also be extended to the time-domain data of other, larger sky-survey projects (e.g. LSST).
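A voting classifier of the kind described above can be sketched as a weighted average of the class probabilities returned by the two base classifiers. The snippet below is a minimal, generic soft-voting combiner in plain NumPy, not the actual SPE or Random Forest implementation used in the paper.

```python
import numpy as np

def soft_vote(proba_a: np.ndarray, proba_b: np.ndarray,
              weights=(0.5, 0.5)) -> np.ndarray:
    """Combine the class-probability outputs of two classifiers
    (e.g. an SPE ensemble and a Random Forest) by weighted averaging,
    then pick the class with the highest combined probability."""
    combined = weights[0] * proba_a + weights[1] * proba_b
    return np.argmax(combined, axis=1)

# Toy example with 3 classes: classifier A favours the minority class (2),
# classifier B favours the majority class (0); the vote balances them.
proba_a = np.array([[0.1, 0.2, 0.7],
                    [0.2, 0.3, 0.5]])
proba_b = np.array([[0.6, 0.3, 0.1],
                    [0.5, 0.4, 0.1]])
print(soft_vote(proba_a, proba_b))  # -> [2 0]
```

Unequal weights can be used to tilt the combination towards the classifier that is stronger on the classes of interest.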
To explore the role environment plays in influencing galaxy evolution at high redshifts, we study $2.0\leq z<4.2$ environments using the FourStar Galaxy Evolution (ZFOURGE) survey. Using galaxies from the COSMOS legacy field with ${\rm log(M_{*}/M_{\odot})}\geq9.5$, we quantify galaxy environment with a seventh-nearest-neighbour density estimator and divide the sample into bins of low-, intermediate-, and high-density. We discover new high-density environment candidates across $2.0\leq z<2.4$ and $3.1\leq z<4.2$. We analyse the quiescent fraction, stellar mass, and specific star formation rate (sSFR) of our galaxies to understand how these vary with redshift and environment. Our results reveal that, across $2.0\leq z<2.4$, the high-density environments are the most distinct regions, exhibiting elevated quiescent fractions, massive (${\rm log(M_{*}/M_{\odot})}\geq10.2$) galaxies, and suppressed star formation activity. At $3.1\leq z<4.2$, we find that high-density regions exhibit elevated stellar masses but require more complete samples of quiescent and sSFR data to study the effects of environment in more detail at these higher redshifts. Overall, our results suggest that well-evolved, passive galaxies are already in place in high-density environments at $z\sim2.4$, and that the Butcher–Oemler effect and the SFR-density relation may not reverse towards higher redshifts as previously thought.
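The seventh-nearest-neighbour density estimator mentioned above has a simple form: the surface density implied by the distance $d_{N}$ to the N-th nearest neighbour is $\Sigma_{N} = N / (\pi d_{N}^{2})$. The brute-force sketch below is illustrative only; the survey analysis applies this to sky coordinates with appropriate sample selections.

```python
import numpy as np

def nn_density(positions: np.ndarray, n: int = 7) -> np.ndarray:
    """Surface density from the distance to the n-th nearest neighbour:
    Sigma_n = n / (pi * d_n**2). Positions are (x, y) in projected units;
    brute-force distances are fine for modest sample sizes."""
    d2 = ((positions[:, None, :] - positions[None, :, :])**2).sum(-1)
    d_n = np.sqrt(np.sort(d2, axis=1)[:, n])   # index 0 is the self-distance
    return n / (np.pi * d_n**2)

# Toy example: a tight clump embedded in a sparse background should have
# higher 7th-nearest-neighbour densities than the background points.
rng = np.random.default_rng(1)
background = rng.uniform(0, 100, (200, 2))
clump = rng.normal(50, 1.0, (20, 2))
dens = nn_density(np.vstack([background, clump]), n=7)
print(dens[200:].mean() > dens[:200].mean())  # clump points are denser
```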
The Australian SKA Pathfinder (ASKAP) is being used to undertake a campaign to rapidly survey the sky in three frequency bands across its operational spectral range. The first pass of the Rapid ASKAP Continuum Survey (RACS) at 887.5 MHz in the low band has already been completed, with images, visibility datasets, and catalogues made available to the wider astronomical community through the CSIRO ASKAP Science Data Archive (CASDA). This work presents details of the second observing pass in the mid band at 1367.5 MHz, RACS-mid, and associated data release comprising images and visibility datasets covering the whole sky south of $\delta_{\text{J2000}}=+49^\circ$. This data release incorporates selective peeling to reduce artefacts around bright sources, as well as accurately modelled primary beam responses. The Stokes I images reach a median noise of 198 $\mu$Jy PSF$^{-1}$ with a declination-dependent angular resolution of 8.1–47.5 arcsec that fills a niche in the existing ecosystem of large-area astronomical surveys. We also supply Stokes V images after application of a widefield leakage correction, with a median noise of 165 $\mu$Jy PSF$^{-1}$. We find the residual leakage of Stokes I into V to be $\lesssim 0.9$–$2.4$% over the survey. This initial RACS-mid data release will be complemented by a future release comprising catalogues of the survey region. As with other RACS data releases, data products from this release will be made available through CASDA.
As the scale of cosmological surveys increases, so does the complexity in the analyses. This complexity can often make it difficult to derive the underlying principles, necessitating statistically rigorous testing to ensure the results of an analysis are consistent and reasonable. This is particularly important in multi-probe cosmological analyses like those used in the Dark Energy Survey (DES) and the upcoming Legacy Survey of Space and Time, where accurate uncertainties are vital. In this paper, we present a statistically rigorous method to test the consistency of contours produced in these analyses and apply this method to the Pippin cosmological pipeline used for type Ia supernova cosmology with the DES. We make use of the Neyman construction, a frequentist methodology that leverages extensive simulations to calculate confidence intervals, to perform this consistency check. A true Neyman construction is too computationally expensive for supernova cosmology, so we develop a method for approximating a Neyman construction with far fewer simulations. We find that for a simulated dataset, the 68% contour reported by the Pippin pipeline and the 68% confidence region produced by our approximate Neyman construction differ by less than a percent near the input cosmology; however, they show more significant differences far from the input cosmology, with a maximal difference of 0.05 in $\Omega_{M}$ and 0.07 in w. This divergence is most impactful for analyses of cosmological tensions, but its impact is mitigated when combining supernovae with other cross-cutting cosmological probes, such as the cosmic microwave background.
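The logic of a Neyman construction can be illustrated on a toy one-parameter problem: the mean of a Gaussian with known scatter. For each trial true value we simulate the sampling distribution of the estimator, record its central 68% acceptance band, and then invert the band at the observed estimate. This is a deliberately simple sketch of the general technique, not the approximate construction developed for Pippin.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Neyman construction: the parameter is the mean of a Gaussian with
# known sigma, estimated from n samples (so the sample mean is
# distributed as N(mu, sigma/sqrt(n))). For each trial true value we
# simulate the estimator and record its central 68% acceptance band.
sigma, n = 1.0, 25
truth_grid = np.linspace(-1.0, 1.0, 81)
low, high = [], []
for mu in truth_grid:
    est = rng.normal(mu, sigma / np.sqrt(n), 2000)
    lo, hi = np.percentile(est, [16, 84])
    low.append(lo)
    high.append(hi)
low, high = np.array(low), np.array(high)

# Invert the construction: the confidence set is every true value whose
# acceptance band contains the observed estimate.
observed = 0.1
accepted = truth_grid[(low <= observed) & (observed <= high)]
interval = (accepted.min(), accepted.max())
print(interval)  # roughly observed +/- sigma/sqrt(n), i.e. (-0.1, 0.3)
```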
With the advent of deep, all-sky radio surveys, the need for ancillary data to make the most of the new, high-quality radio data from surveys like the Evolutionary Map of the Universe (EMU), GaLactic and Extragalactic All-sky Murchison Widefield Array survey eXtended, Very Large Array Sky Survey, and LOFAR Two-metre Sky Survey is growing rapidly. Radio surveys produce significant numbers of Active Galactic Nuclei (AGNs) and have a significantly higher average redshift when compared with optical and infrared all-sky surveys. Thus, traditional methods of estimating redshift are challenged, with spectroscopic surveys not reaching the redshift depth of radio surveys, and AGNs making it difficult for template fitting methods to accurately model the source. Machine Learning (ML) methods have been used, but efforts have typically been directed towards optically selected samples, or samples at significantly lower redshift than expected from upcoming radio surveys. This work compiles and homogenises a radio-selected dataset from both the northern hemisphere (making use of Sloan Digital Sky Survey optical photometry) and southern hemisphere (making use of Dark Energy Survey optical photometry). We then test commonly used ML algorithms such as k-Nearest Neighbours (kNN), Random Forest, ANNz, and GPz on this monolithic radio-selected sample. We show that kNN has the lowest percentage of catastrophic outliers, providing the best match for the majority of science cases in the EMU survey. We note that the wider redshift range of the combined dataset used allows for estimation of sources up to $z = 3$ before random scatter begins to dominate. When binning the data into redshift bins and treating the problem as a classification problem, we are able to correctly identify $\approx$76% of the highest redshift sources—sources at redshift $z > 2.51$—as being in either the highest bin ($z > 2.51$) or second highest ($z = 2.25$).
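A kNN redshift estimator of the kind tested above can be sketched in a few lines: predict each source's redshift as the mean redshift of its k nearest neighbours in photometric feature space. The snippet below is a generic illustration on synthetic data, not the actual pipeline or the ANNz/GPz implementations used in the paper.

```python
import numpy as np

def knn_photoz(train_feats, train_z, query_feats, k=5):
    """Predict redshift as the mean z of the k nearest training
    neighbours in photometric feature space (brute-force distances)."""
    d2 = ((query_feats[:, None, :] - train_feats[None, :, :])**2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    return train_z[nn].mean(axis=1)

# Toy data: redshift scales linearly with a single synthetic "colour",
# plus scatter; the kNN estimate tracks the underlying trend.
rng = np.random.default_rng(3)
colour = rng.uniform(0, 3, (500, 1))
z = 0.8 * colour[:, 0] + rng.normal(0, 0.05, 500)
z_hat = knn_photoz(colour[:400], z[:400], colour[400:], k=5)
print(np.abs(z_hat - z[400:]).mean())  # small mean error
```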
We present the third data release from the Parkes Pulsar Timing Array (PPTA) project. The release contains observations of 32 pulsars obtained using the 64-m Parkes ‘Murriyang’ radio telescope. The data span is up to 18 yr with a typical cadence of 3 weeks. This data release is formed by combining an updated version of our second data release with $\sim$3 yr of more recent data primarily obtained using an ultra-wide-bandwidth receiver system that operates between 704 and 4032 MHz. We provide calibrated pulse profiles, flux density dynamic spectra, pulse times of arrival, and initial pulsar timing models. We describe methods for processing such wide-bandwidth observations and compare this data release with our previous release.
The putative host galaxy of FRB 20171020A was first identified as ESO 601-G036 in 2018, but as no repeat bursts have been detected, direct confirmation of the host remains elusive. In light of recent developments in the field, we re-examine this host and determine a new association confidence level of 98%. At 37 Mpc, this makes ESO 601-G036 the third closest FRB host galaxy to be identified to date and the closest to host an apparently non-repeating FRB (with an estimated repetition rate limit of $<$$0.011$ bursts per day above $10^{39}$ erg). Due to its close distance, we are able to perform detailed multi-wavelength analysis on the ESO 601-G036 system. Follow-up observations confirm ESO 601-G036 to be a typical star-forming galaxy with H i and stellar masses of $\log_{10}\!(M_{\rm{H\,{\small I}}} / M_\odot) \sim 9.2$ and $\log_{10}\!(M_\star / M_\odot) = 8.64^{+0.03}_{-0.15}$, and a star formation rate of $\text{SFR} = 0.09 \pm 0.01\,{\rm M}_\odot\,\text{yr}^{-1}$. We detect, for the first time, a diffuse gaseous tail ($\log_{10}\!(M_{\rm{H\,{\small I}}} / M_\odot) \sim 8.3$) extending to the south-west that suggests recent interactions, likely with the confirmed nearby companion ESO 601-G037. ESO 601-G037 is a stellar shred located to the south of ESO 601-G036 that has an arc-like morphology, is about an order of magnitude less massive, and has a lower gas metallicity that is indicative of a younger stellar population. The properties of the ESO 601-G036 system indicate an ongoing minor merger event, which is affecting the overall gaseous component of the system and the stars within ESO 601-G037. Such activity is consistent with current FRB progenitor models involving magnetars and the signs of recent interactions in other nearby FRB host galaxies.
Next-generation astronomical surveys naturally pose challenges for human-centred visualisation and analysis workflows that currently rely on the use of standard desktop display environments. While a significant fraction of the data preparation and analysis will be taken care of by automated pipelines, crucial steps of knowledge discovery can still only be achieved through various levels of human interpretation. As the number of sources in a survey grows, there is a need to both modify and simplify repetitive visualisation processes that must be completed for each source. As tasks such as per-source quality control, candidate rejection, and morphological classification all share a single instruction, multiple data (SIMD) work pattern, they are amenable to a parallel solution. Selecting extragalactic neutral hydrogen (Hi) surveys as a representative example, we use system performance benchmarking and the visual data analysis and reasoning methodology from the field of information visualisation to evaluate a bespoke comparative visualisation environment: the encube visual analytics framework deployed on the 83 Megapixel Swinburne Discovery Wall. Through benchmarking using spectral cube data from existing Hi surveys, we are able to perform interactive comparative visualisation via texture-based volume rendering of 180 three-dimensional (3D) data cubes at a time. The time to load a configuration of spectral cubes scales linearly with the number of voxels, with independent samples of 180 cubes (8.4 Gigavoxels or 34 Gigabytes) each loading in under 5 min. We show that parallel comparative inspection is a productive and time-saving technique which can reduce the time taken to complete SIMD-style visual tasks currently performed at the desktop by at least two orders of magnitude, potentially rendering some labour-intensive desktop-based workflows obsolete.
Having developed the necessary mathematics in chapters 4 to 6, chapter 7 returns to physics. Evidence for homogeneity and isotropy of the Universe at the largest cosmological scales is presented, and Robertson-Walker metrics are introduced. Einstein’s equations are then used to derive the Friedmann equations, relating the cosmic scale factor to the pressure and density of matter in the Universe. The Hubble constant is discussed and an analytic form of the redshift-distance relation is derived, in terms of the matter density, the cosmological constant, and the spatial curvature, and observational values of these three parameters are given. Some analytic solutions of the Friedmann equation are presented. The cosmic microwave background dominates the energy density in the early Universe, and this leads to a description of the thermal history of the early Universe: the transition from radiation-dominated to matter-dominated dynamics and nucleosynthesis in the first 3 minutes. Finally, the horizon problem and the inflationary Universe are described, and the limits of applicability of Einstein’s equations, when they might be expected to break down due to quantum effects, are discussed.
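For reference, the Friedmann equations mentioned above take the standard textbook form, for scale factor $a(t)$, density $\rho$, pressure $p$, cosmological constant $\Lambda$, and spatial curvature $k$:

```latex
% Friedmann equations relating the cosmic scale factor a(t) to the
% density rho, pressure p, cosmological constant Lambda and curvature k:
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot a}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
    + \frac{\Lambda c^{2}}{3}.
```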
Geodesics are introduced and the geodesic equation is analysed for the geometries introduced in chapter 2, using the variational principles of classical mechanics. Geodesic motion on a sphere is described, as well as the Coriolis effect and the Sagnac effect. Newtonian gravity is derived as the non-relativistic limit of geodesic motion in space-time. Geodesics in an expanding universe and the heat death of the Universe are described. Geodesics in Schwarzschild space-time are treated in detail: the precession of the perihelion of Mercury; the bending of light by the Sun; the Shapiro time delay; black holes and the event horizon. Gravitational waves and gravitational lensing are also covered.
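The geodesic equation analysed in this chapter follows from the variational principle $\delta \int d\tau = 0$ and takes the standard form, for coordinates $x^{\mu}(\tau)$ and Christoffel symbols $\Gamma^{\mu}_{\ \nu\rho}$:

```latex
% Geodesic equation: extremal curves of proper time tau in a metric
% with connection coefficients Gamma^mu_{nu rho}:
\frac{d^{2} x^{\mu}}{d\tau^{2}}
  + \Gamma^{\mu}_{\ \nu\rho}\,
    \frac{dx^{\nu}}{d\tau}\frac{dx^{\rho}}{d\tau} = 0 .
```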