Asymmetric emission of gravitational waves during mergers of black holes (BHs) produces a recoil kick, which can set a newly formed BH on a bound orbit around the centre of its host galaxy, or even eject it completely. To study this population of recoiling BHs, we extract the properties of galaxies with merging BHs from the IllustrisTNG300 simulation and then employ both analytical and numerical techniques to model the unresolved process of BH recoil. This comparative analysis shows that, on cosmological scales, numerically modelled recoiling BHs have a higher escape probability and predict a greater number of offset active galactic nuclei (AGN). An escape probability ${>}40\%$ is expected in 25% of merger remnants in numerical models, compared to 8% in analytical models. At the same time, the predicted fraction of offset AGN at separations ${>}5$ kpc is 58% in numerical models versus 3% in analytical models. Since BH ejections in major merger remnants occur in non-virialised systems, static analytical models cannot provide an accurate description. We therefore argue that numerical models should be used to estimate the expected number density of escaped BHs and offset AGN.
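As a toy illustration of the escape-probability comparison, the Python sketch below Monte Carlo samples kick velocities against a fixed host escape velocity. The Maxwellian kick distribution and the velocity scales are illustrative assumptions only; the paper derives kicks and potentials from the IllustrisTNG300 galaxy properties.

```python
# Minimal sketch: Monte Carlo estimate of a BH escape probability, assuming
# a Maxwellian kick-velocity distribution (illustrative; the paper uses
# gravitational-wave recoil prescriptions and simulated host potentials).
import numpy as np

rng = np.random.default_rng(42)

def escape_probability(v_esc_kms, sigma_kick_kms=200.0, n_samples=100_000):
    """Fraction of recoil kicks exceeding the host escape velocity."""
    # Maxwellian speed: magnitude of a 3D isotropic Gaussian velocity vector.
    v_kick = np.linalg.norm(rng.normal(0.0, sigma_kick_kms, (n_samples, 3)), axis=1)
    return np.mean(v_kick > v_esc_kms)

for v_esc in (300.0, 600.0, 900.0):
    print(f"v_esc = {v_esc:.0f} km/s -> P(escape) = {escape_probability(v_esc):.2f}")
```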
We study the correlation between the non-thermal velocity dispersion ($\sigma_{nth}$) and the length scale ($L$) in the neutral interstellar medium (ISM) using a large number of Hi gas components taken from various published Hi surveys and previous Hi studies. We find that above a length scale of 0.40 pc there is a power-law relationship between $\sigma_{nth}$ and $L$, whereas below 0.40 pc the power law breaks and $\sigma_{nth}$ is not significantly correlated with $L$. A Markov chain Monte Carlo (MCMC) analysis of the $L > 0.40$ pc dataset shows that the most probable values of the amplitude ($A$) and power-law index ($p$) are 1.14 and 0.55, respectively. The value of $p$ suggests that the power law is steeper than the standard Kolmogorov law of turbulence, owing to the dominance of clouds in the cold neutral medium. This becomes even clearer when we separate the clouds into two categories: those with $L > 0.40$ pc and kinetic temperature $T_{k} < 250$ K, which are in the cold neutral medium (CNM), and those with $L > 0.40$ pc and $T_{k}$ between 250 and 5000 K, which are in the thermally unstable phase (UNM). The most probable values of $A$ and $p$ are 1.14 and 0.67, respectively, in the CNM phase and 1.01 and 0.52, respectively, in the UNM phase. The larger number of data points in the UNM phase allows a more accurate estimate of $A$ and $p$, since most of the clouds in this phase lie below 500 K. The value of $p$ in the CNM phase, however, departs significantly from Kolmogorov scaling, which can be attributed to a shock-dominated medium.
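For concreteness, the sketch below fits the power law $\sigma_{nth} = A\,L^{p}$ with the emcee MCMC sampler on synthetic data. The Gaussian likelihood, flat priors, and 10% scatter are assumptions standing in for the study's actual data and likelihood.

```python
# Minimal sketch of the power-law fit sigma_nth = A * L**p via MCMC,
# using emcee on synthetic data (the study fits measured Hi components).
import numpy as np
import emcee

rng = np.random.default_rng(0)

# Synthetic (L, sigma_nth) pairs standing in for components with L > 0.40 pc.
L = rng.uniform(0.4, 50.0, 200)                               # pc
sigma_obs = 1.14 * L**0.55 * rng.lognormal(0.0, 0.1, L.size)  # km/s, ~10% scatter
sigma_err = 0.1 * sigma_obs

def log_prob(theta):
    A, p = theta
    if A <= 0 or not (0.0 < p < 2.0):  # flat priors on A > 0 and 0 < p < 2
        return -np.inf
    return -0.5 * np.sum(((sigma_obs - A * L**p) / sigma_err) ** 2)

nwalkers, ndim = 32, 2
p0 = np.array([1.0, 0.5]) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
A_fit, p_fit = np.median(sampler.get_chain(discard=500, flat=True), axis=0)
print(f"A = {A_fit:.2f}, p = {p_fit:.2f}")
```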
We present a method for identifying stellar radio sources using their proper motions. We demonstrate this method using the FIRST, VLASS, RACS-low, and RACS-mid radio surveys, and astrometric information from Gaia Data Release 3. We find eight stellar radio sources using this method, two of which have not previously been identified in the literature as radio stars. We determine that this method probes distances of $\sim$90 pc when we use FIRST and RACS-mid, and $\sim$250 pc when we use FIRST and VLASS. We investigate the time baselines required by current and future radio sky surveys to detect the eight sources we found, with the SKA (6.7 GHz) requiring $<$3 yr between observations to find all eight sources. We also identify nine previously known and 43 candidate variable stellar radio sources that are detected in FIRST (1.4 GHz) but not in RACS-mid (1.37 GHz). This shows that many stellar radio sources are variable, and that surveys with multiple epochs can detect a more complete sample of stellar radio sources.
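A minimal sketch of the underlying idea, using astropy to propagate a Gaia DR3 source (with hypothetical astrometry) to two radio-survey epochs; a genuine stellar radio source should match the propagated position at each epoch, while a chance background match will not.

```python
# Minimal sketch of a proper-motion crossmatch: propagate Gaia astrometry to
# radio-survey epochs and compare positions. All values here are placeholders.
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.time import Time

# Hypothetical Gaia DR3 source (zero radial velocity assumed for propagation).
gaia = SkyCoord(ra=150.0 * u.deg, dec=-30.0 * u.deg,
                distance=20.0 * u.pc,
                pm_ra_cosdec=-500.0 * u.mas / u.yr,
                pm_dec=300.0 * u.mas / u.yr,
                radial_velocity=0.0 * u.km / u.s,
                obstime=Time(2016.0, format="jyear"))

# Propagate to approximate FIRST (~2000) and VLASS (~2019) epochs.
at_first = gaia.apply_space_motion(new_obstime=Time(2000.0, format="jyear"))
at_vlass = gaia.apply_space_motion(new_obstime=Time(2019.0, format="jyear"))

# The epoch-dependent positional shift a moving star should exhibit.
print(f"expected shift 2000->2019: {at_first.separation(at_vlass).to(u.arcsec):.2f}")
```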
The advent of time-domain sky surveys has generated a vast amount of light-variation data, enabling astronomers to investigate variable stars with large-scale samples. However, this also presents new opportunities and challenges for time-domain research. In this paper, we focus on the classification of variable stars from the Catalina Surveys Data Release 2 and propose an imbalanced-learning classifier based on the Self-paced Ensemble (SPE) method. Compared with the work of Hosenie et al. (2020), our approach significantly enhances the classification recall of Blazhko RR Lyrae stars from 12% to 85%, mixed-mode RR Lyrae variables from 29% to 64%, detached binaries from 68% to 97%, and long-period variables (LPVs) from 87% to 99%. SPE performs well on most of the variable classes except RRab, RRc, and contact and semi-detached binaries. Moreover, the results suggest that SPE tends to target the minority classes, while Random Forest is more effective at finding the majority classes. To balance the overall classification accuracy, we construct a Voting Classifier that combines the strengths of SPE and Random Forest. The results show that the Voting Classifier achieves balanced performance across all classes with minimal loss of accuracy. In summary, the SPE algorithm and Voting Classifier are superior to traditional machine learning methods and can be applied effectively to the classification of periodic variable stars. This paper contributes to current research on imbalanced learning in astronomy and can also be extended to the time-domain data of other, larger sky survey projects (LSST, etc.).
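A minimal sketch of the SPE + Random Forest combination, assuming the SelfPacedEnsembleClassifier from the imbalanced-ensemble (imbens) package as the SPE implementation (the paper's own implementation may differ) and toy imbalanced data in place of the Catalina light-curve features:

```python
# Minimal sketch: soft-voting combination of Self-paced Ensemble and Random
# Forest on imbalanced multi-class data. The imbens import path is assumed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from imbens.ensemble import SelfPacedEnsembleClassifier  # assumed import path

# Imbalanced toy data standing in for variable-star feature vectors.
X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           weights=[0.85, 0.10, 0.05], random_state=0)

spe = SelfPacedEnsembleClassifier(n_estimators=50, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# SPE recovers minority classes; RF handles the majority classes.
vote = VotingClassifier(estimators=[("spe", spe), ("rf", rf)], voting="soft")
vote.fit(X, y)
print(vote.predict(X[:5]))
```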
To explore the role environment plays in influencing galaxy evolution at high redshifts, we study $2.0\leq z<4.2$ environments using the FourStar Galaxy Evolution (ZFOURGE) survey. Using galaxies from the COSMOS legacy field with ${\rm log(M_{*}/M_{\odot})}\geq9.5$, we use a seventh-nearest-neighbour density estimator to quantify galaxy environment, dividing it into bins of low-, intermediate-, and high-density. We discover new high-density environment candidates across $2.0\leq z<2.4$ and $3.1\leq z<4.2$. We analyse the quiescent fraction, stellar mass, and specific star formation rate (sSFR) of our galaxies to understand how these vary with redshift and environment. Our results reveal that, across $2.0\leq z<2.4$, the high-density environments are the most significant regions, exhibiting elevated quiescent fractions, massive (${\rm log(M_{*}/M_{\odot})}\geq10.2$) galaxies, and suppressed star formation activity. At $3.1\leq z<4.2$, we find that high-density regions consist of elevated stellar masses but require more complete samples of quiescent and sSFR data to study the effects of environment in more detail at these higher redshifts. Overall, our results suggest that well-evolved, passive galaxies are already in place in high-density environments at $z\sim2.4$, and that the Butcher–Oemler effect and SFR–density relation may not reverse towards higher redshifts as previously thought.
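The Nth-nearest-neighbour estimator reduces to $\Sigma_{N} = N/(\pi d_{N}^{2})$, where $d_{N}$ is the projected distance to a galaxy's Nth-nearest neighbour. A minimal sketch with $N = 7$ and random stand-in positions:

```python
# Minimal sketch of the Nth-nearest-neighbour surface density estimator,
# Sigma_N = N / (pi * d_N**2), with N = 7 as in the ZFOURGE analysis.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, (1000, 2))  # projected positions (illustrative units)

def nn_density(points, n=7):
    tree = cKDTree(points)
    # k = n + 1 because the nearest neighbour of each point is itself.
    d, _ = tree.query(points, k=n + 1)
    return n / (np.pi * d[:, -1] ** 2)

sigma7 = nn_density(xy)
print(f"median Sigma_7 = {np.median(sigma7):.2f} per unit area")
```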
The Australian SKA Pathfinder (ASKAP) is being used to undertake a campaign to rapidly survey the sky in three frequency bands across its operational spectral range. The first pass of the Rapid ASKAP Continuum Survey (RACS) at 887.5 MHz in the low band has already been completed, with images, visibility datasets, and catalogues made available to the wider astronomical community through the CSIRO ASKAP Science Data Archive (CASDA). This work presents details of the second observing pass in the mid band at 1367.5 MHz, RACS-mid, and the associated data release, comprising images and visibility datasets covering the whole sky south of $\delta_{\text{J2000}}=+49^\circ$. This data release incorporates selective peeling to reduce artefacts around bright sources, as well as accurately modelled primary beam responses. The Stokes I images reach a median noise of 198 $\mu$Jy PSF$^{-1}$ with a declination-dependent angular resolution of 8.1–47.5 arcsec that fills a niche in the existing ecosystem of large-area astronomical surveys. We also supply Stokes V images after application of a widefield leakage correction, with a median noise of 165 $\mu$Jy PSF$^{-1}$. We find the residual leakage of Stokes I into V to be $\lesssim 0.9$–$2.4$% over the survey. This initial RACS-mid data release will be complemented by a future release comprising catalogues of the survey region. As with other RACS data releases, data products from this release will be made available through CASDA.
As the scale of cosmological surveys increases, so does the complexity of the analyses. This complexity can often make it difficult to discern the underlying principles, necessitating statistically rigorous testing to ensure the results of an analysis are consistent and reasonable. This is particularly important in multi-probe cosmological analyses like those used in the Dark Energy Survey (DES) and the upcoming Legacy Survey of Space and Time, where accurate uncertainties are vital. In this paper, we present a statistically rigorous method to test the consistency of contours produced in these analyses and apply this method to the Pippin cosmological pipeline used for type Ia supernova cosmology with the DES. We make use of the Neyman construction, a frequentist methodology that leverages extensive simulations to calculate confidence intervals, to perform this consistency check. A true Neyman construction is too computationally expensive for supernova cosmology, so we develop a method for approximating a Neyman construction with far fewer simulations. We find that for a simulated dataset, the 68% contour reported by the Pippin pipeline and the 68% confidence region produced by our approximate Neyman construction differ by less than a percent near the input cosmology; however, they show more significant differences far from the input cosmology, with a maximum difference of 0.05 in $\Omega_{M}$ and 0.07 in $w$. This divergence is most impactful for analyses of cosmological tensions, but its impact is mitigated when combining supernovae with other cross-cutting cosmological probes, such as the cosmic microwave background.
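The sketch below illustrates a one-parameter Neyman construction, using cheap Gaussian draws in place of full Pippin supernova-cosmology simulations; the estimator scatter, grid, and observed value are all assumptions made for illustration.

```python
# Minimal sketch of a Neyman construction for a single parameter (e.g. w):
# build 68% acceptance intervals from simulations at each trial truth, then
# invert the belt for an observed estimate.
import numpy as np

rng = np.random.default_rng(7)
truth_grid = np.linspace(-1.4, -0.6, 41)  # trial true values of w
n_sims = 2000
noise = 0.08                               # assumed estimator scatter

# Central 68% acceptance interval of the estimator at each trial truth.
lo, hi = [], []
for w_true in truth_grid:
    w_hat = w_true + rng.normal(0.0, noise, n_sims)
    lo.append(np.percentile(w_hat, 16))
    hi.append(np.percentile(w_hat, 84))
lo, hi = np.array(lo), np.array(hi)

# The confidence region for an observed w_hat is the set of trial truths
# whose acceptance interval contains it.
w_obs = -1.02
accepted = truth_grid[(lo <= w_obs) & (w_obs <= hi)]
print(f"68% confidence region: [{accepted.min():.3f}, {accepted.max():.3f}]")
```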
With the advent of deep, all-sky radio surveys, the need for ancillary data to make the most of the new, high-quality radio data from surveys like the Evolutionary Map of the Universe (EMU), GaLactic and Extragalactic All-sky Murchison Widefield Array survey eXtended, Very Large Array Sky Survey, and LOFAR Two-metre Sky Survey is growing rapidly. Radio surveys detect significant numbers of Active Galactic Nuclei (AGNs) and have a significantly higher average redshift than optical and infrared all-sky surveys. Traditional methods of estimating redshift are therefore challenged: spectroscopic surveys do not reach the redshift depth of radio surveys, and AGNs make it difficult for template-fitting methods to model the source accurately. Machine Learning (ML) methods have been used, but efforts have typically been directed towards optically selected samples, or samples at significantly lower redshift than expected from upcoming radio surveys. This work compiles and homogenises a radio-selected dataset from both the northern hemisphere (making use of Sloan Digital Sky Survey optical photometry) and southern hemisphere (making use of Dark Energy Survey optical photometry). We then test commonly used ML algorithms such as k-Nearest Neighbours (kNN), Random Forest, ANNz, and GPz on this monolithic radio-selected sample. We show that kNN has the lowest percentage of catastrophic outliers, providing the best match for the majority of science cases in the EMU survey. We note that the wider redshift range of the combined dataset allows for the estimation of redshifts up to $z = 3$ before random scatter begins to dominate. When binning the data into redshift bins and treating the problem as a classification problem, we are able to correctly identify $\approx$76% of the highest-redshift sources (those at $z > 2.51$) as being in either the highest bin ($z > 2.51$) or the second highest ($z = 2.25$–$2.51$).
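As an illustration of the kNN approach, the sketch below trains a scikit-learn regressor on toy photometry and evaluates the common catastrophic-outlier criterion $|\Delta z|/(1+z) > 0.15$; the features, training relation, and threshold are stand-ins for the paper's actual setup.

```python
# Minimal sketch of kNN photometric-redshift estimation with scikit-learn;
# the toy magnitudes and redshift relation stand in for real photometry.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
mags = rng.uniform(16, 24, (5000, 6))                            # toy magnitudes
z = 0.2 * (mags.mean(axis=1) - 16) + rng.normal(0, 0.05, 5000)   # toy redshifts

X_tr, X_te, z_tr, z_te = train_test_split(mags, z, random_state=0)
knn = KNeighborsRegressor(n_neighbors=15, weights="distance").fit(X_tr, z_tr)
z_pred = knn.predict(X_te)

# Catastrophic-outlier fraction under the |dz|/(1+z) > 0.15 criterion.
eta = np.mean(np.abs(z_pred - z_te) / (1 + z_te) > 0.15)
print(f"catastrophic outlier fraction: {eta:.3f}")
```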
We present the third data release from the Parkes Pulsar Timing Array (PPTA) project. The release contains observations of 32 pulsars obtained using the 64-m Parkes ‘Murriyang’ radio telescope. The data span is up to 18 yr with a typical cadence of 3 weeks. This data release is formed by combining an updated version of our second data release with $\sim$3 yr of more recent data primarily obtained using an ultra-wide-bandwidth receiver system that operates between 704 and 4032 MHz. We provide calibrated pulse profiles, flux density dynamic spectra, pulse times of arrival, and initial pulsar timing models. We describe methods for processing such wide-bandwidth observations and compare this data release with our previous release.
The putative host galaxy of FRB 20171020A was first identified as ESO 601-G036 in 2018, but as no repeat bursts have been detected, direct confirmation of the host remains elusive. In light of recent developments in the field, we re-examine this host and determine a new association confidence level of 98%. At 37 Mpc, this makes ESO 601-G036 the third-closest FRB host galaxy identified to date and the closest to host an apparently non-repeating FRB (with an estimated repetition rate limit of $<$$0.011$ bursts per day above $10^{39}$ erg). Due to its close distance, we are able to perform a detailed multi-wavelength analysis of the ESO 601-G036 system. Follow-up observations confirm ESO 601-G036 to be a typical star-forming galaxy with H i and stellar masses of $\log_{10}\!(M_{\rm{H\,{\small I}}} / M_\odot) \sim 9.2$ and $\log_{10}\!(M_\star / M_\odot) = 8.64^{+0.03}_{-0.15}$, and a star formation rate of $\text{SFR} = 0.09 \pm 0.01\,{\rm M}_\odot\,\text{yr}^{-1}$. We detect, for the first time, a diffuse gaseous tail ($\log_{10}\!(M_{\rm{H\,{\small I}}} / M_\odot) \sim 8.3$) extending to the south-west that suggests recent interactions, likely with the confirmed nearby companion ESO 601-G037. ESO 601-G037 is a stellar shred located to the south of ESO 601-G036 that has an arc-like morphology, is about an order of magnitude less massive, and has a lower gas metallicity that is indicative of a younger stellar population. The properties of the ESO 601-G036 system indicate an ongoing minor merger event, which is affecting the overall gaseous component of the system and the stars within ESO 601-G037. Such activity is consistent with current FRB progenitor models involving magnetars and the signs of recent interactions in other nearby FRB host galaxies.
Next-generation astronomical surveys naturally pose challenges for human-centred visualisation and analysis workflows that currently rely on standard desktop display environments. While a significant fraction of the data preparation and analysis will be handled by automated pipelines, crucial steps of knowledge discovery can still only be achieved through various levels of human interpretation. As the number of sources in a survey grows, there is a need to both modify and simplify the repetitive visualisation processes that must be completed for each source. Tasks such as per-source quality control, candidate rejection, and morphological classification all share a single instruction, multiple data (SIMD) work pattern, and are therefore amenable to a parallel solution. Selecting extragalactic neutral hydrogen (Hi) surveys as a representative example, we use system performance benchmarking and the visual data analysis and reasoning methodology from the field of information visualisation to evaluate a bespoke comparative visualisation environment: the encube visual analytics framework deployed on the 83-Megapixel Swinburne Discovery Wall. Through benchmarking with spectral cube data from existing Hi surveys, we are able to perform interactive comparative visualisation, via texture-based volume rendering, of 180 three-dimensional (3D) data cubes at a time. The time to load a configuration of spectral cubes scales linearly with the number of voxels, with independent samples of 180 cubes (8.4 Gigavoxels or 34 Gigabytes) each loading in under 5 min. We show that parallel comparative inspection is a productive and time-saving technique that can reduce the time taken to complete SIMD-style visual tasks currently performed at the desktop by at least two orders of magnitude, potentially rendering some labour-intensive desktop-based workflows obsolete.
The investigation of rare and new objects has always been an important direction in astronomy. Cataclysmic variables (CVs) are ideal natural laboratories for studying the accretion processes of semi-detached binaries. However, the sample size of CVs must increase, because a large gap exists between the observed population and theoretical expectations. Astronomy has entered the big-data era and can provide massive numbers of images containing CV candidates. As faint celestial objects, CVs are highly challenging to identify directly from images by automated means. Deep learning has developed rapidly in intelligent image processing and has been widely applied in several astronomical fields with excellent detection results. YOLOX, the latest YOLO framework, is advantageous for detecting small and faint targets. This work proposes an improved YOLOX-based framework, tailored to the characteristics of CVs and Sloan Digital Sky Survey (SDSS) photometric images, to train and validate a model for CV detection. We use the Convolutional Block Attention Module (CBAM) to increase the number of output features in the feature extraction network and adjust the feature fusion network to obtain fused features; the loss function is modified accordingly. Experimental results demonstrate that the improved model performs well, with a mean average precision at 0.5 (mAP@0.5) of 92.0%, precision of 92.9%, recall of 94.3%, and $F_1$ score of 93.6% on the test set. The proposed method can efficiently identify CVs in test samples and search for CV candidates in unlabelled images. Image data vastly outnumber spectra in the SDSS data releases. With supplementary follow-up observations or spectra, the proposed model can help astronomers seek and detect CVs in a new manner, enabling a more extensive CV catalogue to be built. The proposed model may also be applied to the detection of other kinds of celestial objects.
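A minimal PyTorch sketch of the Convolutional Block Attention Module (channel attention followed by spatial attention) of the kind used to augment the feature extraction network; the layer sizes and reduction ratio are illustrative, not the paper's exact configuration.

```python
# Minimal sketch of CBAM: channel attention then spatial attention applied
# to a backbone feature map. Sizes are illustrative.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        # Shared MLP over global average- and max-pooled descriptors.
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx) * x

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel-wise average and max maps define where to attend.
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1))) * x

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

features = torch.randn(1, 64, 32, 32)  # a backbone feature map
print(CBAM(64)(features).shape)        # torch.Size([1, 64, 32, 32])
```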
Gamma-ray bursts (GRBs) and double neutron star merger gravitational-wave events are followed by afterglows that shine from X-rays to radio, and these broadband transients are generally interpreted using analytical models. Such models are relatively fast to execute and thus easily allow estimates of the energy and geometry parameters of the blast wave through many trial-and-error model calculations. One problem, however, is that such analytical models do not capture the underlying physical processes as well as more realistic relativistic numerical hydrodynamic (RHD) simulations do. Ideally, those simulations would be used for parameter estimation instead, but their computational cost makes this intractable. To this end, we present DeepGlow, a highly efficient neural network architecture trained to emulate a computationally costly RHD-based model of GRB afterglows to within a few percent accuracy. As a first scientific application, we compare both the emulator and a different analytical model calibrated to RHD simulations to estimate the parameters of a broadband GRB afterglow. We find consistent results between these two models and also give further evidence for a stellar-wind progenitor environment around this GRB source. DeepGlow thus connects simulations otherwise too costly to run across the full parameter space to real broadband data of current and future GRB afterglows.
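In the same spirit, the sketch below trains a small multilayer perceptron to emulate a parametrised light-curve model on a fixed time grid. The analytic stand-in for the RHD outputs, the parameter ranges, and the network size are all assumptions; DeepGlow's actual architecture and training data differ.

```python
# Minimal sketch of a neural-network emulator: an MLP learns the mapping from
# afterglow parameters to (log) flux on a fixed time grid. A toy analytic
# relation stands in for the costly RHD-based model outputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
# Toy parameters: log10 isotropic energy, log10 density, electron index p.
theta = np.column_stack([rng.uniform(50, 54, 4000),
                         rng.uniform(-3, 2, 4000),
                         rng.uniform(2.1, 2.9, 4000)])
t_grid = np.logspace(-1, 2, 30)  # days
# Toy "simulation": a power-law light curve in place of RHD output.
logF = (0.5 * theta[:, [0]] - 0.3 * theta[:, [2]] * np.log10(t_grid)
        + 0.1 * theta[:, [1]])

emulator = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                        random_state=0).fit(theta, logF)
print(emulator.predict(theta[:1]).shape)  # (1, 30): one emulated light curve
```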
The International VLBI Service for Geodesy and Astrometry (IVS) regularly provides high-quality data to produce Earth Orientation Parameters (EOP) and to maintain and realise the International Terrestrial and Celestial Reference Frames, ITRF and ICRF. The first iteration of the celestial reference frame (CRF) at radio wavelengths, ICRF1, was adopted by the International Astronomical Union (IAU) in 1997 to replace the FK5 optical frame. Soon after, the IVS began official operations, and by 2009 the increase in data was sufficient to warrant a second iteration of the CRF, ICRF2. The most recent iteration, ICRF3, was adopted by the IAU in 2018. However, because the observing stations are concentrated in the Northern Hemisphere, CRFs are generally weaker in the South, where there are fewer observations. To increase the number of Southern Hemisphere observations, and the density and precision of the southern sources, a series of deep South observing sessions was initiated in 1995; in 2004 this initiative became the IVS Celestial Reference Frame Deep South (IVS-CRDS) observing programme. This paper covers the evolution of the CRDS observing programme for the period 1995–2021, details the data products and results, and concludes with a summary of upcoming improvements to this ongoing project.
We present a comparison between the performance of a selection of source finders (SFs) using a new software tool called Hydra. The companion paper, Paper I, introduced the Hydra tool and demonstrated its performance using simulated data. Here we apply Hydra to assess the performance of different source finders by analysing real observational data taken from the Evolutionary Map of the Universe (EMU) Pilot Survey. EMU is a wide-field radio continuum survey whose primary goal is to make a deep ($20\mu$Jy/beam RMS noise), intermediate-angular-resolution ($15^{\prime\prime}$), 1 GHz survey of the entire sky south of $+30^{\circ}$ declination, and which is expected to detect and catalogue up to 40 million sources. For the main EMU survey it is highly desirable to understand the performance of radio image SF software and to identify an approach that optimises source-detection capabilities. Hydra has been developed to refine this process, as well as to deliver a range of metrics and source-finding data products from multiple SFs. We present the performance of the five SFs tested here in terms of their completeness and reliability statistics, their flux density and source size measurements, and an exploration of case studies to highlight finder-specific limitations.
We investigate the diversity in the sizes and average surface densities of the neutral atomic hydrogen (H i) gas discs in $\sim$280 nearby galaxies detected by the Widefield ASKAP L-band Legacy All-sky Blind Survey (WALLABY). We combine the uniformly observed, interferometric H i data from pilot observations of the Hydra cluster and NGC 4636 group fields with photometry measured from ultraviolet, optical, and near-infrared imaging surveys to investigate the interplay between stellar structure, star formation, and H i structural parameters. We quantify the H i structure by the size of the H i relative to the optical disc and the average H i surface density measured using effective and isodensity radii. For galaxies resolved by $>$$1.3$ beams, we find that galaxies with higher stellar masses and stellar surface densities tend to have less extended H i discs and lower H i surface densities: the isodensity H i structural parameters show a weak negative dependence on stellar mass and stellar mass surface density. These trends strengthen when we limit our sample to galaxies resolved by $>$2 beams. We find that galaxies with higher H i surface densities and more extended H i discs tend to be more star forming: the isodensity H i structural parameters have stronger correlations with star formation. Normalising the H i disc size by the optical effective radius (instead of the isophotal radius) produces positive correlations with stellar masses and stellar surface densities and removes the correlations with star formation. This is because the effective and isodensity H i radii increase with mass at similar rates while, in the optical, the effective radius increases more slowly than the isophotal radius. Our results are in qualitative agreement with previous studies and demonstrate that with WALLABY we can begin to bridge the gap between small galaxy samples with high spatial resolution H i data and large, statistical studies using spatially unresolved, single-dish data.
The latest generation of radio surveys are now producing sky survey images containing many millions of radio sources. In this context it is highly desirable to understand the performance of radio image source finder (SF) software and to identify an approach that optimises source detection capabilities. We have created Hydra to be an extensible multi-SF and cataloguing tool that can be used to compare and evaluate different SFs. Hydra, which currently includes the SFs Aegean, Caesar, ProFound, PyBDSF, and Selavy, provides for the addition of new SFs through containerisation and configuration files. The SF input RMS noise and island parameters are optimised to a 90% 'percentage real detections' threshold (calculated from the difference between detections in the real and inverted images), to enable comparison between SFs. Hydra provides completeness and reliability diagnostics through observed-deep ($\mathcal{D}$) and generated-shallow ($\mathcal{S}$) images, as well as other statistics. In addition, it has a visual inspection tool for comparing residual images through various selection filters, such as S/N bins in completeness or reliability. The tool allows the user to easily compare and evaluate different SFs in order to choose their desired SF, or a combination thereof. This paper is part one of a two-part series. In this paper we introduce the Hydra software suite and validate its $\mathcal{D/S}$ metrics using simulated data. The companion paper demonstrates the utility of Hydra by comparing the performance of SFs using both simulated and real images.
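A minimal sketch of how such a 'percentage real detections' (PRD) threshold might be computed, assuming PRD = 100 × (N_real − N_inverted)/N_real, where detections on the inverted (negated) image estimate the false-positive count; the counts below are invented for illustration, and Hydra's exact bookkeeping may differ.

```python
# Minimal sketch of a 'percentage real detections' statistic: detections on
# the inverted image proxy the false positives at a given threshold.
def percentage_real_detections(n_real, n_inverted):
    """Assumed form: PRD = 100 * (N_real - N_inverted) / N_real."""
    return 100.0 * (n_real - n_inverted) / n_real

# Sweep a detection threshold (in RMS units) towards the 90% PRD target.
counts = {3.0: (1200, 300), 4.0: (950, 90), 5.0: (800, 15)}  # invented counts
for thresh, (n_real, n_inv) in sorted(counts.items()):
    prd = percentage_real_detections(n_real, n_inv)
    print(f"{thresh:.1f} sigma: PRD = {prd:.1f}%")
```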
The report by the EDGES experiment of an absorption profile centred at 78 MHz in the continuum radio background spectrum, and its interpretation as the redshifted 21 cm signal of cosmological origin, has become one of the most debated results of observational cosmology in recent times. The cosmological 21 cm signal has long been proposed as a powerful probe for observing the early Universe and tracing its evolution over cosmic time. Even though the science case is well established, the technical measurement challenges are not yet understood to the level required to claim a successful detection. The EDGES detection has naturally motivated a number of experimental attempts worldwide to corroborate the findings. In this paper, we present HYPEREION, a precision cross-correlation spectrometer purpose-designed to measure the radio background between 50 and 120 MHz and to detect the absorption profile reported by the EDGES experiment. HYPEREION implements a pre-correlation signal-processing technique that self-calibrates any spurious additive contamination from within the system and delivers a differential measurement between the sky spectrum and a reference thermal load internal to the system. This ensures an unambiguous 'zero-point' of absolute calibration for the purported absorption profile. We present the system design, the measurement equations of the ideal system, systematic effects in the real system, and, finally, an assessment of the real system output for the detection of the absorption profile at 78 MHz in the continuum radio background spectrum.
We present a systematic search for radio counterparts of novae using the Australian Square Kilometre Array Pathfinder (ASKAP). Our search used the Rapid ASKAP Continuum Survey, which covered the entire sky south of declination $+41^{\circ}$ ($\sim$$34000$ square degrees) at a central frequency of 887.5 MHz; the Variables and Slow Transients Pilot Survey, which covered $\sim$$5000$ square degrees per epoch (887.5 MHz); and other ASKAP pilot surveys, which covered $\sim$200–2000 square degrees with 2–12 h integration times. We crossmatched radio sources found in these surveys over a two-year period, from 2019 April to 2021 August, with 440 previously identified optical novae, and found radio counterparts for four novae: V5668 Sgr, V1369 Cen, YZ Ret, and RR Tel. Follow-up observations with the Australia Telescope Compact Array confirm the ejecta thinning across all observed bands, with spectral analysis indicative of synchrotron emission in V1369 Cen and YZ Ret. Our light-curve fit with the Hubble Flow model yields a value of $1.65\pm 0.17 \times 10^{-4} \rm \:M_\odot$ for the mass ejected in V1369 Cen. We also derive a peak surface brightness temperature of $250\pm80$ K for YZ Ret. Using radio light curves of novae simulated with the Hubble Flow model, we demonstrate that with a 5$\sigma$ sensitivity limit of 1.5 mJy in 15-min survey observations, we can detect radio emission out to a distance of 4 kpc for ejecta masses of ${\sim}10^{-3}\rm \:M_\odot$, and out to 1 kpc for ejecta masses in the range $10^{-5}$–$10^{-3}\rm \:M_\odot$. Our study highlights ASKAP's ability to contribute to future radio observations of novae hosted on white dwarfs with masses $0.4$–$1.25\:\rm M_\odot$ within a distance of 1 kpc, and on white dwarfs with masses $0.4$–$1.0\:\rm M_\odot$ within a distance of 4 kpc.