JWST Master Class Workshops

In February I will attend a James Webb Space Telescope (JWST) Master Class at the European Space Agency (ESA) in Madrid to learn about the tools required to submit observing proposals. The purpose of this meeting is to ‘teach the teachers’ so that a number of us from across Europe can organise workshops to train astronomers to submit proposals in preparation for the first call, which has a deadline of 1 May 2020. More information about the Master Class is available here.

If you would like to attend one of the workshops in Spain, you have two options. There will be one at the Instituto de Astrofísica de Canarias (IAC) in Tenerife on 12-13 March 2020 and one at the Centro de Astrobiología (CAB) in Madrid on 16-17 March 2020. You can find out more and register here for the IAC meeting or here for the CAB meeting.

The Webb telescope is an extremely exciting prospect for astronomy and I look forward to learning more about the technology, science goals and how to prepare for observations.

Herschel all sky viewer

Over the last few days I have been working on an all sky viewer for the Herschel Extragalactic Legacy Project (HELP). The easiest way to view it is to look at it through the dedicated viewer here:

http://hedam.lam.fr/HELP/dataproducts/dmu31/dmu31_HiPS/viewer/

There you can click on objects in the HELP ‘A list’, see their spectral energy distributions, and follow links to the full measurements from the Virtual Observatory at susseX (VOX). I’ll write some more about that another day. You can also view the individual elements using Aladin Desktop. Details of how to do so are here.

I think we can improve this site, so please let me know about any issues you have with it. It would be good to optimise it and to work on viewing through a phone. You can also embed the viewer in websites, as below:

It is a great way to easily visualise all the data we have made over the last few years and we will continue to develop it with new datasets as we make them.

Observing run at Roque de los Muchachos, 13 – 15 June 2019

This weekend I was observing at the William Herschel Telescope at the Roque de los Muchachos observatory on La Palma in the Canary Islands. You can see a picture of the telescope in figure 1 below. We were using the Long-slit Intermediate Resolution Infrared Spectrograph (LIRIS), an instrument which can be used for both imaging and spectroscopy.

Figure 1. The William Herschel Telescope, La Palma, Canary Islands.

While at the top of the mountain I also had the chance to see the Gran Telescopio Canarias, the biggest telescope in the world. It was extremely impressive. What I found most remarkable is how easily it appears to move around. It weighs 400 tonnes but is so finely balanced that it can be turned by hand if necessary.

Figure 2. Inside the Gran Telescopio Canarias.

Another highlight was seeing the Isaac Newton Telescope, built in 1967 and originally operated at Herstmonceux Castle in Sussex, which is historically associated with the University of Sussex, where I was working until a few weeks ago. The telescope was moved to La Palma when the bad English weather and seeing became limiting (I could have told them that would happen). It has wooden interiors and a different kind of mounting, which together showed how astronomy has changed over the last decades. The Isaac Newton Telescope uses an equatorial mount like the telescope I have used as an amateur hobbyist.

We were looking at lensed galaxies in the near infrared. A lensed galaxy is one where a galaxy lies in front of another: as general relativity predicts, the mass of the foreground galaxy ‘bends’ the light from the galaxy behind it, making the background galaxy look slightly brighter and differently shaped than it would without the foreground galaxy being there. Figure 3 shows an extreme example imaged by the Hubble Space Telescope.

Figure 3.  A strongly lensed galaxy in the Einstein ring configuration. These are rare and the lenses we were looking at were typically in an Einstein cross configuration.

This is part of a bigger project to look for interesting events in lensed galaxies that I will blog about later. We had a sample of these lenses found in large-area surveys of the sky. Imaging the objects is relatively simple. We set the telescope to the position of the object, then check the image to see if we are looking at the right place. Typically the objects we are interested in are very faint, so we check the position using nearby bright stars against a star map. Then we ‘integrate’ by taking many short exposures, of 30 seconds each for instance, to be summed later, collecting photons over a longer total time such as an hour. This allows us to observe extremely faint objects. Some of the objects we were looking at had a redshift greater than 2, which means the Universe has expanded by a factor of 3 (given by redshift plus one) since the galaxy emitted the photons we are measuring. The light from objects at that distance has travelled for around ten billion years to reach us.
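The two ideas above — the expansion factor and stacking short exposures — can be sketched in a few lines. This is only an illustration: the array sizes and noise values below are invented, not real telescope data.

```python
import numpy as np

# Expansion factor since emission: the Universe has stretched by (1 + z).
z = 2.0
expansion_factor = 1.0 + z  # factor of 3 for a redshift-2 galaxy

# Sketch of 'integrating': summing many short exposures into one deep image.
# These frames are random stand-ins for real 30-second exposures;
# 120 of them corresponds to an hour of total exposure time.
rng = np.random.default_rng(0)
frames = [rng.normal(loc=0.01, scale=1.0, size=(64, 64)) for _ in range(120)]
stacked = np.sum(frames, axis=0)
```

The reason stacking works is that the signal grows linearly with the number of frames while the random noise grows only as its square root, so the signal-to-noise ratio improves by roughly sqrt(120), about a factor of 11, over a single frame.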

This was my first observing run and there is clearly an enormous amount of expertise required to make good observations. One aspect that hadn’t occurred to me is that it is quite stressful. Telescope time is very valuable, so you can’t afford to waste time puzzling over telescope commands instead of taking measurements. Equally, it is vital to check that everything is working and the data are good. Those two concerns mean you have to be quite alert and careful when setting up the runs; afterwards you are typically just letting the telescope take measurements. At five in the morning this can be quite a challenge.

Seeing images of these distant galaxies fresh off the telescope reminded me how remarkable modern astronomy is and will keep me going through the long winter of tapping away at keyboards and moving numbers around on a screen.

XI Día de Nuestra Ciencia, Santa Cruz de Tenerife, 28 May 2019

My two months on Duolingo tell me that the title of this post means ‘the day of our science 11’. As if it had been intentionally organised that way, my first day at the Instituto de Astrofísica de Canarias (IAC) coincided with the yearly celebration of all the excellent scientific research that takes place here. It was a perfect introduction to the department. It was striking to see all aspects of modern astronomy covered, from the solar system to stars, galaxies, and cosmology. In addition to the huge variation in fields there was also a large variation in methods, from instrument development to observation to theory. I was particularly impressed by the telescopes the IAC has access to, including the ‘jewel in the crown’, the Gran Telescopio Canarias:

Figure 1. The biggest telescope in the world.

Hopefully I will go observing at the site in the next couple of weeks and will blog about it then. The day was also a chance to meet my new colleagues and discuss ideas for research. I’m excited to work on new things in astronomy and get started immediately. Some of our ideas will involve Herschel far infrared data, so the work we did at Sussex will also be useful, but I’m already looking at new papers and will write about our projects in the coming weeks. In the meantime I need to finish all my bureaucracy, including applying for my Physical Person Certificate. I’m quietly confident that I’m going to pass. Here is the latest addition to my collection of group photos:

Figure 2. One day I aim to publish a Where’s Wally style compilation of group photos in which Wally gradually ages.

You can read a full summary of all the talks in English here.

SPICA conference, Crete, Greece, 20-23 May 2019

This week I have been in Crete for a conference on the Space Infrared Telescope for Cosmology and Astrophysics (SPICA). SPICA is a proposed far infrared telescope with a slightly scaled-down Herschel-style mirror that, crucially, will be cooled to below 8 K. This will offer a huge increase in sensitivity compared to the larger but warm (80 K) Herschel mirror: the cold mirror makes the telescope six orders of magnitude more sensitive than Herschel, limited no longer by the telescope’s own emission but by the true far infrared background. Figure 1 shows the current design for the telescope, aiming for a launch around 2030.

Figure 1. The SPICA telescope design.

There is currently no instrument observing the far infrared region of the electromagnetic spectrum, which lies between the ranges covered by the James Webb Space Telescope (JWST, 2021 launch) and the Atacama Large Millimetre Array (ALMA). SPICA will carry three instruments covering imaging, spectroscopy, and polarimetry over the range 12-230 micrometres.

The week in Crete has involved investigating all the possible science that can be done with the telescope. This is critical for the designers when making engineering decisions about the three instruments. When designing an experiment it is essential to iterate between thinking about key science questions that could be answered by conceivable technology and designing possible configurations. It is then possible to go back to work on possible measurements and how the designs could be improved, and so on, until we converge on a compromise between world-class science, cost, and technological feasibility.

Figure 2. The obligatory group photo.

SPICA is an extremely exciting prospect for investigating the far-infrared universe at a time when no other instrument will be working in that region. The polycyclic aromatic hydrocarbon (PAH) features in galaxy spectra at those wavelengths contain a wealth of information about star formation and the redshift of objects. There are three proposals for the European Space Agency’s ‘M5’ call, which will be decided in around a year. The competition will be fierce, but SPICA would certainly have a huge impact on many areas of astronomy.

Herschel ten years after launch

Over the last two days I have been attending a celebration of the ten year anniversary (1512h, 14 May 2009) of the launch of the Herschel space observatory. It has been a pleasure to visit the European Space Astronomy Centre just outside Madrid. The centre is dotted with numerous models of European Space Agency telescopes.

I gave a short talk on the Herschel Extragalactic Legacy Project (HELP) that I have been working on for the last few years. You can download the talk here. You can also see a full video of all the talks (my talk starts at 2:19:33):

Herschel was launched together with Planck. There is still an ongoing connection between the two instruments and it was fascinating to see the engineering behind launching two satellites together.

It was inspiring to see the great range of science that was done with the instrument. I have to admit I knew very little about the aspects outside my own field of extragalactic astronomy (extraextragalactic?). The whole enterprise has been ongoing for around thirty years and has clearly occupied the large part of a number of people’s careers. It is testament to their hard work that the data are still leading to new scientific results.

Can artificial intelligence tell if somebody is lying?

For the last three months, I have been working with researchers from across the physical sciences at the University of Sussex on software to classify videos of court cases as either deceitful or truthful. Building on work by Yusufu Shehu, we have constructed a neural network (a type of software partly inspired by neurons in the human brain) that can classify these videos with an accuracy of over 80%. Figure 1 shows our network design. For comparison, human ability to spot lying is often below 60%. The software could in theory be trained on other data sets and we are actively looking for people in commercial areas who might be interested.

Figure 1. Diagram of our network. It has over thirty million parameters and takes around two hours to train on a laptop. After training, classification is a matter of seconds.

One interesting aspect of neural networks is that we don’t tell the network how to work; rather, we train it. This means that the network could be trained on other features. Any combination of video, audio, and text can in principle be classified into any sets where there is information in the data. Possible applications include classifying phone conversations by emotional tone or likelihood of a successful sale, classifying music into genres, classifying speakers by gender or age, or identifying specific speakers for security purposes. Figure 2 shows some clips from the court case videos. A court case is a very particular environment, so we are interested in applying the software to other settings.

Figure 2. Sample screenshots of court case training data taken from Pérez-Rosas et al. (2015)

A 2018 paper by Krishnamurthy et al. developed a ‘multi-modal neural network’ and trained it on 120 court case videos taken from the Miami University deception detection database (Lloyd et al. 2019). They showed that it was possible to detect deception with an accuracy of 75%, a significant improvement on human performance. We have further developed their model and achieved improved results. In the multi-modal network, video, text transcripts, audio and ‘micro-features’ are treated independently and then the results are combined to get a final probability. Figure 3 shows how the network is designed.

Figure 3. Overview of the network design (taken from Krishnamurthy et al.) Our network can be applied to any subset of these inputs and be used to classify a large number of different features in addition to deception.
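The idea of treating each modality independently and then combining the scores (late fusion) can be sketched very simply. This is a toy illustration, not our actual model: the linear scorers, feature sizes, and averaging rule below are stand-ins for the trained sub-networks and learned fusion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def modality_score(features, weights, bias):
    # Stand-in for a trained per-modality sub-network: linear layer + sigmoid,
    # returning a deception probability in (0, 1) for that modality alone.
    return sigmoid(features @ weights + bias)

rng = np.random.default_rng(42)
# Illustrative 16-dimensional feature vectors for each modality.
video_feat, audio_feat, text_feat = rng.normal(size=(3, 16))

scores = np.array([
    modality_score(video_feat, rng.normal(size=16), 0.0),
    modality_score(audio_feat, rng.normal(size=16), 0.0),
    modality_score(text_feat, rng.normal(size=16), 0.0),
])

# Combine the independent scores into a final probability.
# A plain average is used here; a learned weighting is another option.
p_deceptive = scores.mean()
label = "deceptive" if p_deceptive > 0.5 else "truthful"
```

The advantage of this structure is that a missing modality (say, no transcript) only drops one score from the fusion step rather than breaking the whole classifier.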

We are interested to have conversations with potential industry partners who might wish to take this forward with us. Please don’t hesitate to get in touch if you think this research could be useful to you. We are interested in applying these networks both to video and also to audio only. We see particular possibility for collaboration with an industrial partner in an area that relies on large volumes of audio data from, for instance, telephone calls.

LSST meeting, Tucson USA, 13-17 August 2018

I spent the week in Tucson, Arizona for the Large Synoptic Survey Telescope (LSST) annual meeting. They had a special focus meeting on deblending which is particularly relevant for me. It was great to see the work being done on the pipeline deblender scarlet (Melchior et al. 2018 – https://arxiv.org/abs/1802.10157).

Meeting attendees in Tucson, Arizona.

My work on deblending concentrates on more expensive and robust methods that could not be applied in the pipeline, which must be run essentially every night on incoming data. It was clear that other methods must be developed for each given science case; there will have to be more work on resolved objects, for instance. The real challenge is that the objects are not point sources: it is this combination of resolved and confused images that makes deblending so difficult.

The conference was also a chance to find out more about the project as a whole, including updates on the construction. The telescope is really taking shape, as images from the El Peñón peak of Cerro Pachón show, and it is extremely exciting to see all the work, by scientists and engineers, going into the project’s success.

The Herschel Extragalactic Legacy Project (HELP) – beta data release

“Is it not curious, that so vast a being as the whale should see the world through so small an eye, and hear the thunder through an ear which is smaller than a hare’s? But if his eyes were broad as the lens of Herschel’s great telescope; and his ears capacious as the porches of cathedrals; would that make him any longer of sight, or sharper of hearing? Not at all.- Why then do you try to “enlarge” your mind? Subtilize it.” – Moby Dick.

For the last year I have been working on the Herschel Extragalactic Legacy Project (HELP), an EU funded project to use far infrared imaging from the Herschel Space Observatory to understand galaxy formation and evolution. We are gearing up for our first data release, DR1, on 1 October, but we are making a lot of the data available now for beta testing.

We are very keen for the astronomical community to start using this huge dataset comprising 170 million galaxies over 1270 square degrees of extragalactic sky, and indeed using and developing the code used to produce it. We have released all the code to perform the reduction on GitHub in the spirit of open science and reproducibility. The data can be accessed as raw data files from the Herschel Database at Marseille (HeDaM) and queried from a dedicated Virtual Observatory server. Although Herschel imaging has been the main focus of the project, we have taken public data from many different instruments spanning all the way from ultraviolet to radio. Tying together these different data sets is a major challenge and will be required to make the most of upcoming wide surveys such as those from the Large Synoptic Survey Telescope (optical), the Euclid space telescope (optical) and the Square Kilometre Array (radio).

We are also in the process of setting up mirrors here at Sussex and I plan to blog more about that soon. There is a vast amount of data and we are working on squeezing every last ounce of science out of all the public data from a wide array of different instruments which make up the full multi-wavelength data we have collated.

If you have any questions about how to use this database please leave a comment or email me.

Statistical Challenges in 21st Century Cosmology, 20-25 May, Valencia, Spain

Last week I was in Valencia for a conference on statistical methods in modern cosmology. The week began with a summer school for PhD students and a few postdocs on machine learning, sparsity and Bayesian methods. I was familiar with the Bayesian methods, but sparsity (dealing with data matrices where the majority of elements are zero) was completely new, and I am looking forward to implementing some of the machine learning methods, perhaps for the Herschel Extragalactic Legacy Project or for work I am about to do for Public Health England (more about that in a later blog post).
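To illustrate what sparsity buys you in practice, here is a minimal sketch using SciPy's sparse matrices. The matrix size and fill fraction are invented for illustration and are not from the summer school material.

```python
import numpy as np
from scipy import sparse

# A 1000 x 1000 matrix where only about 0.1% of entries are non-zero.
rng = np.random.default_rng(0)
dense = np.zeros((1000, 1000))
rows = rng.integers(0, 1000, size=1000)
cols = rng.integers(0, 1000, size=1000)
dense[rows, cols] = rng.normal(size=1000)

# CSR (compressed sparse row) format stores only the non-zero entries
# and their positions, so memory scales with the number of non-zeros
# rather than with the full matrix size.
compressed = sparse.csr_matrix(dense)
fraction_nonzero = compressed.nnz / dense.size
```

Many linear algebra operations (matrix-vector products in particular) also run in time proportional to the number of non-zeros, which is what makes sparse methods attractive for very large survey datasets.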

The introductory lecture by Stéphane Mallat (École Normale Supérieure) gave an overview of neural network approaches to scientific problems. One particularly striking example was calculating molecule energies to higher accuracy than Density Functional Theory (DFT) in very short times. My PhD research used DFT heavily and we were always limited by computer resources. The fact that a neural network can learn how to predict ground state energies without including any physics in the model (!) was remarkable to say the least. We are certainly entering a brave new world.

There were, however, some dissenting voices. Neural networks, and machine learning in general, need some work to make results more reliable. Google has started work on TensorFlow Probability, which aims to assign some measure of error to results. These methods also in general require a representative sample; often we know that our samples are not representative and we aim to model the selection biases. I think both of these issues need to be addressed before ‘classical’ methods such as Bayesian inference are consigned to history.

I also presented a poster on ongoing work on deblending. Now that we have a prototype algorithm I need to get on with implementing and testing. It was great to see talks by Peter Melchior (Princeton) and Rachel Mandelbaum (Princeton) which both brought attention to the problem of blending for pretty much all science cases from the Large Synoptic Survey Telescope (LSST) and the space telescope Euclid. Clearly this problem is not going to go away and analysis of galaxy images will be limited by blending issues in the near future.

You can see the poster here.

I would recommend that any PhD students or postdocs attend future summer schools and conferences. It was excellent to see so many researchers from around the world working on problems related to my research. The summer school offered an excellent introduction to modern statistical methods that can be quite simple to implement and may help you with your research.