
Since its 2003 move to Exeter, the Met Office has been one of the region’s key STEMM employers. 

Alongside its operational requirement to provide global weather forecasting, last year it created a small team to explore more innovative ways of using and storing the massive amounts of data it creates.

Alberto Arribas heads up this Informatics Lab team.  A small mixed group of around ten people, it includes programmers, designers, developers and academics, all recruited from existing Met Office positions.  Their role is to make the Met Office’s science and data available to the outside world in formats that are as useful as possible, and to explore ways to improve existing internal work processes.

Their working environment is deliberately cluttered and creative.  There’s a small polystyrene rhino covered in computer-controlled LEDs, an offshoot contribution to Paignton Zoo’s Great Big Rhino Project.  Scattered across the large central table are programmable LEGO® robots, an Amazon Echo (of which more later), and piles of gadgets and papers.  It’s clearly a creative, busy working environment, but what is it that they do here?

“An important thing to know is that we are, IT-wise, completely separate from the Met Office.  None of our laptops connect up to the central IT infrastructure.  We use the guest wifi the same as any visitor to the building would. This gives us the opportunity to explore what’s possible, and share and download anything we think is interesting without a risk of bringing down the supercomputers.  We cannot put any of the Met Office’s computers at risk.  We’re here to look at how to improve things, but at its core, 70% of what the Met Office does here cannot change.  The weather needs to be continually monitored, and the forecasts produced.  We’re here to see what is possible with the remaining 30%.”

Working with open-ended research, the team carefully structure their week: Monday to Wednesday are spent on projects, Thursdays are saved for open days (an opportunity to meet people and groups interested in their work), and Fridays go on admin and inbox clearance.  They’re also looking forward to hosting The Graphical Web conference in November – sponsored by organisations including Google, Amazon, the Met Office, and Exeter City Council, it’s an international conference for the visual representation of information on the web.

For one of their current projects, they’re playing around with an Amazon Echo device.  Although not yet available in the UK, the Echo is a small tower that sits in your home and responds to any spoken question, as long as you preface it with its name, ‘Alexa’.  Like similar assistants on your smartphone, you can ask it anything from the specific to the random – ‘what’s on TV tonight?’, ‘how big is Jupiter?’  But you can also write your own plug-ins.  The team have been exploring how to combine their weather forecasts and other relevant information to provide a personalised and dynamic answer.

For example, you might ask for the best time for you personally to go for a run: Alexa checks your location, your preferences or calendar, and the weather, and asks you for any other information she needs to find the slot that suits you best. What’s interesting is that this is not a scripted dialogue; the Lab has created the functionality for Alexa to recognise when additional information is needed and request it seamlessly.
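The idea can be sketched in a few lines: the handler checks which pieces of information (“slots”) are still missing and asks for them, rather than following a fixed script. This is a minimal illustration, not the Lab’s actual code; the function names, slots, and the stubbed forecast are all invented for the example.

```python
# Illustrative sketch of unscripted slot-filling: whichever slot is
# empty drives the follow-up question. All names and data are
# hypothetical, not the Informatics Lab's real implementation.

REQUIRED_SLOTS = ["location", "preferred_times"]

def handle_run_request(slots):
    """Answer 'when should I go for a run?' or ask for whatever is missing."""
    missing = [s for s in REQUIRED_SLOTS if slots.get(s) is None]
    if missing:
        # Not scripted: the question depends on what we still don't know.
        return f"Can you tell me your {missing[0].replace('_', ' ')}?"
    forecast = {"09:00": "rain", "12:00": "sunny", "17:00": "cloudy"}  # stub
    best = next((t for t, w in forecast.items()
                 if w == "sunny" and t in slots["preferred_times"]), None)
    if best is None:
        return "No dry slot matches your preferences today."
    return f"The best time for your run is {best}."

print(handle_run_request({"location": "Exeter", "preferred_times": None}))
print(handle_run_request({"location": "Exeter",
                          "preferred_times": ["12:00", "17:00"]}))
```

The first call triggers a follow-up question; the second, with all slots filled, returns a recommendation.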

This work leads naturally on to the concept of a personalised weather forecast – an understanding of what temperatures you personally find too hot or cold, or whether you’re likely to be affected by the pollen count.  Once your devices know who you are, they can adjust the data accordingly.

Dealing with big data is an issue. Every single day, the Met Office creates more than 40TB of data (soon to be 300TB a day thanks to the new supercomputer). Its archives now store nearly an exabyte – a 1 followed by 18 zeros, in bytes.

“There are other organisations who struggle with how best to share data efficiently – Netflix and YouTube, for example.  For them, it’s about how to get the best quality film into your home, whereas we’re trying to get the best quality weather forecast into a commercial airline’s cockpit.”

A focus for the team has been how to transmit such large amounts of data. Previous efforts aimed at lossless compression: users receive all of the data, but that includes a lot of data they don’t need.  The group now uses lossy compression, where you decide which data you need and which you don’t – do you really need to know the temperature to five decimal places? A decision is made about what actually needs to be transmitted.  By moving to lossy compression, the data packets used in web visualisation have dropped from 5GB to 10MB.
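The principle can be demonstrated with a toy example: reducing numerical precision before compressing makes the data dramatically more compressible. The figures below are illustrative only – this is not the Lab’s pipeline, just full-precision versus one-decimal-place temperatures round-tripped through a standard compressor.

```python
# Toy demonstration of lossy vs lossless size, using synthetic data.
# Rounding to 0.1 degrees discards precision the user never needed,
# and the compressed payload shrinks accordingly.
import json
import random
import zlib

random.seed(0)
temps = [random.uniform(-5.0, 35.0) for _ in range(10_000)]  # fake gridded temps

full = zlib.compress(json.dumps(temps).encode())                     # "lossless"
lossy = zlib.compress(json.dumps([round(t, 1) for t in temps]).encode())

print(f"full precision: {len(full)} bytes, one decimal place: {len(lossy)} bytes")
```

The lossy payload is a small fraction of the full-precision one; the same trade-off, applied to real forecast fields, is what takes a 5GB packet down towards 10MB.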

Other projects centre on the concept of data gravity – a metaphor for the fact that data is heavy and difficult to move.  Rather than having the data sent to you, which can take days or even weeks, and then mining it for your needs, you could send a small program to the data, which is almost instantaneous.  It’s about not being slowed down by unnecessarily moving data around.
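A toy sketch of the idea: instead of shipping a large dataset across the network, the data’s host runs a small user-supplied program and returns only the (tiny) result. The function and dataset below are invented for illustration; a real system would execute the program remotely, next to the archive.

```python
# 'Moving the program to the data': the heavy dataset never leaves its
# host; only a few lines of code go in, and a small result comes out.
# Everything here is a stand-in for a real remote-execution service.

def run_near_data(dataset, program):
    """Pretend server-side execution: only the result crosses the network."""
    return program(dataset)

# The 'remote' archive: a million values we would rather not download.
archive = list(range(1_000_000))

# The user's small program: a one-line summary instead of gigabytes in transit.
result = run_near_data(archive, lambda data: sum(data) / len(data))
print(result)  # → 499999.5
```

The program weighs a few bytes and the answer a few more, while the archive itself stays put – which is the whole point of respecting data gravity.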

“We need to transform the current situation in science where it takes so long to find and move data that, by the time you have it, you’ve almost forgotten why you needed it in the first place.”