Although the public part of the worldwide experiment is coming to an end this month, the world’s greatest extraterrestrial hunt is far from finished.

In 1995, the computer scientist David Gedye had an idea that could only originate at a cocktail party. What if the world’s personal computers were linked together on the internet to create a virtual supercomputer that could help with SETI, the search for extraterrestrial intelligence? The network would be able to sort through the massive amounts of data being collected by radio telescopes, seeking signals that might point to an alien civilization around another star. A distributed supercomputer sounded outlandish at the time, but within four years, Gedye and his collaborator, computer scientist David Anderson, had built the software to make it a reality. They called it SETI@home.

On Tuesday, researchers at the Berkeley SETI Research Center announced they would stop distributing new data to SETI@home users at the end of March. It marks the culmination of an unprecedented 20-year experiment that engaged millions of people from almost every country on Earth. But all experiments must come to an end, and SETI@home is no exception. So far, the researchers at Berkeley have only been able to analyze small portions of the SETI@home data. They had to hit pause on the public-facing part of the experiment to dig into the full two decades of radio astronomy data they’ve collected and see what they might find.

“For 20 years, there’s been this fight between keeping the project running and getting the results out to the scientific community,” says Eric Korpela, the director of SETI@home. “At this point, we can’t even be sure that we haven’t found anything because we’ve been doing most of our data analysis on small test databases rather than the whole sky.”

Officially launched at Berkeley on May 17, 1999, the SETI@home initiative helped address one of the biggest challenges in the search for extraterrestrial intelligence: noise. Professional alien hunters are in the business of searching for weak radio signals in a vast sky washed out by interference from satellites, TV stations, and astrophysical phenomena like pulsars. That makes their work fundamentally a big data problem: they’re looking for a single signal sent by ET floating on an ocean of radio flotsam.
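
To see the shape of the problem, consider a minimal sketch in Python of the core idea (not SETI@home's actual pipeline): a deliberate narrowband beacon concentrates its power in a single frequency bin of a power spectrum, while noise spreads across all of them, so any bin that towers far above the noise floor is worth a second look. The sample rate, beacon frequency, and threshold below are illustrative assumptions.

```python
import numpy as np

def find_narrowband_spikes(samples, sample_rate_hz, threshold=25.0):
    """Flag frequency bins whose power rises far above the noise floor."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2       # power in each bin
    noise_floor = np.median(spectrum)                  # robust noise estimate
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs[spectrum > threshold * noise_floor]   # suspicious frequencies

# Demo: one second of noise with a faint sine "beacon" buried in it.
rng = np.random.default_rng(42)
rate = 1_000_000                                       # 1 MHz band (illustrative)
t = np.arange(rate) / rate
noise = rng.normal(size=rate)
beacon = 0.05 * np.sin(2 * np.pi * 123_456 * t)        # hypothetical ET signal
print(find_narrowband_spikes(noise + beacon, rate))    # prints ~[123456.]
```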

Filtering through all this data requires computing power—and lots of it. More processors crunching data from outer space means a more sensitive analysis of more signals. By borrowing unused processing power from personal computers around the world, SETI@home could plow through radio telescope data faster than ever before. When a computer was idle, the SETI@home program launched a screensaver that showed a field of colorful spikes representing signals collected by the Arecibo radio telescope in Puerto Rico as it scanned the cosmos. And for anyone who downloaded the software, it meant that if ET called Earth, it could very well be your own CPU that picked up the phone.
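
The architecture behind that screensaver was simple in outline: a central server carves the telescope recording into small "work units," and each volunteer machine fetches one, crunches it while idle, and sends any candidate signals back. Here is a hedged sketch of that loop; the server URL, endpoints, and field names are hypothetical stand-ins, not SETI@home's real protocol.

```python
import time
import requests  # third-party HTTP client, assumed installed

SERVER = "https://example.org/seti"  # hypothetical work server

def machine_is_idle():
    """Stand-in for a real idle check (screensaver running, low CPU load)."""
    return True

def analyze(samples):
    """Placeholder for the signal search itself (see the spectrum sketch above)."""
    return []

def run_client():
    while True:
        if not machine_is_idle():
            time.sleep(60)           # yield to the user's own work
            continue
        # 1. Fetch a small slice of telescope data (a "work unit").
        unit = requests.get(f"{SERVER}/work_unit").json()
        # 2. Analyze it locally while the machine would otherwise sit idle.
        candidates = analyze(unit["samples"])
        # 3. Report candidates back to the server for further vetting.
        requests.post(f"{SERVER}/results",
                      json={"unit_id": unit["id"], "candidates": candidates})
```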

It didn’t take long for the idea to catch on. SETI@home quickly grew into what the nonprofit Planetary Society, an early collaborator, has called its “most successful public participation project ever undertaken.” As WIRED reported in 2000, within months of SETI@home’s launch, more than 2.6 million people in 226 countries were volunteering their spare processing power to parse the mounds of data generated by alien-hunting radio telescopes. Together, they ran about 25 trillion calculations per second, making SETI@home more than twice as powerful as the best supercomputer in the world at the time.
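
The arithmetic behind those figures shows why the scheme worked: spread across 2.6 million machines, 25 trillion calculations per second comes to roughly 10 million per machine, a load a turn-of-the-millennium desktop could carry without breaking a sweat.

```python
total_rate = 25e12              # calculations per second, network-wide
volunteers = 2.6e6              # machines donating spare cycles
print(f"{total_rate / volunteers:,.0f} calc/s per machine")  # ~9,615,385
```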

“We didn’t anticipate how fast it would grow,” says Dan Werthimer, who helped create SETI@home and now serves as its chief scientist. “It grew exponentially and I think it’s because people are really excited about the question of whether we’re alone. It’s not very often that people can participate in a science project that has these sorts of profound implications.”

Over the last 20 years, the army of SETI@home screensavers has parsed billions of signals collected at Arecibo and selected those that seemed the most likely to have been generated by an extraterrestrial intelligence. Once parsed, the candidate signals were shipped off to Berkeley, where they were further processed to filter out interference from satellites, TV stations, and other terrestrial sources, matched against historical observations, and flagged for follow-up if warranted.
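
One of the simplest and most effective of those server-side checks can be sketched as follows; the record fields here are illustrative assumptions, not Berkeley's actual database schema. Interference from a satellite or TV station tends to show up no matter where the telescope points, while a genuine celestial source should recur at a fixed sky position across separate observations.

```python
from collections import defaultdict

def vet_candidates(candidates, min_revisits=2):
    """Keep candidates that recur at the same sky position over time.

    Each candidate is assumed to look like
    {"sky_pos": (ra_deg, dec_deg), "freq_hz": ..., "time": ...}.
    """
    by_position = defaultdict(list)
    for c in candidates:
        # Bin coordinates coarsely so repeat visits land in the same bucket.
        key = (round(c["sky_pos"][0], 1), round(c["sky_pos"][1], 1))
        by_position[key].append(c)
    vetted = []
    for group in by_position.values():
        # A real source should still be there on a later pass; one-off
        # detections are far more likely to be local interference.
        if len({c["time"] for c in group}) >= min_revisits:
            vetted.extend(group)
    return vetted
```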

In the early days of the SETI@home program, the connection at Arecibo wasn’t fast enough to push data onto the internet directly, so the SETI@home team recorded it onto 35-gigabyte tapes that were mailed to Berkeley and uploaded from there. Today, the data is piped over the internet to SETI@home’s servers in California, which are equipped with terabytes of storage to stage it for processing.

When the software stops pushing out new data to users at the end of March, the Berkeley SETI@home team will spend the next few months working through the backlog of results the program has generated. The team is small—there are only four full-time employees—and it has struggled to balance running the public-facing part of the SETI@home program with publishing research on the data it has collected. So far, the team has only been able to deeply analyze portions of the dataset. Getting a solid understanding of what it contains will require looking at all the data in aggregate.

“SETI@home volunteers only have access to 100 seconds of data from the telescope, so they can’t see this global picture over 20 years,” says Werthimer. “If you see an interesting signal in the sky, it needs to be there when you go back and look again. That’s what we’re going to be looking for.”

Although the public-facing portion of the SETI@home experiment may be coming to a close, Korpela says the project isn’t dead; it’s hibernating. After the data analysis is wrapped up, he says, SETI@home could be relaunched using data from other telescopes, such as the MeerKAT array in South Africa or the FAST telescope in China. Korpela says it would probably take a year or more to stand up a successor to the program’s first iteration, but he hasn’t ruled it out.

In the meantime, Breakthrough Listen will carry the torch for massive public-facing SETI projects. Founded in 2015 with a $100 million donation from the Russian-born billionaire Yuri Milner, Breakthrough Listen is dedicated to collecting and analyzing massive amounts of radio data in search of signs of extraterrestrial intelligence. Like SETI@home, Breakthrough is shepherded by the Berkeley SETI Research Center, but its data firehose would overwhelm a distributed computing program like SETI@home. Instead, it parses the data with massive banks of GPUs running advanced search algorithms on site at the Green Bank Telescope in West Virginia.

“Developing these new algorithms and bringing them on site is really the way to crack this problem today,” says Steve Croft, Breakthrough Listen’s project scientist at the Green Bank Telescope. “It’s just not feasible anymore to go over the internet to individual users.”

Each day, the telescopes around the world that contribute to Breakthrough Listen generate more than 100 terabytes of raw data. Even if there were enough people volunteering their computers to analyze it, the internet connections at the telescopes couldn’t push the data onto the net fast enough. As Croft says, it was time to “bring the computers to the data” and do as much processing of radio signals on site as possible.
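
A rough calculation makes the bottleneck concrete: shipping 100 terabytes out every day would demand a sustained link of nearly 10 gigabits per second, which is assumed here to be well beyond the uplink at a remote telescope site.

```python
bytes_per_day = 100e12                       # 100 TB of raw data daily
seconds_per_day = 24 * 60 * 60
gbit_per_s = bytes_per_day * 8 / seconds_per_day / 1e9
print(f"{gbit_per_s:.1f} Gbit/s sustained")  # ~9.3 Gbit/s
```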

Compared to SETI@home, Breakthrough is trading the sensitivity of its analysis for breadth. It’s searching for signals across billions of frequencies and millions of stars, but it isn’t as well equipped to recognize artificial signals that fall outside its search parameters. Still, Breakthrough is taking a page out of SETI@home’s book and making as much of its search data as possible available to the public. In February, the initiative put a whopping 2 petabytes of data online for anyone to use in their own search for intelligent signals. At the same time, Croft says, the initiative is working on moving its data analysis to the cloud, which will allow it to run sophisticated machine learning algorithms over the data without having to build out its own bespoke alien-hunting hardware.

“SETI@home was kind of a pioneer for cloud computing by doing processing on other people’s computers,” Croft says. “Cloud computing is just a glorified version of that.”

That two of the biggest SETI research programs in history have embraced public contributions to data analysis bodes well for the future of the search for intelligent life beyond Earth. If we do hear from ET, it won’t be a conversation between individuals, but between entire species. SETI@home was the first planet-wide effort to connect with our galactic neighbors. Now we just have to keep sifting through the data to see if anyone left us a message.

Article: https://www.wired.com/story/setihome-is-over-but-the-search-for-alien-life-continues/