Music of the spheres


Think of it as cosmic synesthesia. Listen to a picture of the galaxy: sounds that jingle and chime, bellow and creak, in a mix that sounds at times vaguely New Age, at others like John Cage performing amid a forest nor’easter.

In a new project to make images of space more accessible, Kimberly Kowal Arcand, a visualization researcher from the Center for Astrophysics | Harvard & Smithsonian, and a team of scientists and sound engineers worked with NASA to turn images of the cosmos into music. The team uses a new technique called data sonification that takes the information captured from space telescopes and translates it into sound.

Already, the project has turned some of the most famous deep-space images, like the center of the Milky Way and the “Pillars of Creation,” into music. The main goal is to make the visuals more accessible to people who are blind or visually impaired, said Arcand, who has made accessibility a focus of her career. She works on virtual reality and 3D simulations that can help the visually impaired experience space through other senses, such as touch. She has found that this work also helps able-bodied people experience images in new ways.

“When you help make something accessible for a community, you can help make it accessible for other communities as well. That kind of more rounded thought into the design of the materials and the intended outcomes for audiences can have very positive benefits,” Arcand said.

The audio interpretations are created by translating the data from the telescopes into notes and rhythms. In some cases, the data are assigned specific musical instruments, like piano, violin, and bells. The melodies are harmonized to produce something cohesive and enjoyable. The different volumes, pitches, and drones are based on factors such as brightness, distance, and what the image depicts. Brighter stars, for instance, get more intense notes, and the sound depicting an exploding star grows loud, then slowly calms as the blast wave diminishes.
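The mapping described above can be sketched in code. The following is a minimal, hypothetical illustration of the general idea — scanning across an image and turning bright pixels into notes, with vertical position mapped to pitch and brightness mapped to volume. All function names, thresholds, and mappings here are illustrative assumptions, not the team's actual pipeline.

```python
# Hypothetical sketch of image sonification: scan left to right,
# mapping each column's bright pixels to (pitch, volume) pairs.

def sonify_pixel(brightness, y_position, height):
    """Map one bright pixel to a (pitch, volume) pair.

    brightness: 0-255 pixel value; brighter pixels play louder.
    y_position: row index; pixels higher in the image get higher pitch.
    """
    # Map vertical position to a MIDI-style note number (36 = C2 .. 96 = C7).
    pitch = 36 + round((1 - y_position / height) * 60)
    # Map brightness to volume (0.0 = silent .. 1.0 = full).
    volume = brightness / 255
    return pitch, volume

def sonify_image(pixels):
    """Scan columns left to right; each column is one time step of notes."""
    height = len(pixels)
    timeline = []
    for x in range(len(pixels[0])):
        notes = [
            sonify_pixel(pixels[y][x], y, height)
            for y in range(height)
            if pixels[y][x] > 64  # only bright-enough pixels are audible
        ]
        timeline.append(notes)
    return timeline

# Example: a tiny 3x4 "image" with two stars of different brightness.
image = [
    [0, 0, 200, 0],
    [0, 0, 0, 0],
    [90, 0, 0, 0],
]
timeline = sonify_image(image)
```

In a real pipeline, each telescope's layer (X-ray, optical, infrared) would be sonified separately, assigned its own instrument, and mixed — which is what allows the layers to be heard together or as solos.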

“It is a way of taking the data and trying to mimic the various parts of the image into sounds that would express it, but also still be pleasing together so that you could listen to it as one mini composition, one mini symphony,” Arcand said.

Many of the renditions combine data from multiple telescopes, including NASA’s Chandra X-ray Observatory (which is headquartered at the CfA), the Hubble Space Telescope, and the Spitzer Space Telescope. The devices capture different types of light — X-ray or infrared, for instance — so the data in these sonifications produce different types of sound. These separate layers can be listened to together or as solos.

So far, the team has released two installments from the data sonification project. Here are a few examples.

Source: The Harvard

