An AR experience that transforms your sounds into beautiful visuals.
Make sounds and watch them come alive on your screen in real time.
We need access to your microphone to hear sounds.
EchoSpace uses your microphone to detect audio and drive visuals.
Your audio is processed locally and is never stored.
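If EchoSpace is built as a typical iOS app in Swift (an assumption; the app's actual code is not shown here), the prompt above corresponds to the NSMicrophoneUsageDescription entry in Info.plist, and access could be requested with a small helper like this sketch. The helper name is illustrative.

    import AVFoundation

    // Illustrative helper: requests microphone access before audio capture starts.
    // The system prompt displays the usage description above
    // (the NSMicrophoneUsageDescription value in Info.plist).
    func requestMicrophoneAccess(completion: @escaping (Bool) -> Void) {
        AVAudioSession.sharedInstance().requestRecordPermission { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    }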
We need access to your camera for the AR overlay.
EchoSpace overlays sound visuals onto your real-world camera view.
Your camera feed stays on-device and is only used for display.
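Likewise, assuming an iOS/Swift implementation, the camera prompt maps to NSCameraUsageDescription in Info.plist, and access could be requested with a sketch like the following (helper name illustrative):

    import AVFoundation

    // Illustrative helper: requests camera access before the AR session starts.
    // The system prompt displays the usage description above
    // (the NSCameraUsageDescription value in Info.plist).
    func requestCameraAccess(completion: @escaping (Bool) -> Void) {
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    }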
Pick a mode in Settings to change the visual pattern.
Soft particles that pulse with speech and pitch texture
Waves that snap and surge on rhythmic hits
A center ring that expands and contracts with loudness
Quiet rooms produce cleaner, more readable patterns.
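One way to model the three modes, sketched in Swift under the assumption that the app computes per-frame audio features (the case names and feature parameters below are illustrative, not taken from the app's code):

    // Illustrative mode model: each visual mode reacts to a different audio feature.
    enum VisualMode: String, CaseIterable {
        case particles  // soft particles pulse with speech and pitch texture
        case waves      // waves snap and surge on rhythmic hits (onsets)
        case ring       // center ring expands and contracts with loudness

        // Maps one frame of analyzed audio to a 0...1 intensity for this mode.
        // pitchStrength, onsetStrength, and loudness are assumed app-side features.
        func intensity(pitchStrength: Float, onsetStrength: Float, loudness: Float) -> Float {
            switch self {
            case .particles: return min(max(pitchStrength, 0), 1)
            case .waves:     return min(max(onsetStrength, 0), 1)
            case .ring:      return min(max(loudness, 0), 1)
            }
        }
    }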
Choose a theme in Settings for different palettes.
Blue hues
Orange & Yellow hues
Green hues
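A theme could be modeled as a small palette of SwiftUI colors; the theme names and exact colors below are placeholders, not the app's actual labels:

    import SwiftUI

    // Illustrative theme model: each theme supplies a palette of related hues.
    enum VisualTheme: String, CaseIterable {
        case ocean   // blue hues
        case sunset  // orange and yellow hues
        case forest  // green hues

        var palette: [Color] {
            switch self {
            case .ocean:  return [.blue, .cyan, .teal]
            case .sunset: return [.orange, .yellow, .red]
            case .forest: return [.green, .mint]
            }
        }
    }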
Control how strongly visuals react to sound.
You can adjust this anytime in Settings.
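A simple way to implement a setting like this, assuming the value is persisted in UserDefaults (the key and type names are illustrative):

    import Foundation

    // Illustrative sensitivity setting: a user-adjustable multiplier applied to the
    // measured audio level before it drives the visuals.
    struct Sensitivity {
        static let key = "visualSensitivity"   // placeholder key name

        // 0.5 = subtle, 1.0 = default, 2.0 = very reactive.
        static var value: Float {
            get {
                let stored = UserDefaults.standard.float(forKey: key)
                return stored == 0 ? 1.0 : stored   // treat "unset" as the default
            }
            set { UserDefaults.standard.set(newValue, forKey: key) }
        }

        // Scales a raw 0...1 audio level and clamps the result back to 0...1.
        static func apply(to level: Float) -> Float {
            min(max(level * value, 0), 1)
        }
    }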
Ready to start creating sound visuals.
Experiment with different sounds and discover unique patterns.
Clap or speak to preview how EchoSpace responds.
AI runs on-device to detect common sounds (e.g., sirens and alarms). No audio is stored.
Sound Detected
We detected an important sound nearby.
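On Apple platforms, this kind of on-device detection can be built with the SoundAnalysis framework and its built-in sound classifier. The sketch below assumes that approach; the label strings and callback name are illustrative, not the app's verified implementation.

    import AVFoundation
    import SoundAnalysis

    // Minimal sketch of on-device sound classification. Audio buffers are analyzed
    // in memory and never written to disk.
    final class ImportantSoundDetector: NSObject, SNResultsObserving {
        private let engine = AVAudioEngine()
        private var analyzer: SNAudioStreamAnalyzer?

        var onImportantSound: ((String) -> Void)?   // placeholder callback into the UI

        func start() throws {
            let input = engine.inputNode
            let format = input.outputFormat(forBus: 0)
            let analyzer = SNAudioStreamAnalyzer(format: format)
            let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
            try analyzer.add(request, withObserver: self)
            self.analyzer = analyzer

            // Stream microphone buffers straight into the analyzer.
            input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, when in
                analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)
            }
            try engine.start()
        }

        func request(_ request: SNRequest, didProduce result: SNResult) {
            guard let result = result as? SNClassificationResult,
                  let top = result.classifications.first,
                  top.confidence > 0.8 else { return }
            // "siren" and "smoke_detector" are example identifiers, not a verified list.
            if ["siren", "smoke_detector"].contains(top.identifier) {
                DispatchQueue.main.async { self.onImportantSound?(top.identifier) }
            }
        }
    }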