The Mersivity Symposium, held last December, brought together experts from diverse fields to discuss the integration of humanistic artificial intelligence (AI) and advanced technologies. The event featured researchers, authors, economists, and innovators from prestigious institutions, including the University of Toronto (U of T), the Massachusetts Institute of Technology (MIT), and the Institute of Electrical and Electronics Engineers (IEEE).
Translating thoughts into words
One of the symposium’s highlights was a presentation by two young researchers from MIT, Akarsh Aurora and Akash Anand. They painted a vision of a future where thoughts could be directly translated into words, eliminating the need for traditional input methods like a keyboard and mouse.
Imagine a world where turning on the lights or changing TV channels requires only a thought. This technology holds profound promise, particularly for individuals with speech impediments or those who have lost the ability to communicate. The research shared by Aurora and Anand provided a glimpse into how such a futuristic endeavor might be realized.
From brain signals to words
To convert thoughts into words, researchers must first capture the raw data of thoughts, which primarily consist of electrical signals in the brain. Traditional methods involve implanting electrodes directly into the brain, but this approach comes with drawbacks, such as interfering with brain tissue and requiring periodic replacement.
An alternative, non-invasive approach involves using brain-computer interfaces (BCIs) like the electroencephalogram (EEG). An EEG, resembling a cap with wires and electrodes, is placed on the scalp to detect electrical brain activity without invasive procedures.
Deciphering EEG signals to reconstruct words
Aurora and Anand’s research involved recording EEG data from participants while they listened to the first chapter of “Alice in Wonderland.” The goal was to reconstruct the words participants heard solely from EEG data.
However, converting an EEG signal into words is no easy feat. EEG data is prone to “noise” from various sources, such as blinking. To tackle this challenge, the researchers used AI to clean up the EEG signals by applying filters.
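To make that filtering step concrete, the sketch below shows one common way to suppress such noise: a band-pass filter applied to each EEG channel. This is an illustration built on standard SciPy tools, not the researchers’ actual pipeline, and the sampling rate and frequency band are assumed values.

```python
# Hypothetical sketch of EEG denoising with a band-pass filter (not the
# researchers' actual pipeline): keep a band commonly used in EEG work and
# suppress slow drift and high-frequency noise such as muscle artifacts.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(eeg, fs, low=0.5, high=40.0, order=4):
    """Band-pass filter one or more EEG channels.

    eeg: array of shape (channels, samples)
    fs:  sampling rate in Hz (assumed; the real rate depends on the headset)
    """
    nyquist = fs / 2.0
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    # filtfilt runs the filter forward and backward, avoiding phase distortion
    return filtfilt(b, a, eeg, axis=-1)

# Synthetic data standing in for a real recording: 8 channels, 10 seconds
fs = 256  # Hz, assumed
raw = np.random.randn(8, fs * 10)
clean = bandpass_eeg(raw, fs)
```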
Their approach included using a pre-trained AI model to generate candidate word sequences based on EEG data. The AI-generated sequence was considered accurate if it matched the sequence the participant heard during the recording.
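As a simplified illustration of how a candidate sequence might be compared against the text a participant heard, the snippet below scores candidates by word-level agreement with the reference sentence. The candidates, reference text, and scoring rule are hypothetical stand-ins; in the actual study, the candidates would come from a pre-trained model conditioned on EEG features.

```python
# Simplified illustration (not the MIT team's code) of checking AI-generated
# candidate word sequences against the sentence a participant actually heard.

def word_accuracy(candidate: str, reference: str) -> float:
    """Fraction of positions where the candidate word matches the reference word."""
    cand_words = candidate.lower().split()
    ref_words = reference.lower().split()
    matches = sum(c == r for c, r in zip(cand_words, ref_words))
    return matches / max(len(ref_words), 1)

# Hypothetical reference and candidates for illustration only
reference = "alice was beginning to get very tired"
candidates = [
    "alice was starting to get very tired",
    "alice was beginning to get very tired",
]

best = max(candidates, key=lambda c: word_accuracy(c, reference))
print(best, word_accuracy(best, reference))  # an exact match scores 1.0
```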
While the results aren’t yet perfectly accurate, they represent a significant step toward translating neural signals into continuous language.
The symposium wasn’t solely focused on brain-computer interfaces. Artist Reid Godshaw showcased how AI can be creatively harnessed to amplify artistic expression. He displayed a series of impactful artworks created with AI to raise awareness about veganism. These digital artworks emphasized the similarities between animals and humans while shedding light on the ethical concerns within the meat industry.
Revolutionizing accessibility with the “Freehicle” concept
Steve Mann, an engineer and U of T professor, introduced the concept of a “Freehicle” during his presentation. This revolutionary idea envisions a vehicle that can adapt to both land and water, emphasizing accessibility and improved quality of life, especially for individuals with disabilities.
The Mersivity Symposium served as a platform for groundbreaking research and discussions at the intersection of technology and humanity. The integration of AI and advanced technologies, such as brain-computer interfaces, demonstrates the potential for positive change in various fields.
The symposium attendees departed with a sense that they were on the precipice of significant technological advancements that could reshape how we interact with the world. The popular internet meme “The future is now, old man” captures the rapid pace of innovation showcased at this forward-looking event.