We are delighted to announce a series of enhancements in version 1.5 of the Wisdom-SDK, designed to advance "no-code" brain-computer interface (BCI) development. These are not subtle tweaks but significant improvements and new features that dramatically reduce the time it takes you to get from concept design to a working BCI application.
The changes span no-code deployment modules, visualizations, additional filters, and plotting improvements. They are designed to reduce your time to application, lower the barrier to entry, improve ease of use, cut overall development costs, and let you focus on core development rather than on writing your own signal processing and machine learning algorithms.
The most notable improvement is full support for no-code builds. Our deployment module has been updated to let you assemble ever-larger pipelines, add multiple algorithms, downsample data, transform events, and change data recording or collection intervals, all in real time and without writing a single line of code.
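To make the downsampling step concrete: the sketch below shows, in plain Python with SciPy, the kind of anti-aliased downsampling a pipeline stage like this performs. It does not use the Wisdom-SDK API (which requires no code at all); the sampling rate and factor are illustrative.

```python
import numpy as np
from scipy.signal import decimate

fs = 1000       # original sampling rate in Hz (illustrative)
factor = 4      # downsample 1000 Hz -> 250 Hz

rng = np.random.default_rng(42)
eeg = rng.standard_normal(fs * 2)   # 2 s of synthetic single-channel data

# decimate() low-pass filters before discarding samples, which avoids
# aliasing; a naive eeg[::factor] slice would not.
downsampled = decimate(eeg, factor, zero_phase=True)
```

The key design point is the anti-aliasing filter: simply dropping samples would fold high-frequency content back into the EEG bands of interest.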
These changes also let you create real-world BCI applications from live data. For example, a user can take the output of a specific band power algorithm and use it as the input to an application, which is then controlled by that band output. In a virtual reality application, subtle changes in the alpha and beta bands can be mapped to commands to move around, pick up objects, and so forth.
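As a rough illustration of the band-power-to-command idea, here is a minimal sketch using Welch's method from SciPy. The function names, band edges, and the "move"/"idle" mapping are all hypothetical and chosen for illustration; this is not the Wisdom-SDK's interface.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Integrated power spectral density between lo and hi Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)  # 1-second windows
    mask = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df

def to_command(signal, fs):
    """Map relative alpha/beta power to an application command."""
    alpha = band_power(signal, fs, 8, 12)   # alpha band, 8-12 Hz
    beta = band_power(signal, fs, 13, 30)   # beta band, 13-30 Hz
    return "move" if beta > alpha else "idle"

# A synthetic 20 Hz (beta-band) oscillation maps to "move".
fs = 256
t = np.arange(0, 4, 1 / fs)
print(to_command(np.sin(2 * np.pi * 20 * t), fs))
```

In a real application the thresholding would be calibrated per user rather than being a raw alpha-versus-beta comparison, but the control flow is the same: spectral feature in, command out.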
We have also made developing base applications significantly simpler and faster. You no longer need to worry about synchronizing data, developing signal processing algorithms, or running ML inference, as everything works within a pipeline. You can also run parallel pipelines together in real time to perform parallel analyses and, in turn, use different imagery for different application use cases simultaneously. These updates remove the headaches of the EEG portion of BCI development.
On the visualization front, we have created a series of pre-built filters that you can adapt and apply directly to your data. These filters remove noise from EEG data in real time. Your developers can now create and update filters without writing a single line of code, all in real time, while plotting and engaging with the EEG data.
Critically, filters enable a user to remove predefined, unwanted interference from raw EEG data, such as mains line noise (50 Hz or 60 Hz depending on your geography) and other unwanted frequency bands. These pre-built filters "clean" signals for improved data and signal processing.
We have also increased the number of data plots developers can work with and made data plotting considerably faster. You can now see over 80,000 data points in real time without any lag, and alongside our real-time plot we have added fast Fourier transform, band power, and spectrogram plots.
With this update, once you have created your filter, you click "update plot" and the filter is applied to the data in real time. Previously, you may have had to record the data first and filter out environmental noise after the fact. By applying filters in real time with the Wisdom-SDK, you can immediately view EEG data without interference, even while it is being acquired, and then record and download the filtered data. As a result, you can analyze filtered data in real time, without writing a single line of code and without post-processing in MATLAB, and get access to insights and information immediately.
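The reason filtering during acquisition works at all is that a causal filter can carry its internal state from one chunk of samples to the next. The sketch below demonstrates that principle with SciPy's `lfilter`: filtering a stream chunk by chunk, with state passed along, produces exactly the same output as filtering the whole recording at once. The chunk size, band edges, and function name are illustrative, not part of the Wisdom-SDK.

```python
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

fs = 250.0
# 1-40 Hz causal band-pass, usable while data is still being acquired.
b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)

def process_chunk(chunk, state):
    """Filter one incoming chunk, carrying filter state across chunks."""
    out, state = lfilter(b, a, chunk, zi=state)
    return out, state

rng = np.random.default_rng(7)
signal = rng.standard_normal(1000)          # stand-in for streamed EEG

state = lfilter_zi(b, a) * 0.0              # start the filter at rest
chunks = []
for start in range(0, len(signal), 100):    # 100-sample "acquisition" chunks
    out, state = process_chunk(signal[start:start + 100], state)
    chunks.append(out)
streamed = np.concatenate(chunks)
```

This state-carrying trick is what lets a real-time system show clean data immediately instead of waiting for the full recording.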
Look out for our upcoming enhancements and modules, including the finalized and enhanced experiment and machine learning/analytics modules. We will be announcing these in the next month.