With each new update to our Wisdom-SDK, we are making your journey to developing end-to-end brain-computer interface (BCI) applications and solutions that much easier.
At NexStem, we believe in continual development, which for our customers and partners translates into a solution that is constantly shaped by the feedback you provide and the technical opportunities our team identifies.
So, what new features and modules have we added to our Wisdom-SDK Webapp V1? Here is a sneak peek of our new modules, which include device management, subscription management, visualization, experiments, algorithms, and deploy.
The addition of the device management module lets a user list the headsets they own and connect or disconnect each device simply by selecting it in the drop-down menu. Another cool feature is that your devices will immediately start streaming data if appropriately configured. If a device isn't configured correctly, errors and required user actions are surfaced as toast messages, ensuring feedback is received in real time and action can be taken.
With an easy-to-navigate subscription management module, a user can subscribe to any NexStem payment or subscription plan and upgrade or downgrade as required, all from one centralized place. This also extends to customers who want to cancel their subscription.
A standalone feature of the Wisdom-SDK Webapp, the visualization module supports real-time data streaming and the recording of data directly from the device. The module currently supports four visualization charts: Time Series Plot, Fast Fourier Transform (FFT) Plot, Bandpower Plot, and Time-Frequency Spectrograph. Users can update plots on the go while the stream is still being plotted.

Central to this module is ease of use: a user can quickly select or deselect the channels to be plotted and apply signal processing filters such as Low-pass, High-pass, Band-pass, and Band-stop in real time. Streaming data can also be recorded, and the collected data can be downloaded once the recording is complete, giving users access to their raw data.

Playback functionality has also been added. The configuration applied during recording (channels, filters) is carried over automatically, and users can play back recordings and modify the applied configuration while a recording is being played back.
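To make the Bandpower and FFT plots concrete, here is a minimal NumPy sketch of how average band power can be estimated from the FFT of a raw signal. This is an illustrative calculation only, not the Wisdom-SDK's API; the sampling rate, band edges, and synthetic signal are assumptions chosen for the example.

```python
import numpy as np

def bandpower(signal, fs, band):
    """Estimate the average power of `signal` within a frequency `band` (Hz) via FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Synthetic 10 Hz sine sampled at 250 Hz as a stand-in for an EEG channel
fs = 250
t = np.arange(fs * 4) / fs  # 4 seconds of samples
sig = np.sin(2 * np.pi * 10 * t)

alpha = bandpower(sig, fs, (8, 13))   # alpha band contains the 10 Hz tone
beta = bandpower(sig, fs, (13, 30))   # beta band does not
```

Since the synthetic tone sits at 10 Hz, nearly all of its power lands in the alpha band, which is the kind of contrast the Bandpower Plot surfaces per channel.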
The first module in our BCI pipeline, the experiment module, helps users record data while a subject wearing the headset performs a BCI experiment. Supported experiments include Motor Imagery, SSVEP, and Facial Expression Detection, with custom experiments due for imminent release.

A typical application of the experiment module begins with a user designing an experiment of their choice, selecting the type of experiment, and defining how many classes it should support. Users can set some standard information upfront, such as duration and phases, including the Cue, Action, and Rest phases. The Cue phase prepares the subject and presents the instruction to be performed; in the Action phase, the subject performs the instruction; and in the Rest phase, the subject relaxes and prepares for the next instruction. The actions associated with each phase are carried out for each class, and developers can configure the shape and color of the on-screen element and the animations performed at each step in the process.

Once the experiment is configured in the experiment module, it moves to the preview stage, where developers can preview it as many times as needed before finalizing it. Once finalized, no further changes can be made, and the experiment can be run. The data generated is recorded in real time and can be downloaded or used in the Algorithms module.
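The per-class Cue/Action/Rest structure described above can be pictured as a simple timeline. The sketch below is hypothetical pseudocode for that idea, not the Wisdom-SDK API; the class names and phase durations are invented for illustration.

```python
# Hypothetical sketch (not the Wisdom-SDK API): expand an experiment definition
# into a per-class sequence of Cue, Action, and Rest phases with start times.
def build_timeline(classes, cue_s=2.0, action_s=4.0, rest_s=2.0):
    timeline, t = [], 0.0
    for cls in classes:
        for phase, dur in (("cue", cue_s), ("action", action_s), ("rest", rest_s)):
            timeline.append({"class": cls, "phase": phase, "start": t, "duration": dur})
            t += dur
    return timeline

# Two-class Motor Imagery-style experiment: each class gets its own Cue/Action/Rest cycle
trials = build_timeline(["left_hand", "right_hand"])
```

Each class contributes one full Cue/Action/Rest cycle, mirroring how the module runs the configured phases for every class in turn.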
The second module in our BCI pipeline, the algorithm module, takes the recorded data from an experiment and applies machine learning models to it. The first step in this process is to create a new training job and provide basic information such as the name, the type of experiment, and the algorithm.

Once this is done, a user can edit or change the algorithm's hyperparameters and initiate the training job. Each training job undergoes several stages and dynamically updates its status in the training job list. If the training job succeeds, an ML model is generated and ready to be deployed.
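Conceptually, a training job pairs recorded feature windows with class labels and fits a model. The following is a deliberately minimal stand-in, not the Wisdom-SDK's training pipeline: it uses synthetic features and a nearest-centroid classifier purely to illustrate the experiment-data-to-model step.

```python
import numpy as np

# Hypothetical sketch (not the Wisdom-SDK API): synthetic stand-ins for
# feature windows recorded from a two-class experiment.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),   # class 0 features
               rng.normal(3.0, 1.0, (50, 4))])  # class 1 features
y = np.array([0] * 50 + [1] * 50)

# Nearest-centroid "model": about the simplest artifact a training job could emit
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Assign x to the class whose centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

accuracy = np.mean([predict(x) == label for x, label in zip(X, y)])
```

In the real module, the algorithm and its hyperparameters are chosen per training job, and the resulting model is what the deploy module later wraps in a model operator.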
The final and critical step in BCI pipeline development is the deploy module. Here a user can combine multiple data-stream processing operators, including the model operator that wraps the trained model, to create a deployment pipeline. A user can activate or deactivate the deployment pipeline from a central environment in this module.

When a deployment pipeline is activated, a user can access all streamed intermediate and final data through the model operator, which connects to the end-user applications being developed via WebSockets.
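On the application side, consuming the pipeline boils down to decoding frames pushed over a WebSocket. The sketch below only shows the decoding step; the message shape and field names are assumptions for illustration, not the documented Wisdom-SDK protocol, and a real client would first open a connection with a WebSocket library (e.g. Python's `websockets` package).

```python
import json

# Hypothetical sketch: the frame format below is an assumption, not the
# documented Wisdom-SDK protocol. It illustrates decoding one JSON frame
# pushed by a deployment-pipeline operator over a WebSocket.
def parse_frame(raw: str):
    """Split one JSON frame into its source operator and payload."""
    frame = json.loads(raw)
    return frame["operator"], frame["payload"]

# Simulated frame, as the model operator might emit after a classification
raw = json.dumps({"operator": "model",
                  "payload": {"class": "left_hand", "confidence": 0.92}})
op, payload = parse_frame(raw)
```

An end-user application would run this decoding inside its WebSocket receive loop, routing frames from intermediate operators and the model operator to the appropriate UI or control logic.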
If we were to summarize these additions to our Wisdom-SDK, we would call them a bold step towards making BCI development that much easier, simplifying the steps developers need to take, and bringing concepts from the design board into applications that much quicker!