App
I should be able to interface with the BLE functionality of the CC2650, so that I can gather data for training and prediction (see the connection sketch below)
I should be able to navigate to two separate screens for training and prediction (autoplay), so that the separation of functionality is clear.
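A minimal connection sketch for the BLE story above, assuming a Python client built on the `bleak` library. The movement-service UUIDs come from TI's published SensorTag GATT profile; the device address and the 10 s collection window are placeholders.

```python
import asyncio
from bleak import BleakClient

# TI CC2650 SensorTag movement service (gyro + accel + mag),
# per TI's SensorTag GATT profile.
MOVEMENT_DATA   = "f000aa81-0451-4000-b000-000000000000"
MOVEMENT_CONFIG = "f000aa82-0451-4000-b000-000000000000"
MOVEMENT_PERIOD = "f000aa83-0451-4000-b000-000000000000"

ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder: your tag's BLE address

def on_movement(_sender, data: bytearray):
    # 9 little-endian int16s: gyro xyz, accel xyz, mag xyz (raw units).
    sample = [int.from_bytes(data[i:i + 2], "little", signed=True)
              for i in range(0, 18, 2)]
    print("movement:", sample)

async def main():
    async with BleakClient(ADDRESS) as client:
        # 0x007F (low byte first) enables all movement sensors.
        await client.write_gatt_char(MOVEMENT_CONFIG, bytes([0x7F, 0x00]))
        # Period is in 10 ms units; 0x0A = one notification every 100 ms.
        await client.write_gatt_char(MOVEMENT_PERIOD, bytes([0x0A]))
        await client.start_notify(MOVEMENT_DATA, on_movement)
        await asyncio.sleep(10)  # collect for 10 s, then disconnect

asyncio.run(main())
```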
I should be able to initiate and stop motion training (running, walking, ...) at any time with a button
I should be able to sample and collect motion data into a temporary store
The activity should be selectable as a dropdown / secondary popup / etc. @Florence-100
I should be able to stop training with the same button, and call the motion training API (API not done; flow sketched below) @ang-zeyu
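A sketch of the start/stop flow above, assuming the app publishes over MQTT via `paho-mqtt`; the topic name and payload shape are placeholders, not the agreed API.

```python
import json
import time

import paho.mqtt.client as mqtt

# Hypothetical topic name; the real motion-training topic is defined by the backend.
MOTION_TRAIN_TOPIC = "app/train/motion"

class MotionTrainer:
    """Buffers sensor samples in a temporary store between the start and
    stop button presses, then sends one training call."""

    def __init__(self, broker_host: str):
        self.samples = []
        self.activity = None
        self.client = mqtt.Client()          # paho-mqtt 1.x style constructor
        self.client.connect(broker_host)
        self.client.loop_start()             # background network thread

    def start(self, activity: str):
        self.activity = activity             # chosen from the activity dropdown
        self.samples = []

    def on_sample(self, sample):
        # Wire this to the BLE notification callback from the sketch above.
        self.samples.append({"t": time.time(), "v": sample})

    def stop(self):
        # One motion training call: the collected window plus the activity label.
        payload = json.dumps({"activity": self.activity, "samples": self.samples})
        self.client.publish(MOTION_TRAIN_TOPIC, payload)
```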
I should be able to source training and autoplay files from a folder on my phone
I should be able to populate song moods of training music files with a hardcoded .json in the app
I should be able to populate song moods of (prediction) music files with an API call
I should be able to see song moods of loaded songs in the UI, for both training songs and prediction songs
I should have a UI to call an API that POSTs a song title -> mood mapping (see the sketch below)
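A sketch of the three mood-population stories above, assuming Python with `requests`; the `MOODS_URL` endpoint, the bundled JSON path, and the payload shapes are hypothetical placeholders for the real backend API.

```python
import json

import requests

# Hypothetical endpoint; the real moods API lives on the EC2 backend.
MOODS_URL = "http://backend.example.com/moods"

def load_training_moods(path="assets/training_moods.json") -> dict:
    """Training songs: moods come from a JSON file hardcoded into the app."""
    with open(path) as f:
        return json.load(f)            # e.g. {"songA.mp3": "calm", ...}

def fetch_prediction_moods(titles: list[str]) -> dict:
    """Prediction songs: moods come from the backend moods API."""
    resp = requests.get(MOODS_URL, params={"titles": ",".join(titles)})
    resp.raise_for_status()
    return resp.json()

def post_mood_mapping(title: str, mood: str):
    """The POST UI: push a new song title -> mood mapping to the backend."""
    requests.post(MOODS_URL, json={"title": title, "mood": mood}).raise_for_status()
```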
I should be able to initiate song training (play randomised songs from folder) at any time with a button
I should be able to have the sensor collect data for a short window (3-5 s) at the start of each song (done)
When I skip a song before it has been completed, I will send a training API call with an "inverted" flag that inverts the moods
When I let a song play to completion, I will send a normal training API call (the loop is sketched below)
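One way the song training loop could look. `collect_window` and `play_song` are hypothetical hooks standing in for the BLE sampler and the music player, the topic name is a placeholder, and `client` is a connected paho-mqtt client.

```python
import json
import os
import random

SONG_TRAIN_TOPIC = "app/train/song"   # hypothetical topic name

def song_training_session(folder, moods, collect_window, play_song, client):
    """Play songs from `folder` in random order; `play_song` blocks until
    the song ends and returns False if the user skipped it."""
    songs = [f for f in os.listdir(folder) if f.endswith(".mp3")]
    random.shuffle(songs)
    for song in songs:
        # Simplified as sequential here; in the app the 3-5 s sensor window
        # is collected while the song starts playing.
        samples = collect_window(3.0)
        completed = play_song(os.path.join(folder, song))
        client.publish(SONG_TRAIN_TOPIC, json.dumps({
            "mood": moods[song],
            "samples": samples,
            # A skip means the song didn't fit, so the backend trains
            # against the inverse of the song's moods.
            "inverted": not completed,
        }))
```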
I should be able to initiate song autoplay mode (driven by sensor IoT data) on the prediction screen with a button
I should be able to have the sensor collect data for a short window (3-5 s) at the start of each song (half done)
So that I can call the prediction API to get a song mood recommendation (sketched below)
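A corresponding autoplay sketch using paho-mqtt's `publish.single` / `subscribe.simple` helpers. The broker host, both topic names, and the `collect_window` hook are placeholders; a real client would subscribe before publishing so the reply cannot be missed.

```python
import json

from paho.mqtt import publish, subscribe

BROKER = "backend.example.com"         # placeholder broker host
PREDICT_TOPIC = "app/predict"          # hypothetical request topic
PREDICT_REPLY = "app/predict/reply"    # hypothetical reply topic

def autoplay_step(collect_window, songs_by_mood):
    """One autoplay iteration: sample the sensor, ask the backend for a
    mood, then queue a song of that mood."""
    samples = collect_window(3.0)                    # 3-5 s of sensor data
    publish.single(PREDICT_TOPIC, json.dumps({"samples": samples}),
                   hostname=BROKER)
    reply = subscribe.simple(PREDICT_REPLY, hostname=BROKER)  # block for answer
    mood = json.loads(reply.payload)["mood"]
    return songs_by_mood[mood].pop()                 # next song to play
```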
(irrelevant - no longer using labels) I should be able to populate song labels of music files with an API call
Backend
I should be able to administer EC2 instances, for serving the backend APIs and training the ML models
I should support a motion training MQTT API that takes sensor inputs over some period, plus the activity label
I should support a song training MQTT API that takes sensor inputs over a short period, plus the song mood
I should support a song prediction API that takes sensor inputs over a short period and recommends a song mood (a dispatcher sketch follows)
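A sketch of how the backend could dispatch these three MQTT APIs with `paho-mqtt`. The topic names mirror the placeholders used in the app sketches above, and `train_motion`, `train_song`, and `predict_mood` are stubs standing in for the real ML hooks.

```python
import json

import paho.mqtt.client as mqtt

def train_motion(samples, activity): ...        # stub: real ML hook lives elsewhere
def train_song(samples, mood, inverted): ...    # stub
def predict_mood(samples): return "calm"        # stub

def on_message(client, userdata, msg):
    body = json.loads(msg.payload)
    if msg.topic == "app/train/motion":
        train_motion(body["samples"], body["activity"])
    elif msg.topic == "app/train/song":
        train_song(body["samples"], body["mood"], body["inverted"])
    elif msg.topic == "app/predict":
        mood = predict_mood(body["samples"])
        client.publish("app/predict/reply", json.dumps({"mood": mood}))

client = mqtt.Client()                   # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost")              # broker runs on the EC2 instance
for topic in ("app/train/motion", "app/train/song", "app/predict"):
    client.subscribe(topic)
client.loop_forever()
```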
I should be able to store song moods of music files with an API call
I should be able to retrieve song moods of music files with an API call (both endpoints are sketched below)
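A minimal Flask sketch of the store/retrieve endpoints, assuming an in-memory store; the route and payload shapes are placeholders, and a real deployment would persist the mapping.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
moods = {}   # in-memory title -> mood store; persist this in production

@app.post("/moods")
def store_mood():
    body = request.get_json()
    moods[body["title"]] = body["mood"]
    return jsonify(ok=True)

@app.get("/moods")
def get_moods():
    # e.g. GET /moods?titles=songA.mp3,songB.mp3
    titles = request.args.get("titles", "").split(",")
    return jsonify({t: moods[t] for t in titles if t in moods})
```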
!! I should be able to save training data to some local file / database, so that I can tweak and adjust the model anytime during development
Correspondingly, I should be able to retrieve said training data and train ML models at any time (sketched below)
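A sketch of the local persistence story, assuming a CSV file and scikit-learn; the file name is a placeholder and the KNN classifier is only a stand-in model so retraining can be shown end to end.

```python
import csv
import os

from sklearn.neighbors import KNeighborsClassifier

DATA_FILE = "training_data.csv"   # local file store, per the story above

def save_sample(features: list[float], label: str):
    """Append one training sample so the model can be rebuilt at any time."""
    new_file = not os.path.exists(DATA_FILE)
    with open(DATA_FILE, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow([f"f{i}" for i in range(len(features))] + ["label"])
        writer.writerow(features + [label])

def retrain():
    """Reload everything saved so far and fit a fresh model."""
    with open(DATA_FILE) as f:
        rows = list(csv.reader(f))[1:]            # skip header row
    X = [[float(v) for v in row[:-1]] for row in rows]
    y = [row[-1] for row in rows]
    model = KNeighborsClassifier(n_neighbors=3)   # placeholder model choice
    model.fit(X, y)
    return model
```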
(irrelevant - no longer using labels) I should support a song labels API that takes a list of song titles and artists as inputs (ideally) and returns their corresponding song labels
(irrelevant - no longer using labels) I should be able to interface with millionsongdataset.com, and possibly others, so that I can retrieve song labels
CC2650
We should calibrate our sensors if needed, so that aggregated data is consistent (not done); a zero-offset sketch follows
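A simple zero-offset calibration sketch: average readings while the tag is held stationary, then subtract that bias from later samples before aggregation. This suits the gyroscope directly; the accelerometer would additionally need the gravity vector accounted for. `read_sample` is a hypothetical hook returning one raw reading.

```python
def calibrate(read_sample, n: int = 200) -> list[float]:
    """Estimate per-axis bias while the tag is stationary."""
    sums = None
    for _ in range(n):
        sample = read_sample()            # one raw (gyro+accel+mag) reading
        sums = sample if sums is None else [a + b for a, b in zip(sums, sample)]
    return [v / n for v in sums]

# Usage: bias = calibrate(read_sample)
#        corrected = [v - b for v, b in zip(raw_sample, bias)]
```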