🌟 Project Candidates #1
Comments
💡 Build your own tree 🌳: a decision tree from scratch |
And I could leverage some code I created for histogram-based decision trees at work? |
Why not? Do you need an MIT license to use it more comfortably? We can put in one histogram-based decision tree --- isn't that already in many libraries, even sklearn? |
Yup, but it was fun to implement anyway, and we were going to re-implement it in C++ to be faaaast |
I'm pretty sure my C++ worthiness is less; I'm more of a Rust person. RESPECT~~~ |
💡 4. Download stock prices from your favorite online finance website over a period of at least three years. Create a dataset for testing portfolio selection algorithms by creating price-return vectors. Implement the OGD and ONS algorithms and benchmark them on your data. Introduction to Online Convex Optimization |
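A minimal sketch of the OGD half of this project, assuming synthetic price-relative vectors in place of the downloaded data (the `project_simplex` helper and all parameter values are illustrative, not part of the project spec):

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def ogd_portfolio(price_relatives, eta=0.05):
    """Online Gradient Descent on the log-wealth objective.

    price_relatives: (T, n) array, each row r_t = p_t / p_{t-1}.
    Returns the total wealth multiplier achieved by the OGD portfolio.
    """
    T, n = price_relatives.shape
    x = np.full(n, 1.0 / n)          # start at the uniform portfolio
    wealth = 1.0
    for r in price_relatives:
        wealth *= float(x @ r)       # multiplicative wealth update
        grad = -r / (x @ r)          # gradient of the loss -log(x . r)
        x = project_simplex(x - eta * grad)
    return wealth

# Hypothetical synthetic price relatives -- replace with real downloaded data
rng = np.random.default_rng(0)
rets = 1.0 + 0.01 * rng.standard_normal((500, 5))
w = ogd_portfolio(rets)
```

ONS follows the same loop but replaces the plain gradient step with a Newton-style step and a projection in the induced norm.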
I love all the projects here, but right now number 4 is my absolute favourite and I shamelessly encourage everyone to vote for it!! 🔥🔥🔥 For those who haven't been reading the OCO textbook: this project is both self-contained and super straightforward to implement, consisting of clearly demarcated tasks. |
Well, this is all just great. My wife built something that can scrape financial data and analyze it in a very simple way, and I asked if she could make it more useful by adding something beyond "asking ChatGPT if this stock is going to rise". We got stuck there, so I guess your suggestion is exactly our answer: her homework. |
@elasticsearcher u must be Andrew |
suggestion: MLP, ANN, Markov chain, reinforcement learning |
Good suggestions! Can you make them more specific, e.g. "Create an MLP with well-defined back-propagation using numpy", and lead with 💡 so we can vote on them! 🌟 |
💡 MLP with back-propagation and inference using numpy |
💡 2-Layer ANN with back-propagation and inference function using numpy |
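A minimal sketch of what the two proposals above could look like, assuming a toy XOR task and sigmoid activations (the architecture and hyperparameters here are illustrative choices, not part of the proposals):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2-layer network: 2 -> 8 -> 1, sigmoid activations throughout
W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 2.0, []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # Backward pass: mean-squared-error loss, chain rule through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent step
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

# Inference function is just the forward pass with the trained weights
pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```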
💡 A hidden Markov model with an adjustable number of hidden states. Train it with the Expectation-Maximization algorithm, and empirically investigate applications using the Forward-Backward (sum-product) and Viterbi (max-product) algorithms. It'll accept command-line arguments for the path to the training data, the number of hidden units to use, and the maximum number of iterations of EM to apply. By default, it should simply "do EM on the dataset" and print out the overall likelihood at initialization and again after each iteration of EM. Evaluate accuracy when predicting "into the future": you may calculate the accuracy when predicting the "next state", averaged over all states in the training data, and explore how the accuracy drops off when predicting t steps into the future. https://github.com/tianyimasf/sequence-hmm/blob/main/sequenceProject.pdf |
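The two inference routines at the core of this proposal can be sketched compactly for a discrete HMM; the 2-state, 2-symbol parameters below are hypothetical stand-ins, and EM would wrap these in an update loop:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence (sum-product, forward pass).

    pi: (K,) initial state distribution
    A:  (K, K) transitions, A[i, j] = P(z_t = j | z_{t-1} = i)
    B:  (K, M) emissions,   B[i, o] = P(x_t = o | z_t = i)
    Uses per-step normalization for numerical stability.
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    logp = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        logp += np.log(c)
        alpha = alpha / c
    return logp

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path (max-product), in log space."""
    K, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)   # scores[i, j]: prev i -> next j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical 2-state, 2-symbol HMM
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 0, 1, 1, 0]
```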
💡 RL with Q-learning -- training & prediction using numpy |
💡 RL with SARSA -- training & prediction using numpy |
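A minimal sketch of the Q-learning variant, assuming a toy deterministic chain environment invented here for illustration (SARSA would differ only in using the next action actually taken, `Q[s2, a2]`, instead of `Q[s2].max()` in the target):

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 2   # chain 0..4; actions: 0 = left, 1 = right
GOAL = N_STATES - 1

def step(s, a):
    """Deterministic chain environment: reward 1 on reaching the goal."""
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = step(s, a)
            # off-policy TD target uses the max over next actions
            Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
            s = s2
    return Q

Q = q_learning()
policy = Q.argmax(axis=1)   # greedy policy per state (prediction)
```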
idk anything about probabilistic graphical models so I'll leave it to others to suggest the details.
🌟 Possible Targets for Vanilla Events
PLEASE USE 👍🏻 TO VOTE ON IDEAS