If you wish to repost this content, please notify us by email: [email protected]
This is a private learning repository on Reinforcement learning, Reasoning, and Representation learning techniques for Robotics, founded for Real intelligence.
- Neural Network Basics: Backpropagation Derivation and Convolution Formulas [Zhihu]
- RL Basics Ⅰ: Markov Processes and Value Functions [Zhihu]
- RL Basics Ⅱ: Dynamic Programming, Monte Carlo, and Temporal Difference [Zhihu]
- RL Basics Ⅲ: On-policy vs. Off-policy, Model-based vs. Model-free, and Rollout [Zhihu]
- RL Basics Ⅳ: A Summary of State-of-the-art Classic RL Algorithms [Zhihu]
- RL Basics Ⅴ: Q-learning Theory and Practice [Zhihu]
- RL Basics Ⅵ: DQN Theory and Practice [Zhihu]
- RL Basics Ⅶ: Double DQN & Dueling DQN Theory and Practice [Zhihu]
- RL Basics Ⅷ: Vanilla Policy Gradient Theory and Implementation [Zhihu]
- RL Basics Ⅸ: Understanding TRPO: Theory and Implementation [Zhihu]
- RL Basics Ⅹ: Understanding the Two PPO Variants: Theory and Implementation [Zhihu]
- RL Basics Ⅺ: Actor-Critic & A2C Theory and Implementation [Zhihu]
- RL Basics Ⅻ: DDPG Theory and Implementation [Zhihu]
- RL Basics XIII: Twin Delayed DDPG (TD3) Theory and Implementation [Zhihu]
- Model-Based RL Ⅰ: Dyna, MVE & STEVE [Zhihu]
- Model-Based RL Ⅱ: Understanding MBPO [Zhihu]
- Model-Based RL Ⅲ: Understanding PILCO from Its Source Code [Zhihu]
- PR Preface: A Learning Path for Probabilistic Methods in Robotics [Zhihu]
- PR Ⅰ: Maximum Likelihood Estimation (MLE) and Maximum A Posteriori Estimation (MAP) [Zhihu]
- PR Ⅱ: Bayesian Estimation/Inference and How It Differs from MAP [Zhihu]
- PR Ⅲ: From Gaussian Distributions to Gaussian Processes, Gaussian Process Regression, and Bayesian Optimization [Zhihu]
- PR Ⅳ: Bayesian Neural Networks [Zhihu]
- PR Ⅴ: Entropy, KL Divergence, Cross-Entropy, and JS Divergence with Python Implementations [Zhihu]
- PR Ⅵ: KL Divergence between Multivariate Gaussian Distributions with a Python Implementation [Zhihu]
- PR Sampling Ⅰ: Monte Carlo Sampling and Importance Sampling with Python Implementations [Zhihu]
- PR Sampling Ⅱ: Markov Chain Monte Carlo (MCMC) with a Python Implementation [Zhihu]
- PR Sampling Ⅲ: Metropolis-Hastings and Gibbs Sampling [Zhihu]
- PR Structured Ⅰ: Graph Neural Network: An Introduction Ⅰ [Zhihu]
- PR Structured Ⅱ: Structured Probabilistic Models [Zhihu]
- PR Structured Ⅲ: Markov Chains, Hidden Markov Models (HMM), and Conditional Random Fields (CRF): A Complete Walkthrough with Python Implementations [Zhihu]
- PR Structured Ⅳ: General / Graph Conditional Random Fields (CRF) with a Python Implementation [Zhihu]
- PR Structured Ⅴ: GraphRNN: Casting Graph Generation as Sequence Generation [Zhihu]
- PR Reasoning Preface: A Learning Roadmap and Resource Collection for Reasoning Robotics [Zhihu]
- PR Reasoning Ⅰ: Bandit Problems and UCB / UCT / AlphaGo [Zhihu]
- PR Reasoning Ⅱ: Relational Inductive Biases and Their Applications in Deep Learning [Zhihu]
- PR Reasoning Ⅲ: Graph Network, a Relational Reasoning Framework Based on Graph Representations [Zhihu]
- PR Reasoning Ⅳ: Notes on Mathematical Logic (Propositional and Predicate Logic) [Zhihu]
- PR Memory Ⅰ: Memory Systems 2018 – Towards a New Paradigm (an in-depth survey of memory systems: insights from neuroscience) [Zhihu]
- PR Perspective Ⅰ: The New Wave of Embodied AI: A New Generation of AI [Zhihu]
- PR Perspective Ⅱ: 2021/08/03 Recent Major Events in Robot Learning and Some Thoughts [Zhihu]
- PR Efficient Ⅰ: Data-Efficient Reinforcement Learning in Robotics [Zhihu]
- PR Efficient Ⅱ: Bayesian Transfer RL with Prior Knowledge [Zhihu]
- PR Efficient Ⅲ: Training Robots as Efficiently as Training Dogs [Zhihu]
- PR Efficient Ⅳ: Teaching a Quadruped Robot to Walk by Itself within Five Minutes [Zhihu]
- PR Efficient Ⅴ: Self-Predictive Representations Help RL Agents Understand the World Efficiently [Zhihu]
- Meta-Learning: An Introduction Ⅰ [Zhihu]
- Meta-Learning: An Introduction Ⅱ [Zhihu]
- Meta-Learning: An Introduction Ⅲ [Zhihu]
- Imitation Learning Ⅰ: A Beginner's Guide to Imitation Learning [Zhihu]
- Imitation Learning Ⅱ: A Thorough Theoretical Analysis of DAgger [Zhihu]
- Imitation Learning Ⅲ: EnsembleDAgger, a Bayesian Approach to DAgger [Zhihu]
- RLfD Ⅰ: Deep Q-learning from Demonstrations Explained [Zhihu]
- RLfD Ⅱ:Reinforcement Learning from Imperfect Demonstrations under Soft Expert Guidance [Zhihu]
- MARL Ⅰ: A Selective Overview of Theories and Algorithms (an in-depth survey of multi-agent RL theory and algorithms) [Zhihu]
Active Visual Navigation
- Reading: Target-Driven Visual Navigation Exploiting Object Relationships [Zhihu]
- Reading: Learning to Learn How to Learn: Meta-Learning for Self-Adaptive Visual Navigation [Zhihu]
- Reading: Applying Bayesian Relational Memory to Visual Navigation [Zhihu]
- Reading: Attention and 3D Spatial Relation Graphs for Visual Navigation [Zhihu]
- Reading: Semi-Parametric Topological Memory for Robot Navigation [Zhihu]
- Reading: Applying Transformers to Robot Visual Navigation [Zhihu]
RL for robotics in the physical world with micro-data / data efficiency
- (In-depth survey) How to Learn Robot Reinforcement Learning Control in a Handful of Trials [Zhihu]
Others
- End-to-End Robotic Reinforcement Learning without Reward Engineering: [Medium] [Zhihu]
- Overcoming Exploration in RL with Demonstrations: [Medium] [Zhihu]
- The Predictron: End-To-End Learning and Planning: [Zhihu]
- IROS 2019 Quick Paper Reviews (Part 1) [Zhihu]
- IROS 2019 Quick Paper Reviews (Part 2) [Zhihu]
- IROS 2019 Quick Paper Reviews (Part 3) [Zhihu]
- IROS 2019 Quick Paper Reviews (Part 4) [Zhihu]
- Tools 1: How to Develop Software Pleasantly with PyQt5 and Qt Designer in PyCharm [Zhihu]
- Tools 2: The arXiv Paper Submission Workflow: This Article Is All You Need [Zhihu]
- Tools 3: Bidirectional Python Socket Communication between Server and Client (Server behind NAT, File Transfer) [Zhihu]
- Tools 4: Parallelize Python in Three Lines: It Just Works! [Zhihu]
- Tools 5: Follow-up on Three-Line Python Parallelization: Global Variables across Processes [Zhihu]
Reinforcement-Learning-in-Robotics
Machine-Learning-is-ALL-You-Need
If you're interested in reinforcement learning, we encourage you to check out our latest library for reinforcement learning and imitation learning in (humanoid) robotics.
Repository address: https://github.com/Skylark0924/Rofunc