Ruidan Su (苏锐丹, Ph.D., Master's student supervisor)
Department of Computer Science and Engineering
Center for Cognitive Machines and Computational Health (CMaCH)
Shanghai Jiao Tong University
Shanghai, China 200240
Office: Room 501, SEIEE Building #3, Minhang Campus
Email: suruidan@sjtu.edu.cn
Biography
-
I am a research assistant professor in the Department of Computer Science and Engineering at Shanghai Jiao Tong University (SJTU). I received my B.Eng. in Communication Engineering in 2006 and my Ph.D. in Computer Application Technology from Northeastern University, China, in 2014.
Prior to joining SJTU, I was an Assistant Professor (2015-2021) at the Shanghai Advanced Research Institute, Chinese Academy of Sciences, where I worked on high-speed train control strategy, computational intelligence, software engineering, machine learning, and multiple object tracking.
Recruitment
-
I am looking for highly motivated undergraduate and graduate students who are interested in AI4Science.
If you would like to join us, please email me your CV.
Research Interests
-
My research interests include machine learning, AI4Science, and computational finance.
Teaching
-
CS2306-Computer Architecture
Publication List
Projects
-
Industry-funded project, Urban Traffic Sensing Data Collection and Smart Maintenance Platform, 2024-2027, funding: 5.172 million RMB, Principal Investigator
University-level project, Research on Controllable Sketch Generation Based on Deep Bidirectional Intelligence, 2021-2024, Principal Investigator
CAS Science and Technology Service Network Initiative (STS), Key Technologies and Applications of Smart Cities under New Urbanization: Smart Government Decision Analysis and Data Presentation, 2015.6-2017.7, funding: 500,000 RMB, Principal Investigator
CAS Strategic Priority Research Program (Category C), Citywide Sensing Data Collection and Service Demonstration under Rapidly Changing Spatio-temporal Conditions (subtopic), 2019-2020, funding: 1.58 million RMB, Core Member (3/16)
Industry-funded project, Massive Image Data Storage Platform & UAV Data Annotation Platform, 2018.9-2019.11, funding: 500,000 RMB, Principal Investigator
Research Highlights
A Deep Reinforcement Learning Approach for Portfolio Management in Non Short-selling Market
Reinforcement learning (RL) has been applied to financial portfolio management in recent years. Most current studies focus on profit accumulation without much consideration of risk. Some risk-return-balanced studies extract features from price and volume data only, which are highly correlated and lack a representation of risk. To tackle these problems, we propose a Weight Control Unit (WCU) to effectively manage portfolio positions under different market conditions. A loss penalty term is also designed into the reward function to prevent sharp drawdowns during trading. Moreover, Stock Spatial Interrelation (SSI), the correlation between two different stocks, is captured by a Graph Convolutional Network (GCN) based on fundamental data, and temporal interrelation is captured by a Temporal Convolutional Network (TCN) based on new factors constructed from price and volume data. Both spatial and temporal interrelations improve feature extraction from historical data and make the model more interpretable. Finally, a Deep Deterministic Policy Gradient (DDPG) actor-critic RL algorithm is applied to explore the optimal policy for portfolio management. We evaluate our approach in a challenging non-short-selling market; experimental results show that our method outperforms state-of-the-art methods on both profit and risk criteria, with a 6.72% improvement in Annualized Rate of Return (ARR), a 7.72% decrease in Maximum Drawdown (MDD), and a better Annualized Sharpe Ratio (ASR) of 0.112. The loss penalty and WCU also suggest new directions for future work on risk control.
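Two constraints from the abstract can be sketched concretely: in a non-short-selling market the agent's action must be a vector of non-negative weights summing to 1, and the reward includes a loss penalty tied to drawdown. The following is a minimal illustrative sketch, not the paper's implementation; the function names and the `penalty_coef` parameter are hypothetical, and the paper's exact reward shaping and the WCU/GCN/TCN architectures are not reproduced here.

```python
import math

def portfolio_weights(actor_scores):
    """Map raw actor outputs to long-only portfolio weights.

    A softmax keeps every weight non-negative and makes them sum to 1,
    which matches the non-short-selling constraint.
    """
    m = max(actor_scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in actor_scores]
    total = sum(exps)
    return [e / total for e in exps]

def reward_with_loss_penalty(portfolio_values, penalty_coef=1.0):
    """One-step log return minus a penalty on the current drawdown.

    `penalty_coef` is a hypothetical coefficient controlling how strongly
    drawdowns are punished; larger values trade return for smoother equity.
    """
    log_return = math.log(portfolio_values[-1] / portfolio_values[-2])
    peak = max(portfolio_values)
    drawdown = (peak - portfolio_values[-1]) / peak
    return log_return - penalty_coef * drawdown
```

For example, `portfolio_weights([0.2, -1.0, 0.5])` yields three non-negative weights summing to 1, with the largest weight on the highest-scoring asset, and a portfolio path that has fallen from its peak produces a reward below the raw log return, discouraging policies that allow sharp drawdowns.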