# 2020 Swarma–Kaifeng Reading Camp

## Theme: Artificial Intelligence for Complex Systems

The "Swarma–Kaifeng Reading Camp" is an academic exchange program funded by the Kaifeng Foundation and initiated by the Swarma Club. Around a chosen academic theme, it gathers young scholars who are deeply specialized yet broad in vision for a 5–7 day closed retreat. Through in-depth reading and discussion of frontier research, participants jointly define and examine new problems, aiming to achieve cross-cultural, cross-disciplinary, and cross-field academic innovation within the current academic system, and to form a scholarly community capable of genuinely original thinking.

## Basic Arrangements

#### Agenda

The original schedule table was flattened in extraction; the recoverable assignments (each session lasting one day) are:

• 1 day, moderator Pan Zhang, lecturer Yi-Zhuang You: Applications of machine learning in physical modeling

• 1 day, moderator Jiang Zhang, lecturer Pan Zhang: 8:30–9:15 Tensor network basics and contraction; 11:15–12:00 open discussion

• 1 day, moderator Yi-Zhuang You, lecturer Lingfei Wu: Network embedding and its applications in the science of science and the future of work

• 1 day, moderator Lingfei Wu, lecturer Jiang Zhang: Overview and outlook of automated modeling of complex systems

#### Ground Rules

• While the camp is in session, all participating scholars and faculty must take part in the online discussions for the full duration.

#### Basic Information

• Time: starting August 24 (Beijing time), 8:30–12:00 in the morning and 8:30–12:00 in the evening
• Venue: Zoom room (announced in the group chat)

## Discussion Topics

• Automated modeling of complex systems
• Causal inference methods
• Computational social science of skills, occupations, and the division of labor
• How artificial intelligence could replace physicists

...

### Automated Modeling of Complex Systems

• Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, et al.: Graph Networks as Learnable Physics Engines for Inference and Control, arXiv, 2018

• Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, et al.: Neural Relational Inference for Interacting Systems, arXiv:1802.04687, 2018

• Seungwoong Ha, Hawoong Jeong: Towards Automated Statistical Physics: Data-driven Modeling of Complex Systems with Deep Learning, arXiv, 2020

• Danilo Jimenez Rezende, Shakir Mohamed: Variational Inference with Normalizing Flows, arXiv:1505.05770

• Fan Yang, Ling Chen, Fan Zhou, Yusong Gao, Wei Cao: Relational State-Space Model for Stochastic Multi-Object Systems, arXiv:2001.04050

• Ricky T. Q. Chen*, Yulia Rubanova*, Jesse Bettencourt*, David Duvenaud: Neural Ordinary Differential Equations, arXiv:1806.07366v5

• Michael John Lingelbach, Damian Mrowca, Nick Haber, Li Fei-Fei, and Daniel L. K. Yamins: Towards Curiosity-Driven Learning of Physical Dynamics, "Bridging AI and Cognitive Science" workshop (ICLR 2020)

• Chengxi Zang and Fei Wang: Neural Dynamics on Complex Networks, AAAI 2020

This best paper of AAAI 2020 combines Neural ODEs with graph networks to address general dynamics problems on complex networks, solving them via optimal-control principles. The paper also recasts semi-supervised node classification as an optimal-control problem, achieving notable results.

### Network Embedding and Its Applications in the Science of Science and the Future of Work

See Jure Leskovec's lecture on Graph Representation Learning (https://www.youtube.com/watch?v=YrhBZUtgG4E) for more details.

Network embedding, or graph representation learning, is the task of learning vector representations of nodes such that vector operations (usually cosine similarity) recover the similarity between nodes. Similarity can be defined in different ways, including 1st order (directly linked), 2nd order (sharing neighbors), or other forms (occupying the same structural position, such as a "gatekeeper" between communities).

There are multiple approaches to defining node similarity before applying word2vec to embed networks, including DeepWalk and node2vec. These approaches use different strategies to retrieve context nodes for target nodes (e.g., breadth-first or depth-first exploration).
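The walk-then-embed pipeline can be sketched in a few lines. This is a minimal DeepWalk-style sketch, not any paper's reference implementation; the toy graph, node names, and walk parameters are made up, and a real pipeline would feed the resulting walks to a word2vec trainer as "sentences":

```python
import random

def random_walks(graph, walk_length=5, walks_per_node=2, seed=0):
    """Uniform random walks from every node; each walk is a 'sentence'
    whose nodes serve as contexts for one another."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = graph[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# toy graph: two triangles joined by a bridge through c-d
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"],
}
walks = random_walks(graph)
```

node2vec differs only in biasing the `rng.choice` step toward breadth-first or depth-first moves via its return and in-out parameters.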

If we already have sequence data (e.g., the journals in which scholars have published, or the skills of workers), constructing graphs from these sequences can be viewed as a discrete alternative to directly applying word2vec and learning the continuous manifold of embeddings. The constructed graphs are discrete samples of the continuous manifold (cf. Isomap).

We focus on word2vec as the main method for network embedding before moving to more sophisticated models such as TransE or graph neural networks. An often-ignored fact is that word2vec is a dual-embedding model: it has two sets of embeddings, so each word has two vector representations, a term embedding and a context embedding. In practice, word2vec has two different frameworks, CBOW (many-to-one) and skip-gram with negative sampling (SGNS, one-to-many). For CBOW, the term and context embeddings correspond to the IN and OUT vector representations; for SGNS it is reversed.
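The two framings differ only in how training pairs are cut from a sequence: skip-gram maps one target to each context word, while CBOW maps the bag of context words to the target. A minimal sketch (the sentence and window size are arbitrary toy choices):

```python
def training_pairs(tokens, window=2, mode="skipgram"):
    """Enumerate (input, output) training pairs.
    skip-gram: one-to-many, (target, context_word) pairs.
    CBOW: many-to-one, (context_bag, target) pairs."""
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        if mode == "skipgram":
            pairs.extend((target, c) for c in context)
        else:  # CBOW
            pairs.append((tuple(context), target))
    return pairs

sent = ["skills", "define", "jobs", "and", "careers"]
sg = training_pairs(sent, mode="skipgram")
cb = training_pairs(sent, mode="cbow")
```

In a full trainer, each pair updates one embedding table on the input side (IN) and the other on the output side (OUT), which is where the duality above comes from.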

#### Modeling Substitution and Complementarity among the Skills that Compose Human Capital

Network embedding is a promising framework for analyzing substitution and complementarity. The papers below cover: how word2vec models the mutual information between embedded nodes (1, 2, 3); the counterpart of substitution/complementarity research in linguistics (4); substitution and complementarity networks of ingredients in recipes (5, 6); and substitution and complementarity in human capital (7, 8).

In word2vec (SGNS), the term-context dot product (T_i·C_j) implicitly models the pointwise mutual information (PMI) between words, which is also called collocation between items (two items co-occurring more often than at random), or co-use patterns when the data come from the demand side (e.g., the skills required for jobs). Our assumption is that the term-term dot product (T_i·T_j) models substitution between items (how similar or exchangeable two items are), and that complement = collocation − substitution.
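The collocation signal can be made concrete with a toy PMI computation over item "baskets"; the jobs and skills below are invented illustrations, not real data:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_table(baskets):
    """PMI for item pairs observed in baskets (e.g., skill sets of jobs).
    PMI > 0 means a pair co-occurs more often than independence predicts:
    the collocation signal that the SGNS term-context dot product models."""
    n = len(baskets)
    item_count = Counter()
    pair_count = Counter()
    for basket in baskets:
        items = sorted(set(basket))
        item_count.update(items)
        pair_count.update(combinations(items, 2))
    pmi = {}
    for (a, b), nab in pair_count.items():
        p_joint = nab / n
        p_indep = (item_count[a] / n) * (item_count[b] / n)
        pmi[(a, b)] = math.log(p_joint / p_indep)
    return pmi

# hypothetical skill baskets: python and sql always co-occur
jobs = [
    {"python", "sql"}, {"python", "sql"}, {"python", "sql"},
    {"excel"}, {"excel", "word"},
]
pmi = pmi_table(jobs)
```

Substitution, by contrast, would be estimated from the similarity of the two items' contexts rather than from their direct co-occurrence.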

• 1. Nalisnick, E., Mitra, B., Craswell, N., & Caruana, R. (2016, April). Improving document ranking with dual word embeddings. In Proceedings of the 25th International Conference Companion on World Wide Web (pp. 83-84).

This paper presents how IN-IN and IN-OUT vector cosine similarities model collocative and substitutive word pairs.

• 2. Levy, O., & Goldberg, Y. (2014). Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems (pp. 2177-2185).

This paper proves that the term-context dot product (T_i·C_j) in SGNS implicitly models pointwise mutual information (PMI).

• 3. Levy, O., Goldberg, Y., & Dagan, I. (2015). Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3, 211-225.

This paper proposes that the term-term vector cosine similarity models 2nd order association while the term-context vector cosine similarity models 1st order association, and suggests that adding the two vectors to obtain a combined vector improves the performance of word2vec on certain NLP tasks.

• 4. Rapp, R. (2002, August). The computation of word associations: comparing syntagmatic and paradigmatic approaches. In Proceedings of the 19th international conference on Computational linguistics-Volume 1 (pp. 1-7). Association for Computational Linguistics.

This paper calls 1st and 2nd order associations "syntagmatic" and "paradigmatic" relations, respectively, following the convention established by Ferdinand de Saussure (a founding father of modern linguistics). It also proposes measuring 1st order association by co-occurrence and 2nd order association by comparing the similarity of context-word distributions.
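Rapp's two measures can be sketched directly: count co-occurrences within a window for the 1st order (syntagmatic) signal, and compare context distributions for the 2nd order (paradigmatic) signal. The mini-corpus below is invented for illustration:

```python
import math
from collections import Counter, defaultdict

def association_scores(sentences, w1, w2, window=2):
    """1st order association = raw co-occurrence count of (w1, w2);
    2nd order association = cosine similarity of their context
    distributions."""
    cooc = Counter()
    contexts = defaultdict(Counter)
    for tokens in sentences:
        for i, t in enumerate(tokens):
            for c in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
                cooc[(t, c)] += 1
                contexts[t][c] += 1
    first_order = cooc[(w1, w2)]
    v1, v2 = contexts[w1], contexts[w2]
    dot = sum(v1[k] * v2[k] for k in v1)
    norm = (math.sqrt(sum(x * x for x in v1.values()))
            * math.sqrt(sum(x * x for x in v2.values())))
    second_order = dot / norm if norm else 0.0
    return first_order, second_order

# "cat" and "dog" never co-occur, but they share contexts
sents = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
fo, so = association_scores(sents, "cat", "dog")
```

Here "cat" and "dog" are paradigmatically related (high 2nd order score) precisely because they are substitutable in context, despite a zero 1st order score.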

• 5. Teng, C. Y., Lin, Y. R., & Adamic, L. A. (2012, June). Recipe recommendation using ingredient networks. In Proceedings of the 4th Annual ACM Web Science Conference (pp. 298-307).

This paper constructed two food-ingredient networks (one links collocated ingredients co-used in recipes; the other links substitutable ingredients as suggested by users), and found that a "purified" complement network can be obtained by removing substitutive links from the collocation network. The substitute network is also more informative for predicting user preferences over recipes.

• 6. Sauer, C., Haigh, A., & Rachleff, J. Cooking up Food Embeddings.

This paper analyzes two kinds of food-ingredient pairs, substitutes and complements. The authors define complement pairs as frequently co-used (collocated) ingredients (maximizing T_i·C_j), and substitute pairs as similar, replaceable ingredients (maximizing T_i·T_j).

• 7. Neffke, F. M. (2019). The value of complementary co-workers. Science Advances, 5(12), eaax3370.

This paper found that having co-workers whose education is similar to one's own is costly, while having co-workers with complementary education is beneficial. How is this defined?

• 8. Dibiaggio, L., Nasiriyar, M., & Nesta, L. (2014). Substitutability and complementarity of technological knowledge and the inventive performance of semiconductor companies. Research policy, 43(9), 1582-1593.

This paper found that complementarity between knowledge components contributes positively to firms' inventive capability, whereas the overall level of substitutability between knowledge components generally has the opposite effect. How is this defined?

#### Predicting the Movement of Scientific Frontiers

word2vec is a diffusion model, which explains why it predicts the diffusion of collective attention in the search for scientific knowledge. Based on the duality between dynamics on networks (Newtonian) and the geometry of networks (Einsteinian), we can assume that every network diffusion model that places PMI on edges as a "geo-distance" admits a word2vec/representation-learning/manifold-learning counterpart (indeed, the Brockmann paper below defines "effective distance" in a way similar to PMI).

##### The geodesic distance of ideas

We can see an idea as a city in Brockmann's model; scientific exploration is then human migration between cities. Can we argue that scholarly generation, the traditional metric of scientific innovation and diffusion, is the wrong perspective? Some scholars live at the same time as others yet do far more advanced research (e.g., the pioneers of quantum mechanics active in the 1920s); other scholars live now but still cite and discuss old knowledge, out of step with the people around them. Every theory has its own life cycle and active period, but that cannot explain why scholars repeatedly arrive at the same idea across time and disciplines, the "multiple discoveries" described by Merton.

Perhaps the right approach to studying ideas is to construct a knowledge map (or network) in which each node is an idea and the distance between two nodes is not time but "effective distance". The number of scholars who travel from one idea to another along the geodesic distance between ideas then represents the difficulty of thinking. We can use word2vec to embed ideas, or suppose that the article is the minimal unit of ideas, with one article containing exactly one idea. Using the knowledge graph of Wikipedia, we can identify the idea of a paper (or claim: a triple of head, relation, and tail) and cluster articles by claim. We can then trace a scholar's career on the idea map and measure that scholar's capacity for exploration.

• 1. Brockmann, D., & Helbing, D. (2013). The hidden geometry of complex, network-driven contagion phenomena. Science, 342(6164), 1337-1342.

This paper analyzed disease spread via the “effective distance” rather than geographical distance, wherein two locations that are connected by a strong link are effectively close. The approach was successfully applied to predict disease arrival times or disease sources.
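The effective-distance construction can be sketched on a toy flux network (all flux values below are made up, not from the paper's data). Since d(m|n) = 1 - log P(m|n) is always at least 1, shortest paths can be found with plain Dijkstra:

```python
import math

def effective_distances(flux, source):
    """Effective distance in the spirit of Brockmann & Helbing:
    d(m|n) = 1 - log P(m|n), where P(m|n) is the fraction of flux
    leaving n that goes to m; the effective distance from a source
    is the shortest-path length under d."""
    # normalize outgoing flux into transition probabilities
    prob = {}
    for n, edges in flux.items():
        total = sum(edges.values())
        prob[n] = {m: w / total for m, w in edges.items()}
    # Dijkstra: edge weights 1 - log p are >= 1, so no negative edges
    dist = {n: math.inf for n in flux}
    dist[source] = 0.0
    unvisited = set(flux)
    while unvisited:
        n = min(unvisited, key=dist.get)
        unvisited.remove(n)
        for m, p in prob[n].items():
            d = dist[n] + 1.0 - math.log(p)
            if d < dist[m]:
                dist[m] = d
    return dist

# toy flux network: a strong link makes a node effectively close
flux = {
    "hub": {"a": 90, "b": 10},
    "a": {"hub": 90, "b": 5},
    "b": {"hub": 10, "a": 5},
}
dist = effective_distances(flux, "hub")
```

In the idea-map proposal above, replacing mobility flux with scholars' transitions between ideas would give the "geodesic distance of ideas".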

• 2. Tshitoyan, V., Dagdelen, J., Weston, L., Dunn, A., Rong, Z., Kononova, O., ... & Jain, A. (2019). Unsupervised word embeddings capture latent knowledge from materials science literature. Nature, 571(7763), 95-98.

This paper shows that materials-science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings without human labeling or supervision. The authors demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery.

#### Constructing Machine Representations of Atomic Skills and Occupations

Arora, S., Li, Y., Liang, Y., Ma, T., & Risteski, A. (2018). Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6, 483-495. This paper shows that multiple word senses reside in linear superposition within a word's embedding, and that simple sparse coding can recover vectors that approximately capture the senses. A novel aspect of the technique is that each extracted word sense is accompanied by one of about 2000 "discourse atoms" giving a succinct description of which other words co-occur with that sense. Discourse atoms can be of independent interest and make the method potentially more useful. Empirical tests are used to verify and support the theory.

$\mathcal{L}_{\text{sc}} = \underbrace{||WH - X||_2^2}_{\text{reconstruction term}} + \underbrace{\lambda ||H||_1}_{\text{sparsity term}}$

Sparse coding is a representation-learning method that aims to find a sparse representation of the input data (the "code") as a linear combination of basic elements, while also learning those basic elements themselves. The elements are called atoms and together compose a dictionary. Atoms in the dictionary are not required to be orthogonal and may form an over-complete spanning set; the setup also allows the dimensionality of the representation to exceed that of the observed signals. These two properties yield seemingly redundant atoms that permit multiple representations of the same signal, but also improve the sparsity and flexibility of the representation.
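A minimal sketch of the objective above, solving only for the code h with the dictionary W held fixed, via iterative shrinkage-thresholding (ISTA). This is not the paper's solver (which also learns W); the dictionary, signal, and hyperparameters are toy values:

```python
def soft_threshold(x, t):
    """Proximal operator of the L1 penalty."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def ista(W, x, lam=0.1, step=0.1, iters=200):
    """Minimize ||W h - x||_2^2 + lam * ||h||_1 over h by ISTA:
    a gradient step on the quadratic term, then soft thresholding."""
    k = len(W[0])
    h = [0.0] * k
    for _ in range(iters):
        # residual r = W h - x
        r = [sum(W[i][j] * h[j] for j in range(k)) - x[i] for i in range(len(x))]
        # gradient of the quadratic term: 2 W^T r
        grad = [2 * sum(W[i][j] * r[i] for i in range(len(x))) for j in range(k)]
        h = soft_threshold([h[j] - step * grad[j] for j in range(k)], step * lam)
    return h

# over-complete dictionary of 3 atoms in 2-D; x lies along the first atom
W = [[1.0, 0.0, 0.7],
     [0.0, 1.0, 0.7]]
x = [1.0, 0.0]
h = ista(W, x)
```

The recovered code concentrates on the single atom aligned with x, which is the behavior the paper exploits: each word sense shows up as a sparse combination of a few discourse atoms.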

### Causal Inference

#### Three must-read papers by Judea Pearl on modern causal inference

• J. Pearl, "The Seven Tools of Causal Inference, with Reflections on Machine Learning," Communications of the ACM, 62(3): 54-60, March 2019.
• J. Pearl, "Causal and counterfactual inference," October 2019. Forthcoming in The Handbook of Rationality, MIT Press.
• J. Pearl, "Causal inference in statistics: An overview," Statistics Surveys, 3: 96-146, 2009.

#### Two key papers from Bernhard Schölkopf and his group

• Causality for Machine Learning, B. Schölkopf, 2019
• Foundations of Structural Causal Models with Cycles and Latent Variables, Bongers et al., 2020

Causal Inference and Data Fusion in Econometrics (2019), by Paul Hünermund and Elias Bareinboim (a student of Judea Pearl), wears the garb of economics while explaining how the causal-theoretic framework of Causal AI solves hard problems such as confounding bias, selection bias, and transfer learning. It is a benchmark example of integrating modern causal theory with a concrete field.

• A Survey on Causal Inference, 2020, Liuyi Yao et al.

(Causal inference from observational data is a hot topic, especially in combination with machine learning.) Causal inference has for decades been a critical research topic across many domains, such as statistics, computer science, education, public policy, and economics. Nowadays, estimating causal effects from observational data has become an appealing research direction owing to the large amount of available data and the low budget required, compared with randomized controlled trials. Embracing the rapidly developing machine-learning field, various causal-effect estimation methods for observational data have sprung up. This survey covers causal inference methods under the potential-outcome framework, classified by the causal assumptions they require; for each category it reviews the corresponding machine-learning and statistical methods, surveys applications across fields, and describes available benchmarks. Pearl's structural causal model is not the only popular framework for causal modeling: the potential-outcome framework is another mainstream approach, particularly popular outside AI in fields such as econometrics and epidemiology, and this paper is a comprehensive, accurate, and clear survey of it.

#### Other optional papers include:

• Kun Kuang's papers on Stable Learning
• A Calculus For Stochastic Interventions: Causal Effect Identification and Surrogate Experiments, J. Correa, E. Bareinboim, AAAI-20.
• A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms, 2019, by Yoshua Bengio et al.
• A Second Chance to Get Causal Inference Right: A Classification of Data Science Tasks, a recent overview of the causal-inference frontier by Miguel Hernán. "A recent influx of data analysts, many not formally trained in statistical theory, bring a fresh attitude that does not a priori exclude causal questions."
• Granger causality for time series
• .....

P.S. Related introductory references follow below:

### Solving Statistical Mechanics: From Mean Field to Neural Networks to Tensor Networks

• Understanding Belief Propagation and Its Generalizations, J. Yedidia, W. Freeman, Y. Weiss, International Joint Conference on Artificial Intelligence (IJCAI), 2001

• Solving Statistical Mechanics Using Variational Autoregressive Networks, D. Wu, L. Wang, P. Zhang, Phys. Rev. Lett., 122, 080602 (2019)

• Contracting Arbitrary Tensor Networks: General Approximate Algorithm and Applications in Graphical Models and Quantum Circuit Simulations, F. Pan, P. Zhou, S. Li, P. Zhang, arXiv:1912.03014

### Applications of Machine Learning in Physical Modeling

Quantum state tomography:

[1] Juan Carrasquilla, Giacomo Torlai, Roger Melko, Leandro Aolita. Reconstructing quantum states with generative models. arXiv:1810.10584

[2] Peter Cha, Paul Ginsparg, Felix Wu, Juan Carrasquilla, Peter L. McMahon, Eun-Ah Kim. Attention-based quantum tomography. arXiv:2006.12469

Autoregressive generative models:

[3] Dian Wu, Lei Wang, Pan Zhang. Solving Statistical Mechanics Using Variational Autoregressive Networks. arXiv:1809.10606

[4] Or Sharir, Yoav Levine, Noam Wies, Giuseppe Carleo, Amnon Shashua. Deep autoregressive models for the efficient variational simulation of many-body quantum systems. arXiv:1902.04057

Flow-based models:

[5] Shuo-Hui Li, Chen-Xiao Dong, Linfeng Zhang, Lei Wang. Neural Canonical Transformation with Symplectic Flows. arXiv:1910.00024

## Participants

• Jiang Zhang, Professor at the School of Systems Science, Beijing Normal University; founder of the Swarma Club and the Swarma AI Academy. Research interests: complex systems, graph networks, collective attention flows, flow networks, and allometric scaling laws.
• Pan Zhang, Associate Research Professor at the Institute of Theoretical Physics, Chinese Academy of Sciences; Swarma scientist. His research direction is statistical physics and complex systems: specifically, applying statistical-physics theories such as spin-glass theory and replica symmetry breaking to problems in applied mathematics, networks, and computer science. His interests are broad, spanning physics, statistics, networks, and machine learning, and he continues to widen his research scope.
• Yi-Zhuang You, Assistant Professor of Physics at the University of California, San Diego; Swarma scientist. His main field is quantum many-body physics, focusing on emergent and critical phenomena arising from collective behavior. He is also interested in information theory (especially quantum information), complex systems, and artificial intelligence.
• Lingfei Wu, Assistant Professor at the School of Computing and Information, University of Pittsburgh; core member of the Swarma Club and Swarma scientist. Research interests: the geometry of thinking and its application to the optimal combination of human knowledge and skills, including the science of science, the science of skills, the science of teams, and the future of work.
• Lei Wang, Research Professor at the Institute of Physics, Chinese Academy of Sciences. Main research directions: deep learning, quantum algorithms, and quantum many-body computation.
• Fu Wocheng, real name Tang Qianyuan, Project Researcher at the Graduate School of Arts and Sciences, University of Tokyo; Ph.D. in physics from Nanjing University. Core member of the Swarma Club. His main research direction is statistical physics and its applications to problems in the life sciences. He attended the Lindau Nobel Laureate Meeting in Germany as a representative of Chinese doctoral students, is an honorary member of the Zhihu Salt Club 2014, and has published the popular-science book "Where Did the Universe Come From" and several e-books.
• Yuan Mingli, software engineer at Beijing Caiche Quming (彩彻区明), core member of the Swarma Club. Trained in mathematics, then a career as a programmer; growing older but still curious and puzzled about many things. Wikipedia sparked his interest in knowledge engineering and in how the edifice of knowledge is built. Through long reflection he has come to believe that geometrizing knowledge representation can help reveal the secret of how concepts are created.
• Xiao Da, Lecturer at Beijing University of Posts and Telecommunications; Chief Scientist at Caiyun Technology.
• Luo Xiuzhe, assistant at the Institute of Physics, Chinese Academy of Sciences. Interested in applying new machine-learning theory to quantum physics and in using quantum-physics theory to improve machine learning, including quantum-information topics such as quantum validation and quantum optimization, and condensed-matter numerical methods such as quantum Monte Carlo and tensor networks. See my homepage: rogerluo.me.
• Cheng Song, Ph.D. student at the Institute of Physics, Chinese Academy of Sciences, working on condensed-matter theory, quantum many-body and strongly correlated systems: specifically, strongly correlated numerical algorithms based on the tensor renormalization group. The tensor renormalization group is a young, rapidly developing method, and in refining it my own interests keep expanding into entanglement, networks, geometry, and machine learning.
• Wu Yutong, Northwestern University, based in the Chicago area. Main research: social network analysis, organizational communication, scientific collaboration and innovation. Hopes the camp will broaden her academic horizons, introduce like-minded friends, and let her contribute a social-science perspective to the discussions.
• Cui Haochuan, School of Systems Science, Beijing Normal University, currently visiting the Knowledge Lab at the University of Chicago. Main research: the science of science, knowledge innovation and flow, and career evolution; also interested in knowledge representation and information-theoretic explanations of neural networks. Hopes to meet more peers with related backgrounds at the camp to share, spread, and discover knowledge together.
• Lin Yiling, computational social science at the University of Chicago, based in Chicago. Main research: the science of science and career evolution in human capital, culture, and class. Looks forward to extensive exchange and discussion, and to finding common ground across disciplines for mutual inspiration.
• Zhang Ruili, UCLouvain (the French-speaking University of Louvain), currently in Louvain-la-Neuve, Belgium, majoring in applied mathematics and mechanics. Hopes to broaden her horizons and lay a wider foundation for future research.
• Huang Longji, direct-track Ph.D. student at the School of Computer Science and Technology, Xidian University. Main research: urban road networks and graph convolutional neural networks. Hopes to meet more colleagues at the camp and exchange ideas; the best learning is the collision of thoughts. Personal page: https://www.zhihu.com/people/huangshao-62?utm_source=wechat_session&utm_medium=social&utm_oi=903218342368776192.
• Zhu Qiuyu, Ph.D. student at the Institute of Operations Research and Analytics, National University of Singapore. Main research: machine learning and supply-chain management. Hopes to broaden his horizons at the camp and learn about frontier research.
• Liu Jing, School of Systems Science, Beijing Normal University, based in Beijing. Currently working on unsupervised learning based on tensor-network methods; hopes eventually to apply tensor networks to practical problems in complex networks and systems science, and to find collaborators with related backgrounds at the camp.
• Wang Shuo, Ph.D. student at the School of Systems Science, Beijing Normal University, advised by Jiang Zhang. Research: spatiotemporal prediction with graph neural networks, such as smog forecasting and traffic-flow forecasting. Hopes to meet potential collaborators.
• Gong Heyang, Ph.D. student in statistics at the University of Science and Technology of China, based in Beijing. Current research: causal inference from an information perspective. Hopes to find like-minded partners to contribute together to Causal AI, that is, teaching machines causal reasoning.
• Li Fengzhi, direct-track Ph.D. student at the Institute of Computing Technology, Chinese Academy of Sciences, advised by Prof. Xu Zhiwei, based in Beijing. Main research: applications of causal reasoning in machine learning. Hopes to find partners at the camp to explore real-world applications of causal learning. https://github.com/L-F-Z
• Ye Enze, student in the Department of Electronic Engineering, Tsinghua University, interested in brain-inspired intelligence and hardware intelligence; looking forward to exchanges with everyone.
• Hu Hongye, Ph.D. student in theoretical physics at the University of California, San Diego, based in San Diego, California. Interested in artificial intelligence, quantum many-body systems, and quantum gravity. Current research: finding interpretable latent-space representations in unsupervised learning, and applications of the renormalization group in machine learning. Hopes to meet interesting people, broaden his research ideas, work together on fun problems, and build long-term collaborations. Homepage: hongyehu.com
• Peng Wenjie, master's student in computer applications at Beijing Language and Culture University, working on speech information processing. Previous experiments used fairly traditional methods; now very interested in deep learning for speech processing, especially unsupervised approaches and learning speech representations. Hopes for good exchanges with students in related directions. ResearchGate profile: https://www.researchgate.net/profile/Wenjie_Peng
• Huang Jinlong, Ph.D. student in theoretical physics at the University of California, San Diego, based in San Diego, California. Interested in quantum error correction, quantum many-body systems, and unsupervised learning. Current research: deriving general properties of quantum many-body systems from quantum entanglement, and training random tensor networks. Hopes to find collaborators at the camp. https://github.com/JinlongHuang
• Chen Ziwen, master's student in computational social science at the University of Chicago, currently a research assistant in James Evans's lab. Research interests: culture, social media, network science, natural language processing, urban computing, and other quantitative social science. Hopes to exchange research ideas with everyone, meet friends with shared interests, and perhaps find collaborators. Homepage: https://ziwnchen.github.io/
• Guo Tiecheng, Ph.D., Department of Physics, Tsinghua University. Currently working on quantum many-body physics; hopes to find collaborations at the camp.
• Du Weitao, currently in a joint program at the Department of Mathematics, Northwestern University. Research interest: stochastic analysis on non-Euclidean domains. In his spare time he works on neural networks and has published papers on the theory of neural-network algorithms. He did his undergraduate work in theoretical physics at USTC, so he is also interested in understanding machine-learning algorithms from the viewpoint of statistical physics.
• Chen Yutao, master's student in computational social science at the University of Chicago. Two main research areas: behavioral patterns and their evolution in online communities, and music-related media and communication research. Hopes the camp will reveal more ways to fuse AI with traditional social-science theories and frameworks, e.g., the possibility of building social-science insights (such as cognitive biases) into AI algorithms, and of using AI (such as embeddings) to quantify social-science concepts sensibly so that they gain applications beyond prediction and projection. Also hopes to learn how other disciplines understand and apply AI, and more about the development of AI itself.
• Ji Yu, Ph.D. student in systems neuroscience at the University of Chicago. One recent direction explores geometric and dynamical methods shared between the high-dimensional, topologically complex networks of neurobiology and equally high-dimensional, topologically complex problems in computational linguistics. Another direction builds on such methods to bridge recent computational-linguistics results with traditional cognitive linguistics and sociolinguistics, in search of new paradigms for computational social science. Looks forward to exchanges with friends in physics, AI, and computational social science.
• Zhang Zhang, master's student at the School of Systems Science, Beijing Normal University. Research interest: the intersection of complex systems and deep learning. Joins the camp hoping to communicate with everyone. Homepage: 3riccc.github.io
• Zhang Yanbo, Ph.D. student at Arizona State University. Research interest: the intersection of complex systems and machine learning, using machine-learning methods to study important problems in complex systems such as emergence, boundaries, and consciousness. Hopes to discuss these problems at the camp and broaden his horizons. Homepage: http://emergence.asu.edu/yanbo-zhang.html
• Yao Di, Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences, now an assistant research professor there. Research interests: graph machine learning and spatiotemporal data mining; recently very interested in applying causal inference to spatiotemporal tasks. Hopes to learn from everyone at the camp. Homepage: http://yaodi.info/
• Huang Wei, Ph.D. student at the University of Technology Sydney. Research area: deep-learning theory, with interests at the intersection of deep learning, statistical physics, and graph networks, hoping that statistical-physics methods can offer an entry point into deep-learning theory and graph-neural-network research. Hopes to discuss these problems at the camp and broaden his horizons.
• Huang Huajin, media economics at the Communication University of China, based in Chaoyang District, Beijing. Recently focused on cascade and reverse-cascade phenomena in communication networks, particularly those built on social media. Research interests: complex systems within government organizations, the birth and development of states and government organizations, the evolution of functional departments, and complex networks in media and media organizations. Coming from a purely humanities and social-science background, he has spent the past year working hard to learn complexity science and hopes for everyone's help.
• Cao Likun, second-year Ph.D. student in the Department of Sociology at the University of Chicago. Research interests: computational social science, organization studies, and network analysis, with some natural language processing and complex networks involved; recently also very interested in complex systems and eager to learn more.