Twin Speakers Amplify Neural Networks

On 25 April 2019, we were honored to feature two speakers in our KDD.SG Seminar series and host them in SMU School of Information Systems.

20190425_cover1.png

Agenda of the Seminar

Prof Jie Tang from Tsinghua University summarized several works on using deep learning to derive graph representations, distilled their fundamental concepts, and described several generalizations and extensions of such models.

20190425_jie1.jpg

Jie describing the process of learning node embeddings from a graph, in this case by applying the Skip-gram technique to random-walk paths sampled from the graph
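To make the idea concrete, here is a minimal sketch of the first half of that pipeline: sampling truncated random walks over a graph, DeepWalk-style. This is illustrative code, not the speaker's own; the resulting walks would then be treated as "sentences" and fed to a Skip-gram model (e.g. word2vec) to learn one embedding vector per node.

```python
import random

def random_walks(adj, num_walks=10, walk_length=5, seed=0):
    """Sample truncated random walks (DeepWalk-style) over a graph.

    adj: dict mapping node -> list of neighbour nodes.
    Returns a list of walks; each walk is a list of node ids that can be
    treated as a 'sentence' for Skip-gram (word2vec) training.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break  # dead end: stop this walk early
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Toy graph: a 4-node cycle, as an adjacency list.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
walks = random_walks(graph)
```

Each walk stays on edges of the graph, so nodes that co-occur within a Skip-gram window are nodes that are close in the graph, which is what lets word2vec-style training recover graph structure.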

20190425_jie2.jpg

Jie discussing how NSGCN, a graph convolutional network that uses the features of only a subset of the nodes, can be much more efficient than an ordinary GCN while achieving comparable performance
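For readers unfamiliar with GCNs, the baseline being compared against is a stack of propagation layers of the form H' = ReLU(D^-1/2 (A+I) D^-1/2 H W). The sketch below implements one such generic layer (in the style of Kipf and Welling); it is not NSGCN itself, just the ordinary GCN operation the caption refers to.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One generic GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)  # ReLU

# Toy example: 3-node path graph, 2-d features, 2 -> 2 weight matrix.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.eye(3, 2)
w = np.eye(2)
out = gcn_layer(adj, feats, w)
```

Because every layer multiplies by the normalized adjacency, each node's output mixes in its neighbours' features; the cost of that multiplication over all node features is what a partial-feature variant like NSGCN aims to reduce.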

In turn, Prof Michalis Vazirgiannis from Ecole Polytechnique described his recent work on using deep learning to derive representations for sets.

20190425_michalis1.jpg

Michalis introducing a recent work on using neural networks to learn set representations
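A common building block in this line of work, sketched below as a rough illustration (the talk's specific model was not shared as code), is a DeepSets-style architecture: encode each element, pool with a sum, then transform. The weight matrices `phi_w` and `rho_w` here are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def set_embedding(elements, phi_w, rho_w):
    """DeepSets-style set representation: rho(sum_i phi(x_i)).

    elements: (n, d) array, one row per set element.
    Summing the per-element encodings makes the output invariant to
    the order in which the elements are listed.
    """
    encoded = np.maximum(elements @ phi_w, 0.0)  # phi: per-element encoder
    pooled = encoded.sum(axis=0)                 # permutation-invariant pooling
    return np.maximum(pooled @ rho_w, 0.0)       # rho: post-pooling transform
```

The key property is permutation invariance: shuffling the rows of `elements` leaves the output unchanged, so the network genuinely represents the set rather than a particular ordering of it.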

20190425_michalis2.jpg

Michalis providing an overview of several related works on deep learning for graphs emerging from his lab

The audience was engaged, and the ensuing question-and-answer session was lively, with a number of relevant questions. Overall, it was a very informative and inspiring session that certainly amplified our understanding of deep learning and neural networks.

 
