Search results in Data Science DLS
http://podserv.cs.umass.edu/groups/datasciencedls/search/
This is a feed of pages for Data Science DLS
Fri, 22 Sep 2017 18:45:05 GMT
PyRSS2Gen-1.0.0

Tommi S. Jaakkola
http://podserv.cs.umass.edu/groups/datasciencedls/wiki/e6263/
<div class="wiki_entry"><br class="" />
<h2>Continuous Embedding with Justification</h2>
<p><br class="" /></p>
<p>One of the key enabling features underlying deep learning is continuous embedding. At one end, we frame continuous embedding of objects from the point of view of metric recovery. We demonstrate that metric recovery is possible even on the basis of random walks over unweighted directed graphs, and illustrate recovery algorithms from co-occurrences in the context of word embedding. At the other end, we address the key downside arising from the pervasive use of continuous embedding within larger architectures: predictions, while generally accurate, cannot be justified in a manner suitable for human consumption or communication. I will describe work towards learning rationales in an unsupervised manner together with the supervised training of the predictor itself.</p>
<p>The talk covers joint work with David Alvarez, Tatsu Hashimoto, Tao Lei, Yi Sun, and Regina Barzilay.</p>
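The metric-recovery recipe summarized in the abstract, co-occurrence counts gathered along a random walk followed by a low-rank factorization, can be illustrated on a toy graph. Everything below (the graph, walk length, window size, and the PPMI-plus-SVD factorization) is an illustrative assumption, a standard word-embedding-style recipe rather than the specific algorithms from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a small unweighted directed graph as an adjacency
# list; node count and edges are illustrative, not from the talk.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 2]}

def random_walk(adj, steps, start=0):
    # Simulate a random walk over the graph, choosing a successor uniformly.
    walk, node = [start], start
    for _ in range(steps):
        node = rng.choice(adj[node])
        walk.append(node)
    return walk

def cooccurrence(walk, n, window=2):
    # Count co-occurrences within a small window, mirroring how word
    # co-occurrence statistics are gathered for word embeddings.
    C = np.zeros((n, n))
    for i, u in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if j != i:
                C[u, walk[j]] += 1.0
    return C

walk = random_walk(adj, steps=20000)
C = cooccurrence(walk, n=5)

# Positive PMI followed by a truncated SVD turns co-occurrence counts into
# low-dimensional coordinates; the talk's metric-recovery analysis concerns
# when such coordinates faithfully reflect distances.
P = C / C.sum()
pu, pv = P.sum(axis=1, keepdims=True), P.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(P / (pu * pv))
ppmi = np.maximum(pmi, 0.0)
ppmi[~np.isfinite(ppmi)] = 0.0
U, S, _ = np.linalg.svd(ppmi)
embed = U[:, :2] * np.sqrt(S[:2])  # 2-D embedding of the 5 nodes
```

The resulting 2-D coordinates play the role of word vectors for the graph's nodes.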
<p><span><br class="" /></span></p>
<ul>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/9536FF80-5B97-4D36-A954-1DE58050F322/F5711B46-A94C-42FA-BC11-53FE6F19D8C6.m4v">Small Video Version</a></span></li>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/9536FF80-5B97-4D36-A954-1DE58050F322/D1C763AF-F865-46F3-9E2B-54D86AC1EF06.m4v">Large Video Version</a></span></li>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/9536FF80-5B97-4D36-A954-1DE58050F322/AB889F7F-469C-4549-8AAD-86A14E82D8B6.m4a">Audio Version</a></span></li>
</ul>
<p>Tommi S. Jaakkola received an M.Sc. in theoretical physics from Helsinki University of Technology, Finland, and a Ph.D. in computational neuroscience from MIT. He joined the MIT EECS faculty in late 1998. At MIT his research has focused on many theoretical and applied aspects of machine learning and statistical inference. On the theoretical side, his work includes algorithms for large-scale statistical estimation and problems that involve predominantly incomplete data sources. His applied research has concentrated on inferential questions arising in recommender systems, computational biology, and natural language processing. He has received several awards for his publications.</p></div>
CSCF
http://podserv.cs.umass.edu/groups/datasciencedls/wiki/e6263/
Mon, 31 Oct 2016 18:43:14 GMT

Data Science DLS
http://podserv.cs.umass.edu/groups/datasciencedls/wiki/welcome/
<div class="wiki_entry">
<p><br class="" /></p>
<p><br class="" /></p>
<h2>DLS</h2>
<p><br class="" /></p>
<ul>
<li>
<h3>09-29-16 - <a href="/groups/datasciencedls/wiki/fc4a3/Yoshua_Bengio__Improving_the_memory_capability_of_recurrent_networks.html">Yoshua Bengio - Improving the Memory Capability of Recurrent Networks</a></h3></li>
</ul>
<ul>
<li>
<h3>10-03-16 - <a href="/groups/datasciencedls/wiki/ad127/Bin_Yu__Theory_to_Gain_Insight_and_Inform_Practice.html">Bin Yu - Theory to Gain Insight and Inform Practice</a></h3></li>
</ul>
<ul>
<li>
<h3>10-28-16 - <a href="/groups/datasciencedls/wiki/e6263/_Tommi_S_Jaakkola__Continuous_Embedding_with_Justification.html">Tommi S. Jaakkola - Continuous Embedding with Justification</a></h3></li>
</ul>
</div>
CSCF
http://podserv.cs.umass.edu/groups/datasciencedls/wiki/welcome/
Mon, 31 Oct 2016 18:40:49 GMT

Bin Yu
http://podserv.cs.umass.edu/groups/datasciencedls/wiki/ad127/
<div class="wiki_entry">
<p><br class="" /></p>
<h2>Theory to Gain Insight and Inform Practice</h2>
<p><br class="" /></p>
<p><strong>Abstract:</strong> Henry L. Rietz, the first president of IMS, published his book “Mathematical Statistics” in 1927. One reviewer wrote in 1928:<br class="" />“Professor Rietz has developed this theory so skillfully that the ‘workers in other fields’, provided only that they have a passing familiarity with the grammar of mathematics, can secure a satisfactory understanding of the points involved.”<br class="" />In this lecture, I would like to promote the good tradition of mathematical statistics as expressed in Rietz’s book in order to gain insight and inform practice. In particular, I will recount the beginning of our theoretical study of dictionary learning (DL) as part of a multi-disciplinary project to “map a cell’s destiny” in the Drosophila embryo. I will share insights gained regarding local identifiability of primal and dual formulations of DL. Furthermore, comparing the two formulations is leading us down the path of seeking confidence measures for the learned dictionary elements (corresponding to biologically meaningful regions in the Drosophila embryo). Finally, I will present preliminary work using our confidence measures to identify potential knockout (or gene-editing) experiments in an iterative interaction between the biological and data sciences.</p>
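Dictionary learning in its basic (primal) form seeks a dictionary D and sparse codes A with X ≈ DA. The following is a minimal, generic alternating-minimization sketch on synthetic data; all sizes, the ISTA sparse coder, and the least-squares dictionary update are illustrative assumptions, not the specific primal and dual formulations analyzed in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy instance: X = D_true @ A_true with sparse columns of
# A_true; every dimension here is illustrative.
n_features, n_atoms, n_samples, sparsity = 8, 5, 200, 2
D_true = rng.normal(size=(n_features, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
A_true = np.zeros((n_atoms, n_samples))
for j in range(n_samples):
    idx = rng.choice(n_atoms, size=sparsity, replace=False)
    A_true[idx, j] = rng.normal(size=sparsity)
X = D_true @ A_true

def sparse_code(X, D, lam=0.1, iters=50):
    # ISTA (proximal gradient) for the lasso subproblem
    # min_A 0.5*||X - D A||_F^2 + lam*||A||_1.
    A = np.zeros((D.shape[1], X.shape[1]))
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(iters):
        G = D.T @ (D @ A - X)                 # gradient of the smooth part
        A = A - step * G
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # soft threshold
    return A

def learn_dictionary(X, n_atoms, outer=20):
    D = rng.normal(size=(X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(outer):
        A = sparse_code(X, D)
        D = X @ np.linalg.pinv(A)             # least-squares dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12
    A = sparse_code(X, D)                     # final codes for the final dictionary
    return D, A

D, A = learn_dictionary(X, n_atoms)
err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)  # relative reconstruction error
```

The identifiability question raised in the abstract asks when such a procedure can recover (locally, up to permutation and sign) the true dictionary, and the confidence measures aim to quantify trust in individual learned atoms.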
<p><span><br class="" /></span></p>
<ul>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/EBAC1F22-8238-47BB-8851-C21D64CB7B37/36197D26-5CE7-4485-97A4-F0817872F768.m4v">Small Video Version</a></span></li>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/EBAC1F22-8238-47BB-8851-C21D64CB7B37/611B61C4-A7BD-43D6-8021-902DD5F8D6CD.m4v">Large Video Version</a></span></li>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/EBAC1F22-8238-47BB-8851-C21D64CB7B37/6A8EF9A7-F1D8-4557-81FE-8230D8214ECF.m4a">Audio Version</a></span></li>
</ul>
<p><br class="" /></p>
<p><strong>Bio:</strong> Bin Yu is Chancellor’s Professor in the Departments of Statistics and of Electrical Engineering &amp; Computer Science at the University of California at Berkeley, and a former Chair of Statistics at Berkeley. She is founding co-director of the Microsoft Joint Lab at Peking University on Statistics and Information Technology. Her group at Berkeley is engaged in interdisciplinary research with scientists from genomics, neuroscience, and medicine. To solve data problems in these domains, her group employs quantitative critical thinking and develops statistical and machine learning algorithms and theory. She has published over 100 scientific papers in premier journals in statistics, machine learning, information theory, signal processing, remote sensing, neuroscience, genomics, and networks.</p>
<p>She is a Member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. She was a Guggenheim Fellow in 2006, an Invited Speaker at ICIAM in 2011, the Tukey Memorial Lecturer of the Bernoulli Society in 2012, and the Rietz Lecturer of the Institute of Mathematical Statistics (IMS) in 2016. She was IMS President in 2013-2014, and is a Fellow of IMS, ASA, AAAS, and IEEE. She has served or is serving on leadership committees of NAS-BMSA, SAMSI, IPAM, and ICERM, and on the editorial boards of the Journal of Machine Learning, the Annals of Statistics, and the Annual Review of Statistics.</p></div>
CSCF
http://podserv.cs.umass.edu/groups/datasciencedls/wiki/ad127/
Thu, 06 Oct 2016 13:28:09 GMT

Yoshua Bengio
http://podserv.cs.umass.edu/groups/datasciencedls/wiki/fc4a3/
<div class="wiki_entry">
<p><br class="" /></p>
<h2>Improving the Memory Capability of Recurrent Networks</h2>
<p><br class="" /></p>
<p>Since the 90s we have known about the fundamental challenge in training a parametrized dynamical system such as a recurrent network to capture long-term dependencies. The notion of stable memory is crucial to understanding this issue, and is behind the LSTM and GRU architectures as well as the recent work on networks with an external memory. We present several new ideas exploring how to further expand the reach of recurrent architectures, improve their training, and scale up their memory, in particular to model language-related data and to better capture semantics for question answering, machine translation, and dialogue.</p>
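The stable-memory idea behind the gated architectures mentioned above can be seen in a single GRU-style state update: an update gate interpolates between the previous hidden state and a new candidate, which is what lets information persist over many steps instead of being overwritten. The weights and dimensions below are illustrative assumptions, a minimal sketch rather than any particular model from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative dimensions and randomly initialized weights.
d_in, d_hid = 3, 4
Wz, Uz = rng.normal(size=(d_hid, d_in)), rng.normal(size=(d_hid, d_hid))
Wr, Ur = rng.normal(size=(d_hid, d_in)), rng.normal(size=(d_hid, d_hid))
Wh, Uh = rng.normal(size=(d_hid, d_in)), rng.normal(size=(d_hid, d_hid))

def gru_step(x, h):
    z = sigmoid(Wz @ x + Uz @ h)               # update gate: how much to rewrite
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate: how much past to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    # Gated interpolation: with z near 0 the old state is carried forward
    # almost unchanged, giving a stable memory path.
    return (1.0 - z) * h + z * h_tilde

h = np.zeros(d_hid)
for _ in range(10):
    h = gru_step(rng.normal(size=d_in), h)
```

Because each new state is a convex combination of the previous state and a tanh-bounded candidate, the state stays bounded, and the near-identity path through the gate is what eases the propagation of gradients across long time lags.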
<ul>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/9536FF80-5B97-4D36-A954-1DE58050F322/3BFBF9F1-0F46-4F60-B644-F22F87E4D231.m4v">Small Video Version</a></span></li>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/9536FF80-5B97-4D36-A954-1DE58050F322/E11CEA56-124D-452B-82E7-7961ADED1609.m4v">Large Video Version</a></span></li>
<li><span><a href="http://podserv.cs.umass.edu:8171/podcastproducer/attachments/9536FF80-5B97-4D36-A954-1DE58050F322/3EB7EF95-934A-4BDD-9B78-628CAC463C5E.m4a">Audio Version</a></span></li>
</ul>
<p><br class="" />Yoshua Bengio received a PhD in Computer Science from McGill University, Canada, in 1991. After two post-doctoral years, one at M.I.T. with Michael Jordan and one at AT&amp;T Bell Laboratories with Yann LeCun and Vladimir Vapnik, he became a professor in the Department of Computer Science and Operations Research at Université de Montréal. He is the author of three books and more than 200 publications, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning algorithms, natural language processing, and manifold learning. He is among the most cited Canadian computer scientists and is or has been an associate editor of the top journals in machine learning and neural networks. Since 2000 he has held a Canada Research Chair in Statistical Learning Algorithms; he is a Senior Fellow of the Canadian Institute for Advanced Research and since 2014 has co-directed its program focused on deep learning. He heads the Montreal Institute for Learning Algorithms (MILA), currently the largest academic research group on deep learning. He is on the board of the NIPS foundation and has been program chair and general chair for NIPS. He co-organized the Learning Workshop for 14 years and co-created the International Conference on Learning Representations. His current interests center on a quest for AI through machine learning, and include fundamental questions on deep learning and representation learning, the geometry of generalization in high-dimensional spaces, generative models, biologically inspired learning algorithms, natural language understanding, and other challenging applications of machine learning.</p></div>
Terrie Kellogg
http://podserv.cs.umass.edu/groups/datasciencedls/wiki/fc4a3/
Tue, 04 Oct 2016 15:32:24 GMT