Omer Ben-Porat is a fourth-year PhD student in the Faculty of Industrial Engineering and Management at the Technion in Israel, working with Professor Moshe Tennenholtz.
Omer’s research interests lie in the growing intersection of game theory and machine learning. In his PhD, Omer focuses on market-oriented machine learning: he studies the interaction between agents who employ learning algorithms for revenue maximization in competitive environments.
Omer relies on game-theoretic foundations to design learning algorithms that explicitly address market competition and perform significantly better than their well-studied, competition-oblivious counterparts.
David Byrd is a research scientist and PhD student at the Georgia Institute of Technology, where he is advised by Tucker Balch.
David’s PhD research focuses on machine learning in finance applications, investigating mutual fund portfolio inference, intraday equity market forecasting, stock market simulation, and machine learning approaches to the evaluation of market efficiency. To support AI research in interactive markets, David leads the development of an open-source multi-agent equity trading simulation environment. As a research scientist, David works at Georgia Tech's Institute for People and Technology, applying machine learning to business intelligence, animal tracking, and activity recognition.
In 2018 David won the Graduate Student Instructor of the Year Award in the School of Interactive Computing for teaching courses in AI and ML for Trading. Before joining Georgia Tech, David worked at internet start-ups and in the telecommunications industry.
Oana-Maria is a PhD student in Artificial Intelligence at the University of Oxford, UK.
Oana-Maria’s research interests lie in developing interpretable neural network models that can learn from human-provided guidance at train-time and can provide natural language explanations of their predicted decisions at test-time.
Previously, Oana-Maria obtained Engineering and MSc degrees from the École Polytechnique in Paris. She has completed internships in Germany, France, the UK, and the USA, and enjoyed the cultural diversity of those experiences. In her free time, Oana-Maria enjoys dancing, playing tennis and squash, learning foreign languages, and traveling.
Sina Ghiassian is a PhD student at the University of Alberta (Edmonton, Canada), under the supervision of Professor Richard Sutton at the Reinforcement Learning and Artificial Intelligence (RLAI) laboratory.
Sina’s research in the RLAI lab focuses on reinforcement learning (RL). In particular, he is interested in understanding state-of-the-art off-policy learning methods and their application to real-world and simulated problems. He is also interested in fully incremental deep reinforcement learning.
Sina graduated from the University of Alberta with an MSc in Computer Science in 2014, where he worked with Russell Greiner on developing supervised learning techniques for image classification.
Zahra Ghodsi is a PhD candidate in the Electrical and Computer Engineering department at NYU Tandon School of Engineering, under the supervision of Professor Siddharth Garg.
Zahra’s research interests lie at the intersection of security, privacy, and machine learning, most recently focusing on private and verifiable outsourcing of machine learning to untrusted parties. She designs and develops practical frameworks that improve the security of machine learning systems.
Zahra received her BS degree in Electrical Engineering from Sharif University of Technology in 2013 and her MS degree in Computer Engineering from New York University in 2015. Zahra is a recipient of the Ernst Weber Fellowship from the Department of Electrical and Computer Engineering at NYU, and was a Bronze Medalist at the 20th Iranian National Physics Olympiad.
Surbhi Goel is a PhD student in Computer Science at the University of Texas at Austin, working with Adam Klivans.
Surbhi’s research interests lie at the intersection of theory and machine learning, where she focuses on algorithms for training deep networks that admit rigorous mathematical analysis. Gradient descent has been a successful workhorse for training these networks, and the backbone of the entire field rests on essentially this algorithm and its variants – yet a major drawback of gradient descent is that it is hard to analyze mathematically. Surbhi has proposed a variety of alternative efficient algorithms for training simple neural networks that are backed by strong theoretical guarantees, and she is working on extending these techniques to deeper networks.
Surbhi received her bachelor’s degree in Computer Science and Engineering from the Indian Institute of Technology (IIT) Delhi. In her free time she enjoys dancing, reading, and traveling around the world.
Sanket Kamthe is a third-year PhD student at Imperial College London.
Sanket is focusing on reinforcement learning for robotics and control for his PhD. He is particularly interested in Safe Model-based Reinforcement Learning, where the agent learns to perform tasks while being aware of risks and uncertainties. He primarily works with Gaussian Processes for modeling system dynamics.
Sanket began his studies at Imperial College with an MSc in Advanced Computing. Before moving to the UK, he was a Marie Curie research fellow in the Department of Applied Mathematics at the University of Twente in the Netherlands. He also holds an MSc in Information and Communication Engineering from Technische Universität Darmstadt, Germany, and a BE in Electronics & Telecommunications from the University of Pune, India. In his spare time, Sanket volunteers with Imperial's WOHL lab outreach programs, which inspire students from public schools in London to take up STEM subjects.
Krishna Pillutla is a PhD student at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, where he is advised by Zaid Harchaoui and Sham Kakade.
Krishna’s research interests lie in machine learning and optimization, specifically in structured prediction. His current work aims to assess the confidence of structured prediction models in a risk-sensitive framework: a model that knows its limitations can seek outside assistance, for instance from a human expert, to avoid decisions that could backfire – leading to safer machine learning systems. He previously worked on faster optimization algorithms for training structured prediction models using infimal convolution smoothing techniques.
Before starting his PhD, Krishna earned a master's degree from Carnegie Mellon University and a bachelor's degree from the Indian Institute of Technology (IIT) Bombay.
Adarsh Prasad is a Machine Learning Department PhD student at Carnegie Mellon University (CMU), where he is advised by Pradeep Ravikumar.
Adarsh is broadly interested in statistical machine learning, high-dimensional statistics, and optimization. His current focus is on modifying state-of-the-art machine learning techniques and designing new algorithms so that machine learning models can be (1) robust – i.e. trainable even on noisy and corrupted data; (2) reliable – i.e. they should not break down under benign shifts of the distribution; and (3) resilient – i.e. the modeling procedure should work under model mis-specification.
Adarsh did his undergraduate studies at the Indian Institute of Technology (IIT) Delhi, where he won the Best Undergraduate Thesis award in his department. He has represented CMU at the Citadel Datathon National Finals, won the Gold Medal at the WorldQuant Quantitative Research challenge, and previously worked at Cubist Systematic Strategies.
Megan Shearer is a third-year PhD student in Computer Science & Engineering at the University of Michigan, where she is advised by Michael Wellman.
Megan’s research uses simulation and autonomous agents to study manipulation in financial markets. Currently, Megan is studying transaction-based financial benchmarks to determine which benchmarks are more robust to manipulation. She is a member of the Strategic Reasoning Group.
Megan received a BS in Mathematics from the University of Arizona. She was previously a Visiting Research Fellow at the Investors Exchange (IEX). This summer she will be a Data Science & Machine Learning Intern at J.P. Morgan in New York City.
Ana-Andreea is a third-year PhD student in Computer Science at Columbia University, working with Augustin Chaintreau on social networks and privacy.
Ana-Andreea’s work focuses on mathematical models, data analysis, and policy and ethical implications for algorithmic design in social networks. She is particularly interested in designing fair algorithms for online networks – from recommendation systems to information diffusion heuristics – aiming to understand how social inequality can be mirrored in algorithms that work with human data.
Ana-Andreea graduated from Princeton University in 2016 with a bachelor's degree in Mathematics and certificates in Computing and Applied Mathematics; it was there that she cemented her passion for networks and mathematical modeling. Outside of academia, she closely follows news on privacy and data protection policy, and on algorithmic bias in big tech and the public sector.
Paul Vicol is a machine learning PhD student at the University of Toronto, working with Roger Grosse.
Paul’s current research interests include making it easier to train neural networks by automating the process of hyperparameter selection. In particular, in his recent work he has developed efficient, gradient-based approaches for adapting hyperparameters during training. He is also interested in Bayesian machine learning and adversarial robustness.
Paul completed his BSc and MSc in computer science at Simon Fraser University in Vancouver, where he received Canada’s prestigious Silver Governor General’s Academic Medal. He won NSERC scholarships during both his undergraduate and graduate studies, and has served as a reviewer for ICCV, EMNLP, and ICML. He also enjoys mentoring undergraduate students doing research in machine learning.
Teng Zhang is currently pursuing a PhD at the Institute of Microelectronics in the School of Electronics Engineering and Computer Science (EECS) at Peking University.
Teng’s main research interests include the optimization and integration of nanoscale devices, new memory and storage systems, hardware security, and neuromorphic computing. He has published more than 10 technical papers and has presented at international conferences. Part of his work has been highlighted on nanotechweb.org, and he has also been selected for the special anniversary collection of the journal Nano Futures.
Teng received his BS in Electronic Science and Technology from Peking University in 2016. In the same year, he joined the group led by Professor Ru Huang.
Shengjia Zhao is a third-year PhD student at Stanford University, advised by Stefano Ermon.
Shengjia’s research interests include theoretically grounded techniques for deep learning. His primary research objective is unsupervised representation learning, since clear features and data representations are critical to AI. Humans effortlessly learn rich representations from perception, without any supervision – an ability that remains elusive for machine learning algorithms. Shengjia aims to improve representation learning by drawing on ideas from information theory, deep learning, and complexity theory, to learn representations that are hierarchical, interpretable, structured, and theoretically grounded.
Shengjia likes to travel and participate in relaxing sports in his spare time.