Neural Information Processing Systems Foundation, Inc. (NeurIPS)

  • What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks

    Nitesh Chawla, Kehan Guo, Taicheng Guo, Zhichun Guo, Zhenwen Liang, Bozhao Nan, Olaf Wiest, Xiangliang Zhang, pp. 59662–59688, Advances in Neural Information Processing Systems 36
  • What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness?

    Julia Kempe, Nikolaos Tsilivis, pp. 18116–18130, Advances in Neural Information Processing Systems 35
  • What Can Transformers Learn In-Context? a Case Study of Simple Function Classes

    Shivam Garg, Percy Liang, Dimitris Tsipras, Gregory Valiant, pp. 30583–30598, Advances in Neural Information Processing Systems 35
  • What Can We Learn from Unlearnable Datasets?

    Jonas Geiping, Micah Goldblum, Tom Goldstein, Pedro Sandoval-Segura, Vasu Singla, pp. 75372–75391, Advances in Neural Information Processing Systems 36
  • What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?

    David Evans, Fnu Suya, Yuan Tian, Xiao Zhang, pp. 34942–34980, Advances in Neural Information Processing Systems 36
  • What Do Deep Saliency Models Learn about Visual Attention?

    Shi Chen, Ming Jiang, Qi Zhao, pp. 9543–9555, Advances in Neural Information Processing Systems 36
  • What do Graph Neural Networks learn? Insights from Tropical Geometry

    Vikas Garg, Tuan Anh Pham, pp. 10988–11020, Advances in Neural Information Processing Systems 37
  • What does guidance do? A fine-grained analysis in a simple setting

    Sitan Chen, Muthu Chidambaram, Khashayar Gatmiry, Holden Lee, Jianfeng Lu, pp. 84968–85005, Advances in Neural Information Processing Systems 37
  • What Factors Affect Multi-Modal In-Context Learning? An In-Depth Exploration

    Wanxiang Che, Qiguang Chen, Zhi Chen, Hao Fei, Min Li, Libo Qin, pp. 123207–123236, Advances in Neural Information Processing Systems 37
  • What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding

    Nicolas Keriven, Samuel Vaiter, pp. 11823–11849, Advances in Neural Information Processing Systems 36
  • What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods

    Remi Cadene, Julien Colin, Thomas Fel, Thomas Serre, pp. 2832–2845, Advances in Neural Information Processing Systems 35
  • What If the Input is Expanded in OOD Detection?

    Bo Du, Bo Han, Tongliang Liu, Zengmao Wang, Boxuan Zhang, Jianing Zhu, pp. 21289–21329, Advances in Neural Information Processing Systems 37
  • What is a Good Metric to Study Generalization of Minimax Learners?

    Asuman Ozdaglar, Sarath Pattathil, Jiawei Zhang, Kaiqing Zhang, pp. 38190–38203, Advances in Neural Information Processing Systems 35
  • What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization

    Jonathan Crabbé, Nabeel Seedat, Hao Sun, Boris Van Breugel, Mihaela Van Der Schaar, pp. 4664–4684, Advances in Neural Information Processing Systems 36
  • What Is Missing For Graph Homophily? Disentangling Graph Homophily For Graph Neural Networks

    Lihui Chen, Sitao Luan, Yilun Zheng, pp. 68406–68452, Advances in Neural Information Processing Systems 37
  • What is my quantum computer good for? Quantum capability learning with physics-aware neural networks

    Daniel Hothem, Ashe Miller, Timothy Proctor, pp. 34846–34869, Advances in Neural Information Processing Systems 37
  • What is the Inductive Bias of Flatness Regularization? A Study of Deep Matrix Factorization Models

    Ching-Yao Chuang, Khashayar Gatmiry, Stefanie Jegelka, Zhiyuan Li, Tengyu Ma, Sashank Reddi, pp. 28040–28052, Advances in Neural Information Processing Systems 36
  • What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding Without Text Inputs

    Tal Shaharabany, Yoad Tewel, Lior Wolf, pp. 28222–28237, Advances in Neural Information Processing Systems 35
  • What Knowledge Gets Distilled in Knowledge Distillation?

    Yong Jae Lee, Yuheng Li, Yingyu Liang, Utkarsh Ojha, Anirudh Sundara Rajan, pp. 11037–11048, Advances in Neural Information Processing Systems 36
  • What Makes a "Good" Data Augmentation in Knowledge Distillation - A Statistical Perspective

    Yun Fu, Michael Jones, Suhas Lohit, Huan Wang, pp. 13456–13469, Advances in Neural Information Processing Systems 35
  • What Makes and Breaks Safety Fine-tuning? A Mechanistic Study

    Puneet Dokania, Samyak Jain, Tom Joy, Ekdeep Lubana, Kemal Oksuz, Amartya Sanyal, Philip Torr, pp. 93406–93478, Advances in Neural Information Processing Systems 37
  • What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights

    Yilun Chen, Jiangmiao Pang, Xiaojuan Qi, Xin Wen, Bingchen Zhao, pp. 36567–36601, Advances in Neural Information Processing Systems 37
  • What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement.

    Yotam Alexander, Nadav Cohen, Nimrod De La Vega, Noam Razin, pp. 40994–41033, Advances in Neural Information Processing Systems 36
  • What Makes Good Examples for Visual In-Context Learning?

    Ziwei Liu, Yuanhan Zhang, Kaiyang Zhou, pp. 17773–17794, Advances in Neural Information Processing Systems 36