Paper

  • How does Inverse RL Scale to Large State Spaces? A Provably Efficient Approach

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Filippo Lazzati, Alberto Maria Metelli, Mirco Mutti, pp. 54820-54871, Advances in Neural Information Processing Systems 37
  • How Does Message Passing Improve Collaborative Filtering?

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Zhichun Guo, Clark Ju, Yozen Liu, Neil Shah, William Shiao, Yanfang Ye, Tong Zhao, pp. 8760-8784, Advances in Neural Information Processing Systems 37
  • How does PDE order affect the convergence of PINNs?

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Myungjoo Kang, Yesom Park, Changhoon Song, pp. 73-131, Advances in Neural Information Processing Systems 37
  • How Does Variance Shape the Regret in Contextual Bandits?

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Zeyu Jia, Jian Qian, Alexander Rakhlin, Chen-Yu Wei, pp. 83730-83785, Advances in Neural Information Processing Systems 37
  • How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Iz Beltagy, Khyathi Chandu, Pradeep Dasigi, Hannaneh Hajishirzi, Jack Hessel, Hamish Ivison, Tushar Khot, Kelsey Macmillan, Noah Smith, David Wadden, Yizhong Wang, pp. 74764-74786, Advances in Neural Information Processing Systems 36
  • How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Colin Sandon, Omid Saremi, pp. 27850-27895, Advances in Neural Information Processing Systems 37
  • How Far Should the UK Go with Negative Emission Technologies?

    ECOS 2023

    Semra Bakkaloglu, Matthias Mersch, Nixon Sunny, Christos Markides, Nilay Shah, Adam Hawkes, pp. 2939-2949, 36th International Conference on Efficiency, Cost, Optimization, Simulation and Environmental Impact of Energy Systems (ECOS 2023)
  • How hard are computer vision datasets? Calibrating dataset difficulty to viewing time

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Andrei Barbu, Jesse Cummings, Dan Gutfreund, Boris Katz, Xinyu Lin, David Mayo, pp. 11008-11036, Advances in Neural Information Processing Systems 36
  • How Interaction of Internal Forces and Moments Influence the Load-Bearing Capacity of Dowels

    World Conference on Timber Engineering 2025

    Elisabet Kuck, Carmen Sandhaas, Hans Joachim Blaß, pp. 2834-2840, 14th World Conference on Timber Engineering 2025 (WCTE 2025)
  • How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Madhu Advani, Chen Huang, Etai Littwin, Preetum Nakkiran, Omid Saremi, Joshua Susskind, Vimal Thilak, pp. 91300-91336, Advances in Neural Information Processing Systems 37
  • How many classifiers do we need?

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Liam Hodgkinson, Hyunsuk Kim, Michael Mahoney, Ryan Theisen, pp. 86458-86482, Advances in Neural Information Processing Systems 37
  • How many samples are needed to leverage smoothness?

    Neural Information Processing Systems Foundation, Inc. (NeurIPS)

    Vivien Cabannes, Stefano Vigogna, pp. 26776-26809, Advances in Neural Information Processing Systems 36