| "What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| #InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| $\alpha$TC-VAE: On the relationship between Disentanglement and Diversity |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| $\infty$-Diff: Infinite Resolution Diffusion with Subsampled Mollified States |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| $\mathbb{D}^2$ Pruning: Message Passing for Balancing Diversity & Difficulty in Data Pruning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| $\mathcal{B}$-Coder: Value-Based Deep Reinforcement Learning for Program Synthesis |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| $\pi$2vec: Policy Representation with Successor Features |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| $\texttt{NAISR}$: A 3D Neural Additive Model for Interpretable Shape Representation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| $t^3$-Variational Autoencoder: Learning Heavy-tailed Data with Student's t and Power Divergence |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| 3D Feature Prediction for Masked-AutoEncoder-Based Point Cloud Pretraining |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| 3D Reconstruction with Generalizable Neural Fields using Scene Priors |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| 3D-Aware Hypothesis & Verification for Generalizable Relative Object Pose Estimation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| A 2-Dimensional State Space Layer for Spatial Inductive Bias |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Benchmark Study on Calibration |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Benchmark for Learning to Translate a New Language from One Grammar Book |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Black-box Approach for Non-stationary Multi-agent Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Branching Decoder for Set Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| A Characterization Theorem for Equivariant Networks with Point-wise Activations |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| A Cognitive Model for Learning Abstract Relational Structures from Memory-based Decision-Making Tasks |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| A Data-Driven Measure of Relative Uncertainty for Misclassification Detection |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Differentially Private Clustering Algorithm for Well-Clustered Graphs |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| A Discretization Framework for Robust Contextual Stochastic Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Dynamical View of the Question of Why |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Fast and Provable Algorithm for Sparse Phase Retrieval |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| A Flexible Generative Model for Heterogeneous Tabular EHR with Missing Modality |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| A Foundation Model for Error Correction Codes |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| A Framework for Inference Inspired by Human Memory Mechanisms |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A General Framework for User-Guided Bayesian Optimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| A Good Learner can Teach Better: Teacher-Student Collaborative Knowledge Distillation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Hierarchical Bayesian Model for Few-Shot Meta Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Lie Group Approach to Riemannian Batch Normalization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Linear Algebraic Framework for Counterfactual Generation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Multi-Level Framework for Accelerating Training Transformer Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Mutual Information Perspective on Federated Contrastive Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Neural Framework for Generalized Causal Sensitivity Analysis |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Newborn Embodied Turing Test for Comparing Object Segmentation Across Animals and Machines |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Plug-and-Play Image Registration Network |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| A Policy Gradient Method for Confounded POMDPs |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Precise Characterization of SGD Stability Using Loss Surface Geometry |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Primal-Dual Approach to Solving Variational Inequalities with General Constraints |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Probabilistic Framework for Modular Continual Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| A Progressive Training Framework for Spiking Neural Networks with Learnable Multi-hierarchical Model |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| A Quadratic Synchronization Rule for Distributed Deep Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A ROBUST DIFFERENTIAL NEURAL ODE OPTIMIZER |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| A Recipe for Improved Certifiable Robustness |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| A Restoration Network as an Implicit Prior |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| A Semantic Invariant Robust Watermark for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Simple Interpretable Transformer for Fine-Grained Image Classification and Analysis |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Simple Romance Between Multi-Exit Vision Transformer and Token Reduction |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| A Simple and Effective Pruning Approach for Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Simple and Scalable Representation for Graph Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Statistical Analysis of Wasserstein Autoencoders for Intrinsically Low-dimensional Data |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Study of Bayesian Neural Network Surrogates for Bayesian Optimization |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Sublinear Adversarial Training Algorithm |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A Topological Perspective on Demystifying GNN-Based Link Prediction Performance |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Unified Framework for Bayesian Optimization under Contextual Uncertainty |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| A Unified Sampling Framework for Solver Searching of Diffusion Probabilistic Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Unified and General Framework for Continual Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Variational Framework for Estimating Continuous Treatment Effects with Measurement Error |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| A Variational Perspective on Solving Inverse Problems with Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Versatile Causal Discovery Framework to Allow Causally-Related Hidden Variables |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A differentiable brain simulator bridging brain simulation and brain-inspired computing |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| A path-norm toolkit for modern networks: consequences, promises and challenges |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A representation-learning game for classes of prediction tasks |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| A unique M-pattern for micro-expression spotting in long videos |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| ACRF: Compressing Explicit Neural Radiance Fields via Attribute Compression |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| ADDP: Learning General Representations for Image Recognition and Generation with Alternating Denoising Diffusion Process |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ADOPD: A Large-Scale Document Page Decomposition Dataset |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AGILE3D: Attention Guided Interactive Multi-object 3D Segmentation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| ALAM: Averaged Low-Precision Activation for Memory-Efficient Training of Transformer Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| ARGS: Alignment as Reward-Guided Search |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| ARM: Refining Multivariate Forecasting with Adaptive Temporal-Contextual Learning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| ASID: Active Exploration for System Identification in Robotic Manipulation |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| ASMR: Activation-Sharing Multi-Resolution Coordinate Networks for Efficient Inference |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| AUC-CL: A Batchsize-Robust Framework for Self-Supervised Contrastive Representation Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AUGCAL: Improving Sim2Real Adaptation by Uncertainty Calibration on Augmented Synthetic Images |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Abstractors and relational cross-attention: An inductive bias for explicit relational reasoning in Transformers |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Accelerated Convergence of Stochastic Heavy Ball Method under Anisotropic Gradient Noise |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Accelerated Sampling with Stacked Restricted Boltzmann Machines |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Accelerating Data Generation for Neural Operators via Krylov Subspace Recycling |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Accelerating Sinkhorn algorithm with sparse Newton iterations |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Accurate Forgetting for Heterogeneous Federated Continual Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Achieving Fairness in Multi-Agent MDP Using Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Achieving Human Parity in Content-Grounded Datasets Generation |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
3 |
| Achieving Sample and Computational Efficient Reinforcement Learning by Action Space Reduction via Grouping |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Active Retrosynthetic Planning Aware of Route Quality |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Active Test-Time Adaptation: Theoretical Analyses and An Algorithm |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| AdaMerging: Adaptive Model Merging for Multi-Task Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adapting Large Language Models via Reading Comprehension |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Adapting to Distribution Shift by Visual Domain Prompt Generation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Adaptive Federated Learning with Auto-Tuned Clients |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Adaptive Instrument Design for Indirect Experiments |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Adaptive Rational Activations to Boost Deep Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Adaptive Regret for Bandits Made Possible: Two Queries Suffice |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Adaptive Regularization of Representation Rank as an Implicit Constraint of Bellman Equation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adaptive Retrieval and Scalable Indexing for k-NN Search with Cross-Encoders |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adaptive Self-training Framework for Fine-grained Scene Graph Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adaptive Sharpness-Aware Pruning for Robust Sparse Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adaptive Stochastic Gradient Algorithm for Black-box Multi-Objective Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Adaptive Window Pruning for Efficient Local Motion Deblurring |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Adaptive deep spiking neural network with global-local learning via balanced excitatory and inhibitory mechanism |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Addressing Loss of Plasticity and Catastrophic Forgetting in Continual Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Addressing Signal Delay in Deep Reinforcement Learning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| AdjointDPM: Adjoint Sensitivity Method for Gradient Backpropagation of Diffusion Probabilistic Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Advancing Pose-Guided Image Synthesis with Progressive Conditional Diffusion Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Advancing the Lower Bounds: an Accelerated, Stochastic, Second-order Method with Optimal Adaptation to Inexactness |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Adversarial Adaptive Sampling: Unify PINN and Optimal Transport for the Approximation of PDEs |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| Adversarial Attacks on Fairness of Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Adversarial AutoMixup |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Adversarial Causal Bayesian Optimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Adversarial Feature Map Pruning for Backdoor |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Adversarial Imitation Learning via Boosting |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Adversarial Supervision Makes Layout-to-Image Diffusion Models Thrive |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Adversarial Training Should Be Cast as a Non-Zero-Sum Game |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adversarial Training on Purification (AToP): Advancing Both Robustness and Generalization |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| AffineQuant: Affine Transformation Quantization for Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| AgentBench: Evaluating LLMs as Agents |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| AirPhyNet: Harnessing Physics-Guided Neural Networks for Air Quality Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Algorithms for Caching and MTS with reduced number of predictions |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Alice Benchmarks: Connecting Real World Re-Identification with the Synthetic |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
4 |
| Align With Purpose: Optimize Desired Properties in CTC Models with a General Plug-and-Play Framework |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Aligning Relational Learning with Lipschitz Fairness |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Alleviating Exposure Bias in Diffusion Models through Sampling with Shifted Time Steps |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
4 |
| AlpaGasus: Training a Better Alpaca with Fewer Data |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Alt-Text with Context: Improving Accessibility for Images on Twitter |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Amortized Network Intervention to Steer the Excitatory Point Processes |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AmortizedPeriod: Attention-based Amortized Inference for Periodicity Identification |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Amortizing intractable inference in large language models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| An Agnostic View on the Cost of Overfitting in (Kernel) Ridge Regression |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| An Analytical Solution to Gauss-Newton Loss for Direct Image Alignment |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| An Efficient Tester-Learner for Halfspaces |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| An Emulator for Fine-tuning Large Language Models using Small Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| An Extensible Framework for Open Heterogeneous Collaborative Perception |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| An Image Is Worth 1000 Lies: Transferability of Adversarial Images across Prompts on Vision-Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| An Intuitive Multi-Frequency Feature Representation for SO(3)-Equivariant Networks |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| An Investigation of Representation and Allocation Harms in Contrastive Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| An LLM can Fool Itself: A Prompt-Based Adversarial Attack |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| An Unforgeable Publicly Verifiable Watermark for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| An improved analysis of per-sample and per-update clipping in federated learning |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| An interpretable error correction method for enhancing code-to-code translation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| An operator preconditioning perspective on training in physics-informed machine learning |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
✅ |
2 |
| Analysis of Learning a Flow-based Generative Model from Limited Sample Complexity |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Analyzing and Improving Optimal-Transport-based Adversarial Networks |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Analyzing and Mitigating Object Hallucination in Large Vision-Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Annealing Self-Distillation Rectification Improves Adversarial Training |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AnyText: Multilingual Visual Text Generation and Editing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Approximately Piecewise E(3) Equivariant Point Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Approximating Nash Equilibria in Normal-Form Games via Stochastic Optimization |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| ArchLock: Locking DNN Transferability at the Architecture Level with a Zero-Cost Binary Predictor |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Are Bert Family Good Instruction Followers? A Study on Their Potential And Limitations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Are Human-generated Demonstrations Necessary for In-context Learning? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Are Models Biased on Text without Gender-related Language? |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators? |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Assessing Uncertainty in Similarity Scoring: Performance & Fairness in Face Recognition |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Asymptotically Free Sketched Ridge Ensembles: Risks, Cross-Validation, and Tuning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| At Which Training Stage Does Code Data Help LLMs Reasoning? |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| AttEXplore: Attribution for Explanation with model parameters eXploration |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Attention-Guided Contrastive Role Representations for Multi-agent Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Attention-based Iterative Decomposition for Tensor Product Representation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Augmented Bayesian Policy Search |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Augmenting Transformers with Recursively Composed Multi-grained Representations |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| AutoChunk: Automated Activation Chunk for Memory-Efficient Deep Learning Inference |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
2 |
| AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| AutoLoRa: An Automated Robust Fine-Tuning Framework |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| AutoVP: An Automated Visual Prompting Framework and Benchmark |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with TikZ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Automatic Functional Differentiation in JAX |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| BECLR: Batch Enhanced Contrastive Few-Shot Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| BEND: Benchmarking DNA Language Models on Biologically Meaningful Tasks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| BENO: Boundary-embedded Neural Operators for Elliptic PDEs |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| BRUSLEATTACK: A QUERY-EFFICIENT SCORE- BASED BLACK-BOX SPARSE ADVERSARIAL ATTACK |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Backdoor Contrastive Learning via Bi-level Trigger Optimization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Backdoor Federated Learning by Poisoning Backdoor-Critical Layers |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| BadEdit: Backdooring Large Language Models by Model Editing |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Balancing Act: Constraining Disparate Impact in Sparse Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Bandits with Replenishable Knapsacks: the Best of both Worlds |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| BarLeRIa: An Efficient Tuning Framework for Referring Image Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Batch normalization is sufficient for universal function approximation in CNNs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| BatchPrompt: Accomplish more with less |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Batched Low-Rank Adaptation of Foundation Models |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| BatteryML: An Open-source Platform for Machine Learning on Battery Degradation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| BayesDiff: Estimating Pixel-wise Uncertainty in Diffusion via Bayesian Inference |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| BayesPrompt: Prompting Large-Scale Pre-Trained Language Models on Few-shot Inference via Debiased Domain Abstraction |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Bayesian Bi-clustering of Neural Spiking Activity with Latent Structures |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Bayesian Coreset Optimization for Personalized Federated Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Bayesian Low-rank Adaptation for Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Bayesian Neural Controlled Differential Equations for Treatment Effect Estimation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Bayesian Optimization through Gaussian Cox Process Models for Spatio-temporal Data |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Beam Enumeration: Probabilistic Explainability For Sample Efficient Self-conditioned Molecular Design |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Beating Price of Anarchy and Gradient Descent without Regret in Potential Games |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Behaviour Distillation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Belief-Enriched Pessimistic Q-Learning against Adversarial State Perturbations |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Bellman Optimal Stepsize Straightening of Flow-Matching Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Benchmarking Algorithms for Federated Domain Generalization |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Benchmarking and Improving Generator-Validator Consistency of Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Benign Oscillation of Stochastic Gradient Descent with Large Learning Rate |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Bespoke Solvers for Generative Flow Models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Better Neural PDE Solvers Through Data-Free Mesh Movers |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Beyond Accuracy: Evaluating Self-Consistency of Code Large Language Models with IdentityChain |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Beyond IID weights: sparse and low-rank deep Neural Networks are also Gaussian Processes |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Beyond Memorization: Violating Privacy via Inference with Large Language Models |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Beyond Spatio-Temporal Representations: Evolving Fourier Transform for Temporal Graphs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Beyond Stationarity: Convergence Analysis of Stochastic Softmax Policy Gradient Methods |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Beyond Vanilla Variational Autoencoders: Detecting Posterior Collapse in Conditional and Hierarchical Variational Autoencoders |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Beyond task performance: evaluating and reducing the flaws of large multimodal models with in-context-learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Biased Temporal Convolution Graph Network for Time Series Forecasting with Missing Values |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Bidirectional Temporal Diffusion Model for Temporally Consistent Human Animation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Bilevel Optimization under Unbounded Smoothness: A New Algorithm and Convergence Analysis |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| BioBridge: Bridging Biomedical Foundation Models via Knowledge Graphs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Blending Imitation and Reinforcement Learning for Robust Policy Improvement |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| BooookScore: A systematic exploration of book-length summarization in the era of LLMs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Boosting Graph Anomaly Detection with Adaptive Message Passing |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Boosting Vanilla Lightweight Vision Transformers via Re-parameterization |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Boosting the Adversarial Robustness of Graph Neural Networks: An OOD Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Bootstrapping Variational Information Pursuit with Large Language and Vision Models for Interpretable Image Classification |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Boundary Denoising for Video Activity Localization |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Bounding Box Stability against Feature Dropout Reflects Detector Generalization across Environments |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Bounding the Expected Robustness of Graph Neural Networks Subject to Node Feature Attacks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Bounds on Representation-Induced Confounding Bias for Treatment Effect Estimation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Brain decoding: toward real-time reconstruction of visual perception |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| BrainLM: A foundation model for brain activity recordings |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Branch-GAN: Improving Text Generation with (not so) Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Bridging Neural and Symbolic Representations with Transitional Dictionary Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Bridging State and History Representations: Understanding Self-Predictive RL |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Bridging Vision and Language Spaces with Assignment Prediction |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| BroGNet: Momentum-Conserving Graph Neural Stochastic Differential Equation for Learning Brownian Dynamics |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Building Cooperative Embodied Agents Modularly with Large Language Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Butterfly Effects of SGD Noise: Error Amplification in Behavior Cloning and Autoregression |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Byzantine Robust Cooperative Multi-Agent Reinforcement Learning as a Bayesian Game |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| C-TPT: Calibrated Test-Time Prompt Tuning for Vision-Language Models via Text Feature Dispersion |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| CABINET: Content Relevance-based Noise Reduction for Table Question Answering |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CADS: Unleashing the Diversity of Diffusion Models through Condition-Annealed Sampling |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| CALICO: Self-Supervised Camera-LiDAR Contrastive Pre-training for BEV Perception |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| CAMBranch: Contrastive Learning with Augmented MILPs for Branching |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
4 |
| CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CAS: A Probability-Based Approach for Universal Condition Alignment Score |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| CCIL: Continuity-Based Data Augmentation for Corrective Imitation Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| CIFAR-10-Warehouse: Broad and More Realistic Testbeds in Model Generalization Analysis |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| CLAP: Collaborative Adaptation for Patchwork Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CLEX: Continuous Length Extrapolation for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CLIP the Bias: How Useful is Balancing Data in Multimodal Learning? |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| CLIP-MUSED: CLIP-Guided Multi-Subject Visual Neural Information Semantic Decoding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| CNN Kernels Can Be the Best Shapelets |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| CO2: Efficient Distributed Training with Full Communication-Computation Overlap |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| COCO-Periph: Bridging the Gap Between Human and Machine Perception in the Periphery |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| CODE REPRESENTATION LEARNING AT SCALE |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| COLEP: Certifiably Robust Learning-Reasoning Conformal Prediction via Probabilistic Circuits |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| COLLIE: Systematic Construction of Constrained Text Generation Tasks |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CORN: Contact-based Object Representation for Nonprehensile Manipulation of General Unseen Objects |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| COSA: Concatenated Sample Pretrained Vision-Language Foundation Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CPPO: Continual Learning for Reinforcement Learning with Human Feedback |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Cameras as Rays: Pose Estimation via Ray Diffusion |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Can LLM-Generated Misinformation Be Detected? |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Can Large Language Models Infer Causation from Correlation? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Can Transformers Capture Spatial Relations between Objects? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Can We Evaluate Domain Adaptation Models Without Target-Domain Labels? |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Can we get the best of both Binary Neural Networks and Spiking Neural Networks for Efficient Computer Vision? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Candidate Label Set Pruning: A Data-centric Perspective for Deep Partial-label Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Cascading Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Cauchy-Schwarz Divergence Information Bottleneck for Regression |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Causal Fairness under Unobserved Confounding: A Neural Sensitivity Framework |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Causal Inference with Conditional Front-Door Adjustment and Identifiable Variational Autoencoder |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Causal Modelling Agents: Causal Graph Discovery through Synergising Metadata- and Data-driven Reasoning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Causal Structure Recovery with Latent Variables under Milder Distributional and Graphical Assumptions |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Causal-StoNet: Causal Inference for High-Dimensional Complex Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| CausalLM is not optimal for in-context learning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| CausalTime: Realistically Generated Time-series for Benchmarking of Causal Discovery |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Causality-Inspired Spatial-Temporal Explanations for Dynamic Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Causally Aligned Curriculum Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| CellPLM: Pre-training of Cell Language Model Beyond Single Cells |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Certified Adversarial Robustness for Rate Encoded Spiking Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Chain of Hindsight aligns Language Models with Feedback |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Chain of Log-Concave Markov Chains |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Chain of Thought Empowers Transformers to Solve Inherently Serial Problems |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Chain-of-Experts: When LLMs Meet Complex Operations Research Problems |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Channel Vision Transformers: An Image Is Worth 1 x 16 x 16 Words |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Circuit Component Reuse Across Tasks in Transformer Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| CircuitNet 2.0: An Advanced Dataset for Promoting Machine Learning Innovations in Realistic Chip Design Environment |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Circumventing Concept Erasure Methods For Text-To-Image Generative Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CivRealm: A Learning and Reasoning Odyssey in Civilization for Decision-Making Agents |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Class Incremental Learning via Likelihood Ratio Based Task Prediction |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Class Probability Matching with Calibrated Networks for Label Shift Adaption |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Classification with Conceptual Safeguards |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Cleanba: A Reproducible and Efficient Distributed Reinforcement Learning Platform |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Clifford Group Equivariant Simplicial Message Passing Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ClimODE: Climate and Weather Forecasting with Physics-informed Neural ODEs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Closing the Curious Case of Neural Text Degeneration |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View. |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| CoBIT: A Contrastive Bi-directional Image-Text Generation Model |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| CoLiDE: Concomitant Linear DAG Estimation |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| CoRe-GD: A Hierarchical Framework for Scalable Graph Visualization with GNNs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| CoT3DRef: Chain-of-Thoughts Data-Efficient 3D Visual Grounding |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Coeditor: Leveraging Repo-level Diffs for Code Auto-editing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Combinatorial Bandits for Maximum Value Reward Function under Value-Index Feedback |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Combining Axes Preconditioners through Kronecker Approximation for Deep Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Communication-Efficient Federated Non-Linear Bandit Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Communication-Efficient Gradient Descent-Accent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| CompA: Addressing the Gap in Compositional Reasoning in Audio-Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Complete and Efficient Graph Transformers for Crystal Material Property Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Complex priors and flexible inference in recurrent circuits with dendritic nonlinearities |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Compose and Conquer: Diffusion-Based 3D Depth Aware Composable Image Synthesis |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Composed Image Retrieval with Text Feedback via Multi-grained Uncertainty Regularization |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Compositional Conservatism: A Transductive Approach in Offline Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Compositional Generative Inverse Design |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Compositional Preference Models for Aligning LMs |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Compressed Context Memory for Online Language Model Interaction |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Compressing LLMs: The Truth is Rarely Pure and Never Simple |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Compressing Latent Space via Least Volume |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ConR: Contrastive Regularizer for Deep Imbalanced Regression |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Concept Bottleneck Generative Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
3 |
| Conditional Information Bottleneck Approach for Time Series Imputation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Conditional Instrumental Variable Regression with Representation Learning for Causal Inference |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Conditional Variational Diffusion Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Confidential-DPproof: Confidential Proof of Differentially Private Training |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Conformal Inductive Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Conformal Language Modeling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Conformal Prediction via Regression-as-Classification |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Conformal Risk Control |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Confronting Reward Model Overoptimization with Constrained RLHF |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ConjNorm: Tractable Density Estimation for Out-of-Distribution Detection |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Consistency Training with Learnable Data Augmentation for Graph Anomaly Detection with Limited Supervision |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Consistency-guided Prompt Learning for Vision-Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Consistent Multi-Class Classification from Multiple Unlabeled Datasets |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Consistent Video-to-Video Transfer Using Synthetic Dataset |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Consistent algorithms for multi-label classification with macro-at-$k$ metrics |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Constrained Bi-Level Optimization: Proximal Lagrangian Value Function Approach and Hessian-free Algorithm |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Constrained Decoding for Cross-lingual Label Projection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Constraint-Free Structure Learning with Smooth Acyclic Orientations |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Constructing Adversarial Examples for Vertical Federated Learning: Optimal Client Corruption through Multi-Armed Bandit |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Context is Environment |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Context-Aware Meta-Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| ContextRef: Evaluating Referenceless Metrics for Image Description Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Contextual Bandits with Online Neural Regression |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Continual Learning in the Presence of Spurious Correlations: Analyses and a Simple Baseline |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Continual Momentum Filtering on Parameter Space for Online Test-time Adaptation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Continuous Field Reconstruction from Sparse Observations with Implicit Neural Networks |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Continuous Invariance Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Continuous-Multiple Image Outpainting in One-Step via Positional Query and A Diffusion-based Approach |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Contrastive Difference Predictive Coding |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Contrastive Learning is Spectral Clustering on Similarity Graph |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Contrastive Preference Learning: Learning from Human Feedback without Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ControlVideo: Training-free Controllable Text-to-video Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Controlled Text Generation via Language Model Arithmetic |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Controlling Vision-Language Models for Multi-Task Image Restoration |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Convergence of Bayesian Bilevel Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Conversational Drug Editing Using Retrieval and Domain Feedback |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Convolutional Deep Kernel Machines |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Coordinate-Aware Modulation for Neural Fields |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Copilot4D: Learning Unsupervised World Models for Autonomous Driving via Discrete Diffusion |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Copula Conformal prediction for multi-step time series prediction |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Correlated Noise Provably Beats Independent Noise for Differentially Private Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Counterfactual Density Estimation using Kernel Stein Discrepancies |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Counting Graph Substructures with Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Course Correcting Koopman Representations |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| CrIBo: Self-Supervised Learning via Cross-Image Object-Level Bootstrapping |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Critical Learning Periods Emerge Even in Deep Linear Networks |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Cross-Modal Contextualized Diffusion Models for Text-Guided Visual Generation and Editing |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| CrossLoco: Human Motion Driven Control of Legged Robots via Guided Unsupervised Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Crystalformer: Infinitely Connected Attention for Periodic Structure Encoding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Curiosity-driven Red-teaming for Large Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Curriculum reinforcement learning for quantum architecture search under hardware errors |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Customizable Combination of Parameter-Efficient Modules for Multi-Task Learning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Cycle Consistency Driven Object Discovery |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| DAFA: Distance-Aware Fair Adversarial Training |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| DAM: Towards a Foundation Model for Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DATS: Difficulty-Aware Task Sampler for Meta-Learning Physics-Informed Neural Networks |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| DDMI: Domain-agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| DEEP NEURAL NETWORK INITIALIZATION WITH SPARSITY INDUCING ACTIVATIONS |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| DENEVIL: TOWARDS DECIPHERING AND NAVIGATING THE ETHICAL VALUES OF LARGE LANGUAGE MODELS VIA INSTRUCTION LEARNING |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| DIFFTACTILE: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DMBP: Diffusion model-based predictor for robust offline reinforcement learning against state observation perturbations |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DMV3D: Denoising Multi-view Diffusion Using 3D Large Reconstruction Model |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| DNA-GPT: Divergent N-Gram Analysis for Training-Free Detection of GPT-Generated Text |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genomes |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DORSal: Diffusion for Object-centric Representations of Scenes $\textit{et al.}$ |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| DOS: Diverse Outlier Sampling for Out-of-Distribution Detection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| DP-SGD Without Clipping: The Lipschitz Neural Network Way |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| DREAM: Dual Structured Exploration with Mixup for Open-set Graph Domain Adaption |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DSPy: Compiling Declarative Language Model Calls into State-of-the-Art Pipelines |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| DV-3DLane: End-to-end Multi-modal 3D Lane Detection with Dual-view Representation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Data Debugging with Shapley Importance over Machine Learning Pipelines |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Data Filtering Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Data-independent Module-aware Pruning for Hierarchical Vision Transformers |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| De novo Protein Design Using Geometric Vector Field Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Debiased Collaborative Filtering with Kernel-Based Causal Balancing |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Debiasing Algorithm through Model Adaptation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Debiasing Attention Mechanism in Transformer without Demographics |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Deceptive Fairness Attacks on Graphs via Meta Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Decodable and Sample Invariant Continuous Object Encoder |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Decoding Natural Images from EEG for Object Recognition |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DecompOpt: Controllable and Decomposed Diffusion Models for Structure-based Molecular Optimization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Decongestion by Representation: Learning to Improve Economic Welfare in Marketplaces |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Decoupled Marked Temporal Point Process using Neural Ordinary Differential Equations |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Decoupling Weighing and Selecting for Integrating Multiple Graph Pre-training Tasks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Decoupling regularization from the action space |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Deep Confident Steps to New Pockets: Strategies for Docking Generalization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Deep Generative Clustering with Multimodal Diffusion Variational Autoencoders |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Deep Geodesic Canonical Correlation Analysis for Covariance-Based Neuroimaging Data |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Deep Neural Networks Tend To Extrapolate Predictably |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Deep Orthogonal Hypersphere Compression for Anomaly Detection |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Deep Reinforcement Learning Guided Improvement Heuristic for Job Shop Scheduling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Deep Reinforcement Learning for Modelling Protein Complexes |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Deep SE(3)-Equivariant Geometric Reasoning for Precise Placement Tasks | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Deep Temporal Graph Clustering | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DeepSPF: Spherical SO(3)-Equivariant Patches for Scan-to-CAD Estimation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DeepZero: Scaling Up Zeroth-Order Optimization for Deep Model Training | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Defining Expertise: Applications to Treatment Effect Estimation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Defining and extracting generalizable interaction primitives from DNNs | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Delta-AI: Local objectives for amortized inference in sparse graphical models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Democratizing Fine-grained Visual Recognition with Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Demonstration-Regularized RL | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Demystifying CLIP Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Demystifying Embedding Spaces using Large Language Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Demystifying Linear MDPs and Novel Dynamics Aggregation Framework | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Demystifying Local & Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Demystifying Poisoning Backdoor Attacks from a Statistical Perspective | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Denoising Diffusion Bridge Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Denoising Diffusion Step-aware Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Denoising Diffusion via Image-Based Rendering | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Denoising Task Routing for Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Designing Skill-Compatible AI: Methodologies and Frameworks in Chess | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Detecting Machine-Generated Texts by Multi-Population Aware Optimization for Maximum Mean Discrepancy | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Detecting Pretraining Data from Large Language Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Detecting, Explaining, and Mitigating Memorization in Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Diagnosing Transformers: Illuminating Feature Spaces for Clinical Decision-Making | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dichotomy of Early and Late Phase Implicit Biases Can Provably Induce Grokking | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Dictionary Contrastive Learning for Efficient Local Supervision without Auxiliary Networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| DiffAR: Denoising Diffusion Autoregressive Model for Raw Speech Waveform Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DiffEnc: Variational Diffusion with a Learned Encoder | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Diffeomorphic Mesh Deformation via Efficient Optimal Transport for Cortical Surface Reconstruction | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Differentiable Euler Characteristic Transforms for Shape Classification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Differentiable Learning of Generalized Structured Matrices for Efficient Deep Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Differentially Private Synthetic Data via Foundation Model APIs 1: Images | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Diffusion Model for Dense Matching | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion Models for Multi-Task Generative Modeling | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Diffusion Posterior Sampling for Linear Inverse Problem Solving: A Filtering Perspective | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Diffusion Sampling with Momentum for Mitigating Divergence Artifacts | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion-TS: Interpretable Diffusion for General Time Series Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiffusionSat: A Generative Foundation Model for Satellite Imagery | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Directly Fine-Tuning Diffusion Models on Differentiable Rewards | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dirichlet-based Per-Sample Weighting by Transition Matrix for Noisy Label Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Discovering Failure Modes of Text-guided Diffusion Models via Adversarial Search | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Discovering Temporally-Aware Reinforcement Learning Algorithms | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Discovering modular solutions that generalize compositionally | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| DisenBooth: Identity-Preserving Disentangled Tuning for Subject-Driven Text-to-Image Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Disentangling Time Series Representations via Contrastive Independence-of-Support on l-Variational Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dissecting Sample Hardness: A Fine-Grained Analysis of Hardness Characterization Methods for Data-Centric AI | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Dissecting learning and forgetting in language model finetuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DistillSpec: Improving Speculative Decoding via Knowledge Distillation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Distinguished In Uniform: Self-Attention Vs. Virtual Nodes | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Distributionally Robust Optimization with Bias and Variance Reduction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DittoGym: Learning to Control Soft Shape-Shifting Robots | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diverse Projection Ensembles for Distributional Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Divide and not forget: Ensemble of selectively trained experts in Continual Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Diving Segmentation Model into Pixels | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Do Generated Data Always Help Contrastive Learning? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Does CLIP’s generalization performance mainly stem from high train-test similarity? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Does Progress On Object Recognition Benchmarks Improve Generalization on Crowdsourced, Global Data? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Does Writing with Language Models Reduce Content Diversity? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Domain Randomization via Entropy Maximization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Domain constraints improve risk prediction when outcome data is missing | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Domain-Agnostic Molecular Generation with Chemical Feedback | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Don't Judge by the Look: Towards Motion Coherent Video Representation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Don't Play Favorites: Minority Guidance for Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Doubly Robust Instance-Reweighted Adversarial Training | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Doubly Robust Proximal Causal Learning for Continuous Treatments | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DrS: Learning Reusable Dense Rewards for Multi-Stage Tasks | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DreamClean: Restoring Clean Image Using Deep Diffusion Prior | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| DreamFlow: High-quality text-to-3D generation by Approximating Probability Flow | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DreamLLM: Synergistic Multimodal Comprehension and Creation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DreamTime: An Improved Optimization Strategy for Diffusion-Guided 3D Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dropout Enhanced Bilevel Training | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dropout-Based Rashomon Set Exploration for Efficient Predictive Multiplicity Estimation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Dual Associated Encoder for Face Restoration | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dual RL: Unification and New Methods for Reinforcement and Imitation Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dual-Encoders for Extreme Multi-label Classification | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| DyST: Towards Dynamic Neural Scene Representations on Real-World Videos | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DynaVol: Unsupervised Learning for Dynamic Scenes through Object-Centric Voxelization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Dynamic Discounted Counterfactual Regret Minimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dynamic Layer Tying for Parameter-Efficient Transformers | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dynamic Neighborhood Construction for Structured Large Discrete Action Spaces | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dynamic Neural Response Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Dynamic Sparse Training with Structured Sparsity | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dynamics-Informed Protein Design with Structure Conditioning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| EBMDock: Neural Probabilistic Protein-Protein Docking via a Differentiable Energy Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| EControl: Fast Distributed Optimization with Compression and Error Control | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| ED-NeRF: Efficient Text-Guided Editing of 3D Scene With Latent Space NeRF | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| EMO: EARTH MOVER DISTANCE OPTIMIZATION FOR AUTO-REGRESSIVE LANGUAGE MODELING | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| EQA-MX: Embodied Question Answering using Multimodal Expression | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| EX-Graph: A Pioneering Dataset Bridging Ethereum and X | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Early Neuron Alignment in Two-layer ReLU Networks with Small Initialization | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Early Stopping Against Label Noise Without Validation Data | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| EasyTPP: Towards Open Benchmarking Temporal Point Processes | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Effective Data Augmentation With Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Effective Structural Encodings via Local Curvature Profiles | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Effective and Efficient Federated Tree Learning on Hybrid Data | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Effective pruning of web-scale datasets based on complexity of concept clusters | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Efficient Backpropagation with Variance Controlled Adaptive Sampling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Efficient Continual Finite-Sum Minimization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Efficient ConvBN Blocks for Transfer Learning and Beyond | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Efficient Dynamics Modeling in Interactive Environments with Koopman Theory | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Efficient Heterogeneous Meta-Learning via Channel Shuffling Modulation | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Efficient Integrators for Diffusion Generative Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Efficient Inverse Multiagent Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| Efficient Modulation for Vision Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Efficient Multi-agent Reinforcement Learning by Planning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Efficient Planning with Latent Diffusion | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Efficient Score Matching with Deep Equilibrium Layers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Efficient Sharpness-Aware Minimization for Molecular Graph Transformer Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Efficient Streaming Language Models with Attention Sinks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Efficient Subgraph GNNs by Learning Effective Selection Policies | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Efficient and Scalable Graph Generation through Iterative Local Expansion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Efficient local linearity regularization to overcome catastrophic overfitting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Efficient-3Dim: Learning a Generalizable Single-image Novel-view Synthesizer in One Day | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Efficiently Computing Similarities to Private Datasets | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Elastic Feature Consolidation For Cold Start Exemplar-Free Incremental Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Elucidating the Exposure Bias in Diffusion Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Elucidating the design space of classifier-guided diffusion generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Embarrassingly Simple Dataset Distillation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| EmerDiff: Emerging Pixel-level Semantic Knowledge in Diffusion Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Emergent Communication with Conversational Repair | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Empirical Likelihood for Fair Classification | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Emu: Generative Pretraining in Multimodality | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Enabling Language Models to Implicitly Learn Self-Improvement | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Encoding Unitig-level Assembly Graphs with Heterophilous Constraints for Metagenomic Contigs Binning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| End-to-End (Instance)-Image Goal Navigation through Correspondence as an Emergent Phenomenon | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Energy-based Automated Model Evaluation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| Energy-conserving equivariant GNN for elasticity of lattice architected metamaterials | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Energy-guided Entropic Neural Optimal Transport | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Enhanced Face Recognition using Intra-class Incoherence Constraint | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Enhancing Contrastive Learning for Ordinal Regression via Ordinal Content Preserved Data Augmentation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Enhancing Group Fairness in Online Settings Using Oblique Decision Forests | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Enhancing Human Experience in Human-Agent Collaboration: A Human-Centered Modeling Approach Based on Positive Human Gain | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Enhancing Human-AI Collaboration Through Logic-Guided Reasoning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Enhancing Instance-Level Image Classification with Set-Level Labels | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Enhancing Neural Subset Selection: Integrating Background Information into Set Representations | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Enhancing Neural Training via a Correlated Dynamics Model | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Enhancing Tail Performance in Extreme Classifiers by Label Variance Reduction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Enhancing Transfer Learning with Flexible Nonparametric Posterior Sampling | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Enhancing Transferable Adversarial Attacks on Vision Transformers through Gradient Normalization Scaling and High-Frequency Adaptation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Ensemble Distillation for Unsupervised Constituency Parsing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 5 |
| Entity-Centric Reinforcement Learning for Object Manipulation from Pixels | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Entropy Coding of Unordered Data Structures | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Entropy-MCMC: Sampling from Flat Basins with Ease | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Epitopological learning and Cannistraci-Hebb network shape intelligence brain-inspired theory for ultra-sparse advantage in deep learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Equivariant Matrix Function Neural Networks | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Equivariant Scalar Fields for Molecular Docking with Fast Fourier Transforms | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Escape Sky-high Cost: Early-stopping Self-Consistency for Multi-step Reasoning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Estimating Conditional Mutual Information for Dynamic Feature Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Estimating Shape Distances on Neural Representations with Limited Samples | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Eureka: Human-Level Reward Design via Coding Large Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Evaluating Language Model Agency Through Negotiations | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Evaluating Large Language Models at Evaluating Instruction Following | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Evaluating Representation Learning on the Protein Structure Universe | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Evaluating the Zero-shot Robustness of Instruction-tuned Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| EventRPG: Event Data Augmentation with Relevance Propagation Guidance | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Expected flow networks in stochastic environments and two-player zero-sum games | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Experimental Design for Multi-Channel Imaging via Task-Driven Feature Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Explaining Kernel Clustering via Decision Trees | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Explaining Time Series via Contrastive and Locally Sparse Perturbations | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Exploring Diffusion Time-steps for Unsupervised Representation Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Exploring Effective Stimulus Encoding via Vision System Modeling for Visual Prostheses | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Exploring Target Representations for Masked Autoencoders | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Exploring Weight Balancing on Long-Tailed Recognition Problem | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| Exploring the Promise and Limits of Real-Time Recurrent Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Exploring the cloud of feature interaction scores in a Rashomon set | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Exposing Text-Image Inconsistency Using Diffusion Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Expressive Losses for Verified Robustness via Convex Combinations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Expressivity of ReLU-Networks under Convex Relaxations | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Extending Power of Nature from Binary to Real-Valued Graph Learning in Real World | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| FITS: Modeling Time Series with $10k$ Parameters | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| FOSI: Hybrid First and Second Order Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| FROSTER: Frozen CLIP is A Strong Teacher for Open-Vocabulary Action Recognition | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Facing the Elephant in the Room: Visual Prompt Tuning or Full finetuning? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Fair Classifiers that Abstain without Harm | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Fair and Efficient Contribution Valuation for Vertical Federated Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| FairSeg: A Large-Scale Medical Image Segmentation Dataset for Fairness Learning Using Segment Anything Model with Fair Error-Bound Scaling | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| FairerCLIP: Debiasing CLIP's Zero-Shot Predictions using Functions in RKHSs | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Faithful Rule Extraction for Differentiable Rule Learning Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Faithful Vision-Language Interpretation via Concept Bottleneck Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Fake It Till Make It: Federated Learning with Consensus-Oriented Generation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Fantastic Generalization Measures are Nowhere to be Found | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Fast Ensembling with Diffusion Schrödinger Bridge | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Fast Equilibrium of SGD in Generic Situations | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Fast Hyperboloid Decision Tree Algorithms | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Fast Imitation via Behavior Foundation Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Fast Updating Truncated SVD for Representation Learning with Sparse Matrices | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Fast Value Tracking for Deep Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Fast and unified path gradient estimators for normalizing flows | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Fast, Expressive $\mathrm{SE}(n)$ Equivariant Networks through Weight-Sharing in Position-Orientation Space | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Fast-ELECTRA for Efficient Pre-training | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Faster Approximation of Probabilistic and Distributional Values via Least Squares | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Faster Sampling from Log-Concave Densities over Polytopes via Efficient Linear Solvers | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| FasterViT: Fast Vision Transformers with Hierarchical Attention | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| FeatUp: A Model-Agnostic Framework for Features at Any Resolution | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Feature Collapse | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Feature emergence via margin maximization: case studies in algebraic tasks | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Feature-aligned N-BEATS with Sinkhorn divergence | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| FedCDA: Federated Learning with Cross-rounds Divergence-aware Aggregation | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices Using a Computing Power-Aware Scheduler | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| FedDA: Faster Adaptive Gradient Methods for Federated Constrained Optimization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| FedHyper: A Universal and Robust Learning Rate Scheduler for Federated Learning with Hypergradient Descent | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| FedImpro: Measuring and Improving Client Update in Federated Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| FedInverse: Evaluating Privacy Leakage in Federated Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| FedLoGe: Joint Local and Generic Federated Learning under Long-tailed Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| FedTrans: Client-Transparent Utility Estimation for Robust Federated Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| FedWon: Triumphing Multi-domain Federated Learning Without Normalization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Federated Causal Discovery from Heterogeneous Data | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Federated Orthogonal Training: Mitigating Global Catastrophic Forgetting in Continual Federated Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Federated Q-Learning: Linear Regret Speedup with Low Communication Cost | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Federated Recommendation with Additive Personalization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Federated Text-driven Prompt Generation for Vision-Language Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Federated Wasserstein Distance | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Ferret: Refer and Ground Anything Anywhere at Any Granularity | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Few-Shot Detection of Machine-Generated Text using Style Representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Few-shot Hybrid Domain Adaptation of Image Generator | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Fiber Monte Carlo | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Fine-Tuned Language Models Generate Stable Inorganic Materials as Text | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Fine-Tuning Language Models for Factuality | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Finetuning Text-to-Image Diffusion Models for Fairness | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Finite Scalar Quantization: VQ-VAE Made Simple | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Finite-State Autoregressive Entropy Coding for Efficient Learned Lossless Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Finite-Time Analysis of On-Policy Heterogeneous Federated Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| First-order ANIL provably learns representations despite overparametrisation | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| Fixed Non-negative Orthogonal Classifier: Inducing Zero-mean Neural Collapse with Feature Dimension Separation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Fixed-Budget Differentially Private Best Arm Identification | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Flat Minima in Linear Estimation and an Extended Gauss Markov Theorem | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| Flow Matching on General Geometries | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Flow to Better: Offline Preference-based Reinforcement Learning via Preferred Trajectory Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Follow-Up Differential Descriptions: Language Models Resolve Ambiguities for Image Classification | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Forward $\chi^2$ Divergence Based Variational Importance Sampling | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Forward Learning of Graph Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Foundation Model-oriented Robustness: Robust Image Model Evaluation with Pretrained Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Fourier Transporter: Bi-Equivariant Robotic Manipulation in 3D | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| FreeDyG: Frequency Enhanced Continuous-Time Dynamic Graph Model for Link Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| FreeReg: Image-to-Point Cloud Registration Leveraging Pretrained Diffusion Models and Monocular Depth Estimators | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Frequency-Aware Transformer for Learned Image Compression | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| From Graphs to Hypergraphs: Hypergraph Projection and its Reconstruction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| From Molecules to Materials: Pre-training Large Generalizable Models for Atomic Property Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| From Posterior Sampling to Meaningful Diversity in Image Restoration | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| From Sparse to Soft Mixtures of Experts | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| From Zero to Turbulence: Generative Modeling for 3D Flow Simulation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Frozen Transformers in Language Models Are Effective Visual Encoder Layers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Fully Hyperbolic Convolutional Neural Networks for Computer Vision | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Function Vectors in Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Function-space Parameterization of Neural Networks for Sequential Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Functional Bayesian Tucker Decomposition for Continuous-indexed Tensor Data | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Functional Interpolation for Relative Positions improves Long Context Transformers | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Fusing Models with Complementary Expertise | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Fusion Is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Future Language Modeling from Temporal Document History | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| G$^2$N$^2$ : Weisfeiler and Lehman go grammatical | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| GAFormer: Enhancing Timeseries Transformers Through Group-Aware Embeddings | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| GAIA: Zero-shot Talking Avatar Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| GAIA: a benchmark for General AI Assistants | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| GENOME: Generative Neuro-Symbolic Visual Reasoning by Growing and Reusing Modules | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| GIM: Learning Generalizable Image Matcher From Internet Videos | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| GIO: Gradient Information Optimization for Training Dataset Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| GNNBoundary: Towards Explaining Graph Neural Networks through the Lens of Decision Boundaries | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| GNNX-BENCH: Unravelling the Utility of Perturbation-based GNN Explainers through In-depth Benchmarking | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| GNeRP: Gaussian-guided Neural Reconstruction of Reflective Objects with Noisy Polarization Priors | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| GOAt: Explaining Graph Neural Networks via Graph Output Attribution | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| GPAvatar: Generalizable and Precise Head Avatar from Image(s) | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| GRANDE: Gradient-Based Decision Tree Ensembles for Tabular Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| GRAPH-CONSTRAINED DIFFUSION FOR END-TO-END PATH PLANNING | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| GROOT: Learning to Follow Instructions by Watching Gameplay Videos | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| GTMGC: Using Graph Transformer to Predict Molecule’s Ground-State Conformation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Gaining Wisdom from Setbacks: Aligning Large Language Models via Mistake Analysis | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Gen-Z: Generative Zero-Shot Text Classification with Contextualized Label Descriptions | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| GenCorres: Consistent Shape Matching via Coupled Implicit-Explicit Shape Generative Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| GenSim: Generating Robotic Simulation Tasks via Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Gene Regulatory Network Inference in the Presence of Dropouts: a Causal View | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| GeneOH Diffusion: Towards Generalizable Hand-Object Interaction Denoising via Denoising Diffusion | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| General Graph Random Features | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| General Stability Analysis for Zeroth-Order Optimization Algorithms | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Generalization error of spectral algorithms | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Generalization in diffusion models arises from geometry-adaptive harmonic representations | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Generalization of Scaled Deep ResNets in the Mean-Field Regime | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Generalized Neural Sorting Networks with Error-Free Differentiable Swap Functions | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Generalized Policy Iteration using Tensor Approximation for Hybrid Control | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Generalized Schrödinger Bridge Matching | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Generating Images with 3D Annotations Using Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Generating Pragmatic Examples to Train Neural Program Synthesizers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Generative Adversarial Equilibrium Solvers | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Generative Human Motion Stylization in Latent Space | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Generative Judge for Evaluating Alignment | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Generative Learning for Financial Time Series with Irregular and Scale-Invariant Patterns | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Generative Learning for Solving Non-Convex Problem with Multi-Valued Input-Solution Mapping | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Generative Modeling with Phase Stochastic Bridge | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Generative Pre-training for Speech with Flow Matching | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Generative Sliced MMD Flows with Riesz Kernels | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| GeoDiffusion: Text-Prompted Geometric Control for Object Detection Data Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| GeoLLM: Extracting Geospatial Knowledge from Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Geographic Location Encoding with Spherical Harmonics and Sinusoidal Representation Networks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Geometrically Aligned Transfer Encoder for Inductive Transfer in Regression Tasks | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Geometry-Aware Projective Mapping for Unbounded Neural Radiance Fields | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Get more for less: Principled Data Selection for Warming Up Fine-Tuning in LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Ghost on the Shell: An Expressive Representation of General 3D Shapes | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Global Optimality for Non-linear Constrained Restoration Problems via Invexity | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| GlucoBench: Curated List of Continuous Glucose Monitoring Datasets with Prediction Benchmarks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Going Beyond Neural Network Feature Similarity: The Network Feature Complexity and Its Interpretation Using Category Theory | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Goodhart's Law in Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Gradual Domain Adaptation via Gradient Flow | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Gradual Optimization Learning for Conformational Energy Minimization | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Graph Generation with $K^2$-trees | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Graph Lottery Ticket Automated | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Graph Metanetworks for Processing Diverse Neural Architectures | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Graph Neural Networks for Learning Equivariant Representations of Neural Networks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Graph Parsing Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Graph Transformers on EHRs: Better Representation Improves Downstream Performance | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Graph-based Virtual Sensing from Sparse and Partial Multivariate Observations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| GraphCare: Enhancing Healthcare Predictions with Personalized Knowledge Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| GraphChef: Decision-Tree Recipes to Explain Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| GraphPulse: Topological representations for temporal graph property prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Graphical Multioutput Gaussian Process with Attention | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Grokking as a First Order Phase Transition in Two Layer Networks | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| Grokking as the transition from lazy to rich training dynamics | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Grokking in Linear Estimators -- A Solvable Model that Groks without Understanding | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Grounded Object-Centric Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Grounding Language Plans in Demonstrations Through Counterfactual Perturbations | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| Grounding Multimodal Large Language Models to the World | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Group Preference Optimization: Few-Shot Alignment of Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Guaranteed Approximation Bounds for Mixed-Precision Neural Operators | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Guess & Sketch: Language Model Guided Transpilation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Guiding Instruction-based Image Editing via Multimodal Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| H-GAP: Humanoid Control with a Generalist Planner | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| H2O-SDF: Two-phase Learning for 3D Indoor Reconstruction using Object Surface Fields | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HIFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| HYPO: Hyperspherical Out-Of-Distribution Generalization | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Habitat 3.0: A Co-Habitat for Humans, Avatars, and Robots | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Harnessing Density Ratios for Online Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Headless Language Models: Learning without Predicting with Contrastive Weight Tying | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Heterogeneous Personalized Federated Learning by Local-Global Updates Mixing via Convergence Rate | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| HiGen: Hierarchical Graph Generative Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| High-dimensional SGD aligns with emerging outlier eigenspaces | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Hindsight PRIORs for Reward Learning from Human Preferences | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| HoloNets: Spectral Convolutions do extend to Directed Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Horizon-Free Regret for Linear Markov Decision Processes | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| How Does Unlabeled Data Provably Help Out-of-Distribution Detection? | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| How Over-Parameterization Slows Down Gradient Descent in Matrix Sensing: The Curses of Symmetry and Initialization | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks? | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| How connectivity structure shapes rich and lazy learning in neural circuits | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| How do Language Models Bind Entities in Context? | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| How to Capture Higher-order Correlations? Generalizing Matrix Softmax Attention to Kronecker Computation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| How to Fine-Tune Vision Models with SGD | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Human Feedback is not Gold Standard | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Human Motion Diffusion as a Generative Prior | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hybrid Directional Graph Neural Network for Molecules | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Hybrid Distillation: Connecting Masked Autoencoders with Contrastive Learners | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Hybrid Internal Model: Learning Agile Legged Locomotion with Simulated Robot Response | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hybrid Sharing for Multi-Label Image Classification | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HypeBoy: Generative Self-Supervised Representation Learning on Hypergraphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Hyper Evidential Deep Learning to Quantify Composite Classification Uncertainty | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| HyperAttention: Long-context Attention in Near-Linear Time | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Hypergraph Dynamic System | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Hypothesis Search: Inductive Reasoning with Language Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| I-PHYRE: Interactive Physical Reasoning | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| IMPUS: Image Morphing with Perceptually-Uniform Sampling Using Diffusion Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| INViTE: INterpret and Control Vision-Language Models with Text Explanations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Idempotence and Perceptual Image Compression | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Idempotent Generative Network | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Identifiable Latent Polynomial Causal Models through the Lens of Change | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Identifying Policy Gradient Subspaces | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Identifying Representations for Intervention Extrapolation | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Identifying the Risks of LM Agents with an LM-Emulated Sandbox | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Illusory Attacks: Information-theoretic detectability matters in adversarial attacks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Image Background Serves as Good Proxy for Out-of-distribution Data | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Image Clustering Conditioned on Text Criteria | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Image Clustering via the Principle of Rate Reduction in the Age of Pretrained Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Image Inpainting via Iteratively Decoupled Probabilistic Modeling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Image Inpainting via Tractable Steering of Diffusion Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Image Translation as Diffusion Visual Programmers | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Image2Sentence based Asymmetrical Zero-shot Composed Image Retrieval | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ImageNet-OOD: Deciphering Modern Out-of-Distribution Detection Algorithms | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| ImagenHub: Standardizing the evaluation of conditional image generation models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Imitation Learning from Observation with Automatic Discount Scheduling | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Impact of Computation in Integral Reinforcement Learning for Continuous-Time Control | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Implicit Gaussian process representation of vector fields over arbitrary latent manifolds | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | 1 |
| Implicit Maximum a Posteriori Filtering via Adaptive Optimization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Implicit Neural Representation Inference for Low-Dimensional Bayesian Deep Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Implicit Neural Representations and the Algebra of Complex Wavelets | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Implicit bias of SGD in $L_2$-regularized linear DNNs: One-way jumps from high to low rank | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Implicit regularization of deep residual networks towards neural ODEs | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| ImplicitSLIM and How it Improves Embedding-based Collaborative Filtering | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Improved Active Learning via Dependent Leverage Score Sampling | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Improved Analysis of Sparse Linear Regression in Local Differential Privacy Model | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Improved Efficiency Based on Learned Saccade and Continuous Scene Reconstruction From Foveated Visual Sampling | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Improved Probabilistic Image-Text Representations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improved Regret Bounds for Non-Convex Online-Within-Online Meta Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Improved Techniques for Training Consistency Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Improved algorithm and bounds for successive projection | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Improved sampling via learned diffusions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Improved statistical and computational complexity of the mean-field Langevin dynamics under structured data | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Improving Convergence and Generalization Using Parameter Symmetries | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Improving Domain Generalization with Domain Relations | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Improving Generalization of Alignment with Human Preferences through Group Invariant Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Improving Intrinsic Exploration by Creating Stationary Objectives | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Improving LoRA in Privacy-preserving Federated Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Improving Non-Transferable Representation Learning by Harnessing Content and Style | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Improving Offline RL by Blending Heuristics | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Improving equilibrium propagation without weight symmetry through Jacobian homeostasis | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Improving protein optimization with smoothed fitness landscapes | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving the Convergence of Dynamic NeRFs via Optimal Transport | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| In defense of parameter sharing for model-compression | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| In-Context Learning Dynamics with Random Binary Sequences | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| In-Context Learning Learns Label Relationships but Is Not Conventional Learning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| In-Context Learning through the Bayesian Prism | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| In-Context Pretraining: Language Modeling Beyond Document Boundaries | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| In-context Autoencoder for Context Compression in a Large Language Model | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| In-context Exploration-Exploitation for Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Incentive-Aware Federated Learning with Training-Time Model Rewards | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Incentivized Truthful Communication for Federated Bandits | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Incremental Randomized Smoothing Certification | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Independent-Set Design of Experiments for Estimating Treatment and Spillover Effects under Network Interference | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Influencer Backdoor Attack on Semantic Segmentation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| InfoCon: Concept Discovery with Generative and Discriminative Informativeness | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Information Bottleneck Analysis of Deep Neural Networks via Lossy Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Information Retention via Learning Supplemental Features | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Inherently Interpretable Time Series Classification via Multiple Instance Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Initializing Models with Larger Ones | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Inner Classifier-Free Guidance and Its Taylor Expansion for Diffusion Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Input-gradient space particle inference for neural network ensembles | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Ins-DetCLIP: Aligning Detection Model to Follow Human-Language Instruction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Instant3D: Fast Text-to-3D with Sparse-view Generation and Large Reconstruction Model | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| InstructDET: Diversifying Referring Object Detection with Generalized Instructions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| InstructPix2NeRF: Instructed 3D Portrait Editing from a Single Image | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| InstructScene: Instruction-Driven 3D Indoor Scene Synthesis with Semantic Graph Prior | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Integrating Planning and Deep Reinforcement Learning via Automatic Induction of Task Substructures | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | 3 |
| Intelligent Switching for Reset-Free RL | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Internal Cross-layer Gradients for Extending Homogeneity to Heterogeneity in Federated Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| InterpGNN: Understand and Improve Generalization Ability of Transductive GNNs through the Lens of Interplay between Train and Test Nodes | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpretable Diffusion via Information Decomposition | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Interpretable Meta-Learning of Physical Systems | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Interpretable Sparse System Identification: Beyond Recent Deep Learning Techniques on Time-Series Prediction | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Interpreting CLIP's Image Representation via Text-Based Decomposition | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Interpreting Robustness Proofs of Deep Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Interventional Fairness on Partially Known Causal Graphs: A Constrained Optimization Approach | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Intriguing Properties of Data Attribution on Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Intriguing Properties of Generative Classifiers | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Invariance-based Learning of Latent Dynamics | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Inverse Approximation Theory for Nonlinear Recurrent Neural Networks | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Investigating the Benefits of Projection Head for Representation Learning | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Is Self-Repair a Silver Bullet for Code Generation? | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Is attention required for ICL? Exploring the Relationship Between Model Architecture and In-Context Learning Ability | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| JoMA: Demystifying Multilayer Transformers via Joint Dynamics of MLP and Attention | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| JointNet: Extending Text-to-Image Diffusion for Dense Distribution Modeling | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Jointly Training Large Autoregressive Multimodal Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Jointly-Learned Exit and Inference for a Dynamic Neural Network | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments in JAX | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| KW-Design: Pushing the Limit of Protein Design via Knowledge Refinement | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Kalman Filter for Online Classification of Non-Stationary Data | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Kernelised Normalising Flows | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Kill Two Birds with One Stone: Rethinking Data Augmentation for Deep Long-tailed Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Knowledge Distillation Based on Transformed Teacher Matching | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Knowledge Fusion of Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| KoLA: Carefully Benchmarking World Knowledge of Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Koopman-based generalization bound: New aspect for full-rank weights | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| Kosmos-G: Generating Images in Context with Multimodal Large Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| L2MAC: Large Language Model Automatic Computer for Extensive Code Generation | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| L2P-MIP: Learning to Presolve for Mixed Integer Programming | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| LCOT: Linear Circular Optimal Transport | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| LDReg: Local Dimensionality Regularized Self-Supervised Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LEAP: Liberate Sparse-View 3D Modeling from Camera Poses | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LEGO-Prover: Neural Theorem Proving with Growing Libraries | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| LEMON: Lossless model expansion | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LILO: Learning Interpretable Libraries by Compressing and Documenting Code | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LLCP: Learning Latent Causal Processes for Reasoning-based Video Question Answer | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| LLM Augmented LLMs: Expanding Capabilities through Composition | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| LLM-Assisted Code Cleaning For Training Accurate Code Generators | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| LLM-CXR: Instruction-Finetuned LLM for CXR Image Understanding and Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| LLM-grounded Video Diffusion Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| LLMs Meet VLMs: Boost Open Vocabulary Object Detection with Fine-grained Descriptors | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LLaMA-Adapter: Efficient Fine-tuning of Large Language Models with Zero-initialized Attention | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| LMUFormer: Low Complexity Yet Powerful Spiking Model With Legendre Memory Units | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LOQA: Learning with Opponent Q-Learning Awareness | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LQ-LoRA: Low-rank plus Quantized Matrix Decomposition for Efficient Language Model Finetuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LRM: Large Reconstruction Model for Single Image to 3D | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| LRR: Language-Driven Resamplable Continuous Representation against Adversarial Tracking Attacks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LUM-ViT: Learnable Under-sampling Mask Vision Transformer for Bandwidth Limited Optical Signal Acquisition | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Label-Focused Inductive Bias over Latent Object Features in Visual Classification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Label-Noise Robust Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Label-free Node Classification on Graphs with Large Language Models (LLMs) | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LabelDP-Pro: Learning with Label Differential Privacy via Projections | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Lagrangian Flow Networks for Conservation Laws | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Langevin Monte Carlo for strongly log-concave distributions: Randomized midpoint revisited | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Language Control Diffusion: Efficiently Scaling through Space, Time, and Tasks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Language Model Beats Diffusion - Tokenizer is key to visual generation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Language Model Cascades: Token-Level Uncertainty And Beyond |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Language Model Decoding as Direct Metrics Optimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Language Model Detectors Are Easily Optimized Against |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Language Model Inversion |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Language Model Self-improvement by Reinforcement Learning Contemplation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Language Modeling Is Compression |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Language Models Represent Space and Time |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Language-Informed Visual Concept Learning |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Language-Interfaced Tabular Oversampling via Progressive Imputation and Self-Authentication |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Large Language Model Cascades with Mixture of Thought Representations for Cost-Efficient Reasoning |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Large Language Models Are Not Robust Multiple Choice Selectors |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Large Language Models Cannot Self-Correct Reasoning Yet |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Large Language Models are Efficient Learners of Noise-Robust Speech Recognition |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Large Language Models as Analogical Reasoners |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Large Language Models as Automated Aligners for benchmarking Vision-Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Large Language Models as Generalizable Policies for Embodied Tasks |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Large Language Models as Optimizers |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Large Language Models as Tool Makers |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Large Language Models to Enhance Bayesian Optimization |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Large-Vocabulary 3D Diffusion Model with Transformer |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Large-scale Training of Foundation Models for Wearable Biosignals |
✅ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Latent 3D Graph Diffusion |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Latent Representation and Simulation of Markov Processes via Time-Lagged Information Bottleneck |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Latent Trajectory Learning for Limited Timestamps under Distribution Shift over Time |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Layer-wise linear mode connectivity |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning 3D Particle-based Simulators from RGB-D Videos |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning Adaptive Multiresolution Transforms via Meta-Framelet-based Graph Convolutional Network |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Conditional Invariances through Non-Commutativity |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning Energy Decompositions for Partial Inference in GFlowNets |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning From Simplicial Data Based on Random Walks and 1D Convolutions |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning Grounded Action Abstractions from Language |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Learning Hierarchical Image Segmentation For Recognition and By Recognition |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Hierarchical Polynomials with Three-Layer Neural Networks |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Learning Hierarchical World Models with Adaptive Temporal Abstractions from Discrete Latent Dynamics |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Learning Implicit Representation for Reconstructing Articulated Objects |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Learning Interactive Real-World Simulators |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Learning Large DAGs is Harder than you Think: Many Losses are Minimal for the Wrong DAG |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Learning Multi-Agent Communication from Graph Modeling Perspective |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Learning Multi-Agent Communication with Contrastive Learning |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Learning Multi-Faceted Prototypical User Interests |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Nash Equilibria in Rank-1 Games |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| Learning No-Regret Sparse Generalized Linear Models with Varying Observation(s) |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Optimal Contracts: How to Exploit Small Action Spaces |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Learning Over Molecular Conformer Ensembles: Datasets and Benchmarks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Performance-Improving Code Edits |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Learning Personalized Causally Invariant Representations for Heterogeneous Federated Clients |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Planning Abstractions from Language |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning Polynomial Problems with $SL(2, \mathbb{R})$-Equivariance |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Robust Generalizable Radiance Field with Visibility and Feature Augmented Point Representation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Learning Thresholds with Latent Values and Censored Feedback |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Learning dynamic representations of the functional connectome in neurobiological networks |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning from Aggregate responses: Instance Level versus Bag Level Loss Functions |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning from Sparse Offline Datasets via Conservative Density Estimation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Learning in reverse causal strategic environments with ramifications on two sided markets |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
✅ |
3 |
| Learning interpretable control inputs and dynamics underlying animal locomotion |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Learning invariant representations of time-homogeneous stochastic dynamical systems |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Learning model uncertainty as variance-minimizing instance weights |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning semilinear neural operators: A unified recursive framework for prediction and data assimilation. |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Learning the greatest common divisor: explaining transformer predictions |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Learning to Act from Actionless Videos through Dense Correspondences |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning to Act without Actions |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning to Compose: Improving Object Centric Learning by Injecting Compositionality |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| Learning to Embed Time Series Patches Independently |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning to Jointly Understand Visual and Tactile Signals |
❌ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
3 |
| Learning to Make Adherence-aware Advice |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Learning to Reject Meets Long-tail Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning to Reject with a Fixed Predictor: Application to Decontextualization |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning to Solve Bilevel Programs with Binary Tender |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Learning to design protein-protein interactions with enhanced generalization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning to solve Class-Constrained Bin Packing Problems via Encoder-Decoder Model |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Learning with Language-Guided State Abstractions |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Learning with Mixture of Prototypes for Out-of-Distribution Detection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Learning with a Mole: Transferable latent spatial representations for navigation without reconstruction |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Leave-one-out Distinguishability in Machine Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Leftover Lunch: Advantage-based Offline Reinforcement Learning for Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Lemur: Harmonizing Natural Language and Code for Language Agents |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Lemur: Integrating Large Language Models in Automated Program Verification |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Less is More: Fewer Interpretable Region via Submodular Subset Selection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Let Models Speak Ciphers: Multiagent Debate through Embeddings |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Let's Verify Step by Step |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Let's do the time-warp-attend: Learning topological invariants of dynamical systems |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Leveraging Generative Models for Unsupervised Alignment of Neural Time Series Data |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Leveraging Optimization for Adaptive Attacks on Image Watermarks |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Leveraging Uncertainty Estimates To Improve Classifier Performance |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Leveraging augmented-Lagrangian techniques for differentiating over infeasible quadratic programs in machine learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Lewis's Signaling Game as beta-VAE For Natural Word Lengths and Segments |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| LiDAR-PTQ: Post-Training Quantization for Point Cloud 3D Object Detection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| LiDAR: Sensing Linear Probing Performance in Joint Embedding SSL Architectures |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Lie Group Decompositions for Equivariant Neural Networks |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Lifting Architectural Constraints of Injective Flows |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Light Schrödinger Bridge |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Light-MILPopt: Solving Large-scale Mixed Integer Linear Programs with Lightweight Optimizer and Small-scale Training Dataset |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
5 |
| LightHGNN: Distilling Hypergraph Neural Networks into MLPs for 100x Faster Inference |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Like Oil and Water: Group Robustness Methods and Poisoning Defenses May Be at Odds |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Likelihood Training of Cascaded Diffusion Models via Hierarchical Volume-preserving Maps |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Linear Log-Normal Attention with Unbiased Concentration |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Linear attention is (maybe) all you need (to understand Transformer optimization) |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Linearity of Relation Decoding in Transformer Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Lion Secretly Solves a Constrained Optimization: As Lyapunov Predicts |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| LipSim: A Provably Robust Perceptual Similarity Metric |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| LipVoicer: Generating Speech from Silent Videos Guided by Lip Reading |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Lipschitz Singularities in Diffusion Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Lipsum-FT: Robust Fine-Tuning of Zero-Shot Models Using Random Text Guidance |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Listen, Think, and Understand |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LitCab: Lightweight Language Model Calibration over Short- and Long-form Responses |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Llemma: An Open Language Model for Mathematics |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Local Composite Saddle Point Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Local Graph Clustering with Noisy Labels |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Local Search GFlowNets |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Locality Sensitive Sparse Encoding for Learning World Models Online |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Locality-Aware Graph Rewiring in GNNs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Localizing and Editing Knowledge In Text-to-Image Generative Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| LoftQ: LoRA-Fine-Tuning-aware Quantization for Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| LogicMP: A Neuro-symbolic Approach for Encoding First-order Logic Constraints |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Logical Languages Accepted by Transformer Encoders with Hard Attention |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Long-Short-Range Message-Passing: A Physics-Informed Framework to Capture Non-Local Interaction for Scalable Molecular Dynamics Simulation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Long-Term Typhoon Trajectory Prediction: A Physics-Conditioned Approach Without Reanalysis Data |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Long-tailed Diffusion Models with Oriented Calibration |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Look, Remember and Reason: Grounded Reasoning in Videos with Language Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Looped Transformers are Better at Learning Learning Algorithms |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Low Rank Matrix Completion via Robust Alternating Minimization in Nearly Linear Time |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| M3C: A Framework towards Convergent, Flexible, and Unsupervised Learning of Mixture Graph Matching and Clustering |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MAP IT to Visualize Representations |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| MBR and QE Finetuning: Training-time Distillation of the Best and Most Expensive Decoding Methods |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MCM: Masked Cell Modeling for Anomaly Detection in Tabular Data |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| MEND: Meta Demonstration Distillation for Efficient and Effective In-Context Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| METRA: Scalable Unsupervised RL with Metric-Aware Abstraction |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MINDE: Mutual Information Neural Diffusion Estimation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| MIntRec2.0: A Large-scale Benchmark Dataset for Multimodal Intent Recognition and Out-of-scope Detection in Conversations |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MMD Graph Kernel: Effective Metric Learning for Graphs via Maximum Mean Discrepancy |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MOFDiff: Coarse-grained Diffusion for Metal-Organic Framework Design |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| MOFI: Learning Image Representations from Noisy Entity Annotated Images |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| MOTOR: A Time-to-Event Foundation Model For Structured Medical Records |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MT-Ranker: Reference-free machine translation evaluation by inter-system ranking |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction Following |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| MVDream: Multi-view Diffusion for 3D Generation |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MaGIC: Multi-modality Guided Image Completion |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Machine Unlearning for Image-to-Image Generative Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| MagicDrive: Street View Generation with Diverse 3D Geometry Control |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Magnitude Invariant Parametrizations Improve Hypernetwork Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Magnushammer: A Transformer-Based Approach to Premise Selection |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Making LLaMA SEE and Draw with SEED Tokenizer |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Making Pre-trained Language Models Great on Tabular Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Making RL with Preference-based Feedback Efficient via Randomization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Making Retrieval-Augmented Language Models Robust to Irrelevant Context |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Manifold Diffusion Fields |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Manifold Preserving Guided Diffusion |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Manipulating dropout reveals an optimal balance of efficiency and robustness in biological and machine visual systems |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Mask-Based Modeling for Neural Radiance Fields |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Masked Audio Generation using a Single Non-Autoregressive Transformer |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Masked Autoencoders with Multi-Window Local-Global Attention Are Better Audio Learners |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Masked Completion via Structured Diffusion with White-Box Transformers |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Masked Distillation Advances Self-Supervised Transformer Architecture Search |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Masked Structural Growth for 2x Faster Language Model Pre-training |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Masks, Signs, And Learning Rate Rewinding |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Massive Editing for Large Language Models via Meta Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Massively Scalable Inverse Reinforcement Learning in Google Maps |
✅ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Mastering Memory Tasks with World Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Mathematical Justification of Hard Negative Mining via Isometric Approximation Theorem |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Matrix Manifold Neural Networks++ |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Matryoshka Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Maximum Entropy Heterogeneous-Agent Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Maximum Entropy Model Correction in Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Mayfly: a Neural Data Structure for Graph Stream Summarization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Mean Field Theory in Deep Metric Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Meaning Representations from Trajectories in Autoregressive Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Measuring Vision-Language STEM Skills of Neural Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Mediator Interpretation and Faster Learning Algorithms for Linear Correlated Equilibria in General Sequential Games |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
2 |
| Mega-TTS 2: Boosting Prompting Mechanisms for Zero-Shot Speech Synthesis |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Memorization Capacity of Multi-Head Attention in Transformers |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Memorization in Self-Supervised Learning Improves Downstream Generalization |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Memory-Assisted Sub-Prototype Mining for Universal Domain Adaptation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Memory-Consistent Neural Networks for Imitation Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Meta Continual Learning Revisited: Implicitly Enhancing Online Hessian Approximation via Variance Reduction |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Meta Inverse Constrained Reinforcement Learning: Convergence Guarantee and Generalization Analysis |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Meta-Evolve: Continuous Robot Evolution for One-to-many Policy Transfer |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Meta-Learning Priors Using Unrolled Proximal Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Meta-VBO: Utilizing Prior Tasks in Optimizing Risk Measures with Gaussian Processes |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| MetaCoCo: A New Few-Shot Classification Benchmark with Spurious Correlation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MetaPhysiCa: Improving OOD Robustness in Physics-informed Machine Learning |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| MgNO: Efficient Parameterization of Linear Operators via Multigrid |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mind Your Augmentation: The Key to Decoupling Dense Self-Supervised Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MiniLLM: Knowledge Distillation of Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Minimax optimality of convolutional neural networks for infinite dimensional input-output problems and separation from kernel methods |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Minimum width for universal approximation using ReLU networks on compact domain |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Mirage: Model-agnostic Graph Distillation for Graph Classification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mitigating Emergent Robustness Degradation while Scaling Graph Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
3 |
| Mitigating the Curse of Dimensionality for Certified Robustness via Dual Randomized Smoothing |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| MixSATGEN: Learning Graph Mixing for SAT Instance Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D Object Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
4 |
| Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mixture of LoRA Experts |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Mixture of Weak and Strong Experts on Graphs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Model Merging by Uncertainty-Based Gradient Matching |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Modeling Boundedly Rational Agents with Latent Inference Budgets |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Modeling state-dependent communication between brain regions with switching nonlinear dynamical systems |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Modelling complex vector drawings with stroke-clouds |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Modulate Your Spectrum in Self-Supervised Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Modulated Phase Diffusor: Content-Oriented Feature Synthesis for Detecting Unknown Objects |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| MogaNet: Multi-order Gated Aggregation Network |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Momentum Benefits Non-iid Federated Learning Simply and Provably |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Monte Carlo guided Denoising Diffusion models for Bayesian linear inverse problems. |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| More is Better: when Infinite Overparameterization is Optimal and Overfitting is Obligatory |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Most discriminative stimuli for functional cell type clustering |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Motif: Intrinsic Motivation from Artificial Intelligence Feedback |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
4 |
| MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multi-Resolution Diffusion Models for Time Series Forecasting |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multi-Scale Representations by Varying Window Attention for Semantic Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multi-Source Diffusion Models for Simultaneous Music Generation and Separation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Multi-View Causal Representation Learning with Partial Observability |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Multi-View Representation is What You Need for Point-Cloud Pre-Training |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Multi-granularity Correspondence Learning from Long-term Noisy Videos |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Multi-modal Gaussian Process Variational Autoencoders for Neural and Behavioral Data |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multi-resolution HuBERT: Multi-resolution Speech Self-Supervised Learning with Masked Unit Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multi-task Learning with 3D-Aware Regularization |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Multilinear Operator Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multilingual Jailbreak Challenges in Large Language Models |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Multimarginal Generative Modeling with Stochastic Interpolants |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multimodal Molecular Pretraining via Modality Blending |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Multimodal Patient Representation Learning with Missing Modalities and Labels |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Multimodal Web Navigation with Instruction-Finetuned Foundation Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multiscale Positive-Unlabeled Detection of AI-Generated Texts |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multisize Dataset Condensation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| NECO: NEural Collapse Based Out-of-distribution detection |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| NEFTune: Noisy Embeddings Improve Instruction Finetuning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| NOLA: Compressing LoRA using Linear Combination of Random Basis |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Navigating Dataset Documentations in AI: A Large-Scale Analysis of Dataset Cards on HuggingFace |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Navigating the Design Space of Equivariant Diffusion-Based Generative Models for De Novo 3D Molecule Generation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| NeRM: Learning Neural Representations for High-Framerate Human Motion Synthesis |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Near-Optimal Quantum Algorithm for Minimizing the Maximal Loss |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Near-Optimal Solutions of Constrained Learning Problems |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Nearly $d$-Linear Convergence Bounds for Diffusion Models via Stochastic Localization |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Negative Label Guided OOD Detection with Pretrained Vision-Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Negatively Correlated Ensemble Reinforcement Learning for Online Diverse Game Level Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| NetInfoF Framework: Measuring and Exploiting Network Usable Information |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Network Memory Footprint Compression Through Jointly Learnable Codebooks and Mappings |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Neur2RO: Neural Two-Stage Robust Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| NeurRev: Train Better Sparse Neural Network Practically via Neuron Revitalization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Neural Active Learning Beyond Bandits |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Neural Architecture Retrieval |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Neural Atoms: Propagating Long-range Interaction in Molecular Graphs through Efficient Communication Channel |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Neural Auto-designer for Enhanced Quantum Kernels |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neural Common Neighbor with Completion for Link Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neural Contractive Dynamical Systems |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Neural Field Classifiers via Target Encoding and Classification Loss |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neural Fine-Tuning Search for Few-Shot Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Neural Fourier Transform: A General Approach to Equivariant Representation Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neural Language of Thought Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Neural Neighborhood Search for Multi-agent Path Finding |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Neural Optimal Transport with General Cost Functionals |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Neural Polynomial Gabor Fields for Macro Motion Analysis |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
1 |
| Neural Processing of Tri-Plane Hybrid Neural Fields |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neural Rate Control for Learned Video Compression |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Neural SDF Flow for 3D Reconstruction of Dynamic Scenes |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Neural Snowflakes: Universal Latent Graph Inference via Trainable Latent Geometries |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Neural Spectral Methods: Self-supervised learning in the spectral domain |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Neural structure learning with stochastic differential equations |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Neural-Symbolic Recursive Machine for Systematic Generalization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neuro-Inspired Information-Theoretic Hierarchical Perception for Multimodal Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Neuron-Enhanced AutoEncoder Matrix Completion and Collaborative Filtering: Theory and Practice |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Neurosymbolic Grounding for Compositional World Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| New Insight of Variance reduce in Zero-Order Hard-Thresholding: Mitigating Gradient Error and Expansivity Contradictions |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| NfgTransformer: Equivariant Representation Learning for Normal-form Games |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Node2ket: Efficient High-Dimensional Network Embedding in Quantum Hilbert Space |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Noise Map Guidance: Inversion with Spatial Context for Real Image Editing |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Noise-free Score Distillation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Noisy Interpolation Learning with Shallow Univariate ReLU Networks |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Non-Exchangeable Conformal Risk Control |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Non-negative Contrastive Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Nougat: Neural Optical Understanding for Academic Documents |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Novel Quadratic Constraints for Extending LipSDP beyond Slope-Restricted Activations |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
3 |
| NuwaDynamics: Discovering and Updating in Causal Spatio-Temporal Modeling |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ODEFormer: Symbolic Regression of Dynamical Systems with Transformers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| OMNI: Open-endedness via Models of human Notions of Interestingness |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| OPTIMAL ROBUST MEMORIZATION WITH RELU NEURAL NETWORKS |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free Class-Incremental Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| OWL: A Large Language Model for IT Operations |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Object centric architectures enable efficient causal representation learning |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Object-Aware Inversion and Reassembly for Image Editing |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Object-Centric Learning with Slot Mixture Module |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Octavius: Mitigating Task Interference in MLLMs via LoRA-MoE |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| OctoPack: Instruction Tuning Code Large Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Off-Policy Primal-Dual Safe Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Offline RL with Observation Histories: Analyzing and Improving Sample Complexity |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| OmniControl: Control Any Joint at Any Time for Human Motion Generation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On Accelerating Diffusion-Based Sampling Processes via Improved Integration Approximation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On Adversarial Training without Perturbing all Examples |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| On Bias-Variance Alignment in Deep Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| On Differentially Private Federated Linear Contextual Bandits |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| On Diffusion Modeling for Anomaly Detection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On Double Descent in Reinforcement Learning with LSTD and Random Features |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| On Error Propagation of Diffusion Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On Harmonizing Implicit Subpopulations |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| On Representation Complexity of Model-based and Model-free Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| On Stationary Point Convergence of PPO-Clip |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| On Trajectory Augmentations for Off-Policy Evaluation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| On gauge freedom, conservativity and intrinsic dimensionality estimation in diffusion models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| On the Analysis of GAN-based Image-to-Image Translation with Gaussian Noise Injection |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| On the Effect of Batch Size in Byzantine-Robust Distributed Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On the Expressivity of Objective-Specification Formalisms in Reinforcement Learning |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| On the Fairness ROAD: Robust Optimization for Adversarial Debiasing |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| On the Foundations of Shortcut Learning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| On the Generalization and Approximation Capacities of Neural Controlled Differential Equations |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| On the Hardness of Constrained Cooperative Multi-Agent Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| On the Hardness of Online Nonconvex Optimization with Single Oracle Feedback |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On the Joint Interaction of Models, Data, and Features |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On the Learnability of Watermarks for Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On the Limitations of Temperature Scaling for Distributions with Overlaps |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On the Markov Property of Neural Algorithmic Reasoning: Analyses and Methods |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| On the Over-Memorization During Natural, Robust and Catastrophic Overfitting |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On the Parameterization of Second-Order Optimization Effective towards the Infinite Width |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| On the Posterior Distribution in Denoising: Application to Uncertainty Quantification |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| On the Power of the Weisfeiler-Leman Test for Graph Motif Parameters |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| On the Provable Advantage of Unsupervised Pretraining |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| On the Reliability of Watermarks for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On the Role of Discrete Tokenization in Visual Representation Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| On the Role of General Function Approximation in Offline Reinforcement Learning |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| On the Scalability and Memory Efficiency of Semidefinite Programs for Lipschitz Constant Estimation of Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| On the Stability of Expressive Positional Encodings for Graphs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On the Stability of Iterative Retraining of Generative Models on their own Data |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| On the Variance of Neural Network Training with respect to Test Sets and Distributions |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| On the Vulnerability of Adversarially Trained Models Against Two-faced Attacks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On the generalization capacity of neural networks during generic multimodal reasoning |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| On the hardness of learning under symmetries |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| One For All: Towards Training One Graph Model For All Classification Tasks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| One Forward is Enough for Neural Network Training via Likelihood Ratio Method |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| One-hot Generalized Linear Model for Switching Brain State Discovery |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| One-shot Active Learning Based on Lewis Weight Sampling for Multiple Deep Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| One-shot Empirical Privacy Estimation for Federated Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Online Continual Learning for Interactive Instruction Following Agents |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Online GNN Evaluation Under Test-time Graph Distribution Shifts |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Online Information Acquisition: Hiring Multiple Agents |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Online Stabilization of Spiking Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Open the Black Box: Step-based Policy Updates for Temporally-Correlated Episodic Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| OpenChat: Advancing Open-source Language Models with Mixed-Quality Data |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| OpenNeRF: Open Set 3D Neural Scene Segmentation with Pixel-Wise Features and Rendered Novel Views |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| OpenTab: Advancing Large Language Models as Open-domain Table Reasoners |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Optimal Sample Complexity for Average Reward Markov Decision Processes |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Optimal Sample Complexity of Contrastive Learning |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Optimal criterion for feature learning of two-layer linear neural network in high dimensional interpolation regime |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Optimal transport based adversarial patch to leverage large scale attack transferability |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Optimistic Bayesian Optimization with Unknown Constraints |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Oracle Efficient Algorithms for Groupwise Regret |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Orbit-Equivariant Graph Neural Networks |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Order-Preserving GFlowNets |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Out-Of-Domain Unlabeled Data Improves Generalization |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Out-of-Distribution Detection by Leveraging Between-Layer Transformation Smoothness |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Out-of-Distribution Detection with Negative Prompts |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Out-of-Variable Generalisation for Discriminative Models |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Overthinking the Truth: Understanding how Language Models Process False Demonstrations |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| P$^2$OT: Progressive Partial Optimal Transport for Deep Imbalanced Clustering |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| P2Seg: Pointly-supervised Segmentation via Mutual Distillation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PAC Prediction Sets Under Label Shift |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| PAC-FNO: Parallel-Structured All-Component Fourier Neural Operators for Recognizing Low-Quality Images |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| PAE: Reinforcement Learning from External Knowledge for Efficient Exploration |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| PB-LLM: Partially Binarized Large Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| PBADet: A One-Stage Anchor-Free Approach for Part-Body Association |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape Prediction |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| PILOT: An $\mathcal{O}(1/K)$-Convergent Approach for Policy Evaluation with Nonlinear Function Approximation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| PINNACLE: PINN Adaptive ColLocation and Experimental points selection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| PINNsFormer: A Transformer-Based Framework For Physics-Informed Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PORF: POSE RESIDUAL FIELD FOR ACCURATE NEURAL SURFACE RECONSTRUCTION |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| PRES: Toward Scalable Memory-Based Dynamic Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PRIME: Prioritizing Interpretability in Failure Mode Extraction |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| PROGRAM: PROtotype GRAph Model based Pseudo-Label Learning for Test-Time Adaptation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PTaRL: Prototype-based Tabular Representation Learning via Space Calibration |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PanoDiffusion: 360-degree Panorama Outpainting via Diffusion |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Parallelizing non-linear sequential models over the sequence length |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Parameter-Efficient Multi-Task Model Fusion with Partial Linearization |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Parametric Augmentation for Time Series Contrastive Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Pareto Deep Long-Tailed Recognition: A Conflict-Averse Solution |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Parsing neural dynamics with infinite recurrent switching linear dynamical systems |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Particle Guidance: non-I.I.D. Diverse Sampling with Diffusion Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Partitioning Message Passing for Graph Fraud Detection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Patched Denoising Diffusion Models For High-Resolution Image Synthesis |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Path Choice Matters for Clear Attributions in Path Methods |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PeFLL: Personalized Federated Learning by Learning to Learn |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Perceptual Group Tokenizer: Building Perception with Iterative Grouping |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Perceptual Scales Predicted by Fisher Information Metrics |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Performance Gaps in Multi-view Clustering under the Nested Matrix-Tensor Model |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Periodicity Decoupling Framework for Long-term Series Forecasting |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Personalize Segment Anything Model with One Shot |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| PhyloGFN: Phylogenetic inference with generative flow networks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Physics-Regulated Deep Reinforcement Learning: Invariant Embeddings | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Piecewise Linear Parametrization of Policies: Towards Interpretable Deep Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PlaSma: Procedural Knowledge Models for Language-based Planning and Re-Planning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Plug-and-Play Posterior Sampling under Mismatched Measurement and Prior Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Plugin estimators for selective classification with out-of-distribution detection | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| PnP Inversion: Boosting Diffusion-based Editing with 3 Lines of Code | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Point2SSM: Learning Morphological Variations of Anatomies from Point Clouds | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Policy Rehearsing: Training Generalizable Policies for Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Poly-View Contrastive Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| PolyVoice: Language Models for Speech to Speech Translation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Polynomial Width is Sufficient for Set Representation with High-dimensional Features | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Polynormer: Polynomial-Expressive Graph Transformer in Linear Time | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Pooling Image Datasets with Multiple Covariate Shift and Imbalance | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Pose Modulated Avatars from Video | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Post-hoc bias scoring is optimal for fair classification | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Posterior Sampling Based on Gradient Flows of the MMD with Negative Distance Kernel | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Pre-Training Goal-based Models for Sample-Efficient Reinforcement Learning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Pre-Training and Fine-Tuning Generative Flow Networks | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Pre-training LiDAR-based 3D Object Detectors through Colorization | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Pre-training Sequence, Structure, and Surface Features for Comprehensive Protein Representation Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Pre-training with Random Orthogonal Projection Image Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Pre-training with Synthetic Data Helps Offline Reinforcement Learning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Predicting Emergent Abilities with Infinite Resolution Evaluation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Prediction Error-based Classification for Class-Incremental Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Prediction without Preclusion: Recourse Verification with Reachable Sets | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Predictive auxiliary objectives in deep RL mimic learning in the brain | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Predictive, scalable and interpretable knowledge tracing on structured domains | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Principled Architecture-aware Scaling of Hyperparameters | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Principled Federated Domain Adaptation: Gradient Projection and Auto-Weighting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Prioritized Soft Q-Decomposition for Lexicographic Reinforcement Learning | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Privacy Amplification for Matrix Mechanisms | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Privacy-Preserving In-Context Learning for Large Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Private Zeroth-Order Nonsmooth Nonconvex Optimization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Privately Aligning Language Models with Reinforcement Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Privileged Sensing Scaffolds Reinforcement Learning | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Probabilistic Adaptation of Black-Box Text-to-Video Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Probabilistic Self-supervised Representation Learning via Scoring Rules Minimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Probabilistically Rewired Message-Passing Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Procedural Fairness Through Decoupling Objectionable Data Generating Components | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Progressive Fourier Neural Representation for Sequential Video Compilation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Progressive3D: Progressively Local Editing for Text-to-3D Content Creation with Complex Semantic Prompts | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Project and Probe: Sample-Efficient Adaptation by Interpolating Orthogonal Features | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Prometheus: Inducing Fine-Grained Evaluation Capability in Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Prompt Gradient Projection for Continual Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Prompt Learning with Quaternion Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| PromptTTS 2: Describing and Generating Voices with Text Prompt | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Proper Laplacian Representation Learning | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Protein Discovery with Discrete Walk-Jump Sampling | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Protein Multimer Structure Prediction via Prompt Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Protein-Ligand Interaction Prior for Binding-aware 3D Molecule Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Protein-ligand binding representation learning from fine-grained interactions | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Prototypical Information Bottlenecking and Disentangling for Multimodal Cancer Survival Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Provable Benefits of Multi-task RL under Non-Markovian Decision Making Processes | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Provable Compositional Generalization for Object-Centric Learning | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | 4 |
| Provable Memory Efficient Self-Play Algorithm for Model-free Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Provable Offline Preference-Based Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Provable Reward-Agnostic Preference-Based Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Provable Robust Watermarking for AI-Generated Text | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Provably Efficient CVaR RL in Low-rank MDPs | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Provably Efficient Iterated CVaR Reinforcement Learning with Function Approximation and Human Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Provably Efficient UCB-type Algorithms For Learning Predictive State Representations | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Provably Robust Conformal Prediction with Improved Efficiency | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Proving Test Set Contamination in Black-Box Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Proximal Policy Gradient Arborescence for Quality Diversity Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Pseudo-Generalized Dynamic View Synthesis from a Video | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| PubDef: Defending Against Transfer Attacks From Public Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Pushing Boundaries: Mixup's Influence on Neural Collapse | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Quadratic models for understanding catapult dynamics of neural networks | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Quality-Diversity through AI Feedback | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Quantifying and Enhancing Multi-modal Robustness with Modality Preference | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| Quantifying the Plausibility of Context Reliance in Neural Machine Translation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Quasi-Monte Carlo for 3D Sliced Wasserstein | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Query-Policy Misalignment in Preference-Based Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Querying Easily Flip-flopped Samples for Deep Active Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| R&B: Region and Boundary Aware Zero-shot Grounded Text-to-image Generation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| R-EDL: Relaxing Nonessential Settings of Evidential Deep Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| R-MAE: Regions Meet Masked Autoencoders | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RA-DIT: Retrieval-Augmented Dual Instruction Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RAIN: Your Language Models Can Align Themselves without Finetuning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| RAPPER: Reinforced Rationale-Prompted Paradigm for Natural Language Explanation in Visual Question Answering | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RDesign: Hierarchical Data-efficient Representation Learning for Tertiary Structure-based RNA Design | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| RECOMBINER: Robust and Enhanced Compression with Bayesian Implicit Neural Representations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RECOMP: Improving Retrieval-Augmented LMs with Context Compression and Selective Augmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| REFACTOR: Learning to Extract Theorems from Proofs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RETSim: Resilient and Efficient Text Similarity | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| REValueD: Regularised Ensemble Value-Decomposition for Factorisable Markov Decision Processes | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| RLCD: Reinforcement Learning from Contrastive Distillation for LM Alignment | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RLIF: Interactive Imitation Learning as Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| RT-Trajectory: Robotic Task Generalization via Hindsight Trajectory Sketches | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Raidar: geneRative AI Detection viA Rewriting | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Random Sparse Lifts: Construction, Analysis and Convergence of finite sparse networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly Detection | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| ReFusion: Improving Natural Language Understanding with Computation-Efficient Retrieval Representation Fusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ReLoRA: High-Rank Training Through Low-Rank Updates | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| ReMasker: Imputing Tabular Data with Masked Autoencoding | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| ReSimAD: Zero-Shot 3D Domain Transfer for Autonomous Driving with Source Reconstruction and Target Simulation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReTaSA: A Nonparametric Functional Estimation Approach for Addressing Continuous Target Shift | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Real-Fake: Effective Training Data Synthesis Through Distribution Matching | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Realistic Evaluation of Semi-supervised Learning Algorithms in Open Environments | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reasoning with Latent Diffusion in Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reclaiming the Source of Programmatic Policies: Programmatic versus Latent Spaces | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Reconciling Spatial and Temporal Abstractions for Goal Representation | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Recursive Generalization Transformer for Image Super-Resolution | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reinforcement Symbolic Regression Machine | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Relaxing the Additivity Constraints in Decentralized No-Regret High-Dimensional Bayesian Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Relay Diffusion: Unifying diffusion process across resolutions for image synthesis | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Remote Sensing Vision-Language Foundation Models without Annotations via Ground Remote Alignment | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Removing Biases from Molecular Representations via Information Maximization | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Repelling Random Walks | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Replay across Experiments: A Natural Extension of Off-Policy RL | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Representation Deficiency in Masked Language Modeling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ResFields: Residual Neural Fields for Spatiotemporal Signals | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Respect the model: Fine-grained and Robust Explanation with Sharing Ratio Decomposition | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in RL | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Rethinking Branching on Exact Combinatorial Optimization Solver: The First Deep Symbolic Discovery Framework | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Rethinking CNN’s Generalization to Backdoor Attack from Frequency Domain | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Rethinking Information-theoretic Generalization: Loss Entropy Induced PAC Bounds | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Label Poisoning for GNNs: Pitfalls and Attacks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Model Ensemble in Transfer-based Adversarial Attacks | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Rethinking and Extending the Probabilistic Inference Capacity of GNNs | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking the Power of Graph Canonization in Graph Representation Learning with Stability | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking the Uniformity Metric in Self-Supervised Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking the symmetry-preserving circuits for constrained variational quantum algorithms | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Retrieval is Accurate Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Retrieval meets Long Context Large Language Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Retrieval-Enhanced Contrastive Vision-Text Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Retrieval-based Disentangled Representation Learning with Natural Language Supervision | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Retro-fallback: retrosynthetic planning in an uncertain world | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| RetroBridge: Modeling Retrosynthesis with Markov Bridges | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Reverse Diffusion Monte Carlo | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Reverse Forward Curriculum Learning for Extreme Sample and Demo Efficiency | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Revisit and Outstrip Entity Alignment: A Perspective of Generative Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Revisiting Data Augmentation in Deep Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Revisiting Deep Audio-Text Retrieval Through the Lens of Transportation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Revisiting Link Prediction: a data perspective | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Revisiting the Last-Iterate Convergence of Stochastic Gradient Methods | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Reward Design for Justifiable Sequential Decision-Making | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Reward Model Ensembles Help Mitigate Overoptimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Reward-Free Curricula for Training Robust World Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Rigid Protein-Protein Docking via Equivariant Elliptic-Paraboloid Interface Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| RingAttention with Blockwise Transformers for Near-Infinite Context | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Risk Bounds of Accelerated SGD for Overparameterized Linear Regression | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Robot Fleet Learning via Policy Merging | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robust Angular Synchronization via Directed Graph Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robust Classification via Regression for Learning with Noisy Labels | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Robust Model Based Reinforcement Learning Using $\mathcal{L}_1$ Adaptive Control | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Robust Model-Based Optimization for Challenging Fitness Landscapes | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Robust NAS under adversarial training: benchmark, theory, and beyond | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Robust Similarity Learning with Difference Alignment Regularization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Robust Training of Federated Models with Extremely Label Deficiency | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Robust agents learn causal world models | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| RobustTSF: Towards Theory and Design of Robust Time Series Forecasting with Anomalies | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robustifying State-space Models for Long Sequences via Approximate Diagonalization | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Robustifying and Boosting Training-Free Neural Architecture Search | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Rotation Has Two Sides: Evaluating Data Augmentation for Deep One-class Classification | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| S$2$AC: Energy-Based Reinforcement Learning with Stein Soft Actor Critic | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SAFLEX: Self-Adaptive Augmentation via Feature Label Extrapolation | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SALMON: Self-Alignment with Instructable Reward Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SALMONN: Towards Generic Hearing Abilities for Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SAN: Inducing Metrizability of GAN with Discriminative Normalized Linear Layer | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SAS: Structured Activation Sparsification | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SE(3)-Stochastic Flow Matching for Protein Backbone Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SEA: Sparse Linear Attention with Estimated Attention Mask | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| SEABO: A Simple Search-Based Method for Offline Imitation Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| SEAL: A Framework for Systematic Evaluation of Real-World Super-Resolution | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SEPT: Towards Efficient Scene Representation Learning for Motion Prediction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SGD Finds then Tunes Features in Two-Layer Neural Networks with near-Optimal Sample Complexity: A Case Study in the XOR problem | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SKILL-MIX: a Flexible and Expandable Family of Evaluations for AI Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SLiMe: Segment Like Me | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SOHES: Self-supervised Open-world Hierarchical Entity Segmentation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SOInter: A Novel Deep Energy-Based Interpretation Method for Explaining Structured Output Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| SPDER: Semiperiodic Damping-Enabled Object Representation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SPTNet: An Efficient Alternative Framework for Generalized Category Discovery with Spatial Prompt Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| STARC: A General Framework For Quantifying Differences Between Reward Functions | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | 1 |
| STREAM: Spatio-TempoRal Evaluation and Analysis Metric for Video Generative Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| STanHop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SWAP-NAS: Sample-Wise Activation Patterns for Ultra-fast NAS | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SWAP: Sparse Entropic Wasserstein Regression for Robust Network Pruning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SWE-bench: Can Language Models Resolve Real-world Github Issues? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SYMBOL: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SaNN: Simple Yet Powerful Simplicial-aware Neural Networks | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| SaProt: Protein Language Modeling with Structure-aware Vocabulary | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Safe Collaborative Filtering | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Safe RLHF: Safe Reinforcement Learning from Human Feedback | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Safe and Robust Watermark Injection with a Single OoD Image | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SafeDreamer: Safe Reinforcement Learning with World Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Sample Efficient Myopic Exploration Through Multitask Reinforcement Learning with Diverse Tasks | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Sample-Efficiency in Multi-Batch Reinforcement Learning: The Need for Dimension-Dependent Adaptivity | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sample-Efficient Linear Representation Learning from Non-IID Non-Isotropic Data | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Sample-Efficient Multi-Agent RL: An Optimization Perspective | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sample-Efficient Quality-Diversity by Cooperative Coevolution | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sampling Multimodal Distributions with the Vanilla Score: Benefits of Data-Based Initialization | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Scalable Diffusion for Materials Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scalable Language Model with Generalized Continual Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Scalable Modular Network: A Framework for Adaptive Learning via Agreement Routing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Scalable Monotonic Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Scalable Neural Network Kernels | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scalable and Effective Implicit Graph Neural Networks on Large Graphs | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scale-Adaptive Diffusion Model for Complex Sketch Synthesis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Scaling Convex Neural Networks with Burer-Monteiro Factorization | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Scaling Laws for Associative Memories | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Scaling Laws for Sparsely-Connected Foundation Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Scaling Laws of RoPE-based Extrapolation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Scaling Supervised Local Learning with Augmented Auxiliary Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Scaling physics-informed hard constraints with mixture-of-experts | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Score Models for Offline Goal-Conditioned Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Score Regularized Policy Optimization through Diffusion Behavior | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Score-based generative models break the curse of dimensionality in learning a family of sub-Gaussian distributions | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Searching for High-Value Molecules Using Reinforcement Learning and Transformers | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Seer: Language Instructed Video Prediction with Latent Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Select to Perfect: Imitating desired behavior from large multi-agent data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Objectives | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Selective Visual Representations Improve Convergence and Generalization for Embodied AI | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Self-Alignment with Instruction Backtranslation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Self-Consuming Generative Models Go MAD | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-Supervised Contrastive Learning for Long-term Forecasting | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Self-Supervised Dataset Distillation for Transfer Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Self-Supervised Heterogeneous Graph Learning: a Homophily and Heterogeneity View | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Self-Supervised Speech Quality Estimation and Enhancement Using Only Clean Speech | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-supervised Pocket Pretraining via Protein Fragment-Surroundings Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-supervised Representation Learning from Random Data Projectors | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| Semantic Flow: Learning Semantic Fields of Dynamic Scenes from Monocular Videos | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SemiReward: A General Reward Model for Semi-supervised Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sentence-level Prompts Benefit Composed Image Retrieval | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Separate and Diffuse: Using a Pretrained Diffusion Model for Better Source Separation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Separating common from salient patterns with Contrastive Representation Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Set Learning for Accurate and Calibrated Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SetCSE: Set Operations using Contrastive Learning of Sentence Embeddings | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Shadow Cones: A Generalized Framework for Partial Order Embeddings | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Sharpness-Aware Data Poisoning Attack | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sign2GPT: Leveraging Large Language Models for Gloss-Free Sign Language Translation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Simple Hierarchical Planning with Diffusion | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Simple Minimax Optimal Byzantine Robust Algorithm for Nonconvex Objectives with Uniform Gradient Heterogeneity | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Simplicial Representation Learning with Neural $k$-Forms | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Simplifying Transformer Blocks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sin3DM: Learning a Diffusion Model from a Single 3D Textured Shape | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SineNet: Learning Temporal Dynamics in Time-Dependent Partial Differential Equations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Single Motion Diffusion | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Skill Machines: Temporal Logic Skill Composition in Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Skill or Luck? Return Decomposition via Advantage Functions | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Skip-Attention: Improving Vision Transformers by Paying Less Attention | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SliceGPT: Compress Large Language Models by Deleting Rows and Columns | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Sliced Denoising: A Physics-Informed Molecular Pre-Training Method | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Sliced Wasserstein Estimation with Control Variates | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Small-scale proxies for large-scale Transformer training instabilities | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SmartPlay : A Benchmark for LLMs as Intelligent Agents | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Smooth ECE: Principled Reliability Diagrams via Kernel Smoothing | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Community | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Social-Transmotion: Promptable Human Trajectory Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SocioDojo: Building Lifelong Analytical Agents with Real-world Text and Time Series | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Soft Contrastive Learning for Time Series | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Soft Mixture Denoising: Beyond the Expressive Bottleneck of Diffusion Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Soft Robust MDPs and Risk-Sensitive MDPs: Equivalence, Policy Gradient, and Sample Complexity | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Solving High Frequency and Multi-Scale PDEs with Gaussian Processes | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Solving Homogeneous and Heterogeneous Cooperative Tasks with Greedy Sequential Execution | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Some Fundamental Aspects about Lipschitz Continuity of Neural Networks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Source-Free and Image-Only Unsupervised Domain Adaptation for Category Level Object Pose Estimation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SpaCE: The Spatial Confounding Environment | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Space Group Constrained Crystal Generation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Space and time continuous physics simulation from partial observations | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Sparse Autoencoders Find Highly Interpretable Features in Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Sparse MoE with Language Guided Routing for Multilingual Machine Translation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales for Pruning Recurrent SNN | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| SparseDFF: Sparse-View Feature Distillation for One-Shot Dexterous Manipulation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SparseFormer: Sparse Visual Recognition via Limited Latent Tokens | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sparsistency for inverse optimal transport | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Spatially-Aware Transformers for Embodied Agents | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Spatio-Temporal Approximation: A Training-Free SNN Conversion for Transformers | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Spectrally Transformed Kernel Regression | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Spike-driven Transformer V2: Meta Spiking Neural Network Architecture Inspiring the Design of Next-generation Neuromorphic Chips | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| SpikePoint: An Efficient Point-based Spiking Neural Network for Event Cameras Action Recognition | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Spurious Feature Diversification Improves Out-of-distribution Generalization | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Stabilizing Backpropagation Through Time to Learn Complex Physics | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stable Anisotropic Regularization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| State Representation Learning Using an Unbalanced Atlas | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Statistical Perspective of Top-K Sparse Softmax Gating Mixture of Experts | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Statistical Rejection Sampling Improves Preference Optimization | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Statistically Optimal $K$-means Clustering via Nonnegative Low-rank Semidefinite Programming | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in Open Worlds | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Stochastic Controlled Averaging for Federated Learning with Communication Compression | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Stochastic Gradient Descent for Gaussian Processes Done Right | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Stochastic Modified Equations and Dynamics of Dropout Algorithm | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Str2Str: A Score-based Framework for Zero-shot Protein Conformation Sampling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Strategic Preys Make Acute Predators: Enhancing Camouflaged Object Detectors by Generating Camouflaged Objects | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| StructComp: Substituting propagation with Structural Compression in Training Graph Contrastive Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Structural Estimation of Partially Observed Linear Non-Gaussian Acyclic Model: A Practical Approach with Identifiability | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Structural Fairness-aware Active Learning for Graph Neural Networks | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Structural Inference with Dynamics Encoding and Partial Correlation Coefficients | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Structured Video-Language Modeling with Temporal Grouping and Spatial Grounding | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Structuring Representation Geometry with Rotationally Equivariant Contrastive Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Stylized Offline Reinforcement Learning: Extracting Diverse High-Quality Behaviors from Heterogeneous Datasets | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Submodular Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Subtractive Mixture Models via Squaring: Representation and Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Successor Heads: Recurring, Interpretable Attention Heads In The Wild | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Sufficient conditions for offline reactivation in recurrent neural networks | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | 3 |
| Sum-Product-Set Networks: Deep Tractable Models for Tree-Structured Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Supervised Knowledge Makes Large Language Models Better In-context Learners | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| SweetDreamer: Aligning Geometric Priors in 2D diffusion for Consistent Text-to-3D | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Symbol as Points: Panoptic Symbol Spotting via Point-based Representation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Symmetric Basis Convolutions for Learning Lagrangian Fluid Mechanics | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Symmetric Neural-Collapse Representations with Supervised Contrastive Loss: The Impact of ReLU and Batching | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Symmetric Single Index Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Symphony: Symmetry-Equivariant Point-Centered Spherical Harmonics for 3D Molecule Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer Control | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Synaptic Weight Distributions Depend on the Geometry of Plasticity | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SyncDreamer: Generating Multiview-consistent Images from a Single-view Image | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Synergistic Patch Pruning for Vision Transformer: Unifying Intra- & Inter-Layer Patch Importance | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| T-MARS: Improving Visual Representations by Circumventing Text Feature Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| T-Rep: Representation Learning for Time Series using Time-Embeddings | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| TAB: Temporal Accumulated Batch Normalization in Spiking Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| TD-MPC2: Scalable, Robust World Models for Continuous Control | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TEDDY: Trimming Edges with Degree-based Discrimination Strategy | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for Time Series | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| TESTAM: A Time-Enhanced Spatio-Temporal Attention Model with Mixture of Experts | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| THOUGHT PROPAGATION: AN ANALOGICAL APPROACH TO COMPLEX REASONING WITH LARGE LANGUAGE MODELS | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| TOSS: High-quality Text-guided Novel View Synthesis from a Single Image | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| TRAM: Bridging Trust Regions and Sharpness Aware Minimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TUVF: Learning Generalizable Texture UV Radiance Fields | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| TabR: Tabular Deep Learning Meets Nearest Neighbors | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tackling the Data Heterogeneity in Asynchronous Federated Learning with Cached Update Calibration | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tag2Text: Guiding Vision-Language Model via Image Tagging | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tailoring Self-Rationalizers with Multi-Reward Distillation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Talk like a Graph: Encoding Graphs for Large Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Tangent Transformers for Composition,Privacy and Removal | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| TapMo: Shape-aware Motion Generation of Skeleton-free Characters | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Task Adaptation from Skills: Information Geometry, Disentanglement, and New Objectives for Unsupervised Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Task Planning for Visual Room Rearrangement under Partial Observability | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Task structure and nonlinearity jointly determine learned representational geometry | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| Teach LLMs to Phish: Stealing Private Information from Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Teaching Arithmetic to Small Transformers | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Teaching Language Models to Hallucinate Less with Synthetic Tasks | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Teaching Large Language Models to Self-Debug | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Temporal Generalization Estimation in Evolving Graphs | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Tensor Programs VI: Feature Learning in Infinite Depth Neural Networks | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Test-Time Training on Nearest Neighbors for Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Test-time Adaptation against Multi-modal Reliability Bias | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Text-to-3D with Classifier Score Distillation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Text2Reward: Reward Shaping with Language Models for Reinforcement Learning | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| TextField3D: Towards Enhancing Open-Vocabulary 3D Generation with Noisy Text Fields | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| The Alignment Problem from a Deep Learning Perspective | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Consensus Game: Language Model Generation via Equilibrium Search | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Cost of Scaling Down Large Language Models: Reducing Model Size Affects Memory before In-context Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Curse of Diversity in Ensemble-Based Exploration | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Language Models | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Devil is in the Object Boundary: Towards Annotation-free Instance Segmentation using Foundation Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Effect of Intrinsic Dataset Properties on Generalization: Unraveling Learning Differences Between Natural and Medical Images | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Effective Horizon Explains Deep RL Performance in Stochastic Environments | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| The Effectiveness of Random Forgetting for Robust Generalization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks. | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| The Expressive Power of Low-Rank Adaptation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Expressive Power of Transformers with Chain of Thought | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The False Promise of Imitating Proprietary Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Generalization Gap in Offline Reinforcement Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Generative AI Paradox: “What It Can Create, It May Not Understand” | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Hidden Language of Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Human-AI Substitution game: active learning from a strategic labeler | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting — An Analytical Model | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The LLM Surgeon | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Marginal Value of Momentum for Small Learning Rate SGD | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| The Need for Speed: Pruning Transformers with One Recipe | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Reasonableness Behind Unreasonable Translation Capability of Large Language Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Reversal Curse: LLMs trained on “A is B” fail to learn “B is A” | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| The Trickle-down Impact of Reward Inconsistency on RLHF | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Unreasonable Effectiveness of Linear Prediction as a Perceptual Metric | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Update-Equivalence Framework for Decision-Time Planning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The importance of feature preprocessing for differentially private linear optimization | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The mechanistic basis of data dependence and abrupt learning in an in-context classification task | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The optimality of kernel classifiers in Sobolev space | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Theoretical Understanding of Learning from Adversarial Perturbations | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Thin-Shell Object Manipulations With Differentiable Physics Simulations | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | 3 |
| Think before you speak: Training Language Models With Pause Tokens | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Threaten Spiking Neural Networks through Combining Rate and Temporal Information | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Threshold-Consistent Margin Loss for Open-World Deep Metric Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TiC-CLIP: Continual Training of CLIP Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Tight Rates in Supervised Outlier Transfer Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Time Fairness in Online Knapsack Problems | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Time Travel in LLMs: Tracing Data Contamination in Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Time-Efficient Reinforcement Learning with Stochastic Stateful Policies | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Time-LLM: Time Series Forecasting by Reprogramming Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Time-Varying Propensity Score to Bridge the Gap between the Past and Present | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| To Grok or not to Grok: Disentangling Generalization and Memorization on Corrupted Algorithmic Datasets | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| TokenFlow: Consistent Diffusion Features for Consistent Video Editing | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Tool-Augmented Reward Modeling | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ToolChain*: Efficient Action Space Navigation in Large Language Models with A* Search | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Topic Modeling as Multi-Objective Contrastive Optimization | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| TopoMLP: A Simple yet Strong Pipeline for Driving Topology Reasoning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Topological data analysis on noisy quantum computers | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| TorchRL: A data-driven decision-making library for PyTorch | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Toward Optimal Policy Population Growth in Two-Player Zero-Sum Games | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Toward Student-oriented Teacher Network Training for Knowledge Distillation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Toward effective protection against diffusion-based mimicry through score distillation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards 3D Molecule-Text Interpretation in Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Aligned Layout Generation via Diffusion Model with Aesthetic Constraints | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Assessing and Benchmarking Risk-Return Tradeoff of Off-Policy Evaluation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Towards Best Practices of Activation Patching in Language Models: Metrics and Methods | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Towards Category Unification of 3D Single Object Tracking on Point Clouds | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Towards Characterizing Domain Counterfactuals for Invertible Latent Causal Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards Codable Watermarking for Injecting Multi-Bits Information to LLMs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Towards Cross Domain Generalization of Hamiltonian Representation via Meta Learning | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | 4 |
| Towards Diverse Behaviors: A Benchmark for Imitation Learning with Human Demonstrations | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Towards Energy Efficient Spiking Neural Networks: An Unstructured Pruning Framework | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Towards Enhancing Time Series Contrastive Learning: A Dynamic Bad Pair Mining Approach | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Towards Establishing Guaranteed Error for Learned Database Operations | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Towards Faithful XAI Evaluation via Generalization-Limited Backdoor Watermark | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards Few-Shot Adaptation of Foundation Models via Multitask Finetuning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Towards Foundation Models for Knowledge Graph Reasoning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Generative Abstract Reasoning: Completing Raven’s Progressive Matrix via Rule Abstraction and Selection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Towards Identifiable Unsupervised Domain Translation: A Diversified Distribution Matching Approach | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Imitation Learning to Branch for MIP: A Hybrid Reinforcement Learning based Sample Augmentation Approach | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Towards LLM4QPE: Unsupervised Pretraining of Quantum Property Estimation and A Benchmark | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Meta-Pruning via Optimal Transport | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Towards Non-Asymptotic Convergence for Diffusion-Based Generative Models | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Towards Offline Opponent Modeling with In-context Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Towards Optimal Feature-Shaping Methods for Out-of-Distribution Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Towards Poisoning Fair Representations | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Towards Principled Representation Learning from Videos for Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Reliable and Efficient Backdoor Trigger Inversion via Decoupling Benign Features | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Towards Robust Multi-Modal Reasoning via Model Selection | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Robust Offline Reinforcement Learning under Diverse Data Corruption | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Towards Robust Out-of-Distribution Generalization Bounds via Sharpness | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Towards Transparent Time Series Forecasting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Understanding Factual Knowledge of Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Towards Understanding Sycophancy in Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards a statistical theory of data selection under weak supervision | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards domain-invariant Self-Supervised Learning with Batch Styles Standardization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards image compression with perfect realism at ultra-low bitrates | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards the Fundamental Limits of Knowledge Transfer over Finite Domains | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Tractable Probabilistic Graph Representation Learning with Graph-Induced Sum-Product Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Training Bayesian Neural Networks with Sparse Subspace Variational Inference | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Training Diffusion Models with Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Training Graph Transformers via Curriculum-Enhanced Attention Distillation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Training Socially Aligned Language Models on Simulated Social Interactions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Training Unbiased Diffusion Models From Biased Dataset | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Training-free Multi-objective Diffusion Model for 3D Molecule Generation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Trajeglish: Traffic Modeling as Next-Token Prediction | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Transferring Labels to Solve Annotation Mismatches Across Object Detection Datasets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Transferring Learning Trajectories of Neural Networks | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Transformer Fusion with Optimal Transport | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Transformer-Modulated Diffusion Models for Probabilistic Multivariate Time Series Forecasting | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Transformer-VQ: Linear-Time Transformers via Vector Quantization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Transformers can optimally learn regression mixture models | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Transport meets Variational Inference: Controlled Monte Carlo Diffusions | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Traveling Waves Encode The Recent Past and Enhance Sequence Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Treatment Effects Estimation By Uniform Transformer | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Tree Cross Attention | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Tree Search-Based Policy Optimization under Stochastic Execution Delay | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Tree-Planner: Efficient Close-loop Task Planning with Large Language Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| True Knowledge Comes from Practice: Aligning Large Language Models with Embodied Environments via Reinforcement Learning | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Turning large language models into cognitive models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Two-stage LLM Fine-tuning with Less Specialization and More Generalization | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Two-timescale Extragradient for Finding Local Minimax Points | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| UC-NERF: Neural Radiance Field for Under-Calibrated Multi-View Cameras in Autonomous Driving | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| UNR-Explainer: Counterfactual Explanations for Unsupervised Node Representation Learning Models | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| USB-NeRF: Unrolling Shutter Bundle Adjusted Neural Radiance Fields | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Un-Mixing Test-Time Normalization Statistics: Combatting Label Temporal Correlation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Unbiased Watermark for Large Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Uncertainty Quantification via Stable Distribution Propagation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Uncertainty-aware Constraint Inference in Inverse Constrained Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Uncertainty-aware Graph-based Hyperspectral Image Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Unconstrained Stochastic CCA: Unifying Multiview and Self-Supervised Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Understanding Addition in Transformers | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Understanding Catastrophic Forgetting in Language Models via Implicit Inference | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Understanding Certified Training with Interval Bound Propagation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Understanding Convergence and Generalization in Federated Learning through Feature Learning Theory |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Understanding Domain Generalization: A Noise Robustness Perspective |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Understanding Expressivity of GNN in Rule Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Understanding In-Context Learning from Repetitions |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Understanding Transferable Representation Learning and Zero-shot Transfer in CLIP |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Understanding prompt engineering may not require rethinking generalization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Understanding the Effects of RLHF on LLM Generalisation and Diversity |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Understanding the Robustness of Multi-modal Contrastive Learning to Distribution Shift |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Understanding when Dynamics-Invariant Data Augmentations Benefit Model-free Reinforcement Learning Updates |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Uni3D: Exploring Unified 3D Representation at Scale |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| UniTabE: A Universal Pretraining Protocol for Tabular Foundation Model in Data Science |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Unified Generative Modeling of 3D Molecules with Bayesian Flow Networks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Unified Human-Scene Interaction via Prompted Chain-of-Contacts |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unified Projection-Free Algorithms for Adversarial DR-Submodular Optimization |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
5 |
| Unifying Feature and Cost Aggregation with Transformers for Semantic and Visual Correspondence |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Universal Backdoor Attacks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Universal Guidance for Diffusion Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Universal Humanoid Motion Representations for Physics-Based Control |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Universal Jailbreak Backdoors from Poisoned Human Feedback |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Unknown Domain Inconsistency Minimization for Domain Generalization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Unleashing the Potential of Fractional Calculus in Graph Neural Networks with FROND |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unlocking the Power of Representations in Long-term Novelty-based Exploration |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Unmasking and Improving Data Credibility: A Study with Datasets for Training Harmless Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
3 |
| Unpaired Image-to-Image Translation via Neural Schrödinger Bridge |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Unprocessing Seven Years of Algorithmic Fairness |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unraveling the Enigma of Double Descent: An In-depth Analysis through the Lens of Learned Feature Space |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unraveling the Key Components of OOD Generalization via Diversification |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unsupervised Order Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Unsupervised Pretraining for Fact Verification by Language Model Distillation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unveiling Options with Neural Network Decomposition |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Unveiling and Manipulating Prompt Influence in Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Unveiling the Pitfalls of Knowledge Editing for Large Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| VBH-GNN: Variational Bayesian Heterogeneous Graph Neural Networks for Cross-subject Emotion Recognition |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| VCR-Graphormer: A Mini-batch Graph Transformer via Virtual Connections |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| VDT: General-purpose Video Diffusion Transformers via Mask Modeling |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| VFLAIR: A Research Library and Benchmark for Vertical Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| VONet: Unsupervised Video Object Learning With Parallel U-Net Attention and Object-wise Sequential VAE |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VQ-TR: Vector Quantized Attention for Time Series Forecasting |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Vanishing Gradients in Reinforcement Finetuning of Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Variance Reduced Halpern Iteration for Finite-Sum Monotone Inclusions |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Variance-aware Regret Bounds for Stochastic Contextual Dueling Bandits |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Variance-enlarged Poisson Learning for Graph-based Semi-Supervised Learning with Extremely Sparse Labeled Data |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Variational Bayesian Last Layers |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Variational Inference for SDEs Driven by Fractional Noise |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| VeRA: Vector-based Random Matrix Adaptation |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| VersVideo: Leveraging Enhanced Temporal Diffusion Models for Versatile Video Generation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
4 |
| Video Decomposition Prior: Editing Videos Layer by Layer |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Video Language Planning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Vision Transformers Need Registers |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Vision-Language Foundation Models as Effective Robot Imitators |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Vision-by-Language for Training-Free Compositional Image Retrieval |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Visual Data-Type Understanding does not emerge from scaling Vision-Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Waxing-and-Waning: a Generic Similarity-based Framework for Efficient Self-Supervised Learning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Weaker MVI Condition: Extragradient Methods with Multi-Step Exploration |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Weakly Supervised Virus Capsid Detection with Image-Level Annotations in Electron Microscopy Images |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Weakly-supervised Audio Separation via Bi-modal Semantic Similarity |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Weatherproofing Retrieval for Localization with Generative AI and Geometric Consistency |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| WebArena: A Realistic Web Environment for Building Autonomous Agents |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| What Algorithms can Transformers Learn? A Study in Length Generalization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| What Makes a Good Prune? Maximal Unstructured Pruning for Maximal Cosine Similarity |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| What Matters to You? Towards Visual Representation Alignment for Robot Learning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| What does automatic differentiation compute for neural networks? |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| What does the Knowledge Neuron Thesis Have to do with Knowledge? |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| What's In My Big Data? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| What's in a Prior? Learned Proximal Networks for Inverse Problems |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| When Semantic Segmentation Meets Frequency Aliasing |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| When can transformers reason with abstract symbols? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| When should we prefer Decision Transformers for Offline Reinforcement Learning? |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Where We Have Arrived in Proving the Emergence of Sparse Interaction Primitives in DNNs |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Whittle Index with Multiple Actions and State Constraint for Inventory Management |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Why is SAM Robust to Label Noise? |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| WildChat: 1M ChatGPT Interaction Logs in the Wild |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| WildFusion: Learning 3D-Aware Latent Diffusion Models in View Space |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Win-Win: Training High-Resolution Vision Transformers from Two Windows |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Window Attention is Bugged: How not to Interpolate Position Embeddings |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| WizardCoder: Empowering Code Large Language Models with Evol-Instruct |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Xformer: Hybrid X-Shaped Transformer for Image Denoising |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| YaRN: Efficient Context Window Extension of Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Yet Another ICU Benchmark: A Flexible Multi-Center Framework for Clinical ML |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| You Only Query Once: An Efficient Label-Only Membership Inference Attack |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ZeRO++: Extremely Efficient Collective Communication for Large Model Training |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
4 |
| Zero Bubble (Almost) Pipeline Parallelism |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Zero and Few-shot Semantic Parsing with Ambiguous Inputs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Zero-Mean Regularized Spectral Contrastive Learning: Implicitly Mitigating Wrong Connections in Positive-Pair Graphs |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Zero-Shot Continuous Prompt Transfer: Generalizing Task Semantics Across Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Zero-Shot Robustification of Zero-Shot Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ZeroFlow: Scalable Scene Flow via Distillation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
5 |
| Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| ZipIt! Merging Models from Different Tasks without Training |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Zipformer: A faster and better encoder for automatic speech recognition |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Zoology: Measuring and Improving Recall in Efficient Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| f-FERM: A Scalable Framework for Robust Fair Empirical Risk Minimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| fairret: a Framework for Differentiable Fairness Regularization Terms |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| iGraphMix: Input Graph Mixup Method for Node Classification |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| iTransformer: Inverted Transformers Are Effective for Time Series Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| lpNTK: Better Generalisation with Less Data via Sample Interaction During Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| sRGB Real Noise Modeling via Noise-Aware Sampling with Normalizing Flows |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |