| $C^2M^3$: Cycle-Consistent Multi-Model Merging | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| $SE(3)$ Equivariant Ray Embeddings for Implicit Multi-View Depth Estimation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| $\boldsymbol{\mu}\mathbf{P^2}$: Effective Sharpness Aware Minimization Requires Layerwise Perturbation Scaling | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| $\epsilon$-Softmax: Approximating One-Hot Vectors for Mitigating Label Noise | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| $\textit{Bifr\"ost}$: 3D-Aware Image Compositing with Language Instructions | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| $\textit{NeuroPath}$: A Neural Pathway Transformer for Joining the Dots of Human Connectomes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| $\textit{Read-ME}$: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| $\textit{Trans-LoRA}$: towards data-free Transferable Parameter Efficient Finetuning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| $\text{Di}^2\text{Pose}$: Discrete Diffusion Model for Occluded 3D Human Pose Estimation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| $\text{ID}^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| (FL)$^2$: Overcoming Few Labels in Federated Semi-Supervised Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| 2D-OOB: Attributing Data Contribution Through Joint Valuation Framework | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| 2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| 3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| 3D Equivariant Pose Regression via Direct Wigner-D Harmonics Prediction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| 3D Focusing-and-Matching Network for Multi-Instance Point Cloud Registration | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| 3D Gaussian Rendering Can Be Sparser: Efficient Rendering via Learned Fragment Pruning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| 3D Gaussian Splatting as Markov Chain Monte Carlo | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| 3D Structure Prediction of Atomic Systems with Flow-based Direct Preference Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| 3DET-Mamba: Causal Sequence Modelling for End-to-End 3D Object Detection | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| 4+3 Phases of Compute-Optimal Neural Scaling Laws | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| 4-bit Shampoo for Memory-Efficient Network Training | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| 4D Gaussian Splatting in the Wild with Uncertainty-Aware Regularization | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| 4Diffusion: Multi-view Video Diffusion Model for 4D Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| 4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| 4Real: Towards Photorealistic 4D Scene Generation via Video Diffusion Models | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| A Bayesian Approach for Personalized Federated Learning in Heterogeneous Settings | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Bayesian Approach to Data Point Selection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Best-of-both-worlds Algorithm for Bandits with Delayed Feedback with Robustness to Excessive Delays | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A Boosting-Type Convergence Result for AdaBoost.MH with Factorized Multi-Class Classifiers | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A Canonicalization Perspective on Invariant and Equivariant Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A Cat Is A Cat (Not A Dog!): Unraveling Information Mix-ups in Text-to-Image Encoders through Causal Analysis and Embedding Optimization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Closer Look at AUROC and AUPRC under Class Imbalance | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| A Closer Look at the CLS Token for Cross-Domain Few-Shot Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Combinatorial Algorithm for the Semi-Discrete Optimal Transport Problem | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| A Compositional Atlas for Algebraic Circuits | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A Comprehensive Analysis on the Learning Curve in Kernel Ridge Regression | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| A Concept-Based Explainability Framework for Large Multimodal Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| A Consistency-Aware Spot-Guided Transformer for Versatile and Hierarchical Point Cloud Registration | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Critical Evaluation of AI Feedback for Aligning Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| A Fast Convoluted Story: Scaling Probabilistic Inference for Integer Arithmetics | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| A Flexible, Equivariant Framework for Subgraph GNNs via Graph Products and Graph Coarsening | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Foundation Model for Zero-shot Logical Query Reasoning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Framework for Bilevel Optimization on Riemannian Manifolds | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A Full-duplex Speech Dialogue Scheme Based On Large Language Model | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Functional Extension of Semi-Structured Networks | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| A General Protocol to Probe Large Vision Models for 3D Physical Understanding | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| A Generative Model of Symmetry Transformations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A Geometric View of Data Complexity: Efficient Local Intrinsic Dimension Estimation with Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| A Globally Optimal Portfolio for m-Sparse Sharpe Ratio Maximization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| A Gradient Accumulation Method for Dense Retriever under Memory Constraint | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| A Huber Loss Minimization Approach to Mean Estimation under User-level Differential Privacy | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| A Kernel Perspective on Distillation-based Collaborative Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A Label is Worth A Thousand Images in Dataset Distillation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| A Layer-Wise Natural Gradient Optimizer for Training Deep Neural Networks | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Local Method for Satisfying Interventional Fairness with Partially Known Causal Graphs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| A Metalearned Neural Circuit for Nonparametric Bayesian Inference | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Modular Conditional Diffusion Framework for Image Reconstruction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| A Motion-aware Spatio-temporal Graph for Video Salient Object Ranking | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Near-optimal Algorithm for Learning Margin Halfspaces with Massart Noise | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A Neural Network Approach for Efficiently Answering Most Probable Explanation Queries in Probabilistic Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A New Neural Kernel Regime: The Inductive Bias of Multi-Task Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| A Non-parametric Direct Learning Approach to Heterogeneous Treatment Effect Estimation under Unmeasured Confounding | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| A Novel Unified Architecture for Low-Shot Counting by Detection and Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A PID Controller Approach for Adaptive Probability-dependent Gradient Decay in Model Calibration | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| A Pairwise Pseudo-likelihood Approach for Matrix Completion with Informative Missingness | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| A Polar coordinate system represents syntax in large language models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A Recipe for Charge Density Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| A Siamese Transformer with Hierarchical Refinement for Lane Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Simple Framework for Generalization in Visual RL under Dynamic Scene Perturbations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| A Simple Image Segmentation Framework via In-Context Examples | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Simple Remedy for Dataset Bias via Self-Influence: A Mislabeled Sample Perspective | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of $\Theta(T^{2/3})$ and its Application to Best-of-Both-Worlds | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A Simple and Optimal Approach for Universal Online Learning with Gradient Variations | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A Simple yet Scalable Granger Causal Structural Learning Approach for Topological Event Sequences | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Simple yet Universal Framework for Depth Completion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A Single-Step, Sharpness-Aware Minimization is All You Need to Achieve Efficient and Accurate Sparse Training | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Sober Look at the Robustness of CLIPs to Spurious Features | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| A Structure-Aware Framework for Learning Device Placements on Computation Graphs | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Surprisingly Simple Approach to Generalized Few-Shot Semantic Segmentation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| A Swiss Army Knife for Heterogeneous Federated Learning: Flexible Coupling via Trace Norm | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| A Theoretical Perspective for Speculative Decoding Algorithm | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Theoretical Understanding of Self-Correction through In-context Alignment | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Theory of Optimistically Universal Online Learnability for General Concept Classes | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A Topology-aware Graph Coarsening Framework for Continual Graph Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A Tractable Inference Perspective of Offline RL | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| A Unified Debiasing Approach for Vision-Language Models across Modalities and Tasks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A Unified Framework for 3D Scene Understanding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A Unified Principle of Pessimism for Offline Reinforcement Learning under Model Mismatch | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| A Unifying Normative Framework of Decision Confidence | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| A Unifying Post-Processing Framework for Multi-Objective Learn-to-Defer Problems | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| A Universal Growth Rate for Learning with Smooth Surrogate Losses | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| A Versatile Diffusion Transformer with Mixture of Noise Levels for Audiovisual Generation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A Walsh Hadamard Derived Linear Vector Symbolic Architecture | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A distributional simplicity bias in the learning dynamics of transformers | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A generalized neural tangent kernel for surrogate gradient learning | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| A hierarchical decomposition for explaining ML performance discrepancies | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A probability contrastive learning framework for 3D molecular representation learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| A robust inlier identification algorithm for point cloud registration via $\mathbf{\ell_0}$-minimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A scalable generative model for dynamical system reconstruction from neuroimaging data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| A teacher-teacher framework for clinical language representation learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| A theoretical case-study of Scalable Oversight in Hierarchical Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| A theoretical design of concept sets: improving the predictability of concept bottleneck models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| A two-scale Complexity Measure for Deep Learning Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| A versatile informative diffusion model for single-cell ATAC-seq data generation and analysis | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| A2PO: Towards Effective Offline Reinforcement Learning from an Advantage-aware Perspective | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| ACES: Generating a Diversity of Challenging Programming Puzzles with Autotelic Generative Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| ACFun: Abstract-Concrete Fusion Facial Stylization | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| ADOPT: Modified Adam Can Converge with Any $\beta_2$ with the Optimal Rate | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| AED: Adaptable Error Detection for Few-shot Imitation Policy | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| AGILE: A Novel Reinforcement Learning Framework of LLM Agents | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| AHA: Human-Assisted Out-of-Distribution Generalization and Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| AID: Attention Interpolation of Text-to-Image Diffusion | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ALPINE: Unveiling The Planning Capability of Autoregressive Learning in Language Models | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| ALPS: Improved Optimization for Highly Sparse One-Shot Pruning for Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| AMAGO-2: Breaking the Multi-Task Barrier in Meta-Reinforcement Learning with Transformers | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| AMOR: A Recipe for Building Adaptable Modular Knowledge Agents Through Process Feedback | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ANT: Adaptive Noise Schedule for Time Series Diffusion Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| AP-Adapter: Improving Generalization of Automatic Prompts on Unseen Text-to-Image Diffusion Models | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| AR-Pro: Counterfactual Explanations for Anomaly Repair with Formal Properties | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ARC: A Generalist Graph Anomaly Detector with In-Context Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| AROMA: Preserving Spatial Structure for Latent PDE Modeling with Local Neural Fields | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| AUC Maximization under Positive Distribution Shift | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| AV-Cloud: Spatial Audio Rendering Through Audio-Visual Cloud Splatting | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| AV-GS: Learning Material and Geometry Aware Priors for Novel View Acoustic Synthesis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Abductive Reasoning in Logical Credal Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Abrupt Learning in Transformers: A Case Study on Matrix Completion | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Absorb & Escape: Overcoming Single Model Limitations in Generating Heterogeneous Genomic Sequences | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Abstracted Shapes as Tokens - A Generalizable and Interpretable Model for Time-series Classification | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Accelerated Regularized Learning in Finite N-Person Games | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Accelerating Augmentation Invariance Pretraining | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Accelerating Blockwise Parallel Language Models with Draft Refinement | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Accelerating ERM for data-driven algorithm design using output-sensitive techniques | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Accelerating Matroid Optimization through Fast Imprecise Oracles | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Accelerating Nash Equilibrium Convergence in Monte Carlo Settings Through Counterfactual Value Based Fictitious Play | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Accelerating Non-Maximum Suppression: A Graph Theory Perspective | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Accelerating Pre-training of Multimodal LLMs via Chain-of-Sight | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Accelerating Relative Entropy Coding with Space Partitioning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Accelerating Transformers with Spectrum-Preserving Token Merging | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Acceleration Exists! Optimization Problems When Oracle Can Only Compare Objective Function Values | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Accuracy is Not All You Need | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| Accurate and Steady Inertial Pose Estimation through Sequence Structure Learning and Modulation | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Achievable Fairness on Your Data With Utility Guarantees | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Achievable distributional robustness when the robust risk is only partially identified | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Achieving $\tilde{O}(1/\epsilon)$ Sample Complexity for Constrained Markov Decision Process | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Achieving Constant Regret in Linear Markov Decision Processes | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Achieving Domain-Independent Certified Robustness via Knowledge Continuity | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Achieving Near-Optimal Convergence for Distributed Minimax Optimization with Adaptive Stepsizes | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Achieving Optimal Clustering in Gaussian Mixture Models with Anisotropic Covariance Structures | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Achieving Tractable Minimax Optimal Regret in Average Reward MDPs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Acoustic Volume Rendering for Neural Impulse Response Fields | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ActAnywhere: Subject-Aware Video Background Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ActFusion: a Unified Diffusion Model for Action Segmentation and Anticipation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Action Imitation in Common Action Space for Customized Action Image Synthesis | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Activating Self-Attention for Multi-Scene Absolute Pose Regression | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Activation Map Compression through Tensor Decomposition for Deep Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Active Classification with Few Queries under Misspecification | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Active Learning of General Halfspaces: Label Queries vs Membership Queries | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Active Learning with LLMs for Partially Observed and Cost-Aware Scenarios | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Active Perception for Grasp Detection via Neural Graspness Field | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Active Sequential Posterior Estimation for Sample-Efficient Simulation-Based Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Active Set Ordering | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Active learning of neural population dynamics using two-photon holographic optogenetics | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Active preference learning for ordering items in- and out-of-sample | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Active, anytime-valid risk controlling prediction sets | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Ad Auctions for LLMs via Retrieval Augmented Generation | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Ada-MSHyper: Adaptive Multi-Scale Hypergraph Transformer for Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AdaFlow: Imitation Learning with Variance-Adaptive Flow-Based Policies | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| AdaNovo: Towards Robust \emph{De Novo} Peptide Sequencing in Proteomics against Data Biases | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AdaPKC: PeakConv with Adaptive Peak Receptive Field for Radar Semantic Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Adam on Local Time: Addressing Nonstationarity in RL with Relative Adam Timesteps | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Adam with model exponential moving average is effective for nonconvex optimization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| AdanCA: Neural Cellular Automata As Adaptors For More Robust Vision Transformer | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Adaptable Logical Control for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Adapting Diffusion Models for Improved Prompt Compliance and Controllable Image Synthesis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Adapting to Unknown Low-Dimensional Structures in Score-Based Diffusion Models | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Adaptive $Q$-Aid for Conditional Supervised Learning in Offline Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Adaptive Depth Networks with Skippable Sub-Paths | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Adaptive Domain Learning for Cross-domain Image Denoising | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Adaptive Experimentation When You Can't Experiment | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Adaptive Exploration for Data-Efficient General Value Function Evaluations | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Adaptive Important Region Selection with Reinforced Hierarchical Search for Dense Object Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Adaptive Labeling for Efficient Out-of-distribution Model Evaluation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Adaptive Layer Sparsity for Large Language Models via Activation Correlation Assessment | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Adaptive Passive-Aggressive Framework for Online Regression with Side Information | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Adaptive Preference Scaling for Reinforcement Learning with Human Feedback | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Adaptive Proximal Gradient Method for Convex Optimization | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Adaptive Sampling for Efficient Softmax Approximation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Adaptive Variance Reduction for Stochastic Optimization under Weaker Assumptions | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Adaptive Visual Scene Understanding: Incremental Scene Graph Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| AdaptiveISP: Learning an Adaptive Image Signal Processor for Object Detection | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Addressing Asynchronicity in Clinical Multimodal Fusion via Individualized Chest X-ray Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Addressing Bias in Online Selection with Limited Budget of Comparisons | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Addressing Hidden Confounding with Heterogeneous Observational Datasets for Recommendation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Addressing Spatial-Temporal Heterogeneity: General Mixed Time Series Analysis via Latent Continuity Recovery and Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Addressing Spectral Bias of Deep Neural Networks by Multi-Grade Deep Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AdjointDEIS: Efficient Gradients for Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Adjust Pearson's $r$ to Measure Arbitrary Monotone Dependence | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial Attacks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Advancing Cross-domain Discriminability in Continual Learning of Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Advancing Spiking Neural Networks for Sequential Modeling with Central Pattern Generators | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Advection Augmented Convolutional Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Adversarial Environment Design via Regret-Guided Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Adversarial Moment-Matching Distillation of Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Adversarial Representation Engineering: A General Model Editing Framework for Large Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Adversarial Schrödinger Bridge Matching | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Adversarially Robust Decision Transformer | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Adversarially Robust Dense-Sparse Tradeoffs via Heavy-Hitters | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Adversarially Robust Multi-task Representation Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Adversarially Trained Weighted Actor-Critic for Safe Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Agent Planning with World Knowledge Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Aggregate-and-Adapt Natural Language Prompts for Downstream Generalization of CLIP | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Aggregating Quantitative Relative Judgments: From Social Choice to Ranking Prediction | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | 5 |
| AirSketch: Generative Motion to Sketch | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Algebraic Positional Encodings | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Algorithmic Capabilities of Random Transformers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Algorithmic progress in language models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Alias-Free Mamba Neural Operator | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Aligner-Encoders: Self-Attention Transformers Can Be Self-Transducers | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Aligner: Efficient Alignment by Learning to Correct | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Aligning Audio-Visual Joint Representations with an Agentic Workflow | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Aligning Diffusion Behaviors with Q-functions for Efficient Continuous Control | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Aligning Diffusion Models by Optimizing Human Utility | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Aligning Embeddings and Geometric Random Graphs: Informational Results and Computational Approaches for the Procrustes-Wasserstein Problem | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Aligning Individual and Collective Objectives in Multi-Agent Cooperation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Aligning LLM Agents by Learning Latent Preference from User Edits | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Aligning Large Language Models with Representation Editing: A Control Perspective | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Aligning Model Properties via Conformal Risk Control | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Aligning to Thousands of Preferences via System Message Generalization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Alignment at Pre-training! Towards Native Alignment for Arabic LLMs | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Alignment for Honesty | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Alleviate Anchor-Shift: Explore Blind Spots with Cross-View Reconstruction for Incomplete Multi-View Clustering | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models and Time-Dependent Layer Normalization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Alleviating Hallucinations in Large Vision-Language Models through Hallucination-Induced Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Almost Free: Self-concordance in Natural Exponential Families and an Application to Bandits | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Almost Surely Asymptotically Constant Graph Neural Networks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Almost-Linear RNNs Yield Highly Interpretable Symbolic Codes in Dynamical Systems Reconstruction | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| AlphaMath Almost Zero: Process Supervision without Process | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AlphaTablets: A Generic Plane Representation for 3D Planar Reconstruction from Monocular Videos | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| AlterMOMA: Fusion Redundancy Pruning for Camera-LiDAR Fusion Models with Alternative Modality Masking | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Amnesia as a Catalyst for Enhancing Black Box Pixel Attacks in Image Classification and Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Amortized Active Causal Induction with Deep Reinforcement Learning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Amortized Bayesian Experimental Design for Decision-Making | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Amortized Eigendecomposition for Neural Networks | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Amortized Fourier Neural Operators | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Amortized Planning with Large-Scale Transformers: A Case Study on Chess | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Amortizing intractable inference in diffusion models for vision, language, and control | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| An Adaptive Approach for Infinitely Many-armed Bandits under Generalized Rotting Constraints | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| An Analysis of Elo Rating Systems via Markov Chains | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| An Analysis of Tokenization: Transformers under Markov Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| An Analytical Study of Utility Functions in Multi-Objective Reinforcement Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| An Autoencoder-Like Nonnegative Matrix Co-Factorization for Improved Student Cognitive Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| An Efficient High-dimensional Gradient Estimator for Stochastic Differential Equations | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| An Efficient Memory Module for Graph Few-Shot Class-Incremental Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| An End-To-End Graph Attention Network Hashing for Cross-Modal Retrieval | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| An Equivalence Between Static and Dynamic Regret Minimization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| An Expectation-Maximization Algorithm for Training Clean Diffusion Models from Corrupted Observations | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| An Image is Worth 32 Tokens for Reconstruction and Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| An Improved Empirical Fisher Approximation for Natural Gradient Descent | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| An In-depth Investigation of Sparse Rate Reduction in Transformer-like Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| An Information Theoretic Perspective on Conformal Prediction | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| An Offline Adaptation Framework for Constrained Multi-Objective Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| An effective framework for estimating individualized treatment rules | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| An engine not a camera: Measuring performative power of online search | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| An eye for an ear: zero-shot audio description leveraging an image captioner with audio-visual token distribution matching | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Analysing the Generalisation and Reliability of Steering Vectors | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Analysis of Corrected Graph Convolutions | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Analytically deriving Partial Information Decomposition for affine systems of stable and convolution-closed distributions | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Analyzing & Reducing the Need for Learning Rate Warmup in GPT Training | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Animal-Bench: Benchmarking Multimodal Video Models for Animal-centric Video Understanding | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Animate3D: Animating Any 3D Model with Multi-view Video Diffusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Annealed Multiple Choice Learning: Overcoming limitations of Winner-takes-all with annealing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Any2Graph: Deep End-To-End Supervised Graph Prediction With An Optimal Transport Loss | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Any2Policy: Learning Visuomotor Policy with Any-Modality | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Apathetic or Empathetic? Evaluating LLMs' Emotional Alignments with Humans | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Approaching Human-Level Forecasting with Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Approximately Equivariant Neural Processes | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Approximately Pareto-optimal Solutions for Bi-Objective k-Clustering | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Approximating mutual information of high-dimensional variables using learned representations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Approximating the Top Eigenvector in Random Order Streams | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Approximation Rate of the Transformer Architecture for Sequence Modeling | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Approximation-Aware Bayesian Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Are Graph Neural Networks Optimal Approximation Algorithms? | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 4 |
| Are Language Models Actually Useful for Time Series Forecasting? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Are Large-scale Soft Labels Necessary for Large-scale Dataset Distillation? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Are More LLM Calls All You Need? Towards the Scaling Properties of Compound AI Systems | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Are Multiple Instance Learning Algorithms Learnable for Instances? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Are Self-Attentions Effective for Time Series Forecasting? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Are Uncertainty Quantification Capabilities of Evidential Deep Learning a Mirage? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Are We on the Right Way for Evaluating Large Vision-Language Models? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Artemis: Towards Referential Understanding in Complex Videos | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Articulate your NeRF: Unsupervised articulated object modeling via conditional view synthesis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Artificial Generational Intelligence: Cultural Accumulation in Reinforcement Learning | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Ask, Attend, Attack: An Effective Decision-Based Black-Box Targeted Attack for Image-to-Text Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Assembly Fuzzy Representation on Hypergraph for Open-Set 3D Object Retrieval | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Association Pattern-aware Fusion for Biological Entity Relationship Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Association of Objects May Engender Stereotypes: Mitigating Association-Engendered Stereotypes in Text-to-Image Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Assouad, Fano, and Le Cam with Interaction: A Unifying Lower Bound Framework and Characterization for Bandit Learnability | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Asymptotics of Alpha-Divergence Variational Inference Algorithms with Exponential Families | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Asynchronous Perception Machine for Efficient Test Time Training | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Attack-Aware Noise Calibration for Differential Privacy | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Attack-Resilient Image Watermarking Using Stable Diffusion | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Attention Temperature Matters in ViT-Based Cross-Domain Few-Shot Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Attention boosted Individualized Regression | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| AttnDreamBooth: Towards Text-Aligned Personalized Text-to-Image Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Auditing Local Explanations is Hard | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Auditing Privacy Mechanisms via Label Inference Attacks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| AutoGuide: Automated Generation and Selection of Context-Aware Guidelines for Large Language Model Agents | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| AutoMix: Automatically Mixing Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| AutoPSV: Automated Process-Supervised Verifier | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| AutoSurvey: Large Language Models Can Automatically Write Surveys | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| AutoTimes: Autoregressive Time Series Forecasters via Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Autobidder's Dilemma: Why More Sophisticated Autobidders Lead to Worse Auction Efficiency | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Autoformalize Mathematical Statements by Symbolic Equivalence and Semantic Consistency | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Automated Efficient Estimation using Monte Carlo Efficient Influence Functions | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Automated Label Unification for Multi-Dataset Semantic Segmentation with GNNs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Automated Multi-Task Learning for Joint Disease Prediction on Electronic Health Records | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Automated Multi-level Preference for MLLMs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Automatic Outlier Rectification via Optimal Transport | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Automatically Learning Hybrid Digital Twins of Dynamical Systems | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Automating Data Annotation under Strategic Human Agents: Risks and Potential Solutions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Autonomous Agents for Collaborative Task under Information Asymmetry | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Autonomous Driving with Spiking Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Autoregressive Image Diffusion: Generation of Image Sequence and Application in MRI | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Autoregressive Image Generation without Vector Quantization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Autoregressive Policy Optimization for Constrained Allocation Tasks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| AverNet: All-in-one Video Restoration for Time-varying Unknown Degradations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Average gradient outer product as a mechanism for deep neural collapse | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Axioms for AI Alignment from Human Feedback | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| B-ary Tree Push-Pull Method is Provably Efficient for Distributed Learning on Heterogeneous Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BAKU: An Efficient Transformer for Multi-Task Policy Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| BAM! Just Like That: Simple and Efficient Parameter Upcycling for Mixture of Experts | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| BAN: Detecting Backdoors Activated by Adversarial Neuron Noise | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| BECAUSE: Bilinear Causal Representation for Generalizable Offline Model-based Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| BERTs are Generative In-Context Learners | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| BMRS: Bayesian Model Reduction for Structured Pruning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| BOLD: Boolean Logic Deep Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| BPQP: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Back to the Continuous Attractor | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | 3 |
| BackTime: Backdoor Attacks on Multivariate Time Series Forecasting | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| BackdoorAlign: Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Balancing Context Length and Mixing Times for Reinforcement Learning at Scale | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Banded Square Root Matrix Factorization for Differentially Private Model Training | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Bandit-Feedback Online Multiclass Classification: Variants and Tradeoffs | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Bandits with Abstention under Expert Advice | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Bandits with Preference Feedback: A Stackelberg Game Perspective | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Bandits with Ranking Feedback | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Barely Random Algorithms and Collective Metrical Task Systems | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Base of RoPE Bounds Context Length | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Batched Energy-Entropy acquisition for Bayesian Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Bayes-optimal learning of an extensive-width neural network from quadratically many samples | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Bayesian Adaptive Calibration and Optimal Design | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Bayesian Domain Adaptation with Gaussian Mixture Domain-Indexing | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Bayesian Online Natural Gradient (BONG) | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Bayesian Optimisation with Unknown Hyperparameters: Regret Bounds Logarithmically Closer to Optimal | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bayesian Optimization of Functions over Node Subsets in Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Bayesian Strategic Classification | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Bayesian-guided Label Mapping for Visual Reprogramming | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Be Confident in What You Know: Bayesian Parameter Efficient Fine-Tuning of Vision Foundation Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Beating Adversarial Low-Rank MDPs with Unknown Transition and Bandit Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| BehaviorGPT: Smart Agent Simulation for Autonomous Driving with Next-Patch Prediction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Belief-State Query Policies for User-Aligned POMDPs | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| BendVLM: Test-Time Debiasing of Vision-Language Embeddings | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Benign overfitting in leaky ReLU networks with moderate input dimension | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Better by default: Strong pre-tuned MLPs and boosted trees on tabular data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Beware of Road Markings: A New Adversarial Patch Attack to Monocular Depth Estimation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Beyond Accuracy: Ensuring Correct Predictions With Correct Rationales | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Beyond Accuracy: Tracking more like Human via Visual Search | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Beyond Efficiency: Molecular Data Pruning for Enhanced Generalization | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Beyond Euclidean: Dual-Space Representation Learning for Weakly Supervised Video Violence Detection | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Beyond Optimism: Exploration With Partially Observable Rewards | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Beyond Primal-Dual Methods in Bandits with Stochastic and Adversarial Constraints | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Beyond Single Stationary Policies: Meta-Task Players as Naturally Superior Collaborators | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Beyond Slow Signs in High-fidelity Model Extraction | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Beyond task diversity: provable representation transfer for sequential multitask linear bandits | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Beyond the Doors of Perception: Vision Transformers Represent Relations Between Objects | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BiDM: Pushing the Limit of Quantization for Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Bias Amplification in Language Model Evolution: An Iterated Learning Perspective | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Bias Detection via Signaling | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Bidirectional Recurrence for Cardiac Motion Tracking with Gaussian Process Latent Coding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bigger, Regularized, Optimistic: scaling for compute and sample efficient continuous control | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Binarized Diffusion Model for Image Super-Resolution | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Binary Search with Distributional Predictions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Biologically Inspired Learning Model for Instructed Vision | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bisimulation Metrics are Optimal Transport Distances, and Can be Computed Efficiently | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| BitDelta: Your Fine-Tune May Only Be Worth One Bit | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| BitsFusion: 1.99 bits Weight Quantization of Diffusion Model | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Black-Box Forgetting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Blind Image Restoration via Fast Diffusion Inversion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Block Sparse Bayesian Learning: A Diversified Scheme | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Block Transformer: Global-to-Local Language Modeling for Fast Inference | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Boosted Conformal Prediction Intervals | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Boosting Alignment for Post-Unlearning Text-to-Image Generative Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Boosting Generalization in Parametric PDE Neural Solvers through Adaptive Conditioning | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Boosting Graph Pooling with Persistent Homology | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Boosting Semi-Supervised Scene Text Recognition via Viewing and Summarizing | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Boosting Text-to-Video Generative Model with MLLMs Feedback | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Boosting Transferability and Discriminability for Time Series Domain Adaptation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Boosting Vision-Language Models with Transduction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Boosting Weakly Supervised Referring Image Segmentation via Progressive Comprehension | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Boosting the Potential of Large Language Models with an Intelligent Information Assistant | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Boosting the Transferability of Adversarial Attack on Vision Transformer with Adaptive Token Tuning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Bootstrapping Top-down Information for Self-modulating Slot Attention | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Boundary Decomposition for Nadir Objective Vector Estimation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Boundary Matters: A Bi-Level Active Finetuning Method | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Brain-JEPA: Brain Dynamics Foundation Model with Gradient Positioning and Spatiotemporal Masking | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BrainBits: How Much of the Brain are Generative Reconstruction Methods Using? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Breaking Long-Tailed Learning Bottlenecks: A Controllable Paradigm with Hypernetwork-Generated Diverse Experts | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Breaking Semantic Artifacts for Generalized AI-generated Image Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Breaking the curse of dimensionality in structured density estimation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Bridge the Modality and Capability Gaps in Vision-Language Model Selection | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Bridge the Points: Graph-based Few-shot Segment Anything Semantically | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Bridge-IF: Learning Inverse Protein Folding with Markov Bridges | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Bridging Gaps: Federated Multi-View Clustering in Heterogeneous Hybrid Views | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bridging Geometric States via Geometric Diffusion Bridge | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bridging Multicalibration and Out-of-distribution Generalization Beyond Covariate Shift | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Bridging OOD Detection and Generalization: A Graph-Theoretic View | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Bridging semantics and pragmatics in information-theoretic emergent communication | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Bridging the Divide: Reconsidering Softmax and Linear Attention | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Building a stable classifier with the inflated argmax | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Building on Efficient Foundations: Effective Training of LLMs with Structured Feedforward Layers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| C-GAIL: Stabilizing Generative Adversarial Imitation Learning with Control Theory | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CA-SSLR: Condition-Aware Self-Supervised Learning Representation for Generalized Speech Processing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CALANet: Cheap All-Layer Aggregation for Human Activity Recognition | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CALVIN: Improved Contextual Video Captioning via Instruction Tuning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CAT3D: Create Anything in 3D with Multi-View Diffusion Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CAT: Coordinating Anatomical-Textual Prompts for Multi-Organ and Tumor Segmentation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| CIFD: Controlled Information Flow to Enhance Knowledge Distillation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CLIP in Mirror: Disentangling text from visual images through reflection | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| CLIPAway: Harmonizing focused embeddings for removing objects via diffusion models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CLIPCEIL: Domain Generalization through CLIP via Channel rEfinement and Image-text aLignment | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CLUES: Collaborative Private-domain High-quality Data Selection for LLMs via Training Dynamics | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| CODA: A Correlation-Oriented Disentanglement and Augmentation Modeling Scheme for Better Resisting Subpopulation Shifts | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| COLD: Causal reasOning in cLosed Daily activities | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| CONTRAST: Continual Multi-source Adaptation to Dynamic Distributions | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| COSMIC: Compress Satellite Image Efficiently via Diffusion Compensation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CRAYM: Neural Field Optimization via Camera RAY Matching | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CRONOS: Enhancing Deep Learning with Scalable GPU Accelerated Convex Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| CRT-Fusion: Camera, Radar, Temporal Fusion Using Motion Information for 3D Object Detection | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CSPG: Crossing Sparse Proximity Graphs for Approximate Nearest Neighbor Search | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CV-VAE: A Compatible Video VAE for Latent Generative Video Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| CYCLO: Cyclic Graph Transformer Approach to Multi-Object Relationship Modeling in Aerial Videos |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Cal-DPO: Calibrated Direct Preference Optimization for Language Model Alignment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Calibrated Self-Rewarding Vision Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Calibrating Reasoning in Language Models with Internal Consistency |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Can Graph Learning Improve Planning in LLM-based Agents? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Can Graph Neural Networks Expose Training Data Properties? An Efficient Risk Assessment Approach |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Can LLMs Implicitly Learn Numeric Parameter Constraints in Data Science APIs? |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Can Language Models Learn to Skip Steps? |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales? |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Can Large Language Model Agents Simulate Human Trust Behavior? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Can Learned Optimization Make Reinforcement Learning Less Difficult? |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Can Models Learn Skill Composition from Examples? |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Can Simple Averaging Defeat Modern Watermarks? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Can Transformers Smell Like Humans? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Can We Leave Deepfake Data Behind in Training Deepfake Detector? |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Can an AI Agent Safely Run a Government? Existence of Probably Approximately Aligned Policies |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Can large language models explore in-context? |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
✅ |
2 |
| Can neural operators always be continuously discretized? |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Capturing the denoising effect of PCA via compression ratio |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Cardinality-Aware Set Prediction and Top-$k$ Classification |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Carrot and Stick: Eliciting Comparison Data and Beyond |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Cascade Speculative Drafting for Even Faster LLM Inference |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Cascade of phase transitions in the training of energy-based models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Categorical Flow Matching on Statistical Manifolds |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Causal Context Adjustment Loss for Learned Image Compression |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Causal Contrastive Learning for Counterfactual Regression Over Time |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Causal Deciphering and Inpainting in Spatio-Temporal Dynamics via Diffusion Model |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Causal Dependence Plots |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Causal Discovery from Event Sequences by Local Cause-Effect Attribution |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Causal Effect Identification in a Sub-Population with Latent Variables |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Causal Imitation for Markov Decision Processes: a Partial Identification Approach |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Causal Inference in the Closed-Loop: Marginal Structural Models for Sequential Excursion Effects |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Causal Temporal Representation Learning with Nonstationary Sparse Transition |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Causal discovery with endogenous context variables |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Causal language modeling can elicit search and reasoning capabilities on logic puzzles |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Causal vs. Anticausal merging of predictors |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CausalStock: Deep End-to-end Causal Discovery for News-driven Multi-stock Movement Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Cell ontology guided transcriptome foundation model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Certified Adversarial Robustness via Randomized $\alpha$-Smoothing for Regression Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Certified Machine Unlearning via Noisy Stochastic Gradient Descent | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Certified Robustness for Deep Equilibrium Models via Serialized Random Smoothing | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Chain of Agents: Large Language Models Collaborating on Long-Context Tasks | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Chain of Thoughtlessness? An Analysis of CoT in Planning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Chain-of-Thought Reasoning Without Prompting | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Challenges of Generating Structurally Diverse Graphs | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ChatCam: Empowering Camera Control through Conversational AI | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| ChatQA: Surpassing GPT-4 on Conversational QA and RAG | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Cherry on Top: Parameter Heterogeneity and Quantization in Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Chimera: Effectively Modeling Multivariate Time Series with 2-Dimensional State Space Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 2 |
| ChronoEpilogi: Scalable Time Series Selection with Multiple Solutions | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CigTime: Corrective Instruction Generation Through Inverse Motion Editing | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Classification Diffusion Models: Revitalizing Density Ratio Estimation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Classification Done Right for Vision-Language Pre-Training | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Classifier-guided Gradient Modulation for Enhanced Multimodal Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| ClavaDDPM: Multi-relational Data Synthesis with Cluster-guided Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Cloud Object Detector Adaptation by Integrating Different Source Knowledge | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Cluster-Learngene: Inheriting Adaptive Clusters for Vision Transformers | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Cluster-wise Graph Transformer with Dual-granularity Kernelized Attention | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Clustering in Causal Attention Masking | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Clustering then Propagation: Select Better Anchors for Knowledge Graph Embedding | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Clustering with Non-adaptive Subset Queries | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Co-occurrence is not Factual Association in Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CoBo: Collaborative Learning via Bilevel Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CoFie: Learning Compact Neural Surface Representations with Coordinate Fields | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CoSW: Conditional Sample Weighting for Smoke Segmentation with Label Noise | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| CoSy: Evaluating Textual Explanations of Neurons | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CoVoMix: Advancing Zero-Shot Speech Generation for Human-like Multi-talker Conversations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Coarse-to-Fine Concept Bottleneck Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Code Repair with LLMs gives an Exploration-Exploitation Tradeoff | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| CodeRosetta: Pushing the Boundaries of Unsupervised Code Translation for Parallel Programming | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CogVLM: Visual Expert for Pretrained Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Coherence-free Entrywise Estimation of Eigenvectors in Low-rank Signal-plus-noise Matrix Models | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Coherent 3D Scene Diffusion From a Single RGB Image | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| ColJailBreak: Collaborative Generation and Editing for Jailbreaking Text-to-Image Deep Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Collaboration! Towards Robust Neural Methods for Routing Problems | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Collaborative Cognitive Diagnosis with Disentangled Representation Learning for Learner Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Collaborative Refining for Learning from Inaccurate Labels | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Color-Oriented Redundancy Reduction in Dataset Distillation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Combining Observational Data and Language for Species Range Estimation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Combining Statistical Depth and Fermat Distance for Uncertainty Quantification | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Communication Bounds for the Distributed Experts Problem | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Communication Efficient Distributed Training with Distributed Lion | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Communication-Efficient Federated Group Distributionally Robust Optimization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Community Detection Guarantees using Embeddings Learned by Node2Vec | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Compact Language Models via Pruning and Knowledge Distillation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Compact Proofs of Model Performance via Mechanistic Interpretability | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | 4 |
| Complete Graphical Criterion for Sequential Covariate Adjustment in Causal Inference | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | 2 |
| Compositional 3D-aware Video Generation with LLM Director | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Compositional Automata Embeddings for Goal-Conditioned Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Compositional Generalization Across Distributional Shifts with Sparse Tree Operations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Compositional PAC-Bayes: Generalization of GNNs with persistence and beyond | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Compressing Large Language Models using Low Rank and Low Precision Decomposition | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Computational Aspects of Bayesian Persuasion under Approximate Best Response | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Computerized Adaptive Testing via Collaborative Ranking | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Con4m: Context-aware Consistency Learning Framework for Segmented Time Series Classification | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| ConStat: Performance-Based Contamination Detection in Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CondTSF: One-line Plugin of Dataset Condensation for Time Series Forecasting | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Conditional Controllable Image Fusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Conditional Density Estimation with Histogram Trees | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Conditional Outcome Equivalence: A Quantile Alternative to CATE | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Conditional Synthesis of 3D Molecules with Time Correction Sampler | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Conditioning non-linear and infinite-dimensional diffusion processes | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Confidence Calibration of Classifiers with Many Classes | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Confidence Regulation Neurons in Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Confident Natural Policy Gradient for Local Planning in $q_\pi$-realizable Constrained MDPs | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Conformal Classification with Equalized Coverage for Adaptively Selected Groups | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Conformal Inverse Optimization | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Conformal Prediction for Class-wise Coverage via Augmented Label Rank Calibration | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Conformalized Credal Set Predictors | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Conformalized Multiple Testing after Data-dependent Selection | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Conformalized Time Series with Semantic Features | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Confusion-Resistant Federated Learning via Diffusion-Based Data Harmonization on Non-IID Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Conjugate Bayesian Two-step Change Point Detection for Hawkes Process | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Conjugated Semantic Pool Improves OOD Detection with Pre-trained Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Connecting Joint-Embedding Predictive Architecture with Contrastive Self-supervised Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Connectivity-Driven Pseudo-Labeling Makes Stronger Cross-Domain Segmenters | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Consensus Learning with Deep Sets for Essential Matrix Estimation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Consistency Diffusion Bridge Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Consistency Models for Scalable and Fast Simulation-Based Inference | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Consistency Purification: Effective and Efficient Diffusion Purification towards Certified Robustness | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Consistency of Neural Causal Partial Identification | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Constant Acceleration Flow | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Constrained Binary Decision Making | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Constrained Diffusion Models via Dual Training | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Constrained Diffusion with Trust Sampling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Constrained Latent Action Policies for Model-Based Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Constrained Sampling with Primal-Dual Langevin Monte Carlo | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Constrained Synthesis with Projected Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Constructing Semantics-Aware Adversarial Examples with a Probabilistic Perspective | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Construction and Application of Materials Knowledge Graph in Multidisciplinary Materials Science via Large Language Model | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| ContactField: Implicit Field Representation for Multi-Person Interaction Geometry | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Context and Geometry Aware Voxel Transformer for Semantic Scene Completion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Context-Aware Testing: A New Paradigm for Model Testing with Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| ContextCite: Attributing Model Generation to Context | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| ContextGS : Compact 3D Gaussian Splatting with Anchor Level Context Model | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Contextual Active Model Selection | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Contextual Bilevel Reinforcement Learning for Incentive Alignment | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Contextual Decision-Making with Knapsacks Beyond the Worst Case | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Contextual Linear Optimization with Bandit Feedback | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Contextual Multinomial Logit Bandits with General Value Functions | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Continual Audio-Visual Sound Separation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Continual Counting with Gradual Privacy Expiration | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Continual Learning in the Frequency Domain | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Continual Learning with Global Alignment | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Continual learning with the neural tangent ensemble | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Continuous Heatmap Regression for Pose Estimation via Implicit Neural Representation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Continuous Partitioning for Graph-Based Semi-Supervised Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Continuous Product Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Continuous Spatiotemporal Events Decoupling through Spike-based Bayesian Computation | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Continuous Temporal Domain Generalization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Contracting with a Learning Agent | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Contrastive dimension reduction: when and how? | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Contrastive losses as generalized models of global epistasis | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Contrastive-Equivariant Self-Supervised Learning Improves Alignment with Primate Visual Area IT | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ControlSynth Neural ODEs: Modeling Dynamical Systems with Guaranteed Convergence | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Controlled maximal variability along with reliable performance in recurrent neural networks | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Controlling Continuous Relaxation for Combinatorial Optimization | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Controlling Multiple Errors Simultaneously with a PAC-Bayes Bound | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Convergence Analysis of Split Federated Learning on Heterogeneous Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Convergence of $\text{log}(1/\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Convergence of No-Swap-Regret Dynamics in Self-Play | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Convolutional Differentiable Logic Gate Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| CosAE: Learnable Fourier Series for Image Restoration | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Cost-aware Bayesian Optimization via the Pandora's Box Gittins Index | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Cost-efficient Knowledge-based Question Answering with Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CountGD: Multi-Modal Open-World Counting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Counterfactual Fairness by Combining Factual and Counterfactual Predictions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Coupled Mamba: Enhanced Multimodal Fusion with Coupled State Space Model | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Covariate Shift Corrected Conditional Randomization Test | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 6 |
| Cracking the Code of Juxtaposition: Can AI Models Understand the Humorous Contradictions | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Crafting Interpretable Embeddings for Language Neuroscience by Asking LLMs Questions | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Credal Deep Ensembles for Uncertainty Quantification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Credal Learning Theory | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Credit Attribution and Stable Compression | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| CriticEval: Evaluating Large-scale Language Model as Critic | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Cross-Device Collaborative Test-Time Adaptation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Cross-Modality Perturbation Synergy Attack for Person Re-identification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Cross-Scale Self-Supervised Blind Image Deblurring via Implicit Neural Representation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Cross-modal Representation Flattening for Multi-modal Domain Generalization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Cross-model Control: Improving Multiple Large Language Models in One-time Training | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Cross-video Identity Correlating for Person Re-identification Pre-training | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| CryoGEM: Physics-Informed Generative Cryo-Electron Microscopy | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| CryoSPIN: Improving Ab-Initio Cryo-EM Reconstruction with Semi-Amortized Pose Inference | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Cryptographic Hardness of Score Estimation | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Ctrl-X: Controlling Structure and Appearance for Text-To-Image Generation Without Guidance | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CultureLLM: Incorporating Cultural Differences into Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| CulturePark: Boosting Cross-cultural Understanding in Large Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Curvature Clues: Decoding Deep Learning Privacy with Input Loss Curvature | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Customized Multiple Clustering via Multi-Modal Subspace Proxy Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Customized Subgraph Selection and Encoding for Drug-drug Interaction Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| D-CPT Law: Domain-specific Continual Pre-Training Scaling Law for Large Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| D-LLM: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| D-MiSo: Editing Dynamic 3D Scenes using Multi-Gaussians Soup | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| D2R2: Diffusion-based Representation with Random Distance Matching for Tabular Few-shot Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DAGER: Exact Gradient Inversion for Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DALD: Improving Logits-based Detector without Logits from Black-box LLMs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DAPE: Data-Adaptive Positional Encoding for Length Extrapolation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| DARNet: Dual Attention Refinement Network with Spatiotemporal Construction for Auditory Attention Detection | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DASH: Warm-Starting Neural Network Training in Stationary Settings without Loss of Plasticity | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| DC-Gaussian: Improving 3D Gaussian Splatting for Reflective Dash Cam Videos | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DCDepth: Progressive Monocular Depth Estimation in Discrete Cosine Domain | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DDK: Distilling Domain Knowledge for Efficient Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DDN: Dual-domain Dynamic Normalization for Non-stationary Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DDR: Exploiting Deep Degradation Response as Flexible Image Descriptor | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DECRL: A Deep Evolutionary Clustering Jointed Temporal Knowledge Graph Representation Learning Approach | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| DEFT: Efficient Fine-tuning of Diffusion Models by Learning the Generalised $h$-transform | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DEL: Discrete Element Learner for Learning 3D Particle Dynamics with Neural Rendering | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| DEPrune: Depth-wise Separable Convolution Pruning for Maximizing GPU Parallelism | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| DEX: Data Channel Extension for Efficient CNN Inference on Tiny AI Accelerators | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DG-SLAM: Robust Dynamic Gaussian Splatting SLAM with Hybrid Pose Optimization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DHA: Learning Decoupled-Head Attention from Transformer Checkpoints via Adaptive Heads Fusion | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DI-MaskDINO: A Joint Object Detection and Instance Segmentation Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DINTR: Tracking via Diffusion-based Interpolation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DMNet: Self-comparison Driven Model for Subject-independent Seizure Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DMPlug: A Plug-in Method for Solving Inverse Problems with Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DMesh: A Differentiable Mesh Representation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| DN-4DGS: Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DOFEN: Deep Oblivious Forest ENsemble | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DPIC: Decoupling Prompt and Intrinsic Characteristics for LLM Generated Text Detection | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| DRACO: A Denoising-Reconstruction Autoencoder for Cryo-EM | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| DRIP: Unleashing Diffusion Priors for Joint Foreground and Alpha Prediction in Image Matting | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DarkSAM: Fooling Segment Anything Model to Segment Nothing | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Data Acquisition via Experimental Design for Data Markets | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Data Attribution for Text-to-Image Models by Unlearning Synthesized Images | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Data Augmentation with Diffusion for Open-Set Semi-Supervised Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Data Distribution Valuation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Data Free Backdoor Attacks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Data Mixture Inference Attack: BPE Tokenizers Reveal Training Data Compositions | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Data subsampling for Poisson regression with pth-root-link | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Data-Driven Discovery of Dynamical Systems in Pharmacology using Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Data-Efficient Learning with Neural Programs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Data-faithful Feature Attribution: Mitigating Unobservable Confounders via Instrumental Variables | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DeBaRA: Denoising-Based 3D Room Arrangement Generation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DeNetDM: Debiasing by Network Depth Modulation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DePLM: Denoising Protein Language Models for Property Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DeTeCtive: Detecting AI-generated Text via Multi-Level Contrastive Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches with TikZ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| DeTrack: In-model Latent Denoising Learning for Visual Object Tracking | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Dealing with Synthetic Data Contamination in Online Continual Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Debiasing Synthetic Data Generated by Deep Generative Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Decentralized Noncooperative Games with Coupled Decision-Dependent Distributions | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Decision Mamba: A Multi-Grained State Space Model with Self-Evolution Regularization for Offline RL | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence Modeling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Decision-Focused Learning with Directional Gradients | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Decision-Making Behavior Evaluation Framework for LLMs under Uncertain Context | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Decoding-Time Language Model Alignment with Multiple Objectives | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Decomposable Transformer Point Processes | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Decompose, Analyze and Rethink: Solving Intricate Problems with Human-like Reasoning Cycle | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Decomposed Prompt Decision Transformer for Efficient Unseen Task Generalization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Decomposing and Interpreting Image Representations via Text in ViTs Beyond CLIP | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Decoupled Kullback-Leibler Divergence Loss | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Decoupling Semantic Similarity from Spatial Alignment for Neural Networks. | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Bayesian Active Learning for Preference Modeling in Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Deep Correlated Prompting for Visual Recognition with Missing Modalities | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Equilibrium Algorithmic Reasoning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Deep Graph Mating | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptative Residual Module | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Homomorphism Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Learning for Computing Convergence Rates of Markov Chains | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Deep Learning in Medical Image Registration: Magic or Mirage? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 4 |
| Deep Policy Gradient Methods Without Batch Updates, Target Networks, or Replay Buffers | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Deep Submodular Peripteral Networks | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Deep Support Vectors | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Deep linear networks for regression are implicitly regularized towards flat minima | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| DeepDRK: Deep Dependency Regularized Knockoff for Feature Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DeepITE: Designing Variational Graph Autoencoders for Intervention Target Estimation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DeepLag: Discovering Deep Lagrangian Dynamics for Intuitive Fluid Prediction | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DeformableTST: Transformer for Time Series Forecasting without Over-reliance on Patching | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| DeiSAM: Segment Anything with Deictic Prompting | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Delta-CoMe: Training-Free Delta-Compression with Mixed-Precision for Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DeltaDEQ: Exploiting Heterogeneous Convergence for Accelerating Deep Equilibrium Iterations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DeltaDock: A Unified Framework for Accurate, Efficient, and Physically Reliable Molecular Docking | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Delving into the Reversal Curse: How Far Can Large Language Models Generalize? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Demystify Mamba in Vision: A Linear Attention Perspective | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dendritic Integration Inspired Artificial Neural Networks Capture Data Correlation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DenoiseRep: Denoising Model for Representation Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Denoising Diffusion Path: Attribution Noise Reduction with An Auxiliary Diffusion Model | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dense Associative Memory Through the Lens of Random Features | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dense Connector for MLLMs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Depth Anything V2 | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Depth Anywhere: Enhancing 360 Monocular Depth Estimation via Perspective Distillation and Unlabeled Data Augmentation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Derandomizing Multi-Distribution Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Derivative-enhanced Deep Operator Network | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Derivatives of Stochastic Gradient Descent in parametric optimization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Designs for Enabling Collaboration in Human-Machine Teaming via Interactive and Explainable Systems | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Detecting Bugs with Substantial Monetary Consequences by LLM and Rule-based Reasoning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Detecting and Measuring Confounding Using Causal Mechanism Shifts | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Deterministic Policies for Constrained Reinforcement Learning in Polynomial Time | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiGRAF: Diffeomorphic Graph-Adaptive Activation Function | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DiMSUM: Diffusion Mamba - A Scalable and Unified Spatial-Frequency Method for Image Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DiP-GO: A Diffusion Pruner via Few-step Gradient Optimization | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| DiPEx: Dispersing Prompt Expansion for Class-Agnostic Object Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiTFastAttn: Attention Compression for Diffusion Transformer Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diff-eRank: A Novel Rank-Based Metric for Evaluating Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DiffCut: Catalyzing Zero-Shot Semantic Segmentation with Diffusion Features and Recursive Normalized Cut | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiffGS: Functional Gaussian Splatting Diffusion | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DiffHammer: Rethinking the Robustness of Diffusion-Based Adversarial Purification | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DiffLight: A Partial Rewards Conditioned Diffusion Model for Traffic Signal Control with Missing Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DiffNorm: Self-Supervised Normalization for Non-autoregressive Speech-to-speech Translation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiffPO: A causal diffusion model for learning distributions of potential outcomes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DiffPano: Scalable and Consistent Text to Panorama Generation with Spherical Epipolar-Aware Diffusion | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| DiffPhyCon: A Generative Approach to Control Complex Physical Systems | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DiffSF: Diffusion Models for Scene Flow Estimation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DiffTORI: Differentiable Trajectory Optimization for Deep Reinforcement and Imitation Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffeomorphic interpolation for efficient persistence-based topological optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Differentiable Modal Synthesis for Physical Modeling of Planar String Sound and Motion Simulation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Differentiable Quantum Computing for Large-scale Linear Control | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Differentiable Structure Learning with Partial Orders | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Differentiable Task Graph Learning: Procedural Activity Representation and Online Mistake Detection from Egocentric Videos | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Differential Privacy in Scalable General Kernel Learning via $K$-means Nystr{\"o}m Random Features | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Differentially Private Equivalence Testing for Continuous Distributions and Applications | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Differentially Private Graph Diffusion with Applications in Personalized PageRanks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Differentially Private Optimization with Sparse Gradients | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Differentially Private Reinforcement Learning with Self-Play | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Differentially Private Set Representations | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Differentially Private Stochastic Gradient Descent with Fixed-Size Minibatches: Tighter RDP Guarantees with or without Replacement | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DiffuBox: Refining 3D Object Detection with Point Diffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiffuLT: Diffusion for Long-tail Recognition Without External Knowledge | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DiffuPac: Contextual Mimicry in Adversarial Packets Generation via Diffusion Model | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DiffuserLite: Towards Real-time Diffusion Planning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusing Differentiable Representations | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Diffusion Actor-Critic with Entropy Regulator | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Diffusion Imitation from Observation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion Models With Learned Adaptive Noise | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion Models are Certifiably Robust Classifiers | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion PID: Interpreting Diffusion via Partial Information Decomposition | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion Policies Creating a Trust Region for Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion Policy Attacker: Crafting Adversarial Attacks for Diffusion-based Policies | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion Priors for Variational Likelihood Estimation and Image Denoising | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion Spectral Representation for Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion Tuning: Transferring Diffusion Models via Chain of Forgetting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Diffusion Twigs with Loop Guidance for Conditional Graph Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Diffusion for World Modeling: Visual Details Matter in Atari | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion of Thought: Chain-of-Thought Reasoning in Diffusion Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion-DICE: In-Sample Diffusion Guidance for Offline Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Diffusion-Inspired Truncated Sampler for Text-Video Retrieval | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Diffusion-Reward Adversarial Imitation Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Diffusion-based Layer-wise Semantic Reconstruction for Unsupervised Out-of-Distribution Detection | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion-based Curriculum Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion4D: Fast Spatial-temporal Consistent 4D generation via Video Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DiffusionBlend: Learning 3D Image Prior through Position-aware Diffusion Score Blending for 3D Computed Tomography Reconstruction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| DiffusionPDE: Generative PDE-Solving under Partial Observation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Dimension-free Private Mean Estimation for Anisotropic Distributions | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Dimension-free deterministic equivalents and scaling laws for random feature regression | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Direct Consistency Optimization for Robust Customization of Text-to-Image Diffusion models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Direct Preference-Based Evolutionary Multi-Objective Optimization with Dueling Bandits | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Direct Unlearning Optimization for Robust and Safe Text-to-Image Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Directional Smoothness and Gradient Methods: Convergence and Adaptivity | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| DisC-GS: Discontinuity-aware Gaussian Splatting | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DisCEdit: Model Editing by Identifying Discriminative Components | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Discovering Creative Behaviors through DUPLEX: Diverse Universal Features for Policy Exploration | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Discovering Preference Optimization Algorithms with and for Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Discovering plasticity rules that organize and maintain neural circuits | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Discovery of the Hidden World with Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Discrete Dictionary-based Decomposition Layer for Structured Representation Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Discrete Flow Matching | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Discrete Modeling via Boundary Conditional Diffusion Processes | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Discrete-state Continuous-time Diffusion for Graph Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Discretely beyond $1/e$: Guided Combinatorial Algortihms for Submodular Maximization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DisenGCD: A Meta Multigraph-assisted Disentangled Graph Learning Framework for Cognitive Diagnosis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Disentangled Representation Learning in Non-Markovian Causal Systems | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Disentangled Style Domain for Implicit $z$-Watermark Towards Copyright Protection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Disentangled Unsupervised Skill Discovery for Efficient Hierarchical Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Disentangling Interpretable Factors with Supervised Independent Subspace Principal Component Analysis | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Disentangling Linear Quadratic Control with Untrusted ML Predictions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Disentangling and mitigating the impact of task similarity for continual learning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Disentangling the Roles of Distinct Cell Classes with Cell-Type Dynamical Systems | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Dissect Black Box: Interpreting for Rule-Based Explanations in Unsupervised Anomaly Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Dissecting Query-Key Interaction in Vision Transformers | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Dissecting the Failure of Invariant Learning on Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Distributed Least Squares in Small Space via Sketching and Bias Reduction | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Distributed-Order Fractional Graph Operating Network | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Distribution Guidance Network for Weakly Supervised Point Cloud Semantic Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Distribution Learning with Valid Outputs Beyond the Worst-Case | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Distribution-Aware Data Expansion with Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Distributional Preference Alignment of LLMs via Optimal Transport | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Distributional Reinforcement Learning with Regularized Wasserstein Loss | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Distributional Successor Features Enable Zero-Shot Policy Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Distributional regression: CRPS-error bounds for model fitting, model selection and convex aggregation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Distributionally Robust Performative Prediction | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithms | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| DistrictNet: Decision-aware learning for geographical districting | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Divergences between Language Models and Human Brains | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Diversify, Contextualize, and Adapt: Efficient Entropy Modeling for Neural Image Codec | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Diversity Is Not All You Need: Training A Robust Cooperative Agent Needs Specialist Partners | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Divide-and-Conquer Posterior Sampling for Denoising Diffusion priors | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Do Finetti: On Causal Effects for Exchangeable Data | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Do LLMs Build World Representations? Probing Through the Lens of State Abstraction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Do causal predictors generalize better to new domains? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Do's and Don'ts: Learning Desirable Skills with Instruction Videos | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| DoFIT: Domain-aware Federated Instruction Tuning with Alleviated Catastrophic Forgetting | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Does Egalitarian Fairness Lead to Instability? The Fairness Bounds in Stable Federated Learning Under Altruistic Behaviors | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Does Reasoning Emerge? Examining the Probabilities of Causation in Large Language Models | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection? | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Does Worst-Performing Agent Lead the Pack? Analyzing Agent Dynamics in Unified Distributed SGD | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Domain Adaptation for Large-Vocabulary Object Detectors | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DomainGallery: Few-shot Domain-driven Image Generation by Attribute-centric Finetuning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Don't Look Twice: Faster Video Transformers with Run-Length Tokenization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Doob's Lagrangian: A Sample-Efficient Variational Approach to Transition Path Sampling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Double-Ended Synthesis Planning with Goal-Constrained Bidirectional Search | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Doubly Hierarchical Geometric Representations for Strand-based Human Hairstyle Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Doubly Mild Generalization for Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Drones Help Drones: A Collaborative Framework for Multi-Drone Object Trajectory Prediction and Beyond | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Du-IN: Discrete units-guided mask modeling for decoding speech from Intracranial Neural signals | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Dual Cone Gradient Descent for Training Physics-Informed Neural Networks | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Dual Critic Reinforcement Learning under Partial Observability | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dual Encoder GAN Inversion for High-Fidelity 3D Head Reconstruction from Single Images | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Dual Lagrangian Learning for Conic Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dual Risk Minimization: Towards Next-Level Robustness in Fine-tuning Zero-Shot Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dual-Diffusion for Binocular 3D Human Pose Estimation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Dual-Personalizing Adapter for Federated Foundation Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dual-Perspective Activation: Efficient Channel Denoising via Joint Forward-Backward Criterion for Artificial Neural Networks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Dual-frame Fluid Motion Estimation with Test-time Optimization and Zero-divergence Loss | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Dueling over Dessert, Mastering the Art of Repeated Cake Cutting | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | 2 |
| DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Dynamic 3D Gaussian Fields for Urban Areas | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dynamic Conditional Optimal Transport through Simulation-Free Flows | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Dynamic Model Predictive Shielding for Provably Safe Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Dynamic Neural Regeneration: Enhancing Deep Learning Generalization on Small Datasets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dynamic Rescaling for Training GNNs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Dynamic Service Fee Pricing under Strategic Behavior: Actions as Instruments and Phase Transition | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Dynamic Subgroup Identification in Covariate-adjusted Response-adaptive Randomization Experiments |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Déjà Vu Memorization in Vision–Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| E-Motion: Future Motion Simulation via Event Sequence Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| EAGLE: Efficient Adaptive Geometry-based Learning in Cross-view Understanding |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| EAI: Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| EASI: Evolutionary Adversarial Simulator Identification for Sim-to-Real Transfer |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| ECLipsE: Efficient Compositional Lipschitz Constant Estimation for Deep Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| ECMamba: Consolidating Selective State Space Model with Retinex Guidance for Efficient Multiple Exposure Correction |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| EDT: An Efficient Diffusion Transformer Framework Inspired by Human-like Sketching |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EEG2Video: Towards Decoding Dynamic Visual Perception from EEG Signals |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EEGPT: Pretrained Transformer for Universal and Reliable Representation of EEG Signals |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EGODE: An Event-attended Graph ODE Framework for Modeling Rigid Dynamics |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| EGSST: Event-based Graph Spatiotemporal Sensitive Transformer for Object Detection |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| EGonc: Energy-based Open-Set Node Classification with substitute Unknowns |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| EM Distillation for One-step Diffusion Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EMR-Merging: Tuning-Free High-Performance Model Merging |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| EMVP: Embracing Visual Foundation Model for Visual Place Recognition with Centroid-Free Probing |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ENAT: Rethinking Spatial-temporal Interactions in Token-based Image Synthesis |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EPIC: Effective Prompting for Imbalanced-Class Data Synthesis in Tabular Data Classification via Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ESPACE: Dimensionality Reduction of Activations for Model Compression |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| ET-Flow: Equivariant Flow-Matching for Molecular Conformer Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ETO: Efficient Transformer-based Local Feature Matching by Organizing Multiple Homography Hypotheses |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| EZ-HOI: VLM Adaptation via Guided Prompt Learning for Zero-Shot HOI Detection |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Easy Regional Contrastive Learning of Expressive Fashion Representations |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Edit Distance Robust Watermarks via Indexing Pseudorandom Codes |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Effective Exploration Based on the Structural Information Principles |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficiency for Free: Ideal Data Are Transportable Representations |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficiency of the First-Price Auction in the Autobidding World |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Efficient $\Phi$-Regret Minimization with Low-Degree Swap Deviations in Extensive-Form Games |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Adversarial Training in LLMs with Continuous Attacks |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Centroid-Linkage Clustering |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Efficient Combinatorial Optimization via Heat Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Contextual LLM Cascades through Budget-Constrained Policy Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Discrepancy Testing for Learning with Distribution Shift |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Efficient Graph Matching for Correlated Stochastic Block Models |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Efficient LLM Jailbreak via Adaptive Dense-to-sparse Constrained Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient LLM Scheduling by Learning to Rank |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Efficient Large Multi-modal Models via Visual Context Compression |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Leverage Score Sampling for Tensor Train Decomposition |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Lifelong Model Evaluation in an Era of Rapid Progress |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Minimum Bayes Risk Decoding using Low-Rank Matrix Completion Algorithms |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Multi-task LLM Quantization and Serving for Multiple LoRA Adapters |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Multi-task Reinforcement Learning with Cross-Task Policy Guidance |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Policy Evaluation Across Multiple Different Experimental Datasets |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Efficient Prompt Optimization Through the Lens of Best Arm Identification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Recurrent Off-Policy RL Requires a Context-Encoder-Specific Learning Rate |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Reinforcement Learning by Discovering Neural Pathways |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Sketches for Training Data Attribution and Studying the Loss Landscape |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Streaming Algorithms for Graphlet Sampling |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Temporal Action Segmentation via Boundary-aware Query Voting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient and Private Marginal Reconstruction with Local Non-Negativity |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Efficient multi-prompt evaluation of LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| EfficientCAPER: An End-to-End Framework for Fast and Robust Category-Level Articulated Object Pose Estimation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficiently Learning Significant Fourier Feature Pairs for Statistical Independence Testing |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EigenVI: score-based variational inference with orthogonal function expansions |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ElasTST: Towards Robust Varied-Horizon Forecasting with Elastic Time-Series Transformer |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Elliptical Attention |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Elo Uncovered: Robustness and Best Practices in Language Model Evaluation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Elucidating the Design Space of Dataset Condensation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Embedding Dimension of Contrastive Learning and $k$-Nearest Neighbors |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Embedding-Aligned Language Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Emergence of heavy tails in homogenized stochastic gradient descent |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Empowering Active Learning for 3D Molecular Graphs with Geometric Graph Isomorphism |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Empowering Visible-Infrared Person Re-Identification with Large Foundation Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| EnOF-SNN: Training Accurate Spiking Neural Networks via Enhancing the Output Feature |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Enabling Adaptive Agent Training in Open-Ended Simulators by Targeting Diversity |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| End-To-End Causal Effect Estimation from Unstructured Natural Language Data |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| End-to-End Ontology Learning with Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| End-to-End Video Semantic Segmentation in Adverse Weather using Fusion Blocks and Temporal-Spatial Teacher-Student Learning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
2 |
| End-to-end Learnable Clustering for Intent Learning in Recommendation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Energy-Guided Continuous Entropic Barycenter Estimation for General Costs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Energy-based Epistemic Uncertainty for Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Energy-based Hopfield Boosting for Out-of-Distribution Detection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Chess Reinforcement Learning with Graph Representation |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Enhancing Consistency-Based Image Generation via Adversarialy-Trained Classification and Energy-Based Discrimination |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Enhancing Diversity in Bayesian Deep Learning via Hyperspherical Energy Minimization of CKA |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing Domain Adaptation through Prompt Gradient Alignment |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Enhancing Feature Diversity Boosts Channel-Adaptive Vision Transformers |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Graph Transformers with Hierarchical Distance Structural Encoding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing LLM Reasoning via Vision-Augmented Prompting |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Enhancing LLM’s Cognition via Structurization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing Large Language Models through Adaptive Tokenizers |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Enhancing Large Vision Language Models with Self-Training on Image Comprehension |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Enhancing Motion in Text-to-Video Generation with Decomposed Encoding and Conditioning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing Multiple Dimensions of Trustworthiness in LLMs via Sparse Activation Control |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Enhancing Preference-based Linear Bandits via Human Response Time |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Enhancing Protein Mutation Effect Prediction through a Retrieval-Augmented Framework |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing Robustness in Deep Reinforcement Learning: A Lyapunov Exponent Approach |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Enhancing Robustness of Graph Neural Networks on Social Media with Explainable Inverse Reinforcement Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing Robustness of Last Layer Two-Stage Fair Model Corrections |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Enhancing Semi-Supervised Learning via Representative and Diverse Sample Selection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Enhancing Zero-Shot Vision Models by Label-Free Prompt Distribution Learning and Bias Correcting |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Enriching Disentanglement: From Logical Definitions to Quantitative Metrics |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Ensemble Learning for Heterogeneous Large Language Models with Deep Parallel Collaboration |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Ensemble sampling for linear bandits: small ensembles suffice |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Entity Alignment with Noisy Annotations from Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Entropy testing and its application to testing Bayesian networks |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Entropy-regularized Diffusion Policy with Q-Ensembles for Offline Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Entrywise error bounds for low-rank approximations of kernel matrices |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Epipolar-Free 3D Gaussian Splatting for Generalizable Novel View Synthesis |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Episodic Future Thinking Mechanism for Multi-agent Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Equivariant Blurring Diffusion for Hierarchical Molecular Conformer Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Equivariant Neural Diffusion for Molecule Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Equivariant spatio-hemispherical networks for diffusion MRI deconvolution |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Era3D: High-Resolution Multiview Diffusion using Efficient Row-wise Attention |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Error Analysis of Spherically Constrained Least Squares Reformulation in Solving the Stackelberg Prediction Game |
❌ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
3 |
| Error Correction Output Codes for Robust Neural Networks against Weight-errors: A Neural Tangent Kernel Point of View |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Estimating Ego-Body Pose from Doubly Sparse Egocentric Video Data |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Estimating Epistemic and Aleatoric Uncertainty with a Single Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Estimating Heterogeneous Treatment Effects by Combining Weak Instruments and Observational Data |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Estimating the Hallucination Rate of Generative AI |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Euclidean distance compression via deep random features |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Evaluate then Cooperate: Shapley-based View Cooperation Enhancement for Multi-view Clustering |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Evaluating alignment between humans and neural network representations in image-based learning tasks |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Evaluating the World Model Implicit in a Generative Model |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Evaluating the design space of diffusion-based generative models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Evaluation of Text-to-Video Generation Models: A Dynamics Perspective |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Even Sparser Graph Transformers |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Event-3DGS: Event-based 3D Reconstruction Using 3D Gaussian Splatting |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Everyday Object Meets Vision-and-Language Navigation Agent via Backdoor |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Evidence of Learned Look-Ahead in a Chess-Playing Neural Network |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Evidential Mixture Machines: Deciphering Multi-Label Correlations for Active Learning Sensitivity |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Evidential Stochastic Differential Equations for Time-Aware Sequential Recommendation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Ex Uno Pluria: Insights on Ensembling in Low Precision Number Systems |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Exact Gradients for Stochastic Spiking Neural Networks Driven by Rough Signals |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exact, Tractable Gauss-Newton Optimization in Deep Reversible Architectures Reveal Poor Generalization |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Exactly Minimax-Optimal Locally Differentially Private Sampling |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exclusively Penalized Q-learning for Offline Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Exocentric-to-Egocentric Video Generation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Expanding Sparse Tuning for Low Memory Usage |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Expectation Alignment: Handling Reward Misspecification in the Presence of Expectation Mismatch |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Expected Probabilistic Hierarchies |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Expectile Regularization for Fast and Accurate Training of Neural Optimal Transport |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Expert-level protocol translation for self-driving labs |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Explaining Datasets in Words: Statistical Models with Natural Language Parameters |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Explanations that reveal all through the definition of encoding |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Explicit Eigenvalue Regularization Improves Sharpness-Aware Minimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Exploiting Descriptive Completeness Prior for Cross Modal Hashing with Incomplete Labels |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Exploiting LLM Quantization |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exploiting Representation Curvature for Boundary Detection in Time Series |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Exploiting the Replay Memory Before Exploring the Environment: Enhancing Reinforcement Learning Through Empirical MDP Iteration |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exploration by Learning Diverse Skills through Successor State Representations |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Exploratory Retrieval-Augmented Planning For Continual Embodied Instruction Following |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Exploring Adversarial Robustness of Deep State Space Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Exploring Consistency in Graph Representations: from Graph Kernels to Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Exploring Context Window of Large Language Models via Decomposed Positional Vectors |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exploring DCN-like architecture for fast image generation with arbitrary resolution |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Exploring Fixed Point in Image Editing: Theoretical Support and Convergence Optimization |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
5 |
| Exploring Low-Dimensional Subspace in Diffusion Models for Controllable Image Editing |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Exploring Molecular Pretraining Model at Scale |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Exploring Structured Semantic Priors Underlying Diffusion Score for Test-time Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Exploring Token Pruning in Vision State Space Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Exploring the Edges of Latent State Clusters for Goal-Conditioned Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Exploring the Precise Dynamics of Single-Layer GAN Models: Leveraging Multi-Feature Discriminators for High-Dimensional Subspace Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Exploring the trade-off between deep-learning and explainable models for brain-machine interfaces |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| Exponential Quantum Communication Advantage in Distributed Inference and Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Expressive Gaussian Human Avatars from Monocular RGB Video |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Extending Multi-modal Contrastive Representations |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Extending Video Masked Autoencoders to 128 frames |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Extensive-Form Game Solving via Blackwell Approachability on Treeplexes |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Externally Valid Policy Evaluation from Randomized Trials Using Additional Observational Data |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Extracting Training Data from Molecular Pre-trained Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| F-OAL: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free Riding? |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FAST: A Dual-tier Few-Shot Learning Paradigm for Whole Slide Image Classification |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FASTopic: Pretrained Transformer is a Fast, Adaptive, Stable, and Transferable Topic Model |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FEEL-SNN: Robust Spiking Neural Networks with Frequency Encoding and Evolutionary Leak Factor |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FERERO: A Flexible Framework for Preference-Guided Multi-Objective Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| FFAM: Feature Factorization Activation Map for Explanation of 3D Detectors |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FIARSE: Model-Heterogeneous Federated Learning via Importance-Aware Submodel Extraction |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| FIDE: Frequency-Inflated Conditional Diffusion Model for Extreme-Aware Time Series Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FIFO-Diffusion: Generating Infinite Videos from Text without Training |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FINALLY: fast and universal speech enhancement with studio-like quality |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| FLAME: Factuality-Aware Alignment for Large Language Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FM-Delta: Lossless Compression for Storing Massive Fine-tuned Foundation Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
3 |
| FNP: Fourier Neural Processes for Arbitrary-Resolution Data Assimilation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FOOGD: Federated Collaboration for Both Out-of-distribution Generalization and Detection |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| FSP-Laplace: Function-Space Priors for the Laplace Approximation in Bayesian Deep Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FUG: Feature-Universal Graph Contrastive Pre-training for Graphs with Diverse Node Features |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FUGAL: Feature-fortified Unrestricted Graph Alignment |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FUSE: Fast Unified Simulation and Estimation for PDEs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Face2QR: A Unified Framework for Aesthetic, Face-Preserving, and Scannable QR Code Generation |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Facilitating Multimodal Classification via Dynamically Learning Modality Gap |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FactorSim: Generative Simulation via Factorized Representation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FactorizePhys: Matrix Factorization for Multidimensional Attention in Remote Physiological Sensing |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Factorized Diffusion Architectures for Unsupervised Image Generation and Segmentation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fair Allocation in Dynamic Mechanism Design |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fair GLASSO: Estimating Fair Graphical Models with Unbiased Statistical Behavior |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fair Kernel K-Means: from Single Kernel to Multiple Kernel |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fair Online Bilateral Trade |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Fair Secretaries with Unfair Predictions |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fair Wasserstein Coresets |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fair and Welfare-Efficient Constrained Multi-Matchings under Uncertainty |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FairWire: Fair Graph Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fairness and Efficiency in Online Class Matching |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Fairness in Social Influence Maximization via Optimal Transport |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Fairness without Harm: An Influence-Guided Active Sampling Approach |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Fairness-Aware Estimation of Graphical Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fairness-Aware Meta-Learning via Nash Bargaining |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FasMe: Fast and Sample-efficient Meta Estimator for Precision Matrix Learning in Small Sample Settings |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fast Best-of-N Decoding via Speculative Rejection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fast Channel Simulation via Error-Correcting Codes |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Fast Encoder-Based 3D from Casual Videos via Point Track Processing |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Fast Proxy Experiment Design for Causal Effect Identification |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fast Rates for Bandit PAC Multiclass Classification |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Fast Rates in Stochastic Online Convex Optimization by Exploiting the Curvature of Feasible Sets |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Fast Sampling via Discrete Non-Markov Diffusion Models with Predetermined Transition Time |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fast Tree-Field Integrators: From Low Displacement Rank to Topological Transformers |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fast and Memory-Efficient Video Diffusion Using Streamlined Inference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fast samplers for Inverse Problems in Iterative Refinement models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Fast yet Safe: Early-Exiting with Risk Control |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FastDrag: Manipulate Anything in One Step |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FastSurvival: Hidden Computational Blessings in Training Cox Proportional Hazards Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Faster Accelerated First-order Methods for Convex Optimization with Strongly Convex Function Constraints |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Faster Algorithms for User-Level Private Stochastic Convex Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Faster Differentially Private Top-$k$ Selection: A Joint Exponential Mechanism with Pruning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Faster Diffusion: Rethinking the Role of the Encoder for Diffusion Model Inference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Faster Local Solvers for Graph Diffusion Equations |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Faster Neighborhood Attention: Reducing the O(n^2) Cost of Self Attention at the Threadblock Level |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Faster Repeated Evasion Attacks in Tree Ensembles |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FasterDiT: Towards Faster Diffusion Transformers Training without Architecture Modification |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fearless Stochasticity in Expectation Propagation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Feature-Level Adversarial Attacks and Ranking Disruption for Visible-Infrared Person Re-identification |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| FedAvP: Augment Local Data via Shared Policy in Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FedGMKD: An Efficient Prototype Federated Learning Framework through Knowledge Distillation and Discrepancy-Aware Aggregation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FedGMark: Certifiably Robust Watermarking for Federated Graph Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FedGTST: Boosting Global Transferability of Federated Models via Statistics Tuning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FedNE: Surrogate-Assisted Federated Neighbor Embedding for Dimensionality Reduction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FedSSP: Federated Graph Learning with Spectral Knowledge and Personalized Preference |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Federated Behavioural Planes: Explaining the Evolution of Client Behaviour in Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Federated Black-Box Adaptation for Semantic Segmentation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Federated Ensemble-Directed Offline Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Federated Fine-tuning of Large Language Models under Heterogeneous Tasks and Client Resources |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Federated Graph Learning for Cross-Domain Recommendation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Federated Learning from Vision-Language Foundation Models: Theoretical Analysis and Method |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Federated Learning over Connected Modes |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Federated Model Heterogeneous Matryoshka Representation Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Federated Natural Policy Gradient and Actor Critic Methods for Multi-task Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Federated Online Prediction from Experts with Differential Privacy: Separations and Regret Speed-ups |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Feedback control guides credit assignment in recurrent neural networks |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Feint Behaviors and Strategies: Formalization, Implementation and Evaluation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fetch and Forge: Efficient Dataset Condensation for Object Detection |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Few-Shot Adversarial Prompt Learning on Vision-Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Few-Shot Diffusion Models Escape the Curse of Dimensionality |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Few-Shot Task Learning through Inverse Generative Modeling |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| FewViewGS: Gaussian Splatting with Few View Matching and Multi-stage Training |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Fight Back Against Jailbreaking via Prompt Adversarial Tuning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FilterNet: Harnessing Frequency Filters for Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Finding Transformer Circuits With Edge Pruning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Finding good policies in average-reward Markov Decision Processes without prior knowledge |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Fine Tuning Out-of-Vocabulary Item Recommendation with User Sequence Imagination |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fine-Grained Dynamic Framework for Bias-Variance Joint Optimization on Data Missing Not at Random |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fine-Tuning is Fine, if Calibrated |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fine-grained Analysis of In-context Linear Estimation: Data, Architecture, and Beyond |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Fine-grained Control of Generative Data Augmentation in IoT Sensing |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Fine-grained Image-to-LiDAR Contrastive Distillation with Visual Foundation Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| FineStyle: Fine-grained Controllable Style Personalization for Text-to-image Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| First-Explore, then Exploit: Meta-Learning to Solve Hard Exploration-Exploitation Trade-Offs |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| First-Order Methods for Linearly Constrained Bilevel Optimization |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| First-Order Minimax Bilevel Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fisher Flow Matching for Generative Modeling over Discrete Data |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fixed Confidence Best Arm Identification in the Bayesian Setting |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
5 |
| FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
5 |
| Flatten Anything: Unsupervised Neural Surface Parameterization |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FlexCap: Describe Anything in Images in Controllable Detail |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| FlexPlanner: Flexible 3D Floorplanning via Deep Reinforcement Learning in Hybrid Action Space with Multi-Modality Representation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FlexSBDD: Structure-Based Drug Design with Flexible Protein Modeling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Flexible Context-Driven Sensory Processing in Dynamical Vision Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Flexible mapping of abstract domains by grid cells via self-supervised extraction and projection of generalized velocity signals |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Flexible task abstractions emerge in linear networks with fast and bounded units |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Flipping-based Policy for Chance-Constrained Markov Decision Processes |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Flow Snapshot Neurons in Action: Deep Neural Networks Generalize to Biological Motion Perception |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FlowLLM: Flow Matching for Material Generation with Large Language Models as Base Distributions |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Focus On What Matters: Separated Models For Visual-Based RL Generalization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FouRA: Fourier Low-Rank Adaptation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Foundation Inference Models for Markov Jump Processes |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Foundations of Multivariate Distributional Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Fourier Amplitude and Correlation Loss: Beyond Using L2 Loss for Skillful Precipitation Nowcasting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fourier-enhanced Implicit Neural Fusion Network for Multispectral and Hyperspectral Image Fusion |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fractal Patterns May Illuminate the Success of Next-Token Prediction |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Free Lunch in Pathology Foundation Model: Task-specific Model Adaptation with Concept-Guided Feature Enhancement |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Free-Rider and Conflict Aware Collaboration Formation for Cross-Silo Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Attention |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FreeSplat: Generalizable 3D Gaussian Splatting Towards Free View Synthesis of Indoor Scenes |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| FreqBlender: Enhancing DeepFake Detection by Blending Frequency Knowledge |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| FreqMark: Invisible Image Watermarking via Frequency Based Optimization in Latent Space |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Frequency Adaptive Normalization For Non-stationary Time Series Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Frequency-aware Generative Models for Multivariate Time Series Imputation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| From Biased to Unbiased Dynamics: An Infinitesimal Generator Approach |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| From Causal to Concept-Based Representation Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| From Chaos to Clarity: 3DGS in the Dark |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| From Dictionary to Tensor: A Scalable Multi-View Subspace Clustering Framework with Triple Information Enhancement |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| From Instance Training to Instruction Learning: Task Adapters Generation from Instructions |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| From Similarity to Superiority: Channel Clustering for Time Series Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| From Text to Trajectory: Exploring Complex Constraint Representation and Decomposition in Safe Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $\alpha$-NeuS |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| From Unstructured Data to In-Context Learning: Exploring What Tasks Can Be Learned and When |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From an Image to a Scene: Learning to Imagine the World from a Million 360° Videos |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Frozen-DETR: Enhancing DETR with Image Understanding from Frozen Foundation Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Frustratingly Easy Test-Time Adaptation of Vision-Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Full-Atom Peptide Design with Geometric Latent Diffusion |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Full-Distance Evasion of Pedestrian Detectors in the Physical World |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Fully Distributed, Flexible Compositional Visual Representations via Soft Tensor Products |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fully Explicit Dynamic Gaussian Splatting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fully Unconstrained Online Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Functional Bilevel Optimization for Machine Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Functional Gradient Flows for Constrained Sampling |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Functionally Constrained Algorithm Solves Convex Simple Bilevel Problem |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Fundamental Convergence Analysis of Sharpness-Aware Minimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FuseAnyPart: Diffusion-Driven Facial Parts Swapping via Multiple Reference Images |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| G3: An Effective and Adaptive Framework for Worldwide Geolocalization Using Large Multi-Modality Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GACL: Exemplar-Free Generalized Analytic Continual Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GAMap: Zero-Shot Object Goal Navigation with Multi-Scale Geometric-Affordance Guidance |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GAVEL: Generating Games via Evolution and Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GDeR: Safeguarding Efficiency, Balancing, and Robustness via Prototypical Graph Pruning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GENOT: Entropic (Gromov) Wasserstein Flow Matching with Applications to Single-Cell Genomics |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| GFT: Graph Foundation Model with Transferable Tree Vocabulary |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GFlowNet Assisted Biological Sequence Editing |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GIC: Gaussian-Informed Continuum for Physical Property Identification and Simulation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GITA: Graph to Visual and Textual Integration for Vision-Language Graph Reasoning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| GO4Align: Group Optimization for Multi-Task Alignment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GOMAA-Geo: GOal Modality Agnostic Active Geo-localization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GRANOLA: Adaptive Normalization for Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| GREATS: Online Selection of High-Quality Data for LLM Training in Every Iteration |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| GS-Hider: Hiding Messages into 3D Gaussian Splatting |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| GSDF: 3DGS Meets SDF for Improved Neural Rendering and Reconstruction |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| GSGAN: Adversarial Learning for Hierarchical Generation of 3D Gaussian Splats |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GTA: Generative Trajectory Augmentation with Guidance for Offline Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GTBench: Uncovering the Strategic Reasoning Capabilities of LLMs via Game-Theoretic Evaluations |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| GUIDE: Real-Time Human-Shaped Agents |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GVKF: Gaussian Voxel Kernel Functions for Highly Efficient Surface Reconstruction in Open Scenes |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| GarmentLab: A Unified Simulation and Benchmark for Garment Manipulation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Gated Inference Network: Inference and Learning State-Space Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Gated Slot Attention for Efficient Linear-Time Sequence Modeling |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Gaussian Approximation and Multiplier Bootstrap for Polyak-Ruppert Averaged Linear Stochastic Approximation with Applications to TD Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Gaussian Process Bandits for Top-k Recommendations |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GaussianCut: Interactive segmentation via graph cut for 3D Gaussian Splatting |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GenRL: Multimodal-foundation world models for generalization in embodied agents |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GenRec: Unifying Video Generation and Recognition with Diffusion Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Gene-Gene Relationship Modeling Based on Genetic Evidence for Single-Cell RNA-Seq Data Imputation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| General Articulated Objects Manipulation in Real Images via Part-Aware Diffusion Process |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| General Detection-based Text Line Recognition |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| General bounds on the quality of Bayesian coresets |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Generalizable Implicit Motion Modeling for Video Frame Interpolation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Generalizable Person Re-identification via Balancing Alignment and Uniformity |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generalizable and Animatable Gaussian Head Avatar |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalizablity of Memorization Neural Network |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generalization Analysis for Label-Specific Representation Learning |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Generalization Bounds via Conditional $f$-Information |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generalization Error Bounds for Two-stage Recommender Systems with Tree Structure |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalization of Hamiltonian algorithms |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalized Eigenvalue Problems with Generative Priors |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Generalized Fast Exact Conformalization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalized Linear Bandits with Limited Adaptivity |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generalized Protein Pocket Generation with Prior-Informed Flow Matching |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalized Tensor Decomposition for Understanding Multi-Output Regression under Combinatorial Shifts |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generalizing CNNs to graphs with learnable neighborhood quantization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Generalizing Consistency Policy to Visual RL with Prioritized Proximal Experience Regularization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Generalizing Weather Forecast to Fine-grained Temporal Scales via Physics-AI Hybrid Modeling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Generate Universal Adversarial Perturbations for Few-Shot Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Generated and Pseudo Content guided Prototype Refinement for Few-shot Point Cloud Segmentation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Generating Highly Designable Proteins with Geometric Algebra Flow Matching |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Generating Origin-Destination Matrices in Neural Spatial Interaction Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Generating compositional scenes via Text-to-image RGBA Instance Generation | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Generative Adversarial Model-Based Optimization via Source Critic Regularization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Generative Forests | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Generative Fractional Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Generative Hierarchical Materials Search | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Generative Modeling of Molecular Dynamics Trajectories | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Generative Modelling of Structurally Constrained Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Generative Retrieval Meets Multi-Graded Relevance | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Generative Semi-supervised Graph Anomaly Detection | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Genetic-guided GFlowNets for Sample Efficient Molecular Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| GeoNLF: Geometry guided Pose-Free Neural LiDAR Fields | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Geodesic Optimization for Predictive Shift Adaptation on EEG data | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Geometric Analysis of Nonlinear Manifold Clustering | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Geometric Exploitation for Indoor Panoramic Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Geometric Trajectory Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Geometric-Averaged Preference Optimization for Soft Preference Labels | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Geometry Awakening: Cross-Geometry Learning Exhibits Superiority over Individual Structures | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Geometry of naturalistic object representations in recurrent neural network models of working memory | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Geometry-aware training of factorized layers in tensor Tucker format | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Gliding over the Pareto Front with Uniform Designs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Global Convergence in Training Large-Scale Transformers | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Global Distortions from Local Rewards: Neural Coding Strategies in Path-Integrating Neural Systems | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Global Lyapunov functions: a long-standing open problem in mathematics, with symbolic transformers | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Global Rewards in Restless Multi-Armed Bandits | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Globally Convergent Variational Inference | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Globally Q-linear Gauss-Newton Method for Overparameterized Non-convex Matrix Sensing | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| GoMatching: A Simple Baseline for Video Text Spotting via Long and Short Term Matching | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Goal Conditioned Reinforcement Learning for Photo Finishing Tuning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Goal Reduction with Loop-Removal Accelerates RL and Models Human Brain Activity in Goal-Directed Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Goal-Conditioned On-Policy Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Going Beyond Heuristics by Imposing Policy Improvement as a Constraint | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Gorilla: Large Language Model Connected with Massive APIs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Gradient Guidance for Diffusion Models: An Optimization Perspective | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Gradient Methods for Online DR-Submodular Maximization with Stochastic Long-Term Constraints | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Gradient Rewiring for Editable Graph Neural Network Training | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Gradient-Variation Online Learning under Generalized Smoothness | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Gradient-based Discrete Sampling with Automatic Cyclical Scheduling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Gradient-free Decoder Inversion in Latent Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Gradients of Functions of Large Matrices | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Gradual Domain Adaptation via Manifold-Constrained Distributionally Robust Optimization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Grammar-Aligned Decoding | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Graph Classification via Reference Distribution Learning: Theory and Practice | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Graph Coarsening with Message-Passing Guarantees | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Graph Convolutions Enrich the Self-Attention in Transformers! | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Graph Diffusion Policy Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Graph Diffusion Transformers for Multi-Conditional Molecular Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 4 |
| Graph Edit Distance with General Costs Using Neural Set Divergence | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Graph Learning for Numeric Planning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Graph Neural Flows for Unveiling Systemic Interactions Among Irregularly Sampled Time Series | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Graph Neural Networks Do Not Always Oversmooth | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Graph Neural Networks Need Cluster-Normalize-Activate Modules | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Graph Neural Networks and Arithmetic Circuits | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Graph Structure Inference with BAM: Neural Dependency Processing via Bilinear Attention | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Graph neural networks and non-commuting operators | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Graph-based Uncertainty Metrics for Long-form Language Model Generations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Graph-enhanced Optimizers for Structure-aware Recommendation Embedding Evolution | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| GraphMorph: Tubular Structure Extraction by Morphing Predicted Graphs | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| GraphTrail: Translating GNN Predictions into Human-Interpretable Logical Rules | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| GraphVis: Boosting LLMs with Visual Knowledge Graph Integration | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Graphcode: Learning from multiparameter persistent homology using graph neural networks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| Grasp as You Say: Language-guided Dexterous Grasp Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Great Minds Think Alike: The Universal Convergence Trend of Input Salience | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Grid4D: 4D Decomposed Hash Encoding for High-Fidelity Dynamic Gaussian Splatting | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Grokking of Implicit Reasoning in Transformers: A Mechanistic Journey to the Edge of Generalization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Grounded Answers for Multi-agent Decision-making Problem through Generative World Model | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Grounding Multimodal Large Language Models in Actions | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Group Robust Preference Optimization in Reward-free RLHF | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Group and Shuffle: Efficient Structured Orthogonal Parametrization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Group-wise oracle-efficient algorithms for online multi-group learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| GuardT2I: Defending Text-to-Image Models from Adversarial Prompts | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Guided Trajectory Generation with Diffusion Models for Offline Model-based Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Guiding Neural Collapse: Optimising Towards the Nearest Simplex Equiangular Tight Frame | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Guiding a Diffusion Model with a Bad Version of Itself | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| HAWK: Learning to Understand Open-World Video Anomalies | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HC-GAE: The Hierarchical Cluster-based Graph Auto-Encoder for Graph Representation Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| HENASY: Learning to Assemble Scene-Entities for Interpretable Egocentric Video-Language Model | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| HGDL: Heterogeneous Graph Label Distribution Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| HHD-GP: Incorporating Helmholtz-Hodge Decomposition into Gaussian Processes for Learning Dynamical Systems | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| HLM-Cite: Hybrid Language Model Workflow for Text-based Scientific Citation Prediction | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| HOI-Swap: Swapping Objects in Videos with Hand-Object Interaction Awareness | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HOPE: Shape Matching Via Aligning Different K-hop Neighbourhoods | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| HORSE: Hierarchical Representation for Large-Scale Neural Subset Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HYDRA: Model Factorization Framework for Black-Box LLM Personalization | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| HYSYNTH: Context-Free LLM Approximation for Guiding Program Synthesis | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| HairDiffusion: Vivid Multi-Colored Hair Editing via Latent Diffusion | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Hallo3D: Multi-Modal Hallucination Detection and Mitigation for Consistent 3D Content Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hamiltonian Monte Carlo Inference of Marginalized Linear Mixed-Effects Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Hamiltonian Score Matching and Generative Flows | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Handling Learnwares from Heterogeneous Feature Spaces with Explicit Label Exploitation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Happy: A Debiased Learning Framework for Continual Generalized Category Discovery | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Hardness of Learning Neural Networks under the Manifold Hypothesis | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Harmonizing Stochasticity and Determinism: Scene-responsive Diverse Human Motion Prediction | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Harmonizing Visual Text Comprehension and Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Harnessing Multiple Correlated Networks for Exact Community Recovery | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Harnessing small projectors and multiple views for efficient vision pretraining | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Heavy-Tailed Class Imbalance and Why Adam Outperforms Gradient Descent on Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| HiCoM: Hierarchical Coherent Motion for Dynamic Streamable Scenes with 3D Gaussian Splatting | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Hierarchical Federated Learning with Multi-Timescale Gradient Correction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Hierarchical Hybrid Sliced Wasserstein: A Scalable Metric for Heterogeneous Joint Distributions | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Hierarchical Object-Aware Dual-Level Contrastive Learning for Domain Generalized Stereo Matching | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hierarchical Programmatic Option Framework | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hierarchical Selective Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Hierarchical Uncertainty Exploration via Feedforward Posterior Trees | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hierarchical Visual Feature Aggregation for OCR-Free Document Understanding | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hierarchical and Density-based Causal Clustering | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Hierarchy-Agnostic Unsupervised Segmentation: Parsing Semantic Image Structure | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| High Rank Path Development: an approach to learning the filtration of stochastic processes | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| High-Resolution Image Harmonization with Adaptive-Interval Color Transformation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| High-dimensional (Group) Adversarial Training in Linear Regression | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| High-probability complexity bounds for stochastic non-convex minimax optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Higher-Order Causal Message Passing for Experimentation with Complex Interference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Historical Test-time Prompt Tuning for Vision Foundation Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Hollowed Net for On-Device Personalization of Text-to-Image Diffusion Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Homology Consistency Constrained Efficient Tuning for Vision-Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HonestLLM: Toward an Honest and Helpful Large Language Model | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Honor Among Bandits: No-Regret Learning for Online Fair Division | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| How Control Information Influences Multilingual Text Image Generation and Editing? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| How Diffusion Models Learn to Factorize and Compose | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| How Do Large Language Models Acquire Factual Knowledge During Pretraining? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| How Does Black-Box Impact the Learning Guarantee of Stochastic Compositional Optimization? | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| How Does Message Passing Improve Collaborative Filtering? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| How Does Variance Shape the Regret in Contextual Bandits? | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| How Molecules Impact Cells: Unlocking Contrastive PhenoMolecular Retrieval | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| How Sparse Can We Prune A Deep Network: A Fundamental Limit Perspective | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| How Transformers Utilize Multi-Head Attention in In-Context Learning? A Case Study on Sparse Linear Regression | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| How do Large Language Models Handle Multilingualism? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| How does Gradient Descent Learn Features --- A Local Analysis for Regularized Two-Layer Neural Networks | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| How does Inverse RL Scale to Large State Spaces? A Provably Efficient Approach | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| How does PDE order affect the convergence of PINNs? | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| How many classifiers do we need? | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| How to Boost Any Loss Function | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization? | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| How to Solve Contextual Goal-Oriented Problems with Offline Datasets? | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| How to Use Diffusion Priors under Sparse Views? | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| HuRef: HUman-REadable Fingerprint for Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Human Expertise in Algorithmic Prediction | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Human-Object Interaction Detection Collaborated with Large Relation-driven Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| HumanSplat: Generalizable Single-Image Human Gaussian Splatting with Structure Priors | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HumanVLA: Towards Vision-Language Directed Object Rearrangement by Physical Humanoid | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Humanoid Locomotion as Next Token Prediction | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Hybrid Generative AI for De Novo Design of Co-Crystals with Enhanced Tabletability | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hybrid Mamba for Few-Shot Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hybrid Reinforcement Learning Breaks Sample Size Barriers In Linear MDPs | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HydraViT: Stacking Heads for a Scalable ViT | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hyper-opinion Evidential Deep Learning for Out-of-Distribution Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HyperLogic: Enhancing Diversity and Accuracy in Rule Learning with HyperNets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HyperPrism: An Adaptive Non-linear Aggregation Framework for Distributed Machine Learning over Non-IID Data and Time-varying Communication Links | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hyperbolic Embeddings of Supervised Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Hypothesis Testing the Circuit Hypothesis in LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| IF-Font: Ideographic Description Sequence-Following Font Generation | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| IMAGPose: A Unified Conditional Framework for Pose-Guided Person Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| INDICT: Code Generation with Internal Dialogues of Critiques for Both Security and Helpfulness | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| IODA: Instance-Guided One-shot Domain Adaptation for Super-Resolution | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| IPM-LSTM: A Learning-Based Interior Point Method for Solving Nonlinear Programs | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| IPO: Interpretable Prompt Optimization for Vision-Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| IQA-EVAL: Automatic Evaluation of Human-Model Interactive Question Answering | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| IR-CM: The Fast and General-purpose Image Restoration Method Based on Consistency Model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| IRCAN: Mitigating Knowledge Conflicts in LLM Generation via Identifying and Reweighting Context-Aware Neurons | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| IWBVT: Instance Weighting-based Bias-Variance Trade-off for Crowdsourcing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Identifiability Analysis of Linear ODE Systems with Hidden Confounders | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Identifiability Guarantees for Causal Disentanglement from Purely Observational Data | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Identifiable Object-Centric Representation Learning via Probabilistic Slot Attention | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Identifiable Shared Component Analysis of Unpaired Multimodal Mixtures | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Identification and Estimation of the Bi-Directional MR with Some Invalid Instruments | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Identification of Analytic Nonlinear Dynamical Systems with Non-asymptotic Guarantees | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Identify Then Recommend: Towards Unsupervised Group Recommendation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Identifying Causal Effects Under Functional Dependencies | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Identifying Equivalent Training Dynamics | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Identifying Functionally Important Features with End-to-End Sparse Dictionary Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Identifying General Mechanism Shifts in Linear Causal Representations | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Identifying Latent State-Transition Processes for Individualized Reinforcement Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Identifying Selections for Unsupervised Subtask Discovery | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Identifying Spatio-Temporal Drivers of Extreme Events | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Idiographic Personality Gaussian Process for Psychological Assessment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| If You Want to Be Robust, Be Wary of Initialization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| IllumiNeRF: 3D Relighting Without Inverse Rendering | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| ImOV3D: Learning Open Vocabulary Point Clouds 3D Object Detection from Only 2D Images | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Image Copy Detection for Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Image Reconstruction Via Autoencoding Sequential Deep Image Prior | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Image Understanding Makes for A Good Tokenizer for Image Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Image-aware Evaluation of Generated Medical Reports | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 2 |
| Images that Sound: Composing Images and Sounds on a Single Canvas | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Imitating Language via Scalable Inverse Reinforcement Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Implicit Bias of Mirror Flow on Separable Data | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Implicit Curriculum in Procgen Made Explicit | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Implicit Optimization Bias of Next-token Prediction in Linear Models | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Implicit Regularization Paths of Weighted Neural Representations | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Implicit Regularization of Decentralized Gradient Descent for Sparse Regression | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improved Algorithms for Contextual Dynamic Pricing | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Improved Analysis for Bandit Learning in Matching Markets | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Improved Bayes Regret Bounds for Multi-Task Hierarchical Bayesian Bandit Algorithms | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Improved Distribution Matching Distillation for Fast Image Synthesis | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improved Generation of Adversarial Examples Against Safety-aligned LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improved Guarantees for Fully Dynamic $k$-Center Clustering with Outliers in General Metric Spaces | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Improved Particle Approximation Error for Mean Field Neural Networks | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Improved Regret for Bandit Convex Optimization with Delayed Feedback | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Improved Regret of Linear Ensemble Sampling | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Improved Sample Complexity Bounds for Diffusion Model Training | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Improved Sample Complexity for Multiclass PAC Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Improved learning rates in multi-unit uniform price auctions | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Improved off-policy training of diffusion samplers | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving Adaptivity via Over-Parameterization in Sequence Models | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Improving Adversarial Robust Fairness via Anti-Bias Soft Label Distillation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving Alignment and Robustness with Circuit Breakers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Context-Aware Preference Modeling for Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Improving Decision Sparsity | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving Deep Learning Optimization through Constrained Parameter Regularization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Deep Reinforcement Learning by Reducing the Chain Effect of Value and Policy Churn | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving Environment Novelty Quantification for Effective Unsupervised Environment Design | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Improving Equivariant Model Training via Constraint Relaxation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improving Generalization and Convergence by Enhancing Implicit Regularization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving Generalization in Federated Learning with Model-Data Mutual Information Regularization: A Posterior Inference Approach | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Improving Generalization of Dynamic Graph Learning via Environment Prompt | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Improving Gloss-free Sign Language Translation by Reducing Representation Density | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Improving Linear System Solvers for Hyperparameter Optimisation in Iterative Gaussian Processes | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Neural Network Surface Processing with Principal Curvatures | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improving Neural ODE Training with Temporal Adaptive Batch Normalization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving Robustness of 3D Point Cloud Recognition from a Fourier Perspective | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Improving Subgroup Robustness via Data Selection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improving Temporal Link Prediction via Temporal Walk Matrix Projection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Viewpoint-Independent Object-Centric Representations through Active Viewpoint Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Visual Prompt Tuning by Gaussian Neighborhood Minimization for Long-Tailed Visual Recognition | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving robustness to corruptions with multiplicative weight perturbations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving self-training under distribution shifts via anchored confidence with theoretical guarantees | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improving the Learning Capability of Small-size Image Restoration Network by Deep Fourier Shifting | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Improving the Training of Rectified Flows | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| In Pursuit of Causal Label Correlations for Multi-label Image Recognition | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| In-Context Learning State Vector with Inner and Momentum Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| In-Context Learning with Representations: Contextual Generalization of Trained Transformers | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| In-Context Symmetries: Self-Supervised Learning through Contextual World Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| In-N-Out: Lifting 2D Diffusion Prior for 3D Object Removal via Tuning-Free Latents Alignment | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| In-Trajectory Inverse Reinforcement Learning: Learn Incrementally Before an Ongoing Trajectory Terminates | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| In-and-Out: Algorithmic Diffusion for Sampling Convex Bodies | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Incentivizing Quality Text Generation via Statistical Contracts | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| Incorporating Surrogate Gradient Norm to Improve Offline Optimization Techniques | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Incorporating Test-Time Optimization into Training with Dual Networks for Human Mesh Recovery | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Induced Model Matching: Restricted Models Help Train Full-Featured Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Inductive biases of multi-task learning and finetuning: multiple regimes of feature reuse | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Inevitable Trade-off between Watermark Strength and Speculative Sampling Efficiency for Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Inexact Augmented Lagrangian Methods for Conic Optimization: Quadratic Growth and Linear Convergence | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Inference of Neural Dynamics Using Switching Recurrent Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Inferring Neural Signed Distance Functions by Overfitting on Single Noisy Point Clouds through Finetuning Data-Driven based Priors | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Inferring stochastic low-rank recurrent neural networks from neural data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Infinite Limits of Multi-head Transformer Dynamics | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Infinite-Dimensional Feature Interaction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Information Re-Organization Improves Reasoning in Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Information-theoretic Generalization Analysis for Expected Calibration Error | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Information-theoretic Limits of Online Classification with Noisy Labels | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Initialization is Critical to Whether Transformers Fit Composite Functions by Reasoning or Memorizing | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Initializing Services in Interactive ML Systems for Diverse Users | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Initializing Variable-sized Vision Transformers from Learngene with Learnable Transformation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Input-to-State Stable Coupled Oscillator Networks for Closed-form Model-based Control in Latent Space | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Instance-Optimal Private Density Estimation in the Wasserstein Distance | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Instance-Specific Asymmetric Sensitivity in Differential Privacy | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Instance-adaptive Zero-shot Chain-of-Thought Prompting | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| InstructG2I: Synthesizing Images from Multimodal Attributed Graphs | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Instruction Tuning With Loss Over Instructions | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Instruction-Guided Visual Masking | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Instructor-inspired Machine Learning for Robust Molecular Property Prediction | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Integrating Deep Metric Learning with Coreset for Active Learning in 3D Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Integrating GNN and Neural ODEs for Estimating Non-Reciprocal Two-Body Interactions in Mixed-Species Collective Motion | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Integrating Suboptimal Human Knowledge with Hierarchical Reinforcement Learning for Large-Scale Multiagent Systems | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| InterDreamer: Zero-Shot Text to 3D Dynamic Human-Object Interaction | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Interaction-Force Transport Gradient Flows | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Interactive Deep Clustering via Value Mining | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Interfacing Foundation Models' Embeddings | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpolating Item and User Fairness in Multi-Sided Recommendations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Interpret Your Decision: Logical Reasoning Regularization for Generalization in Visual Classification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpretable Concept-Based Memory Reasoning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Interpretable Generalized Additive Models for Datasets with Missing Values | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpretable Image Classification with Adaptive Prototype-based Vision Transformers | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Interpretable Mesomorphic Networks for Tabular Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE) | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpreting Learned Feedback Patterns in Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Interpreting and Analysing CLIP's Zero-Shot Image Classification via Mutual Knowledge | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpreting the Weight Space of Customized Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Intervention and Conditioning in Causal Bayesian Networks | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Interventional Causal Discovery in a Mixture of DAGs | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Interventionally Consistent Surrogates for Complex Simulation Models | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Intrinsic Robustness of Prophet Inequality to Strategic Reward Signaling | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Invariant Tokenization of Crystalline Materials for Language Model Enabled Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Invariant subspaces and PCA in nearly matrix multiplication time | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Inverse Factorized Soft Q-Learning for Cooperative Multi-agent Imitation Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Inverse M-Kernels for Linear Universal Approximators of Non-Negative Functions | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Inversion-based Latent Bayesian Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| InversionView: A General-Purpose Method for Reading Information from Neural Activations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Invisible Image Watermarks Are Provably Removable Using Generative AI | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Is Behavior Cloning All You Need? Understanding Horizon in Imitation Learning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Is Cross-validation the Gold Standard to Estimate Out-of-sample Model Performance? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Is Knowledge Power? On the (Im)possibility of Learning from Strategic Interactions | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Is Mamba Compatible with Trajectory Optimization in Offline Reinforcement Learning? | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Is Multiple Object Tracking a Matter of Specialization? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Is O(log N) practical? Near-Equivalence Between Delay Robustness and Bounded Regret in Bandits and RL | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models. | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Is Programming by Example Solved by LLMs? | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Is Score Matching Suitable for Estimating Point Processes? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Is Value Learning Really the Main Bottleneck in Offline RL? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Is Your LiDAR Placement Optimized for 3D Scene Understanding? | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Iteration Head: A Mechanistic Study of Chain-of-Thought | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Iterative Methods via Locally Evolving Set Process | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Iterative Reasoning Preference Optimization | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Iteratively Refined Behavior Regularization for Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Iteratively Refined Early Interaction Alignment for Subgraph Matching based Graph Retrieval | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| John Ellipsoids via Lazy Updates | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Just Add $100 More: Augmenting Pseudo-LiDAR Point Cloud for Resolving Class-imbalance Problem | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| KALM: Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| KFNN: K-Free Nearest Neighbor For Crowdsourcing | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| KOALA: Empirical Lessons Toward Memory-Efficient and Fast Diffusion Models for Text-to-Image Synthesis | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Kaleidoscope: Learnable Masks for Heterogeneous Multi-agent Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Kermut: Composite kernel regression for protein variant effects | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Kernel PCA for Out-of-Distribution Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Key-Grid: Unsupervised 3D Keypoints Detection using Grid Heatmap Features | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| KnowGPT: Knowledge Graph based Prompting for Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 4 |
| Knowledge Circuits in Pretrained Transformers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Knowledge Composition using Task Vectors with Learned Anisotropic Scaling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Knowledge Graph Completion by Intermediate Variables Regularization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Knowledge-Empowered Dynamic Graph Network for Irregularly Sampled Medical Time Series | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| KptLLM: Unveiling the Power of Large Language Model for Keypoint Comprehension | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Kronecker-Factored Approximate Curvature for Physics-Informed Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| L-TTA: Lightweight Test-Time Adaptation Using a Versatile Stem Layer | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| L4GM: Large 4D Gaussian Reconstruction Model | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| LACIE: Listener-Aware Finetuning for Calibration in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LAM3D: Large Image-Point Clouds Alignment Model for 3D Reconstruction from Single Image | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| LCGen: Mining in Low-Certainty Generation for View-consistent Text-to-3D | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| LCM: Locally Constrained Compact Point Cloud Model for Masked Point Modeling | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| LESS: Label-Efficient and Single-Stage Referring 3D Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LG-CAV: Train Any Concept Activation Vector with Language Guidance | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| LG-VQ: Language-Guided Codebook Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LION: Linear Group RNN for 3D Object Detection in Point Clouds | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LIVE: Learnable In-Context Vector for Visual Question Answering | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLM Circuit Analyses Are Consistent Across Training and Scale | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| LLM Dataset Inference: Did you train on my dataset? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLM Evaluators Recognize and Favor Their Own Generations | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LLM-AutoDA: Large Language Model-Driven Automatic Data Augmentation for Long-tailed Problems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LLM-Check: Investigating Detection of Hallucinations in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| LLM-based Skill Diffusion for Zero-shot Policy Adaptation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LLMDFA: Analyzing Dataflow in Code with Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| LLMs Can Evolve Continually on Modality for $\mathbb{X}$-Modal Reasoning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLMs as Zero-shot Graph Learners: Alignment of GNN Representations with LLM Token Embeddings | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| LLaMo: Large Language Model-based Molecular Graph Assistant | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLaNA: Large Language and NeRF Assistant | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LM-HT SNN: Enhancing the Performance of SNN to ANN Counterpart through Learnable Multi-hierarchical Threshold Model | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LOVA3: Learning to Visual Question Answering, Asking and Assessment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LP-3DGS: Learning to Prune 3D Gaussian Splatting | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| LRM-Zero: Training Large Reconstruction Models with Synthesized Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| LSH-MoE: Communication-efficient MoE Training via Locality-Sensitive Hashing | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| LT-Defense: Searching-free Backdoor Defense via Exploiting the Long-tailed Effect | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| LaKD: Length-agnostic Knowledge Distillation for Trajectory Prediction with Any Length Observations | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LaSCal: Label-Shift Calibration without target labels | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LaSe-E2V: Towards Language-guided Semantic-aware Event-to-Video Reconstruction | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Label Delay in Online Continual Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Label Noise: Ignorance Is Bliss | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Lambda: Learning Matchable Prior For Entity Alignment with Unlabeled Dangling Cases | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Language Generation in the Limit | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Language Grounded Multi-agent Reinforcement Learning with Human-interpretable Communication | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Language Model as Visual Explainer | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Language Models as Hierarchy Encoders | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Language-Driven Interactive Traffic Trajectory Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Large Language Model Unlearning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Large Language Model Unlearning via Embedding-Corrupted Prompts | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Large Language Models Must Be Taught to Know What They Don’t Know | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Large Language Models Play StarCraft II:Benchmarks and A Chain of Summarization Approach | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Large Pre-trained time series models for cross-domain Time series analysis tasks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Large Scale Transfer Learning for Tabular Data via Language Modeling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Large Spatial Model: End-to-end Unposed Images to Semantic 3D | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Large language model validity via enhanced conformal prediction methods | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Last-Iterate Convergence for Generalized Frank-Wolfe in Monotone Variational Inequalities | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Latent Diffusion for Neural Spiking Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Latent Functional Maps: a spectral framework for representation alignment | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Latent Intrinsics Emerge from Training to Relight | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Latent Learning Progress Drives Autonomous Goal Selection in Human Reinforcement Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Latent Neural Operator for Solving Forward and Inverse PDE Problems | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Layer-Adaptive State Pruning for Deep State Space Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LeDex: Training LLMs to Better Self-Debug and Explain Code | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learn To be Efficient: Build Structured Sparsity in Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Learn more, but bother less: parameter efficient continual learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Learnability Matters: Active Learning for Video Captioning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Learnability of high-dimensional targets by two-parameter models and gradient flow | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Learning 1D Causal Visual Representation with De-focus Attention Networks | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Learning 3D Equivariant Implicit Function with Patch-Level Pose-Invariant Representation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning 3D Garment Animation from Trajectories of A Piece of Cloth | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Better Representations From Less Data For Propositional Satisfiability | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Bregman Divergences with Application to Robustness | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Commonality, Divergence and Variety for Unsupervised Visible-Infrared Person Re-identification | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learning Complete Protein Representation by Dynamically Coupling of Sequence and Structure | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Learning Cooperative Trajectory Representations for Motion Forecasting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Cortico-Muscular Dependence through Orthonormal Decomposition of Density Ratios | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Cut Generating Functions for Integer Programming | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Learning De-Biased Representations for Remote-Sensing Imagery | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning Diffusion Priors from Observations by Expectation Maximization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning Discrete Concepts in Latent Hierarchical Models | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Discrete Latent Variable Structures with Tensor Rank Conditions | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Learning Disentangled Representations for Perceptual Point Cloud Quality Assessment via Mutual Information Minimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Distinguishable Trajectory Representation with Contrastive Loss | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning Distributions on Manifolds with Free-Form Flows | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Elastic Costs to Shape Monge Displacements | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learning Equilibria in Adversarial Team Markov Games: A Nonconvex-Hidden-Concave Min-Max Optimization Problem | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning Formal Mathematics From Intrinsic Motivation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Frequency-Adapted Vision Foundation Model for Domain Generalized Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning Generalized Linear Programming Value Functions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning Goal-Conditioned Representations for Language Reward Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Learning Group Actions on Latent Representations | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Learning Human-like Representations to Enable Learning Human Values | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning Identifiable Factorized Causal Representations of Cellular Responses | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning Image Priors Through Patch-Based Diffusion Models for Solving Inverse Problems | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning Infinitesimal Generators of Continuous Symmetries from Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Interaction-aware 3D Gaussian Splatting for One-shot Hand Avatars | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Linear Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Learning Low-Rank Feature for Thorax Disease Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Macroscopic Dynamics from Partial Microscopic Observations | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Mixtures of Unknown Causal Interventions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning Neural Contracting Dynamics: Extended Linearization and Global Guarantees | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning Noisy Halfspaces with a Margin: Massart is No Harder than Random | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning Optimal Lattice Vector Quantizers for End-to-end Neural Image Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning Optimal Tax Design in Nonatomic Congestion Games | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning Partitions from Context | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Learning Place Cell Representations and Context-Dependent Remapping | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Learning Plaintext-Ciphertext Cryptographic Problems via ANF-based SAT Instance Representation | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Learning Representations for Hierarchies with Minimal Support | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning Segmentation from Point Trajectories | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Learning Social Welfare Functions | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Spatially-Aware Language and Audio Embeddings | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Structure-Aware Representations of Dependent Types | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Structured Representations with Hyperbolic Embeddings | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Learning Successor Features the Simple Way | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Learning Transferable Features for Implicit Neural Representations | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Learning Truncated Causal History Model for Video Restoration | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Versatile Skills with Curriculum Masking | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Where to Edit Vision Transformers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning World Models for Unconstrained Goal Navigation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning a Single Neuron Robustly to Distributional Shifts and Adversarial Label Noise | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning and Transferring Sparse Contextual Bigrams with Linear Transformers | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning diffusion at lightspeed | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning diverse causally emergent representations from time series data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning from Highly Sparse Spatio-temporal Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning from Noisy Labels via Conditional Distributionally Robust Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning from Offline Foundation Features with Tensor Augmentations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning from Pattern Completion: Self-supervised Controllable Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning from Snapshots of Discrete and Continuous Data Streams | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning from Uncertain Data: From Possible Worlds to Possible Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning from higher-order correlations, efficiently: hypothesis tests, random features, and neural networks | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning in Markov Games with Adaptive Adversaries: Policy Regret, Fundamental Barriers, and Efficient Algorithms | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning on Large Graphs using Intersecting Communities | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning predictable and robust neural representations by straightening image sequences | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning rigid-body simulators over implicit shapes for large-scale scenes and vision | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Learning symmetries via weight-sharing with doubly stochastic tensors | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning the Expected Core of Strictly Convex Stochastic Cooperative Games | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning the Infinitesimal Generator of Stochastic Diffusion Processes | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learning the Latent Causal Structure for Modeling Label Noise | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning the Optimal Policy for Balancing Short-Term and Long-Term Rewards | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning to Assist Humans without Inferring Rewards | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning to Cooperate with Humans using Generative Agents | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning to Decouple the Lights for 3D Face Texture Modeling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning to Discuss Strategically: A Case Study on One Night Ultimate Werewolf | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learning to Edit Visual Programs with Self-Supervision | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning to Embed Distributions via Maximum Kernel Entropy | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning to Handle Complex Constraints for Vehicle Routing Problems | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning to Merge Tokens via Decoupled Embedding for Efficient Vision Transformers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning to Mitigate Externalities: the Coase Theorem with Hindsight Rationality | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Learning to Predict Structural Vibrations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning to Price Homogeneous Data | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning to Reason via Program Generation, Emulation, and Search | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning to Shape In-distribution Feature Space for Out-of-distribution Detection | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learning to Solve Quadratic Unconstrained Binary Optimization in a Classification Way | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Learning to Understand: Identifying Interactions via the Möbius Transform | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning to be Smooth: An End-to-End Differentiable Particle Smoother | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning to compute Gröbner bases | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning via Surrogate PAC-Bayes | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning with Fitzpatrick Losses | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Learning-Augmented Algorithms for the Bahncard Problem | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning-Augmented Algorithms with Explicit Predictors | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning-Augmented Approximation Algorithms for Maximum Cut and Related Problems | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Learning-Augmented Dynamic Submodular Maximization | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning-Augmented Priority Queues | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Least Squares Regression Can Exhibit Under-Parameterized Double Descent | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Length Optimization in Conformal Prediction | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Leveraging Catastrophic Forgetting to Develop Safe Diffusion Models against Malicious Finetuning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Leveraging Contrastive Learning for Enhanced Node Representations in Tokenized Graph Transformers | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Leveraging Drift to Improve Sample Complexity of Variance Exploding Diffusion Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Leveraging Environment Interaction for Automated PDDL Translation and Planning with Large Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Leveraging Separated World Model for Exploration in Visually Distracted Environments | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Leveraging Tumor Heterogeneity: Heterogeneous Graph Representation Learning for Cancer Survival Prediction in Whole Slide Images | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Leveraging an ECG Beat Diffusion Model for Morphological Reconstruction from Indirect Signals | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Leveraging partial stragglers within gradient coding |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LiT: Unifying LiDAR "Languages" with LiDAR Translator |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Light Unbalanced Optimal Transport |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Lightweight Frequency Masker for Cross-Domain Few-Shot Semantic Segmentation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Limits of Transformer Language Models on Learning to Compose Algorithms |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| LinNet: Linear Network for Efficient Point Cloud Representation Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Linear Causal Bandits: Unknown Graph and Soft Interventions |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Linear Causal Representation Learning from Unknown Multi-node Interventions |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Linear Regression using Heterogeneous Data Batches |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Linear Time Approximation Algorithm for Column Subset Selection with Local Search |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Linear Transformers are Versatile In-Context Learners |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Linear Uncertainty Quantification of Graphical Model Inference |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Linearly Decomposing and Recomposing Vision Transformers for Diverse-Scale Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Linguistic Collapse: Neural Collapse in (Large) Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Linking In-context Learning in Transformers to Human Episodic Memory |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Listenable Maps for Zero-Shot Audio Classifiers |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LiteVAE: Lightweight and Efficient Variational Autoencoders for Latent Diffusion Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Control and Rendering |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LoCo: Learning 3D Location-Consistent Image Features with a Memory-Efficient Ranking Loss |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| LoD-Loc: Aerial Visual Localization using LoD 3D Map with Neural Wireframe Alignment |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LoFiT: Localized Fine-tuning on LLM Representations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LoQT: Low-Rank Adapters for Quantized Pretraining |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| LoRA-GA: Low-Rank Adaptation with Gradient Approximation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| LoRANN: Low-Rank Matrix Factorization for Approximate Nearest Neighbor Search |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LoTLIP: Improving Language-Image Pre-training for Long Text Understanding |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| LocCa: Visual Pretraining with Location-aware Captioners |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Local Anti-Concentration Class: Logarithmic Regret for Greedy Linear Contextual Bandit |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Local Curvature Smoothing with Stein's Identity for Efficient Score Matching |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Local Linearity: the Key for No-regret Reinforcement Learning in Continuous MDPs |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Local Superior Soups: A Catalyst for Model Merging in Cross-Silo Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Local and Adaptive Mirror Descents in Extensive-Form Games |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Local to Global: Learning Dynamics and Effect of Initialization for Transformers |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Localize, Understand, Collaborate: Semantic-Aware Dragging via Intention Reasoner |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Localized Adaptive Risk Control |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Localized Zeroth-Order Prompt Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Localizing Memorization in SSL Vision Encoders |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Locally Private and Robust Multi-Armed Bandits |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Locating What You Need: Towards Adapting Diffusion Models to OOD Concepts In-the-Wild |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Log-concave Sampling from a Convex Body with a Barrier: a Robust and Unified Dikin Walk |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection and Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Logical characterizations of recurrent graph neural networks with reals and floats |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Loki: Low-rank Keys for Efficient Sparse Attention |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Long-Range Feedback Spiking Network Captures Dynamic and Static Representations of the Visual Cortex under Movie Stimuli |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Long-Tailed Out-of-Distribution Detection via Normalized Outlier Distribution Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Long-form factuality in large language models |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Long-range Brain Graph Transformer |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Long-range Meta-path Search on Large-scale Heterogeneous Graphs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Long-tailed Object Detection Pretraining: Dynamic Rebalancing Contrastive Learning with Dual Reconstruction |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Lookback Prophet Inequalities |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Looks Too Good To Be True: An Information-Theoretic Analysis of Hallucinations in Generative Restoration Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Loss Landscape Characterization of Neural Networks without Over-Parametrization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Low Degree Hardness for Broadcasting on Trees |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Low Precision Local Training is Enough for Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Low-Rank Optimal Transport through Factor Relaxation with Latent Coupling |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Lower Bounds of Uniform Stability in Gradient-Based Bilevel Algorithms for Hyperparameter Optimization |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Lumina-Next : Making Lumina-T2X Stronger and Faster with Next-DiT |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| M$^3$GPT: An Advanced Multimodal, Multitask Framework for Motion Comprehension and Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MAC Advice for facility location mechanism design |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| MADiff: Offline Multi-agent Learning with Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MALT Powers Up Adversarial Attacks |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MAmmoTH2: Scaling Instructions from the Web |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MC-DiT: Contextual Enhancement via Clean-to-Clean Reconstruction for Masked Diffusion Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MG-Net: Learn to Customize QAOA with Circuit Depth Awareness |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| MGF: Mixed Gaussian Flow for Diverse Trajectory Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MIDGArD: Modular Interpretable Diffusion over Graphs for Articulated Designs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MILP-StuDio: MILP Instance Generation via Block Structure Decomposition |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MKGL: Mastery of a Three-Word Language |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MMSite: A Multi-modal Framework for the Identification of Active Sites in Proteins |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-Object Demand-driven Navigation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| MOTE-NAS: Multi-Objective Training-based Estimate for Efficient Neural Architecture Search |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MSA Generation with Seqs2Seqs Pretraining: Advancing Protein Structure Predictions |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MSAGPT: Neural Prompting Protein Structure Prediction via MSA Generative Pre-Training |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MSPE: Multi-Scale Patch Embedding Prompts Vision Transformers to Any Resolution |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MTGS: A Novel Framework for Multi-Person Temporal Gaze Following and Social Gaze Prediction |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encoding |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MV2Cyl: Reconstructing 3D Extrusion Cylinders from Multi-View Images |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MVGamba: Unify 3D Content Generation as State Space Sequence Modeling |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MVSDet: Multi-View Indoor 3D Object Detection via Efficient Plane Sweeps |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MaNo: Exploiting Matrix Norm for Unsupervised Accuracy Estimation Under Distribution Shifts |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| MaVEn: An Effective Multi-granularity Hybrid Visual Encoding Framework for Multimodal Large Language Model |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Magnet: We Never Know How Text-to-Image Diffusion Models Work, Until We Learn How Vision-Language Models Function |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Maia-2: A Unified Model for Human-AI Alignment in Chess |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Make Continual Learning Stronger via C-Flat |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Make Your LLM Fully Utilize the Context |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Make-it-Real: Unleashing Large Multimodal Model for Painting 3D Objects with Realistic Materials |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MambaAD: Exploring State Space Models for Multi-class Unsupervised Anomaly Detection |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MambaLLIE: Implicit Retinex-Aware Low Light Enhancement with Global-then-Local State Space |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MambaLRP: Explaining Selective State Space Sequence Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MambaSCI: Efficient Mamba-UNet for Quad-Bayer Patterned Video Snapshot Compressive Imaging |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MambaTree: Tree Topology is All You Need in State Space Model |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Many-Shot In-Context Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Many-shot Jailbreaking |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Marginal Causal Flows for Validation and Inference |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Markov Equivalence and Consistency in Differentiable Structure Learning |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Markovian Flow Matching: Accelerating MCMC with Continuous Normalizing Flows |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Marrying Causal Representation Learning with Dynamical Systems for Science |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MaskFactory: Towards High-quality Synthetic Data Generation for Dichotomous Image Segmentation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Masked Hard-Attention Transformers Recognize Exactly the Star-Free Languages |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| Masked Pre-training Enables Universal Zero-shot Denoiser |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MatFormer: Nested Transformer for Elastic Inference |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Matching the Statistical Query Lower Bound for $k$-Sparse Parity Problems with Sign Stochastic Gradient Descent |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| MatrixNet: Learning over symmetry groups using learned group representations |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Matryoshka Query Transformer for Large Vision-Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Maximizing utility in multi-agent environments by anticipating the behavior of other learners |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MeLLoC: Lossless Compression with High-order Mechanism Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MeMo: Meaningful, Modular Controllers via Noise Injection |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Mean-Field Langevin Dynamics for Signed Measures via a Bilevel Approach |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Meaningful Learning: Enhancing Abstract Reasoning in Large Language Models via Generic Fact Guidance |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Measuring Dejavu Memorization Efficiently |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Measuring Goal-Directedness |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Measuring Mutual Policy Divergence for Multi-Agent Sequential Exploration |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Measuring Per-Unit Interpretability at Scale Without Humans |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Measuring Progress in Dictionary Learning for Language Model Interpretability with Board Game Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Mechanism design augmented with output advice |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
4 |
| Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Medformer: A Multi-Granularity Patching Transformer for Medical Time-Series Classification |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MemVLT: Vision-Language Tracking with Adaptive Memory-based Prompts |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Membership Inference Attacks against Large Vision-Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Membership Inference on Text-to-Image Diffusion Models via Conditional Likelihood Discrepancy |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Memorize What Matters: Emergent Scene Decomposition from Multitraverse |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Memory-Efficient LLM Training with Online Subspace Descent |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MemoryFormer : Minimize Transformer Computation by Removing Fully-Connected Layers |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Mesa-Extrapolation: A Weave Position Encoding Method for Enhanced Extrapolation in LLMs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MeshXL: Neural Coordinate Field for Generative 3D Foundation Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Meta 3D AssetGen: Text-to-Mesh Generation with High-Quality Geometry, Texture, and PBR Materials |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Meta-Controller: Few-Shot Imitation of Unseen Embodiments and Tasks in Continuous Control |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Meta-DT: Offline Meta-RL as Conditional Sequence Modeling with World Model Disentanglement |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Meta-Diffu$B$: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Meta-Learning Universal Priors Using Non-Injective Change of Variables |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Meta-Reinforcement Learning with Universal Policy Adaptation: Provable Near-Optimality under All-task Optimum Comparator |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MetaCURL: Non-stationary Concave Utility Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MetaUAS: Universal Anomaly Segmentation with One-Prompt Meta-Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problem Solving |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Metric Flow Matching for Smooth Interpolations on the Data Manifold |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Metric Space Magnitude for Evaluating the Diversity of Latent Representations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Metric Transforms and Low Rank Representations of Kernels for Fast Attention |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Metric from Human: Zero-shot Monocular Metric Depth Estimation via Test-time Adaptation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MiSO: Optimizing brain stimulation to create neural activity states |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Microstructures and Accuracy of Graph Recall by Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Mimicking To Dominate: Imitation Learning Strategies for Success in Multiagent Games |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Mind the Gap Between Prototypes and Images in Cross-domain Finetuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
4 |
| Mind the Graph When Balancing Data for Fairness or Robustness |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| MindMerger: Efficiently Boosting LLM Reasoning in non-English Languages |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Mini-Sequence Transformers: Optimizing Intermediate Memory for Long Sequences Training |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MiniCache: KV Cache Compression in Depth Dimension for Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Minimax Optimal and Computationally Efficient Algorithms for Distributionally Robust Offline Reinforcement Learning |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Minimizing UCB: a Better Local Search Strategy in Local Bayesian Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Minimum Entropy Coupling with Bottleneck |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Mining and Transferring Feature-Geometry Coherence for Unsupervised Point Cloud Registration |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mirror and Preconditioned Gradient Descent in Wasserstein Space |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Mission Impossible: A Statistical Perspective on Jailbreaking LLMs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
2 |
| Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Mitigating Biases in Blackbox Feature Extractors for Image Classification Tasks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Mitigating Covariate Shift in Behavioral Cloning via Robust Stationary Distribution Correction |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Mitigating Object Hallucination via Concentric Causal Attention |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Mitigating Reward Overoptimization via Lightweight Uncertainty Estimation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Mitigating Spurious Correlations via Disagreement Probability |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Mixture of Adversarial LoRAs: Boosting Robust Generalization in Meta-Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mixture of Demonstrations for In-Context Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mixture of Experts Meets Prompt-Based Continual Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Mixture of In-Context Experts Enhance LLMs' Long Context Awareness |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Mixture of Link Predictors on Graphs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mixture of Nested Experts: Adaptive Processing of Visual Tokens |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
3 |
| Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Mixture of Tokens: Continuous MoE through Cross-Example Aggregation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Mixture of neural fields for heterogeneous reconstruction in cryo-EM |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Mixtures of Experts for Audio-Visual Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MoEUT: Mixture-of-Experts Universal Transformers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MoGU: A Framework for Enhancing Safety of LLMs While Preserving Their Usability |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MoMu-Diffusion: On Learning Long-Term Motion-Music Synchronization and Correspondence |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MoVA: Adapting Mixture of Vision Experts to Multimodal Context |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Mobility-LLM: Learning Visiting Intentions and Travel Preference from Human Mobility Data with Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Model Based Inference of Synaptic Plasticity Rules |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Model Collapse Demystified: The Case of Regression |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Model Decides How to Tokenize: Adaptive DNA Sequence Tokenization with MxDNA |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Model Fusion through Bayesian Optimization in Language Model Fine-Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Model Sensitivity Aware Continual Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Model-Based Transfer Learning for Contextual Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Model-based Diffusion for Trajectory Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Model-free Low-Rank Reinforcement Learning via Leveraged Entry-wise Matrix Estimation |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Molecule Design by Latent Prompt Transformer |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Molecule Generation with Fragment Retrieval Augmentation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MonkeySee: Space-time-resolved reconstructions of natural images from macaque multi-unit activity |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Monoculture in Matching Markets |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Monomial Matrix Group Equivariant Neural Functional Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Monte Carlo Tree Search based Space Transfer for Black Box Optimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Most Influential Subset Selection: Challenges, Promises, and Beyond |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Motif-oriented influence maximization for viral marketing in large-scale social networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Motion Forecasting in Continuous Driving |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Motion Graph Unleashed: A Novel Approach to Video Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MotionBooth: Motion-Aware Customized Text-to-Video Generation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MotionCraft: Physics-Based Zero-Shot Video Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MotionTTT: 2D Test-Time-Training Motion Estimation for 3D Motion Corrected MRI |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Moving Off-the-Grid: Scene-Grounded Video Representations |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Multi-Agent Coordination via Multi-Level Communication |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multi-Agent Domain Calibration with a Handful of Offline Data |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Multi-Agent Imitation Learning: Value is Easy, Regret is Hard |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Multi-Group Proportional Representation in Retrieval |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Multi-Head Mixture-of-Experts |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Multi-Instance Partial-Label Learning with Margin Adjustment |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Multi-LLM Debate: Framework, Principals, and Interventions |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Multi-Label Learning with Stronger Consistency Guarantees |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Multi-Label Open Set Recognition |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Multi-Object 3D Grounding with Dynamic Modules and Language-Informed Spatial Attention |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multi-Object Hallucination in Vision Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Multi-Reward Best Policy Identification |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Multi-Scale Representation Learning for Protein Fitness Prediction |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multi-Scale VMamba: Hierarchy in Hierarchy Visual State Space Model |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multi-Stage Predict+Optimize for (Mixed Integer) Linear Programs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Multi-Winner Reconfiguration |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Multi-hypotheses Conditioned Point Cloud Diffusion for 3D Human Reconstruction from Occluded Images |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Multi-language Diversity Benefits Autoformalization |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multi-modal Transfer Learning between Biological Foundation Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multi-model Ensemble Conformal Prediction in Dynamic Environments |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Multi-scale Consistency for Robust 3D Registration via Hierarchical Sinkhorn Tree |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multi-times Monte Carlo Rendering for Inter-reflection Reconstruction |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| Multi-turn Reinforcement Learning with Preference Human Feedback |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multi-view Masked Contrastive Representation Learning for Endoscopic Video Analysis |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MultiPull: Detailing Signed Distance Functions by Pulling Multi-Level Queries at Multi-Step |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Multiclass Transductive Online Learning |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Multidimensional Fractional Programming for Normalized Cuts |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Multilingual Diversity Improves Vision-Language Representations |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Multimodal Large Language Models Make Text-to-Image Generative Models Align Better |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Multiple Physics Pretraining for Spatiotemporal Surrogate Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Multistable Shape from Shading Emerges from Patch Diffusion |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Multistep Distillation of Diffusion Models via Moment Matching |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multivariate Probabilistic Time Series Forecasting with Correlated Errors |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Multivariate Stochastic Dominance via Optimal Transport and Applications to Models Benchmarking |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Multiview Scene Graph |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Mutli-Armed Bandits with Network Interference |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Mutual Information Estimation via $f$-Divergence and Data Derangements |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Mutual Information Estimation via Normalizing Flows |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| N-agent Ad Hoc Teamwork |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| NVRC: Neural Video Representation Compression |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| NaRCan: Natural Refined Canonical Image with Integration of Diffusion Prior for Video Editing |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Natural Counterfactuals With Necessary Backtracking |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Nature-Inspired Local Propagation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Navigable Graphs for High-Dimensional Nearest Neighbor Search: Constructions and Limits |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Navigating Chemical Space with Latent Flows |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Navigating Extremes: Dynamic Sparsity in Large Output Spaces |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Navigating the Effect of Parametrization for Dimensionality Reduction |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Near-Minimax-Optimal Distributional Reinforcement Learning with a Generative Model |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Near-Optimal Distributionally Robust Reinforcement Learning with General $L_p$ Norms |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Near-Optimal Dynamic Regret for Adversarial Linear Mixture MDPs |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Near-Optimal Streaming Heavy-Tailed Statistical Estimation with Clipped SGD |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Near-Optimality of Contrastive Divergence Algorithms |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Nearest Neighbor Speculative Decoding for LLM Generation and Attribution |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Nearly Minimax Optimal Regret for Multinomial Logistic Bandit |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Nearly Minimax Optimal Submodular Maximization with Bandit Feedback |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Nearly Optimal Approximation of Matrix Functions by the Lanczos Method |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Nearly Tight Black-Box Auditing of Differentially Private Machine Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neglected Hessian component explains mysteries in sharpness regularization |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| NeoRL: Efficient Exploration for Nonepisodic RL |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Nesterov acceleration despite very noisy gradients |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| NeuMA: Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Neuc-MDS: Non-Euclidean Multidimensional Scaling Through Bilinear Forms |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Neur2BiLO: Neural Bilevel Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Neural Assets: 3D-Aware Multi-Object Scene Synthesis with Image Diffusion Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Neural Characteristic Activation Analysis and Geometric Parameterization for ReLU Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neural Collapse Inspired Feature Alignment for Out-of-Distribution Generalization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neural Collapse To Multiple Centers For Imbalanced Data |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Neural Combinatorial Optimization for Robust Routing Problem with Uncertain Travel Times |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Neural Concept Binder |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neural Conditional Probability for Uncertainty Quantification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neural Cover Selection for Image Steganography |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Neural Embeddings Rank: Aligning 3D latent dynamics with movements |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Neural Experts: Mixture of Experts for Implicit Neural Representations |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neural Gaffer: Relighting Any Object via Diffusion |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neural Isometries: Taming Transformations for Equivariant ML |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neural Krylov Iteration for Accelerating Linear System Solving |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Neural Model Checking |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Neural Network Reparametrization for Accelerated Optimization in Molecular Simulations |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neural Persistence Dynamics |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Neural Pose Representation Learning for Generating and Transferring Non-Rigid Object Poses |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Neural Residual Diffusion Models for Deep Scalable Vision Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Neural collapse vs. low-rank bias: Is deep neural collapse really optimal? |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Neural decoding from stereotactic EEG: accounting for electrode variability across subjects |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
5 |
| Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| NeuralClothSim: Neural Deformation Fields Meet the Thin Shell Theory |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| NeuralFluid: Nueral Fluidic System Design and Control with Differentiable Simulation |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| NeuralSolver: Learning Algorithms For Consistent and Efficient Extrapolation Across General Tasks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| NeuralSteiner: Learning Steiner Tree for Overflow-avoiding Global Routing in Chip Design |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Neuro-Symbolic Data Generation for Math Reasoning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Neuro-Vision to Language: Enhancing Brain Recording-based Visual Reconstruction and Language Interaction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
5 |
| NeuroClips: Towards High-fidelity and Smooth fMRI-to-Video Reconstruction |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| NeuroGauss4D-PCI: 4D Neural Fields and Gaussian Deformation Fields for Point Cloud Interpolation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Neuronal Competition Groups with Supervised STDP for Spike-Based Classification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Newton Informed Neural Operator for Solving Nonlinear Partial Differential Equations |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Nimbus: Secure and Efficient Two-Party Inference for Transformers |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| No Free Delivery Service: Epistemic limits of passive data collection in complex social systems |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| No Free Lunch Theorem and Black-Box Complexity Analysis for Adversarial Optimisation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| No-Regret Bandit Exploration based on Soft Tree Ensemble Model |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| No-Regret Learning for Fair Multi-Agent Social Welfare Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| No-Regret M${}^{\natural}$-Concave Function Maximization: Stochastic Bandit Algorithms and NP-Hardness of Adversarial Full-Information Setting |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| No-regret Learning in Harmonic Games: Extrapolation in the Face of Conflicting Interests |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Noether's Razor: Learning Conserved Quantities |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Noise Contrastive Alignment of Language Models with Explicit Rewards |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Noise-Aware Differentially Private Regression via Meta-Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| NoiseGPT: Label Noise Detection and Rectification through Probability Curvature |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Noisy Dual Mirror Descent: A Near Optimal Algorithm for Jointly-DP Convex Resource Allocation |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Noisy Label Learning with Instance-Dependent Outliers: Identifiability via Crowd Wisdom |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Non-Euclidean Mixture Model for Social Network Embedding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Non-asymptotic Analysis of Biased Adaptive Stochastic Approximation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Non-asymptotic Approximation Error Bounds of Parameterized Quantum Circuits |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Non-asymptotic Convergence of Training Transformers for Next-token Prediction |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Non-asymptotic Global Convergence Analysis of BFGS with the Armijo-Wolfe Line Search |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Non-convolutional graph neural networks. |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Non-geodesically-convex optimization in the Wasserstein space |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Non-parametric classification via expand-and-sparsify representation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Nonlinear dynamics of localization in neural receptive fields |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Nonlocal Attention Operator: Materializing Hidden Knowledge Towards Interpretable Physics Discovery |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Nonparametric Evaluation of Noisy ICA Solutions |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Nonparametric Instrumental Variable Regression through Stochastic Approximate Gradients |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Nonstationary Sparse Spectral Permanental Process |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Normal-GS: 3D Gaussian Splatting with Normal-Involved Rendering |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Normalization Layer Per-Example Gradients are Sufficient to Predict Gradient Noise Scale in Transformers |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Normalization and effective learning rates in reinforcement learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Not All Diffusion Model Activations Have Been Evaluated as Discriminative Features |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Not All Tokens Are What You Need for Pretraining |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Not Just Object, But State: Compositional Incremental Learning without Forgetting |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Not so griddy: Internal representations of RNNs path integrating more than one agent |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Novel Object Synthesis via Adaptive Text-Image Harmony |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Nuclear Norm Regularization for Deep Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| OASIS: Conditional Distribution Shaping for Offline Safe Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| ODGEN: Domain-specific Object Detection Data Generation with Diffusion Models |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| ODGS: 3D Scene Reconstruction from Omnidirectional Images with 3D Gaussian Splattings |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| OPEL: Optimal Transport Guided ProcedurE Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| OPUS: Occupancy Prediction Using a Sparse Set |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| OSLO: One-Shot Label-Only Membership Inference Attacks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| OT4P: Unlocking Effective Orthogonal Group Path for Permutation Relaxation |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| OTTER: Effortless Label Distribution Adaptation of Zero-shot Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| OW-VISCapTor: Abstractors for Open-World Video Instance Segmentation and Captioning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Object segmentation from common fate: Motion energy processing enables human-like zero-shot generalization to random dot stimuli |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Observational Scaling Laws and the Predictability of Langauge Model Performance |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| OccFusion: Rendering Occluded Humans with Generative Diffusion Priors |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Occupancy-based Policy Gradient: Estimation, Convergence, and Optimality |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Octopus: A Multi-modal LLM with Parallel Recognition and Sequential Understanding |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| OctreeOcc: Efficient and Multi-Granularity Occupancy Prediction Using Octree Queries |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Off-Dynamics Reinforcement Learning via Domain Adaptation and Reward Augmented Imitation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Off-Policy Selection for Initiating Human-Centric Experimental Design |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Off-policy estimation with adaptively collected data: the power of online learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Offline Behavior Distillation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Offline Multitask Representation Learning for Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Offline Oracle-Efficient Learning for Contextual MDPs via Layerwise Exploration-Exploitation Tradeoff |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Offline Reinforcement Learning with OOD State Correction and OOD Action Suppression |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Oja's Algorithm for Streaming Sparse PCA |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Omnigrasp: Grasping Diverse Objects with Simulated Humanoids |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On $f$-Divergence Principled Domain Adaptation: An Improved Framework |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On Affine Homotopy between Language Encoders |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On Causal Discovery in the Presence of Deterministic Relations |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| On Differentially Private Subspace Estimation in a Distribution-Free Setting |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| On Differentially Private U Statistics |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| On Divergence Measures for Training GFlowNets |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| On Feature Learning in Structured State Space Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On Learning Multi-Modal Forgery Representation for Diffusion Generated Video Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On Mesa-Optimization in Autoregressively Trained Transformers: Emergence and Capability |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| On Sampling Strategies for Spectral Model Sharding |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| On Socially Fair Low-Rank Approximation and Column Subset Selection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On Softmax Direct Preference Optimization for Recommendation |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| On Sparse Canonical Correlation Analysis |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs) |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| On Tractable $\Phi$-Equilibria in Non-Concave Games |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| On Weak Regret Analysis for Dueling Bandits |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| On conditional diffusion models for PDE simulations |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| On improved Conditioning Mechanisms and Pre-training Strategies for Diffusion Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On provable privacy vulnerabilities of graph representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On scalable oversight with weak LLMs judging strong LLMs | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| On the Ability of Developers' Training Data Preservation of Learnware | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| On the Adversarial Robustness of Benjamini Hochberg | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On the Benefits of Public Representations for Private Transfer Learning under Distribution Shift | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Comparison between Multi-modal and Single-modal Contrastive Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Complexity of Identification in Linear Structural Causal Models | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| On the Complexity of Learning Sparse Functions with Statistical and Gradient Queries | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| On the Complexity of Teaching a Family of Linear Behavior Cloning Learners | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Computational Complexity of Private High-dimensional Model Selection | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| On the Computational Landscape of Replicable Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On the Convergence of Loss and Uncertainty-based Active Learning Algorithms | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Curses of Future and History in Future-dependent Value Functions for Off-policy Evaluation | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| On the Efficiency of ERM in Feature Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| On the Expressive Power of Tree-Structured Probabilistic Circuits | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On the Expressivity and Sample Complexity of Node-Individualized Graph Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| On the Identifiability of Hybrid Deep Generative Models: Meta-Learning as a Solution | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| On the Identifiability of Poisson Branching Structural Causal Model Using Probability Generating Function | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| On the Inductive Bias of Stacking Towards Improving Reasoning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| On the Limitations of Fractal Dimension as a Measure of Generalization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Minimax Regret for Contextual Linear Bandits and Multi-Armed Bandits with Expert Advice | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On the Necessity of Collaboration for Online Model Selection with Decentralized Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| On the Noise Robustness of In-Context Learning for Text Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Optimal Time Complexities in Decentralized Stochastic Asynchronous Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| On the Optimality of Dilated Entropy and Lower Bounds for Online Learning in Extensive-Form Games | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On the Parameter Identifiability of Partially Observed Linear Causal Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Power of Decision Trees in Auto-Regressive Language Modeling | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On the Power of Small-size Graph Neural Networks for Linear Programming | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| On the Robustness of Spectral Algorithms for Semirandom Stochastic Block Models | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| On the Role of Attention Masks and LayerNorm in Transformers | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On the Saturation Effects of Spectral Algorithms in Large Dimensions | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| On the Scalability of Certified Adversarial Robustness with Generated Data | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| On the Scalability of GNNs for Molecular Graphs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On the Sparsity of the Strong Lottery Ticket Hypothesis | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| On the Stability and Generalization of Meta-Learning | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Surprising Effectiveness of Attention Transfer for Vision Transformers | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On the Target-kernel Alignment: a Unified Analysis with Kernel Complexity | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On the Use of Anchoring for Training Vision Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| On the Worst Prompt Performance of Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| On the cohesion and separability of average-link for hierarchical agglomerative clustering | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| On-Road Object Importance Estimation: A New Dataset and A Model with Multi-Fold Top-Down Guidance | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Once Read is Enough: Domain-specific Pretraining-free Language Models with Cluster-guided Sparse Experts for Long-tail Domain Knowledge | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| One Sample Fits All: Approximating All Probabilistic Values Simultaneously and Efficiently | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| One for All: Multi-Domain Joint Training for Point Cloud Based 3D Object Detection | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| One-Layer Transformer Provably Learns One-Nearest Neighbor In Context | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| One-Shot Safety Alignment for Large Language Models via Optimal Dualization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| One-Step Diffusion Distillation through Score Implicit Matching | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| One-Step Effective Diffusion Network for Real-World Image Super-Resolution | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| One-shot Federated Learning via Synthetic Distiller-Distillate Communication | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| One-to-Multiple: A Progressive Style Transfer Unsupervised Domain-Adaptive Framework for Kidney Tumor Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| One-to-Normal: Anomaly Personalization for Few-shot Anomaly Detection | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| OneActor: Consistent Subject Generation via Cluster-Conditioned Guidance | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| OneBit: Towards Extremely Low-bit Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Online Adaptation of Language Models with a Memory of Amortized Contexts | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Online Bayesian Persuasion Without a Clue | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Online Budgeted Matching with General Bids | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Online Classification with Predictions | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Online Composite Optimization Between Stochastic and Adversarial Environments | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Online Consistency of the Nearest Neighbor Rule | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Online Control in Population Dynamics | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Online Control with Adversarial Disturbance for Continuous-time Linear Systems | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Online Estimation via Offline Estimation: An Information-Theoretic Framework | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Online Feature Updates Improve Online (Generalized) Label Shift Adaptation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Online Iterative Reinforcement Learning from Human Feedback with General Preference Model | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Online Learning of Delayed Choices | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Online Learning with Sublinear Best-Action Queries | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Online Non-convex Learning in Dynamic Environments | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Online Posterior Sampling with a Diffusion Prior | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Online Relational Inference for Evolving Multi-agent Interacting Systems | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Online Weighted Paging with Unknown Weights | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| OnlineTAS: An Online Baseline for Temporal Action Segmentation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Only Strict Saddles in the Energy Landscape of Predictive Coding Networks? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Open-Book Neural Algorithmic Reasoning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Open-Vocabulary Object Detection via Language Hierarchy | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| OpenDlign: Open-World Point Cloud Understanding with Depth-Aligned Images | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Operator World Models for Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Opponent Modeling based on Subgoal Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Opponent Modeling with In-context Search | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Optical Diffusion Models for Image Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Optimal Aggregation of Prediction Intervals under Unsupervised Domain Shift | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Optimal Algorithms for Augmented Testing of Discrete Distributions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Optimal Algorithms for Learning Partitions with Faulty Oracles | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Optimal Algorithms for Online Convex Optimization with Adversarial Constraints | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Optimal Batched Best Arm Identification | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Optimal Classification under Performative Distribution Shift | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Optimal Design for Human Preference Elicitation | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Optimal Flow Matching: Learning Straight Trajectories in Just One Step | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Optimal Hypothesis Selection in (Almost) Linear Time | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Optimal Multi-Fidelity Best-Arm Identification | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Optimal Multiclass U-Calibration Error and Beyond | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Optimal Parallelization of Boosting | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Optimal Private and Communication Constraint Distributed Goodness-of-Fit Testing for Discrete Distributions in the Large Sample Regime | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Optimal Scalarizations for Sublinear Hypervolume Regret | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Optimal Top-Two Method for Best Arm Identification and Fluid Analysis | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Optimal Transport-based Labor-free Text Prompt Modeling for Sketch Re-identification | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Optimal ablation for interpretability | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Optimal and Approximate Adaptive Stochastic Quantization | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | 5 |
| Optimal deep learning of holomorphic operators between Banach spaces | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Optimal-state Dynamics Estimation for Physics-based Human Motion Capture from Videos | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Optimistic Critic Reconstruction and Constrained Fine-Tuning for General Offline-to-Online RL | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Optimistic Verifiable Training by Controlling Hardware Nondeterminism | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Optimization Algorithm Design via Electric Circuits | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Optimization Can Learn Johnson Lindenstrauss Embeddings | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Optimizing Automatic Differentiation with Deep Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Optimizing over Multiple Distributions under Generalized Quasar-Convexity Condition | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Optimizing the coalition gain in Online Auctions with Greedy Structured Bandits | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Oracle-Efficient Differentially Private Learning with Public Data | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Oracle-Efficient Reinforcement Learning for Max Value Ensembles | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Order-Independence Without Fine Tuning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Ordered Momentum for Asynchronous SGD | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Ordering-Based Causal Discovery for Linear and Nonlinear Relations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Out-Of-Distribution Detection with Diversification (Provably) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Out-of-Distribution Detection with a Single Unconditional Diffusion Model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Outlier-Robust Distributionally Robust Optimization via Unbalanced Optimal Transport | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Overcoming Brittleness in Pareto-Optimal Learning Augmented Algorithms | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Overcoming Common Flaws in the Evaluation of Selective Classification Systems | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Overcoming the Sim-to-Real Gap: Leveraging Simulation to Learn to Explore for Real-World RL | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| OwMatch: Conditional Self-Labeling with Consistency for Open-World Semi-Supervised Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| OxonFair: A Flexible Toolkit for Algorithmic Fairness | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| P$^2$C$^2$Net: PDE-Preserved Coarse Correction Network for efficient prediction of spatiotemporal dynamics | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PAC-Bayes-Chernoff bounds for unbounded losses | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| PACE: Marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PACE: Pacing Operator Learning to Accurate Optical Field Simulation for Complicated Photonic Devices | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PCoTTA: Continual Test-Time Adaptation for Multi-Task Point Cloud Understanding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PEAC: Unsupervised Pre-training for Cross-Embodiment Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PGN: The RNN's New Successor is Effective for Long-Range Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PLIP: Language-Image Pre-training for Person Representation Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PRODuctive bandits: Importance Weighting No More | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| PSL: Rethinking and Improving Softmax Loss from Pairwise Perspective for Recommendation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PTQ4DiT: Post-training Quantization for Diffusion Transformers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| PURE: Prompt Evolution with Graph ODE for Out-of-distribution Fluid Dynamics Modeling | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| PaCE: Parsimonious Concept Engineering for Large Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PaDeLLM-NER: Parallel Decoding in Large Language Models for Named Entity Recognition | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PageRank Bandits for Link Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Panacea: Pareto Alignment via Preference Adaptation for LLMs | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Pandora's Box: Towards Building Universal Attackers against Real-World Large Vision-Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Paralinguistics-Aware Speech-Empowered Large Language Models for Natural Conversation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Parallel Backpropagation for Shared-Feature Visualization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ParallelEdits: Efficient Multi-Aspect Text-Driven Image Editing with Attention Grouping | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Parallelizing Linear Transformers with the Delta Rule over Sequence Length | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Parallelizing Model-based Reinforcement Learning Over the Sequence Length | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Parameter Competition Balancing for Model Merging | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Parameter Disparities Dissection for Backdoor Defense in Heterogeneous Federated Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Parameter-Inverted Image Pyramid Networks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Parameter-free Clipped Gradient Descent Meets Polyak | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Parameterized Approximation Schemes for Fair-Range Clustering | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Parametric model reduction of mean-field and stochastic systems via higher-order action matching | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Parseval Regularization for Continual Reinforcement Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Partial Structure Discovery is Sufficient for No-regret Learning in Causal Bandits | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Partial Transportability for Domain Generalization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Partial observation can induce mechanistic mismatches in data-constrained models of neural dynamics | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Particle Semi-Implicit Variational Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Paths to Equilibrium in Games | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| PeRFlow: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Pearls from Pebbles: Improved Confidence Functions for Auto-labeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Penalty-based Methods for Simple Bilevel Optimization under Hölderian Error Bounds | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Perceiving Longer Sequences With Bi-Directional Cross-Attention Transformers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Perceptual Fairness in Image Restoration | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Performative Control for Linear Dynamical Systems | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Peri-midFormer: Periodic Pyramid Transformer for Time Series Analysis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Periodic agent-state based Q-learning for POMDPs | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Perplexity-aware Correction for Robust Alignment with Noisy Preferences | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Persistence Homology Distillation for Semi-supervised Continual Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Persistent Homology for High-dimensional Data Based on Spectral Methods | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Persistent Test-time Adaptation in Recurring Testing Scenarios | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Personalized Adapter for Large Meteorology Model on Devices: Towards Weather Foundation Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Personalized Federated Learning via Feature Distribution Adaptation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Personalized Federated Learning with Mixture of Models for Adaptive Prediction and Model Fine-Tuning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Pessimistic Backward Policy for GFlowNets | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Phased Consistency Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| PhoCoLens: Photorealistic and Consistent Reconstruction in Lensless Imaging | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| PhyRecon: Physically Plausible Neural Scene Reconstruction | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| PhyloGen: Language Model-Enhanced Phylogenetic Inference via Graph Structure Generation | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Physical Consistency Bridges Heterogeneous Data in Molecular Multi-Task Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Physically Compatible 3D Object Modeling from a Single Image | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Physics-Constrained Comprehensive Optical Neural Networks | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Physics-Informed Variational State-Space Gaussian Processes | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Physics-Regularized Multi-Modal Image Assimilation for Brain Tumor Localization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Piecewise deterministic generative models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Piecewise-Stationary Bandits with Knapsacks | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Pin-Tuning: Parameter-Efficient In-Context Tuning for Few-Shot Molecular Property Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Pipeline Parallelism with Controllable Memory | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Plan-on-Graph: Self-Correcting Adaptive Planning of Large Language Model on Knowledge Graphs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Plant-and-Steal: Truthful Fair Allocations via Predictions | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Point-PRC: A Prompt Learning Based Regulation Framework for Generalizable Point Cloud Analysis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PointAD: Comprehending 3D Anomalies from Points and Pixels for Zero-shot 3D Anomaly Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| PointMamba: A Simple State Space Model for Point Cloud Analysis | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Poisson Variational Autoencoder | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Policy Aggregation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Policy Improvement using Language Feedback Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Policy Learning from Tutorial Books via Understanding, Rehearsing and Introspecting | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Policy Mirror Descent with Lookahead | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Policy Optimization for Robust Average Reward MDPs | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Policy-shaped prediction: avoiding distractions in model-based reinforcement learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Polyhedral Complex Derivation from Piecewise Trilinear Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Polynomial-Time Computation of Exact $\Phi$-Equilibria in Polyhedral Games | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Poseidon: Efficient Foundation Models for PDEs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Post-Hoc Reversal: Are We Selecting Models Prematurely? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Posture-Informed Muscular Force Learning for Robust Hand Pressure Estimation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PowerPM: Foundation Model for Power Systems | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Practical $0.385$-Approximation for Submodular Maximization Subject to a Cardinality Constraint | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Practical Bayesian Algorithm Execution via Posterior Sampling | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Practical Shuffle Coding | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Pre-Trained Multi-Goal Transformers with Prompt Optimization for Efficient Online Adaptation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Pre-trained Large Language Models Use Fourier Features to Compute Addition | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Pre-training Differentially Private Models with Limited Public Data | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Precipitation Downscaling with Spatiotemporal Video Diffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Predicting Future Actions of Reinforcement Learning Agents | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Predicting Label Distribution from Ternary Labels | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Predicting the Performance of Foundation Models via Agreement-on-the-Line | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Prediction with Action: Visual Policy Learning via Joint Denoising Process | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Prediction-Powered Ranking of Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Predictive Attractor Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Preference Alignment with Flow Matching | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Preference Learning Algorithms Do Not Learn Preference Rankings | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Preference Learning of Latent Decision Utilities with a Human-like Model of Preferential Choice | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Preference-based Pure Exploration | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Preferential Normalizing Flows | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Pretrained Optimization Model for Zero-Shot Black Box Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Pretrained Transformer Efficiently Learns Low-Dimensional Target Functions In-Context | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Pretraining with Random Noise for Fast and Robust Learning without Weight Transport | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Preventing Dimensional Collapse in Self-Supervised Learning via Orthogonality Regularization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Preventing Model Collapse in Deep Canonical Correlation Analysis by Noise Regularization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Pricing and Competition for Generative AI | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Principled Bayesian Optimization in Collaboration with Human Experts | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Prior-itizing Privacy: A Bayesian Approach to Setting the Privacy Budget in Differential Privacy | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| PrivCirNet: Efficient Private Inference via Block Circulant Transformation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Privacy without Noisy Gradients: Slicing Mechanism for Generative Model Training | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Private Algorithms for Stochastic Saddle Points and Variational Inequalities: Beyond Euclidean Geometry | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Private Attribute Inference from Images with Vision-Language Models | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Private Edge Density Estimation for Random Graphs: Optimal, Efficient and Robust | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Private Geometric Median | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Private Online Learning via Lazy Algorithms | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Private and Personalized Frequency Estimation in a Federated Setting | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ProEdit: Simple Progression is All You Need for High-Quality 3D Scene Editing | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ProSST: Protein Language Modeling with Quantized Structure and Disentangled Attention | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ProTransformer: Robustify Transformers via Plug-and-Play Paradigm | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Probabilistic Conformal Distillation for Enhancing Missing Modality Robustness | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Probabilistic Decomposed Linear Dynamical Systems for Robust Discovery of Latent Neural Dynamics | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Probabilistic Graph Rewiring via Virtual Nodes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Probabilistic size-and-shape functional mixed models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Probablistic Emulation of a Global Climate Model with Spherical DYffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Probing the Decision Boundaries of In-context Learning in Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Progressive Entropic Optimal Transport Solvers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Progressive Exploration-Conformal Learning for Sparsely Annotated Object Detection in Aerial Images | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Promoting Fairness Among Dynamic Agents in Online-Matching Markets under Known Stationary Arrival Distributions | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| PromptFix: You Prompt and We Fix the Photo | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Propensity Score Alignment of Unpaired Multimodal Data | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Proportional Fairness in Clustering: A Social Choice Perspective | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Proportional Fairness in Non-Centroid Clustering | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Prospective Learning: Learning for a Dynamic Future | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Prospective Representation Learning for Non-Exemplar Class-Incremental Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ProtGO: Function-Guided Protein Modeling for Unified Representation Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Protecting Your LLMs with Information Bottleneck | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Protein-Nucleic Acid Complex Modeling with Frame Averaging Transformer | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Prototypical Hash Encoding for On-the-Fly Fine-Grained Category Discovery | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | 2 |
| Provable Acceleration of Nesterov's Accelerated Gradient for Asymmetric Matrix Factorization and Linear Neural Networks | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Provable Benefit of Cutout and CutMix for Feature Learning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Provable Benefits of Complex Parameterizations for Structured State Space Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Provable Editing of Deep Neural Networks using Parametric Linear Relaxation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Provable Partially Observable Reinforcement Learning with Privileged Information | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Provable Posterior Sampling with Denoising Oracles via Tilted Transport | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Provable Tempered Overfitting of Minimal Nets and Typical Nets | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Provable and Efficient Dataset Distillation for Kernel Ridge Regression | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Provably Efficient Interactive-Grounded Learning with Personalized Reward | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Provably Optimal Memory Capacity for Modern Hopfield Models: Transformer-Compatible Dense Associative Memories as Spherical Codes | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Provably Robust Score-Based Diffusion Posterior Sampling for Plug-and-Play Image Reconstruction | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Provably Safe Neural Network Controllers via Differential Dynamic Logic | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Provably Transformers Harness Multi-Concept Word Semantics for Efficient In-Context Learning | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Provably and Practically Efficient Adversarial Imitation Learning with General Function Approximation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Proving Theorems Recursively | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Proximal Causal Inference With Text Data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ProxyFusion: Face Feature Aggregation Through Sparse Experts | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Prune and Repaint: Content-Aware Image Retargeting for any Ratio | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Pruning neural network models for gene regulatory dynamics using data and domain knowledge | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Pseudo-Private Data Guided Model Inversion Attacks | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Pseudo-Siamese Blind-spot Transformers for Self-Supervised Real-World Denoising | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| PuLID: Pure and Lightning ID Customization via Contrastive Alignment | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Public-data Assisted Private Stochastic Optimization: Power and Limitations | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Pure Message Passing Can Estimate Common Neighbor for Link Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Q-VLM: Post-training Quantization for Large Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| QBB: Quantization with Binary Bases for LLMs | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| QGFN: Controllable Greediness with Action Values | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| QKFormer: Hierarchical Spiking Transformer using Q-K Attention | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| QT-ViT: Improving Linear Attention in ViT with Quadratic Taylor Expansion | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| QTIP: Quantization with Trellises and Incoherence Processing | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| QUEEN: QUantized Efficient ENcoding of Dynamic Gaussians for Streaming Free-viewpoint Videos | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| QUEST: Quadruple Multimodal Contrastive Learning with Constraints and Self-Penalization | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| QUEST: Quality-Aware Metropolis-Hastings Sampling for Machine Translation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| QVAE-Mole: The Quantum VAE with Spherical Latent Variable Learning for 3-D Molecule Generation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| QWO: Speeding Up Permutation-Based Causal Discovery in LiGAMs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Quadratic Quantum Variational Monte Carlo | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Qualitative Mechanism Independence | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Quality-Improved and Property-Preserved Polarimetric Imaging via Complementarily Fusing | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Quantifying Aleatoric Uncertainty of the Treatment Effect: A Novel Orthogonal Learner | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Quantifying and Optimizing Global Faithfulness in Persona-driven Role-playing | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Quantifying the Gain in Weak-to-Strong Generalization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Quantitative Convergences of Lie Group Momentum Optimizers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Quantum Algorithms for Non-smooth Non-convex Optimization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Quantum Deep Equilibrium Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Quantum algorithm for large-scale market equilibrium computation | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Quasi-Bayes meets Vines | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| QueST: Self-Supervised Skill Abstractions for Learning Continuous Control | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Query-Based Adversarial Prompt Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Query-Efficient Correlation Clustering with Noisy Oracle | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Questioning the Survey Responses of Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Queueing Matching Bandits with Preference Feedback | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RA-PbRL: Provably Efficient Risk-Aware Preference-Based Reinforcement Learning | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| RAGraph: A General Retrieval-Augmented Graph Learning Framework | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations for Universal Robustness | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-based 3D Neural Modeling | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| REBEL: Reinforcement Learning via Regressing Relative Rewards | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| REBORN: Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| REDUCR: Robust Data Downsampling using Class Priority Reweighting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RETR: Multi-View Radar Detection Transformer for Indoor Perception | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RGFN: Synthesizable Molecular Generation Using GFlowNets | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| RGMDT: Return-Gap-Minimizing Decision Tree Extraction in Non-Euclidean Metric Space | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| RL in Latent MDPs is Tractable: Online Guarantees via Off-Policy Evaluation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RL-GPT: Integrating Reinforcement Learning and Code-as-policy | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RLE: A Unified Perspective of Data Augmentation for Cross-Spectral Re-Identification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RMLR: Extending Multinomial Logistic Regression into General Geometries | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ROBIN: Robust and Invisible Watermarks for Diffusion Models with Adversarial Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| ROIDICE: Offline Return on Investment Maximization for Efficient Decision Making | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RTify: Aligning Deep Neural Networks with Human Behavioral Decisions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rad-NeRF: Ray-decoupled Training of Neural Radiance Field | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RanDumb: Random Representations Outperform Online Continually Learned Representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RandNet-Parareal: a time-parallel PDE solver using Random Neural Networks | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Random Cycle Coding: Lossless Compression of Cluster Assignments via Bits-Back Coding | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Random Function Descent | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Randomized Exploration for Reinforcement Learning with Multinomial Logistic Function Approximation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Randomized Sparse Matrix Compression for Large-Scale Constrained Optimization in Cancer Radiotherapy | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Randomized Strategic Facility Location with Predictions | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Randomized Truthful Auctions with Learning Agents | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RankUp: Boosting Semi-Supervised Regression with an Auxiliary Ranking Classifier | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rapid Plug-in Defenders | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RashomonGB: Analyzing the Rashomon Effect and Mitigating Predictive Multiplicity in Gradient Boosting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReF-LDM: A Latent Diffusion Model for Reference-based Face Image Restoration | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ReFIR: Grounding Large Restoration Models with Retrieval Augmentation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ReFT: Representation Finetuning for Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReGS: Reference-based Controllable Scene Stylization with Gaussian Splatting | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ReLIZO: Sample Reusable Linear Interpolation-based Zeroth-order Optimization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| ReMAP: Neural Model Reprogramming with Network Inversion and Retrieval-Augmented Mapping for Adaptive Motion Forecasting | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| ReMoDetect: Reward Models Recognize Aligned LLM's Generations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| ReVideo: Remake a Video with Motion and Content Control | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Real-Time Recurrent Learning using Trace Units in Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Real-Time Selection Under General Constraints via Predictive Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Real-time Core-Periphery Guided ViT with Smart Data Layout Selection on Mobile Devices | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Real-time Stereo-based 3D Object Detection for Streaming Perception | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Real-world Image Dehazing with Coherence-based Pseudo Labeling and Cooperative Unfolding Network | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Realizable $H$-Consistent and Bayes-Consistent Loss Functions for Learning to Defer | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 2 |
| Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous Driving | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reasons and Solutions for the Decline in Model Performance after Editing | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reciprocal Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Recognize Any Regions | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Reconstruct and Match: Out-of-Distribution Robustness via Topological Homogeneity | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Reconstructing the Image Stitching Pipeline: Integrating Fusion and Rectangling into a Unified Inpainting Model | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Reconstruction of Manipulated Garment with Guided Deformation Prior | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Recovering Complete Actions for Cross-dataset Skeleton Action Recognition | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Recurrent Reinforcement Learning with Memoroids | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Recurrent neural network dynamical systems for biological vision | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Recurrent neural networks: vanishing and exploding gradients are not the end of the story | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Recursive Introspection: Teaching Language Model Agents How to Self-Improve | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Recursive PAC-Bayes: A Frequentist Approach to Sequential Prior Updates with No Information Loss | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Reducing Transformer Key-Value Cache Size with Cross-Layer Attention | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| RefDrop: Controllable Consistency in Image or Video Generation via Reference Feature Guidance | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Referencing Where to Focus: Improving Visual Grounding with Referential Query | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Referring Human Pose and Mask Estimation In the Wild | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reflective Multi-Agent Collaboration based on Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Refusal in Language Models Is Mediated by a Single Direction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Regression under demographic parity constraints via unlabeled post-processing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Regret Minimization in Stackelberg Games with Side Information | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Regularized Conditional Diffusion Model for Multi-Task Preference Alignment | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Regularized Q-Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reimagining Mutual Information for Enhanced Defense against Data Leakage in Collaborative Inference | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reinforced Cross-Domain Knowledge Distillation on Time Series Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reinforcement Learning Gradients as Vitamin for Online Finetuning Decision Transformers | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reinforcement Learning Guided Semi-Supervised Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reinforcement Learning Policy as Macro Regulator Rather than Macro Placer | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Reinforcement Learning with Adaptive Regularization for Safe Control of Critical Systems | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Reinforcement Learning with Euclidean Data Augmentation for State-Based Continuous Control | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reinforcement Learning with LTL and $\omega$-Regular Objectives via Optimality-Preserving Translation to Average Rewards | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Reinforcement Learning with Lookahead Information | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Reinforcing LLM Agents via Policy Optimization with Action Decomposition | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rejection via Learning Density Ratios | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Relating Hopfield Networks to Episodic Control | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Relational Concept Bottleneck Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Relational Verification Leaps Forward with RABBit | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Relationship Prompt Learning is Enough for Open-Vocabulary Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reliable Learning of Halfspaces under Gaussian Marginals | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Renovating Names in Open-Vocabulary Segmentation Benchmarks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reparameterization invariance in approximate Bayesian inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reparameterized Multi-Resolution Convolutions for Long Sequence Modelling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ReplaceAnything3D: Text-Guided Object Replacement in 3D Scenes with Compositional Scene Representations | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Replicability in Learning: Geometric Partitions and KKM-Sperner Lemma | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Replicable Uniformity Testing | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Representation Noising: A Defence Mechanism Against Harmful Finetuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Reproducibility of predictive networks for mouse visual cortex | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reprogramming Pretrained Target-Specific Diffusion Models for Dual-Target Drug Design | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Repurposing Language Models into Embedding Models: Finding the Compute-Optimal Recipe | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reranking Laws for Language Generation: A Communication-Theoretic Perspective | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ResAD: A Simple Framework for Class Generalizable Anomaly Detection | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Resolving Discrepancies in Compute-Optimal Scaling of Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Resource-Aware Federated Self-Supervised Learning with Global Class Representations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Rethinking 3D Convolution in $\ell_p$-norm Space | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Rethinking Decoders for Transformer-based Semantic Segmentation: A Compression Perspective | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Exploration in Reinforcement Learning with Effective Metric-Based Exploration Bonus | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Rethinking Fourier Transform from A Basis Functions Perspective for Long-term Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Human Evaluation Protocol for Text-to-Video Models: Enhancing Reliability, Reproducibility, and Practicality | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethinking Imbalance in Image Super-Resolution for Efficient Inference | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Rethinking Inverse Reinforcement Learning: from Data Alignment to Task Alignment | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Rethinking LLM Memorization through the Lens of Adversarial Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Rethinking Memory and Communication Costs for Efficient Data Parallel Training of Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Rethinking Model-based, Policy-based, and Value-based Reinforcement Learning via the Lens of Representation Complexity | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Rethinking No-reference Image Exposure Assessment from Holism to Pixel: Models, Datasets and Benchmarks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Optimal Transport in Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Rethinking Out-of-Distribution Detection on Imbalanced Data Distribution | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Parity Check Enhanced Symmetry-Preserving Ansatz | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Rethinking Reconstruction-based Graph-Level Anomaly Detection: Limitations and a Simple Remedy | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Score Distillation as a Bridge Between Image Distributions | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Rethinking The Training And Evaluation of Rich-Context Layout-to-Image Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethinking Transformer for Long Contextual Histopathology Whole Slide Image Analysis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Weight Decay for Robust Fine-Tuning of Foundation Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethinking the Capacity of Graph Neural Networks for Branching Strategy | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Rethinking the Diffusion Models for Missing Data Imputation: A Gradient Flow Perspective | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Rethinking the Membrane Dynamics and Optimization Objectives of Spiking Neural Networks | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Rethinking the Power of Timestamps for Robust Time Series Forecasting: A Global-Local Fusion Perspective | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Retrieval & Fine-Tuning for In-Context Tabular Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Retrieval-Augmented Diffusion Models for Time Series Forecasting | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Retrieval-Retro: Retrieval-based Inorganic Retrosynthesis with Expert Knowledge | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Return of Unconditional Generation: A Self-supervised Representation Generation Method | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Revealing Distribution Discrepancy by Sampling Transfer in Unlabeled Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reverse Transition Kernel: A Flexible Framework to Accelerate Diffusion Inference | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Revisiting Adversarial Patches for Designing Camera-Agnostic Attacks against Person Detection | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Revisiting Differentially Private ReLU Regression | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Revisiting Ensembling in One-Shot Federated Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Revisiting Score Propagation in Graph Out-of-Distribution Detection | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Revisiting Self-Supervised Heterogeneous Graph Learning from Spectral Clustering Perspective | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Revisiting motion information for RGB-Event tracking with MOT philosophy | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Revisiting the Integration of Convolution and Attention for Vision Backbone | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Revive Re-weighting in Imbalanced Learning by Density Ratio Estimation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Reward Machines for Deep RL in Noisy and Uncertain Environments | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Richelieu: Self-Evolving LLM-Based Agents for AI Diplomacy | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Right this way: Can VLMs Guide Us to See More to Answer Questions? | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Risk-Averse Fine-tuning of Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Risk-sensitive control as inference with Rényi divergence | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RoPINN: Region Optimized Physics-Informed Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Road Network Representation Learning with the Third Law of Geography | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| RobIR: Robust Inverse Rendering for High-Illumination Scenes | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RoboMamba: Efficient Vision-Language-Action Model for Robotic Reasoning and Manipulation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Robot Policy Learning with Temporal Optimal Transport Reward | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Robust Conformal Prediction Using Privileged Information | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robust Contrastive Multi-view Clustering against Dual Noisy Correspondence | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Robust Fine-tuning of Zero-shot Models via Variance Reduction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robust Gaussian Processes via Relevance Pursuit | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robust Graph Neural Networks via Unbiased Aggregation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Robust Mixture Learning when Outliers Overwhelm Small Groups | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robust Neural Contextual Bandit against Adversarial Corruptions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robust Offline Active Learning on Graphs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robust Reinforcement Learning from Corrupted Human Feedback | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Robust Reinforcement Learning with General Utility | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Robust Sleep Staging over Incomplete Multimodal Physiological Signals via Contrastive Imagination | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Robust Sparse Regression with Non-Isotropic Designs | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Robust and Faster Zeroth-Order Minimax Optimization: Complexity and Applications | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robust group and simultaneous inferences for high-dimensional single index model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robustly overfitting latents for flexible neural image compression | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rough Transformers: Lightweight and Continuous Time Series Modelling through Signature Patching | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Rule Based Rewards for Language Model Safety | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Rule Extrapolation in Language Modeling: A Study of Compositional Generalization on OOD Prompts | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| S$^{2}$FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| S-MolSearch: 3D Semi-supervised Contrastive Learning for Bioactive Molecule Search | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| S-SOS: Stochastic Sum-Of-Squares for Parametric Polynomial Optimization | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-training | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| SA3DIP: Segment Any 3D Instance with Potential 3D Priors | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SAM-Guided Masked Token Prediction for 3D Scene Understanding | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SAMPa: Sharpness-aware Minimization Parallelized | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SAND: Smooth imputation of sparse and noisy functional data with Transformer networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SARAD: Spatial Association-Aware Anomaly Detection and Diagnosis for Multivariate Time Series | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SCOREQ: Speech Quality Assessment with Contrastive Regression | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SCaR: Refining Skill Chaining for Long-Horizon Robotic Manipulation via Dual Regularization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SCube: Instant Large-Scale Scene Reconstruction using VoxSplats | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SDformer: Similarity-driven Discrete Transformer For Time Series Generation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| SE(3)-bi-equivariant Transformers for Point Cloud Assembly |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SEA: State-Exchange Attention for High-Fidelity Physics Based Transformers |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| SEEV: Synthesis with Efficient Exact Verification for ReLU Neural Barrier Functions |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| SEL-BALD: Deep Bayesian Active Learning with Selective Labels |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| SF-V: Single Forward Video Generation Model |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| SGD vs GD: Rank Deficiency in Linear Networks |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| SGLang: Efficient Execution of Structured Language Model Programs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| SHMT: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| SILENCE: Protecting privacy in offloaded speech understanding on resource-constrained devices |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| SIRIUS: Contextual Sparsity with Correction for Efficient LLMs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| SLTrain: a sparse plus low rank approach for parameter and memory efficient pretraining | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SLowcalSGD: Slow Query Points Improve Local-SGD for Stochastic Convex Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| SMART: Scalable Multi-agent Real-time Motion Generation via Next-token Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SMART: Towards Pre-trained Missing-Aware Model for Patient Health Status Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SOI: Scaling Down Computational Complexity by Estimating Partial States of the Model | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SPARKLE: A Unified Single-Loop Primal-Dual Framework for Decentralized Bilevel Optimization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SPEAR: Exact Gradient Inversion of Batches in Federated Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SPO: Sequential Monte Carlo Policy Optimisation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SS1: Accelerating Inference with Fast and Expressive Sketch Structured Transform | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SSA-Seg: Semantic and Spatial Adaptive Pixel-level Classifier for Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| SSDM: Scalable Speech Dysfluency Modeling | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| ST$_k$: A Scalable Module for Solving Top-k Problems | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| START: A Generalized State Space Model with Saliency-Driven Token-Aware Transformation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| STL: Still Tricky Logic (for System Validation, Even When Showing Your Work) | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| STONE: A Submodular Optimization Framework for Active 3D Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Safe Exploitative Play with Untrusted Type Beliefs | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Safe LoRA: The Silver Lining of Reducing Safety Risks when Finetuning Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Safe Time-Varying Optimization based on Gaussian Processes with Spatio-Temporal Kernel | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Safe and Efficient: A Primal-Dual Method for Offline Convex CMDPs under Partial Data Coverage | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Safe and Sparse Newton Method for Entropic-Regularized Optimal Transport | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SafeWorld: Geo-Diverse Safety Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Safety through feedback in Constrained RL | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Saliency-driven Experience Replay for Continual Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Samba: Severity-aware Recurrent Modeling for Cross-domain Medical Image Grading | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SampDetox: Black-box Backdoor Defense via Perturbation-based Sample Detoxification | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Sample Complexity Reduction via Policy Difference Estimation in Tabular Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Sample Complexity of Interventional Causal Representation Learning | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Sample Complexity of Posted Pricing for a Single Item | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Sample Efficient Bayesian Learning of Causal Graphs from Interventions | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Sample Selection via Contrastive Fragmentation for Noisy Label Regression | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sample-Efficient Agnostic Boosting | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sample-Efficient Constrained Reinforcement Learning with General Parameterization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sample-Efficient Geometry Reconstruction from Euclidean Distances using Non-Convex Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sample-Efficient Private Learning of Mixtures of Gaussians | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sample-efficient Bayesian Optimisation Using Known Invariances | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Satformer: Accurate and Robust Traffic Data Estimation for Satellite Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Scalable Bayesian Optimization via Focalized Sparse Gaussian Processes | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Scalable Constrained Policy Optimization for Safe Multi-agent Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Scalable DBSCAN with Random Projections | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Scalable DP-SGD: Shuffling vs. Poisson Subsampling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Scalable Kernel Inverse Optimization | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Scalable Neural Network Verification with Branch-and-bound Inferred Cutting Planes | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Scalable Optimization in the Modular Norm | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Scalable and Effective Arithmetic Tree Generation for Adder and Multiplier Designs | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Scale Equivariant Graph Metanetworks | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scale-invariant Optimal Sampling for Rare-events Data and Sparse Models | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| ScaleKD: Strong Vision Transformers Could Be Excellent Teachers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling Continuous Latent Variable Models as Probabilistic Integral Circuits | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Scaling Law for Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling Laws in Linear Regression: Compute, Parameters, and Data | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling Retrieval-Based Language Models with a Trillion-Token Datastore | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Scaling Sign Language Translation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Scaling White-Box Transformers for Vision | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Scaling laws for learning with real and surrogate data | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Scaling the Codebook Size of VQ-GAN to 100,000 with a Utilization Rate of 99% | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Scaling transformer neural networks for skillful and reliable medium-range weather forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scanning Trojaned Models Using Out-of-Distribution Samples | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Scene Graph Generation with Role-Playing Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SceneCraft: Layout-Guided 3D Scene Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SceneDiffuser: Efficient and Controllable Driving Simulation Initialization and Rollout | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Schedule Your Edit: A Simple yet Effective Diffusion Noise Schedule for Image Editing | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Schrodinger Bridge Flow for Unpaired Data Translation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Schur Nets: exploiting local structure for equivariance in higher order graph neural networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Score Distillation via Reparametrized DDIM | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Score-Optimal Diffusion Schedules | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Score-based 3D molecule generation with neural fields | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Score-based generative models are provably robust: an uncertainty quantification perspective | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Search for Efficient Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Second-order forward-mode optimization of recurrent neural networks for neuroscience | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Secret Collusion among AI Agents: Multi-Agent Deception via Steganography | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| SeeA*: Efficient Exploration-Enhanced A* Search by Selective Sampling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SeeClear: Semantic Distillation Enhances Pixel Condensation for Video Super-Resolution | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Seeing Beyond the Crop: Using Language Priors for Out-of-Bounding Box Keypoint Prediction | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Seek Commonality but Preserve Differences: Dissected Dynamics Modeling for Multi-modal Visual RL | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SegVol: Universal and Interactive Volumetric Medical Image Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Segment Any Change | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Segment Anything without Supervision | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Segmenting Watermarked Texts From Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Selective Attention: Enhancing Transformer through Principled Context Control | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Selective Explanations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Selective Generation for Controllable Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Self-Calibrating Conformal Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Self-Distilled Depth Refinement with Noisy Poisson Fusion | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Self-Guided Masked Autoencoder | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Self-Guiding Exploration for Combinatorial Problems | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | 3 |
| Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Self-Labeling the Job Shop Scheduling Problem | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Self-Play Fine-tuning of Diffusion Models for Text-to-image Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Self-Retrieval: End-to-End Information Retrieval with One Large Language Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-playing Adversarial Language Game Enhances LLM Reasoning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Self-supervised Transformation Learning for Equivariant Representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SelfCodeAlign: Self-Alignment for Code Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SemFlow: Binding Semantic Segmentation and Image Synthesis via Rectified Flow | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Semantic Density: Uncertainty Quantification for Large Language Models through Confidence Measurement in Semantic Space | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Semantic Feature Learning for Universal Unsupervised Cross-Domain Retrieval | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Semantic Routing via Autoregressive Modeling | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Semantics and Spatiality of Emergent Communication | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Semi-Open 3D Object Retrieval via Hierarchical Equilibrium on Hypergraph | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Semi-Random Matrix Completion via Flow-Based Adaptive Reweighting | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Semi-Supervised Sparse Gaussian Classification: Provable Benefits of Unlabeled Data | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Semi-supervised Knowledge Transfer Across Multi-omic Single-cell Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Semi-supervised Multi-label Learning with Balanced Binary Angular Margin Loss | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Semidefinite Relaxations of the Gromov-Wasserstein Distance | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | 5 |
| Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Separation and Bias of Deep Equilibrium Models on Expressivity and Learning Dynamics | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Separations in the Representational Capabilities of Transformers and Recurrent Architectures | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Sequence-Augmented SE(3)-Flow Matching For Conditional Protein Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Sequential Harmful Shift Detection Without Labels | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Sequential Probability Assignment with Contexts: Minimax Regret, Contextual Shtarkov Sums, and Contextual Normalized Maximum Likelihood | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Sequoia: Scalable and Robust Speculative Decoding | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Set-based Neural Network Encoding Without Weight Tying | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| SfPUEL: Shape from Polarization under Unknown Environment Light | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Shape analysis for time series | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Shaping the distribution of neural responses with interneurons in a recurrent circuit model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Shared Autonomy with IDA: Interventional Diffusion Assistance | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sharing Key Semantics in Transformer Makes Efficient Image Restoration | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sharpness-Aware Minimization Activates the Interactive Teaching's Understanding and Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks using the Marginal Likelihood | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Should We Really Edit Language Models? On the Evaluation of Edited Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ShowMaker: Creating High-Fidelity 2D Human Video via Fine-Grained Diffusion Modeling | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Shuffling Gradient-Based Methods for Nonconvex-Concave Minimax Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| SimGen: Simulator-conditioned Driving Scene Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| SimPO: Simple Preference Optimization with a Reference-Free Reward | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SimVG: A Simple Framework for Visual Grounding with Decoupled Multi-modal Fusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Similarity-Navigated Conformal Prediction for Graph Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Simple and Effective Masked Diffusion Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Simple and Fast Distillation of Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Simplified and Generalized Masked Diffusion for Discrete Data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Simplifying Constraint Inference with Inverse Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Simplifying Latent Dynamics with Softly State-Invariant World Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Simulation-Free Training of Neural ODEs on Paired Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Single Image Reflection Separation via Dual-Stream Interactive Transformers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Single-Loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Sketched Lanczos uncertainty score: a low-memory summary of the Fisher information | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sketching for Distributed Deep Learning: A Sharper Analysis | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Skill-aware Mutual Information Optimisation for Zero-shot Generalisation in Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Skinned Motion Retargeting with Dense Geometric Interaction Perception | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SkipPredict: When to Invest in Predictions for Scheduling | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Slack-Free Spiking Neural Network Formulation for Hypergraph Minimum Vertex Cover | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Slicing Vision Transformer for Flexible Inference | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Slight Corruption in Pre-training Data Makes Better Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SlimGPT: Layer-wise Structured Pruning for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SlimSAM: 0.1% Data Makes Segment Anything Slim | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Slot State Space Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Slot-VLM: Object-Event Slots for Video-Language Modeling | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SlowFocus: Enhancing Fine-grained Temporal Understanding in Video LLM | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Sm: enhanced localization in Multiple Instance Learning for medical imaging classification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Small coresets via negative dependence: DPPs, linear statistics, and concentration | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Small steps no more: Global convergence of stochastic gradient bandits for arbitrary learning rates | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SmallToLarge (S2L): Scalable Data Selection for Fine-tuning Large Language Models by Summarizing Training Trajectories of Small Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Smoke and Mirrors in Causal Downstream Tasks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Smoothed Energy Guidance: Guiding Diffusion Models with Reduced Energy Curvature of Attention | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Smoothed Online Classification can be Harder than Batch Classification | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Smoothie: Label Free Language Model Routing | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| SnapKV: LLM Knows What You are Looking for Before Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SocraticLM: Exploring Socratic Personalized Teaching with Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Soft Superpixel Neighborhood Attention | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Soft ascent-descent as a stable and flexible alternative to flooding | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Soft-Label Integration for Robust Toxicity Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Solving Inverse Problems via Diffusion Optimal Control | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Solving Minimum-Cost Reach Avoid using Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Solving Sparse \& High-Dimensional-Output Regression via Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Solving Zero-Sum Markov Games with Continuous State via Spectral Dynamic Embedding | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SongCreator: Lyrics-based Universal Song Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SpGesture: Source-Free Domain-adaptive sEMG-based Gesture Recognition with Jaccard Attentive Spiking Neural Network | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| SpaFL: Communication-Efficient Federated Learning With Sparse Models And Low Computational Overhead | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Space-Time Continuous PDE Forecasting using Equivariant Neural Fields | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| SpaceByte: Towards Deleting Tokenization from Large Language Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sparse Bayesian Generative Modeling for Compressive Sensing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sparse High Rank Adapters | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Sparse maximal update parameterization: A holistic approach to sparse training dynamics | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SparseLLM: Towards Global Pruning of Pre-trained Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sparsity-Agnostic Linear Bandits with Adaptive Adversaries | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SpatialRGPT: Grounded Spatial Reasoning in Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Spatio-Spectral Graph Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Spatio-Temporal Interactive Learning for Efficient Image Reconstruction of Spiking Cameras | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SpeAr: A Spectral Approach for Zero-Shot Node Classification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Speaking Your Language: Spatial Relationships in Interpretable Emergent Communication | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SpecExec: Massively Parallel Speculative Decoding For Interactive LLM Inference on Consumer Devices | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Spectral Adapter: Fine-Tuning in Spectral Space | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Spectral Editing of Activations for Large Language Model Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Spectral Graph Pruning Against Over-Squashing and Over-Smoothing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Spectral Learning of Shared Dynamics Between Generalized-Linear Processes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Speculative Decoding with CTC-based Draft Model for LLM Inference Acceleration | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Speculative Monte-Carlo Tree Search | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| SpeechAlign: Aligning Speech Generation to Human Preferences | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SpeechForensics: Audio-Visual Speech Representation Learning for Face Forgery Detection | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| SpeedLoader: An I/O efficient scheme for heterogeneous and distributed LLM operation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SpelsNet: Surface Primitive Elements Segmentation by B-Rep Graph Structure Supervision | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Spherical Frustum Sparse Convolution Network for LiDAR Point Cloud Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Spike-based Neuromorphic Model for Sound Source Localization | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| SpikeReveal: Unlocking Temporal Sequences from Real Blurry Inputs with Spike Streams | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SpikedAttention: Training-Free and Fully Spike-Driven Transformer-to-SNN Conversion with Winner-Oriented Spike Shift for Softmax Operation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Spiking Graph Neural Network on Riemannian Manifolds | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Spiking Neural Network as Adaptive Event Stream Slicer | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Spiking Token Mixer: An event-driven friendly Former structure for spiking neural networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Spiking Transformer with Experts Mixture | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Splatter a Video: Video Gaussian Representation for Versatile Processing | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stability and Generalizability in SDE Diffusion Models with Measure-Preserving Dynamics | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Stability and Generalization of Adversarial Training for Shallow Neural Networks with Smooth Activation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Stability and Generalization of Asynchronous SGD: Sharper Bounds Beyond Lipschitz and Smoothness | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Stabilized Proximal-Point Methods for Federated Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Stabilizing Linear Passive-Aggressive Online Learning with Weighted Reservoir Sampling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Stabilizing Zero-Shot Prediction: A Novel Antidote to Forgetting in Continual Vision-Language Tasks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Stable-Pose: Leveraging Transformers for Pose-Guided Text-to-Image Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| State Chrono Representation for Enhancing Generalization in Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| State Space Models on Temporal Graphs: A First-Principles Study | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| State-free Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Statistical Efficiency of Distributional Temporal Difference Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Statistical Multicriteria Benchmarking via the GSD-Front | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Statistical and Geometrical properties of the Kernel Kullback-Leibler divergence | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Statistical-Computational Trade-offs for Density Estimation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Stealth edits to large language models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| StepbaQ: Stepping backward as Correction for Quantized Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stepping Forward on the Last Mile | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Stepping on the Edge: Curvature Aware Learning Rate Tuners | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Stepwise Alignment for Constrained Language Model Policy Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Stochastic Concept Bottleneck Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Stochastic Extragradient with Flip-Flop Shuffling & Anchoring: Provable Improvements | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Stochastic Newton Proximal Extragradient Method | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stochastic Optimal Control Matching | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stochastic Optimal Control and Estimation with Multiplicative and Internal Noise | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Stochastic Optimal Control for Diffusion Bridges in Function Spaces | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Stochastic Optimization Schemes for Performative Prediction with Nonconvex Loss | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stochastic Taylor Derivative Estimator: Efficient amortization for arbitrary differential operators | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 6 |
| Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Stochastic contextual bandits with graph feedback: from independence number to MAS number | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Stopping Bayesian Optimization with Probabilistic Regret Bounds | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Strategic Linear Contextual Bandits | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Strategic Littlestone Dimension: Improved Bounds on Online Strategic Classification | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Strategic Multi-Armed Bandit Problems Under Debt-Free Reporting | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Stratified Prediction-Powered Inference for Effective Hybrid Evaluation of Language Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Streaming Bayes GFlowNets | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Streaming Long Video Understanding with Large Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stress-Testing Capability Elicitation With Password-Locked Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Structural Inference of Dynamical Systems with Conjoined State Space Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Structure Consistent Gaussian Splatting with Matching Prior for Few-shot Novel View Synthesis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Structured Learning of Compositional Sequential Interventions | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Structured Matrix Basis for Multivariate Time Series Forecasting with Interpretable Dynamics | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Structured Multi-Track Accompaniment Arrangement via Style Prior Modelling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Structured Unrestricted-Rank Matrices for Parameter Efficient Finetuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Structured flexibility in recurrent neural networks via neuromodulation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Style Adaptation and Uncertainty Estimation for Multi-Source Blended-Target Domain Adaptation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Stylus: Automatic Adapter Selection for Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Sub-optimal Experts mitigate Ambiguity in Inverse Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| SubgDiff: A Subgraph Diffusion Model to Improve Molecular Representation Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Subject-driven Text-to-Image Generation via Preference-based Reinforcement Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Subsurface Scattering for Gaussian Splatting | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Suitable is the Best: Task-Oriented Knowledge Fusion in Vulnerability Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Super Consistency of Neural Network Landscapes and Learning Rate Transfer | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SuperDeepFool: a new fast and accurate minimal adversarial attack | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SuperVLAD: Compact and Robust Image Descriptors for Visual Place Recognition | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Supervised Kernel Thinning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Supra-Laplacian Encoding for Transformer on Dynamic Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SureMap: Simultaneous mean estimation for single-task and multi-task disaggregated evaluation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Surge Phenomenon in Optimal Learning Rate and Batch Size Scaling | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Swift Sampler: Efficient Learning of Sampler by 10 Parameters | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SymILO: A Symmetry-Aware Learning Framework for Integer Linear Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Symbolic Regression with a Learned Concept Library | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Symmetric Linear Bandits with Hidden Symmetry | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Symmetries in Overparametrized Neural Networks: A Mean Field View | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Symmetry Discovery Beyond Affine Transformations | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Symmetry-Informed Governing Equation Discovery | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SyncTweedies: A General Generative Framework Based on Synchronized Diffusions | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SyncVIS: Synchronized Video Instance Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Synergistic Dual Spatial-aware Generation of Image-to-text and Text-to-image | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Synthesize, Partition, then Adapt: Eliciting Diverse Samples from Foundation Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Synthetic Programming Elicitation for Text-to-Code in Very Low-Resource Programming and Formal Languages | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| TAIA: Large Language Models are Out-of-Distribution Data Learners | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on the Line of Sight | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TAPTRv2: Attention-based Position Update Improves Tracking Any Point | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| TARP-VP: Towards Evaluation of Transferred Adversarial Robustness and Privacy on Label Mapping Visual Prompting Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TARSS-Net: Temporal-Aware Radar Semantic Segmentation Network | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TFG: Unified Training-Free Guidance for Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TFGDA: Exploring Topology and Feature Alignment in Semi-supervised Graph Domain Adaptation through Robust Clustering | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TOPA: Extending Large Language Models for Video Understanding via Text-Only Pre-Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TPC: Test-time Procrustes Calibration for Diffusion-based Human Image Animation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TPR: Topology-Preserving Reservoirs for Generalized Zero-Shot Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TSDS: Data Selection for Task-Specific Model Finetuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TableRAG: Million-Token Table Understanding with Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Tackling Uncertain Correspondences for Multi-Modal Entity Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Take A Shortcut Back: Mitigating the Gradient Vanishing for Training Spiking Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Talking Heads: Understanding Inter-Layer Communication in Transformer Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Taming "data-hungry" reinforcement learning? Stability in continuous state-action spaces | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Taming Cross-Domain Representation Variance in Federated Prototype Learning with Heterogeneous Data Domains | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Taming Generative Diffusion Prior for Universal Blind Image Restoration | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Taming Heavy-Tailed Losses in Adversarial Bandits and the Best-of-Both-Worlds Setting | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Taming the Long Tail in Human Mobility Prediction | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Tangent Space Causal Inference: Leveraging Vector Fields for Causal Discovery in Dynamical Systems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Target-Guided Adversarial Point Cloud Transformer Towards Recognition Against Real-world Corruptions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Targeted Sequential Indirect Experiment Design | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Task Confusion and Catastrophic Forgetting in Class-Incremental Learning: A Mathematical Framework for Discriminative and Generative Modelings | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Task-Agnostic Machine-Learning-Assisted Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Task-oriented Time Series Imputation Evaluation via Generalized Representers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Task-recency bias strikes back: Adapting covariances in Exemplar-Free Class Incremental Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Team-Fictitious Play for Reaching Team-Nash Equilibrium in Multi-team Games | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Tell What You Hear From What You See - Video to Audio Generation Through Text | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Template-free Articulated Gaussian Splatting for Real-time Reposable Dynamic View Synthesis | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Temporal Graph Neural Tangent Kernel with Graphon-Guaranteed | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Temporal Sentence Grounding with Relevance Feedback in Videos | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Temporal-Difference Learning Using Distributed Error Signals | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Temporally Consistent Atmospheric Turbulence Mitigation with Neural Representations | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Tensor-Based Synchronization and the Low-Rankness of the Block Trifocal Tensor | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Test Where Decisions Matter: Importance-driven Testing for Deep Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Test-Time Adaptation Induces Stronger Accuracy and Agreement-on-the-Line | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Test-Time Dynamic Image Fusion | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Test-time Adaptation in Non-stationary Environments via Adaptive Representation Alignment | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Testably Learning Polynomial Threshold Functions | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Testing Calibration in Nearly-Linear Time | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Testing Semantic Importance via Betting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Tetrahedron Splatting for 3D Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Text-Aware Diffusion for Policy Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Text-Infused Attention and Foreground-Aware Modeling for Zero-Shot Temporal Action Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Text2CAD: Generating Sequential CAD Designs from Beginner-to-Expert Level Text Prompts | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TextCtrl: Diffusion-based Scene Text Editing with Prior Guidance Control | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The ALCHEmist: Automated Labeling 500x CHEaper than LLM Data Annotators | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Bayesian sampling in a canonical recurrent circuit with a diversity of inhibitory interneurons | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Benefits of Balance: From Information Projections to Variance Reduction | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Challenges of the Nonlinear Regime for Physics-Informed Neural Networks | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Closeness of In-Context Learning and Weight Shifting for Softmax Regression | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| The Collusion of Memory and Nonlinearity in Stochastic Approximation With Constant Stepsize | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| The Dormant Neuron Phenomenon in Multi-Agent Reinforcement Learning Value Factorization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Empirical Impact of Neural Parameter Symmetries, or Lack Thereof | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| The Expressive Capacity of State Space Models: A Formal Language Perspective | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Factorization Curse: Which Tokens You Predict Underlie the Reversal Curse and More | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Fairness-Quality Tradeoff in Clustering | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Fine-Grained Complexity of Gradient Computation for Training Large Language Models | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The GAN is dead; long live the GAN! A Modern GAN Baseline | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Group Robustness is in the Details: Revisiting Finetuning under Spurious Correlations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The High Line: Exact Risk and Learning Rate Curves of Stochastic Adaptive Learning Rate Algorithms | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| The Impact of Geometric Complexity on Neural Collapse in Transfer Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| The Impact of Initialization on LoRA Finetuning Dynamics | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Implicit Bias of Adam on Separable Data | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| The Implicit Bias of Gradient Descent on Separable Multiclass Data | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| The Implicit Bias of Gradient Descent toward Collaboration between Layers: A Dynamic Analysis of Multilayer Perceptions | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| The Implicit Bias of Heterogeneity towards Invariance: A Study of Multi-Environment Matrix Sensing | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Importance of Being Scalable: Improving the Speed and Accuracy of Neural Network Interatomic Potentials Across Chemical Domains | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Importance of Online Data: Understanding Preference Fine-tuning via Coverage | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Intelligible and Effective Graph Neural Additive Network | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Iterative Optimal Brain Surgeon: Faster Sparse Recovery by Leveraging Second-Order Information | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| The Ladder in Chaos: Improving Policy Learning by Harnessing the Parameter Evolving Path in A Low-dimensional Space | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Limits of Differential Privacy in Online Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Limits of Transfer Reinforcement Learning with Latent Low-rank Structure | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Mamba in the Llama: Distilling and Accelerating Hybrid Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Many Faces of Optimal Weak-to-Strong Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| The Map Equation Goes Neural: Mapping Network Flows with Graph Neural Networks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Minimax Rate of HSIC Estimation for Translation-Invariant Kernels | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Poisson Midpoint Method for Langevin Dynamics: Provably Efficient Discretization for Diffusion Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Power of Extrapolation in Federated Learning | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Power of Hard Attention Transformers on Data Sequences: A formal language theoretic perspective | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Power of Resets in Online Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Prevalence of Neural Collapse in Neural Multivariate Regression | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Price of Implicit Bias in Adversarially Robust Generalization | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Reliability of OKRidge Method in Solving Sparse Ridge Regression Problems | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| The Representation Landscape of Few-Shot Learning and Fine-Tuning in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Road Less Scheduled | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| The Sample Complexity of Gradient Descent in Stochastic Convex Optimization | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Sample-Communication Complexity Trade-off in Federated Q-Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Secretary Problem with Predicted Additive Gap | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| The Selective $G$-Bispectrum and its Inversion: Applications to $G$-Invariant Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Space Complexity of Approximating Logistic Loss | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Star Geometry of Critic-Based Regularizer Learning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| The Surprising Effectiveness of SP Voting with Partial Preferences | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| The Surprising Ineffectiveness of Pre-Trained Visual Representations for Model-Based Reinforcement Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| The Value of Reward Lookahead in Reinforcement Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The motion planning neural circuit in goal-directed navigation as Lie group operator search | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| The surprising efficiency of temporal difference learning for rare event prediction | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| The tree autoencoder model, with application to hierarchical data visualization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Theoretical Analysis of Weak-to-Strong Generalization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Theoretical Characterisation of the Gauss Newton Conditioning in Neural Networks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Theoretical Foundations of Deep Selective State-Space Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Theoretical Investigations and Practical Enhancements on Tail Task Risk Minimization in Meta Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Theoretical and Empirical Insights into the Origins of Degree Bias in Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Theoretical guarantees in KL for Diffusion Flow Matching | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Thinking Forward: Memory-Efficient Federated Finetuning of Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| This Too Shall Pass: Removing Stale Observations in Dynamic Bayesian Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Thompson Sampling For Combinatorial Bandits: Polynomial Regret and Mismatched Sampling Paradox | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Thought of Search: Planning with Language Models Through The Lens of Efficiency | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Tight Bounds for Learning RUMs from Small Slates | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Tight Rates for Bandit Control Beyond Quadratics | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Tighter Convergence Bounds for Shuffled SGD via Primal-Dual Perspective | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Time-Constrained Robust MDPs | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Time-FFM: Towards LM-Empowered Federated Foundation Model for Time Series Forecasting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Time-Reversal Provides Unsupervised Feedback to LLMs | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Time-Varying LoRA: Towards Effective Cross-Domain Fine-Tuning of Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TinyLUT: Tiny Look-Up Table for Efficient Image Restoration at the Edge | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| To Believe or Not to Believe Your LLM: Iterative Prompting for Estimating Epistemic Uncertainty | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| To Err Like Human: Affective Bias-Inspired Measures for Visual Emotion Recognition Evaluation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| To Learn or Not to Learn, That is the Question — A Feature-Task Dual Learning Model of Perceptual Learning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tolerant Algorithms for Learning with Arbitrary Covariate Shift | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| TopoFR: A Closer Look at Topology Alignment on Face Recognition | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Topological Generalization Bounds for Discrete-Time Stochastic Optimization Algorithms | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Topological obstruction to the training of shallow ReLU neural networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Toward Approaches to Scalability in 3D Human Pose Estimation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Toward Conditional Distribution Calibration in Survival Prediction | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Toward Dynamic Non-Line-of-Sight Imaging with Mamba Enforced Temporal Consistency | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Toward Efficient Inference for Mixture of Experts | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixture Models | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Toward Real Ultra Image Segmentation: Leveraging Surrounding Context to Cultivate General Segmentation Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Toward Semantic Gaze Target Detection | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Toward a Stable, Fair, and Comprehensive Evaluation of Object Hallucination in Large Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Toward a Well-Calibrated Discrimination via Survival Outcome-Aware Contrastive Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Accurate and Fair Cognitive Diagnosis via Monotonic Data Augmentation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Calibrated Robust Fine-Tuning of Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Combating Frequency Simplicity-biased Learning for Domain Generalization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Croppable Implicit Neural Representations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Dynamic Message Passing on Graphs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Editing Time Series | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Effective Planning Strategies for Dynamic Opinion Networks | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Towards Estimating Bounds on the Effect of Policies under Unobserved Confounding | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Towards Exact Gradient-based Training on Analog In-memory Computing | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Flexible Visual Relationship Segmentation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Towards Global Optimal Visual In-Context Learning Prompt Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Harmless Rawlsian Fairness Regardless of Demographic Prior | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Human-AI Complementarity with Prediction Sets | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Learning Group-Equivariant Features for Domain Adaptive 3D Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Multi-Domain Learning for Generalizable Video Anomaly Detection | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards Multi-dimensional Explanation Alignment for Medical Classification | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Towards Neuron Attributions in Multi-Modal Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Next-Generation Logic Synthesis: A Scalable Neural Circuit Generation Framework | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Towards Principled Graph Transformers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Robust Multimodal Sentiment Analysis with Incomplete Data | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Towards Safe Concept Transfer of Multi-Modal Diffusion via Causal Representation Editing | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Scalable and Stable Parallelization of Nonlinear RNNs | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 6 |
| Towards Stable Representations for Protein Interface Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Understanding Evolving Patterns in Sequential Data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 5 |
| Towards Understanding Extrapolation: a Causal Lens | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Towards Understanding How Transformers Learn In-context Through a Representation Learning Lens | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Towards Unified Multimodal Editing with Enhanced Knowledge Collaboration | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Universal Mesh Movement Networks | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | 3 |
| Towards Unsupervised Model Selection for Domain Adaptive Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards a "Universal Translator" for Neural Dynamics at Single-Cell, Single-Spike Resolution | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards a Scalable Reference-Free Evaluation of Generative Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards a Theoretical Understanding of the 'Reversal Curse' via Training Dynamics | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards a theory of how the structure of language is acquired by deep neural networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Towards the Dynamics of a DNN Learning Symbolic Interactions | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Towards the Transferability of Rewards Recovered via Regularized Inverse Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards training digitally-tied analog blocks via hybrid gradient computation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Toxicity Detection for Free | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| TrAct: Making First-layer Pre-Activations Trainable | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Tracing Hyperparameter Dependencies for Model Parsing via Learnable Graph Pooling Network | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TrackIME: Enhanced Video Point Tracking via Instance Motion Estimation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Trade-Offs of Diagonal Fisher Information Matrix Estimators | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Training Binary Neural Networks via Gaussian Variational Inference and Low-Rank Semidefinite Programming | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Training Compute-Optimal Protein Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Training Data Attribution via Approximate Unrolling | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Training an Open-Vocabulary Monocular 3D Detection Model without 3D Data | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Training for Stable Explanation for Free | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Training-Free Open-Ended Object Detection and Segmentation via Attention as Prompts | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| TrajCLIP: Pedestrian trajectory prediction method using contrastive learning and idempotent networks | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Trajectory Data Suffices for Statistically Efficient Learning in Offline RL with Linear $q^\pi$-Realizability and Concentrability | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Trajectory Diffusion for ObjectGoal Navigation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Trajectory Flow Matching with Applications to Clinical Time Series Modelling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TransVIP: Speech to Speech Translation System with Voice and Isochrony Preservation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Transcendence: Generative Models Can Outperform The Experts That Train Them | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Transcoders find interpretable LLM feature circuits | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Transductive Active Learning: Theory and Applications | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Transductive Learning is Compact | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Transfer Learning for Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Transfer Learning for Latent Variable Network Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Transfer Q-star : Principled Decoding for LLM Alignment | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Transferable Adversarial Attacks on SAM and Its Downstream Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Transferable Boltzmann Generators | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Transferring disentangled representations: bridging the gap between synthetic and real images | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Transformer Doctor: Diagnosing and Treating Vision Transformers | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Transformers Can Do Arithmetic with the Right Embeddings | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| Transformers Represent Belief State Geometry in their Residual Stream | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Transformers are Minimax Optimal Nonparametric In-Context Learners | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Transformers as Game Players: Provable In-context Game-playing Capabilities of Pre-trained Models | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Transformers need glasses! Information over-squashing in language tasks | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Transformers on Markov data: Constant depth suffices | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Transforming Vision Transformer: Towards Efficient Multi-Task Asynchronous Learner | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Transition Constrained Bayesian Optimization via Markov Decision Processes | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Trap-MID: Trapdoor-based Defense against Model Inversion Attacks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Treatment of Statistical Estimation Problems in Randomized Smoothing for Adversarial Robustness | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Tree of Attacks: Jailbreaking Black-Box LLMs Automatically | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| TreeVI: Reparameterizable Tree-structured Variational Inference for Instance-level Correlation Capturing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Treeffuser: probabilistic prediction via conditional diffusions with gradient-boosted trees | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Tri-Level Navigator: LLM-Empowered Tri-Level Learning for Time Series OOD Generalization | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Truncated Variance Reduced Value Iteration | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Truth is Universal: Robust Detection of Lies in LLMs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Truthful High Dimensional Sparse Linear Regression | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Truthfulness of Calibration Measures | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TurboHopp: Accelerated Molecule Scaffold Hopping with Consistency Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Two-way Deconfounder for Off-policy Evaluation in Causal Reinforcement Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Typicalness-Aware Learning for Failure Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| UDC: A Unified Neural Divide-and-Conquer Framework for Large-Scale Combinatorial Optimization Problems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| UDON: Universal Dynamic Online distillatioN for generic image representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| UDPM: Upsampling Diffusion Probabilistic Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| UGC: Universal Graph Coarsening | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| UMB: Understanding Model Behavior for Open-World Object Detection | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| UMFC: Unsupervised Multi-Domain Feature Calibration for Vision-Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| UNION: Unsupervised 3D Object Detection using Object Appearance-based Pseudo-Classes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| UNIT: Unifying Image and Text Recognition in One Vision Encoder | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| UPS: Unified Projection Sharing for Lightweight Single-Image Super-resolution and Beyond | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| UQ-Guided Hyperparameter Optimization for Iterative Learners | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| UQE: A Query Engine for Unstructured Databases | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| UV-free Texture Generation with Denoising and Geodesic Heat Diffusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| UltraPixel: Advancing Ultra High-Resolution Image Synthesis to New Peaks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Ultrafast classical phylogenetic method beats large protein language models on variant effect prediction | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in LLMs | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Uncertainty-aware Fine-tuning of Segmentation Foundation Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Uncertainty-based Offline Variational Bayesian Reinforcement Learning for Robustness under Diverse Data Corruptions | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Unchosen Experts Can Contribute Too: Unleashing MoE Models’ Power by Self-Contrast | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Unconditional stability of a recurrent neural circuit implementing divisive normalization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Uncovering Safety Risks of Large Language Models through Concept Activation Vector | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Uncovering the Redundancy in Graph Self-supervised Learning Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Understanding Bias in Large-Scale Visual Datasets | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Understanding Emergent Abilities of Language Models from the Loss Perspective | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Understanding Generalizability of Diffusion Models Requires Rethinking the Hidden Gaussian Structure | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Understanding Hallucinations in Diffusion Models through Mode Interpolation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Understanding Information Storage and Transfer in Multi-Modal Large Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Understanding Linear Probing then Fine-tuning Language Models from NTK Perspective | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Understanding Model Selection for Learning in Strategic Environments | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Understanding Multi-Granularity for Open-Vocabulary Part Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Understanding Representation of Deep Equilibrium Models from Neural Collapse Perspective | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Understanding Scaling Laws with Statistical and Approximation Theory for Transformer Neural Networks on Intrinsically Low-dimensional Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Understanding Transformer Reasoning Capabilities via Graph Algorithms | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Understanding Transformers via N-Gram Statistics | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Understanding Visual Feature Reliance through the Lens of Complexity | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Understanding and Improving Adversarial Collaborative Filtering for Robust Recommendation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Understanding and Improving Training-free Loss-based Diffusion Guidance | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Understanding and Minimising Outlier Features in Transformer Training | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Understanding the Differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Understanding the Expressive Power and Mechanisms of Transformer for Sequence Modeling | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Understanding the Expressivity and Trainability of Fourier Neural Operator: A Mean-Field Perspective | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Understanding the Gains from Repeated Self-Distillation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Understanding the Role of Equivariance in Self-supervised Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Understanding the Transferability of Representations via Task-Relatedness | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Unelicitable Backdoors via Cryptographic Transformer Circuits | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| UniAR: A Unified model for predicting human Attention and Responses on visual content | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| UniAudio 1.5: Large Language Model-Driven Audio Codec is A Few-Shot Audio Task Learner | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| UniBias: Unveiling and Mitigating LLM Bias through Internal Attention and FFN Manipulation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| UniDSeg: Unified Cross-Domain 3D Semantic Segmentation via Visual Foundation Models Prior | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| UniFL: Improve Latent Diffusion Model via Unified Feedback Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| UniGAD: Unifying Multi-level Graph Anomaly Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| UniIF: Unified Molecule Inverse Folding | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| UniMTS: Unified Pre-training for Motion Time Series | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| UniTS: A Unified Multi-Task Time Series Model | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Unified Covariate Adjustment for Causal Inference | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Unified Domain Generalization and Adaptation for Multi-View 3D Object Detection | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Unified Generative and Discriminative Training for Multi-modal Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Unified Graph Augmentations for Generalized Contrastive Learning on Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Unified Guidance for Geometry-Conditioned Molecular Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Unified Insights: Harnessing Multi-modal Data for Phenotype Imputation via View Decoupling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Unified Lexical Representation for Interpretable Visual-Language Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Uniform Last-Iterate Guarantee for Bandits and Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Unifying Generation and Prediction on Graphs with Latent Graph Diffusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Unifying Homophily and Heterophily for Spectral Graph Neural Networks via Triple Filter Ensembles | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Unitary Convolutions for Learning on Graphs and Groups | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| United We Stand, Divided We Fall: Fingerprinting Deep Neural Networks via Adversarial Trajectories | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Unity by Diversity: Improved Representation Learning for Multimodal VAEs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Universal Exact Compression of Differentially Private Mechanisms | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Universal In-Context Approximation By Prompting Fully Recurrent Models | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | 2 |
| Universal Neural Functionals | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Universal Online Convex Optimization with $1$ Projection per Round | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Universal Rates for Active Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Universal Rates of Empirical Risk Minimization | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Universal Sample Coding | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Universality in Transfer Learning for Linear Models | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Universality of AdaGrad Stepsizes for Stochastic Optimization: Inexact Oracle, Acceleration and Variance Reduction | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Unleashing Multispectral Video's Potential in Semantic Segmentation: A Semi-supervised Viewpoint and New UAV-View Benchmark | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Unleashing Region Understanding in Intermediate Layers for MLLM-based Referring Expression Generation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Unleashing the Denoising Capability of Diffusion Prior for Solving Inverse Problems | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Unlock the Intermittent Control Ability of Model Free Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Unlocking the Potential of Global Human Expertise | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Unraveling the Gradient Descent Dynamics of Transformers | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Unravelling in Collaborative Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Unrolled denoising networks provably learn to perform optimal Bayesian inference | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Unscrambling disease progression at scale: fast inference of event permutations with optimal transport | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Unsupervised Anomaly Detection in The Presence of Missing Values | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Unsupervised Discovery of Formulas for Mathematical Constants | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Unsupervised Object Detection with Theoretical Guarantees | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Untrained Neural Nets for Snapshot Compressive Imaging: Theory and Algorithms | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Unveiling Causal Reasoning in Large Language Models: Reality or Mirage? | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Unveiling Encoder-Free Vision-Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Unveiling LoRA Intrinsic Ranks via Salience Analysis | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Unveiling The Matthew Effect Across Channels: Assessing Layer Width Sufficiency via Weight Norm Variance | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Unveiling User Satisfaction and Creator Productivity Trade-Offs in Recommendation Platforms | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Unveiling the Bias Impact on Symmetric Moral Consistency of Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Unveiling the Hidden: Online Vectorized HD Map Construction with Clip-Level Token Interaction and Propagation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Unveiling the Potential of Robustness in Selecting Conditional Average Treatment Effect Estimators | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Unveiling the Tapestry of Consistency in Large Vision-Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Upping the Game: How 2D U-Net Skip Connections Flip 3D Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| UrbanKGent: A Unified Large Language Model Agent Framework for Urban Knowledge Graph Construction | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| User-Creator Feature Polarization in Recommender Systems with Dual Influence | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| User-item fairness tradeoffs in recommendations | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | 5 |
| Using Noise to Infer Aspects of Simplicity Without Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Using Surrogates in Covariate-adjusted Response-adaptive Randomization Experiments with Delayed Outcomes | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Using Time-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| VCR-GauS: View Consistent Depth-Normal Regularizer for Gaussian Surface Reconstruction | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| VFIMamba: Video Frame Interpolation with State Space Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| VISA: Variational Inference with Sequential Sample-Average Approximations | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| VLM Agents Generate Their Own Memories: Distilling Experience into Embodied Programs of Thought | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| VLMimic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| VMamba: Visual State Space Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| VQ-Map: Bird's-Eye-View Map Layout Estimation in Tokenized Discrete Space via Vector Quantization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Vaccine: Perturbation-aware Alignment for Large Language Models against Harmful Fine-tuning Attack | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Validating Climate Models with Spherical Convolutional Wasserstein Distance | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Variance estimation in compound decision theory under boundedness | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Variational Delayed Policy Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Variational Distillation of Diffusion Policies into Mixture of Experts | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Variational Flow Matching for Graph Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Variational Multi-scale Representation for Estimating Uncertainty in 3D Gaussian Splatting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| VeXKD: The Versatile Integration of Cross-Modal Fusion and Knowledge Distillation for 3D Perception | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Vector Quantization Prompting for Continual Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Verifiably Robust Conformal Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Verified Code Transpilation with LLMs | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Verified Safe Reinforcement Learning for Neural Network Dynamic Models | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Video Diffusion Models are Training-free Motion Interpreter and Controller | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Video Token Merging for Long Video Understanding | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| VideoLLM-MoD: Efficient Video-Language Streaming with Mixture-of-Depths Vision Computation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| VideoTetris: Towards Compositional Text-to-Video Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Virtual Scanning: Unsupervised Non-line-of-sight Imaging from Irregularly Undersampled Transients | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| VisMin: Visual Minimal-Change Understanding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Vision Foundation Model Enables Generalizable Object Pose Estimation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Vision Mamba Mender | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Vision Transformer Neural Architecture Search for Out-of-Distribution Generalization: Benchmark and Insights | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Vision-Language Models are Strong Noisy Label Detectors | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Vision-Language Navigation with Energy-Based Policy | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| VisionLLM v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Visual Data Diagnosis and Debiasing with Concept Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Visual Fourier Prompt Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Visual Perception by Large Language Model’s Weights | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Visual Pinwheel Centers Act as Geometric Saliency Detectors | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Visual Prompt Tuning in Null Space for Continual Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| Vitron: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Vivid-ZOO: Multi-View Video Generation with Diffusion Model | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Voila-A: Aligning Vision-Language Models with User's Gaze Attention | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Voxel Mamba: Group-Free State Space Models for Point Cloud based 3D Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Voxel Proposal Network via Multi-Frame Knowledge Distillation for Semantic Scene Completion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| WATT: Weight Average Test Time Adaptation of CLIP | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Warm-starting Push-Relabel | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Warped Diffusion: Solving Video Inverse Problems with Image Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Wasserstein Distributionally Robust Optimization through the Lens of Structural Causal Models and Individual Fairness | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Wasserstein Gradient Boosting: A Framework for Distribution-Valued Supervised Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Wasserstein convergence of Čech persistence diagrams for samplings of submanifolds | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| WaterMax: breaking the LLM watermark detectability-robustness-quality trade-off | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Watermarking Makes Language Models Radioactive | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Weak Supervision Performance Evaluation via Partial Identification | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Weak-eval-Strong: Evaluating and Eliciting Lateral Thinking of LLMs with Situation Puzzles | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| WeiPer: OOD Detection using Weight Perturbations of Class Projections | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Weight Diffusion for Future: Learn to Generalize in Non-Stationary Environments | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Weight decay induces low-rank attention layers | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Weight for Robustness: A Comprehensive Approach towards Optimal Fault-Tolerant Asynchronous ML | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| What Factors Affect Multi-Modal In-Context Learning? An In-Depth Exploration | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| What If the Input is Expanded in OOD Detection? | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| What Is Missing For Graph Homophily? Disentangling Graph Homophily For Graph Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| What Makes Partial-Label Learning Algorithms Effective? | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| What Makes and Breaks Safety Fine-tuning? A Mechanistic Study | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| What Matters in Graph Class Incremental Learning? An Information Preservation Perspective | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| What Rotary Position Embedding Can Tell Us: Identifying Query and Key Weights Corresponding to Basic Syntactic or High-level Semantic Information | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| What Variables Affect Out-of-Distribution Generalization in Pretrained Models? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| What do Graph Neural Networks learn? Insights from Tropical Geometry | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| What does guidance do? A fine-grained analysis in a simple setting | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| What is my quantum computer good for? Quantum capability learning with physics-aware neural networks | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| What makes unlearning hard and what to do about it | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| What matters when building vision-language models? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| What type of inference is planning? | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| When Is Inductive Inference Possible? | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| When are dynamical systems learned from time series data statistically accurate? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| When does perceptual alignment benefit vision representations? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| When is Multicalibration Post-Processing Necessary? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| When is an Embedding Model More Promising than Another? | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| When to Act and When to Ask: Policy Learning With Deferral Under Hidden Confounding | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| When to Sense and Control? A Time-adaptive Approach for Continuous-Time RL | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Where Do Large Learning Rates Lead Us? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Where does In-context Learning Happen in Large Language Models? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Where's Waldo: Diffusion Features For Personalized Segmentation and Retrieval | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Who Evaluates the Evaluations? Objectively Scoring Text-to-Image Prompt Coherence Metrics with T2IScoreScore (TS2) | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | 4 |
| Who's asking? User personas and the mechanics of latent misalignment | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Who’s Gaming the System? A Causally-Motivated Approach for Detecting Strategic Adaptation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Why Do We Need Weight Decay in Modern Deep Learning? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Why Go Full? Elevating Federated Learning Through Partial Network Updates | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Why Transformers Need Adam: A Hessian Perspective | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Why Warmup the Learning Rate? Underlying Mechanisms and Improvements | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Why are Visually-Grounded Language Models Bad at Image Classification? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Why the Metric Backbone Preserves Community Structure | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Wide Two-Layer Networks can Learn from Adversarial Perturbations | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Wild-GS: Real-Time Novel View Synthesis from Unconstrained Photo Collections | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| WildGaussians: 3D Gaussian Splatting In the Wild | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Wings: Learning Multimodal LLMs without Text-only Forgetting | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| WizardArena: Post-training Large Language Models via Simulated Offline Chatbot Arena | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Wormhole Loss for Partial Shape Matching | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Worst-Case Offline Reinforcement Learning with Arbitrary Data Support | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Would I Lie To You? Inference Time Alignment of Language Models using Direct Preference Heads | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| X-Ray: A Sequential 3D Representation For Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| XMask3D: Cross-modal Mask Reasoning for Open Vocabulary 3D Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| YOLOv10: Real-Time End-to-End Object Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Yo'LLaVA: Your Personalized Language and Vision Assistant | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| You Don’t Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| You Only Cache Once: Decoder-Decoder Architectures for Language Models | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| You Only Look Around: Learning Illumination-Invariant Feature for Low-light Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Your Diffusion Model is Secretly a Noise Classifier and Benefits from Contrastive Training | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Your contrastive learning problem is secretly a distribution alignment problem | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ZOPP: A Framework of Zero-shot Offboard Panoptic Perception for Autonomous Driving | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Zero-Shot Event-Intensity Asymmetric Stereo via Visual Prompting from Image Domain | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Zero-Shot Reinforcement Learning from Low Quality Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Zero-Shot Tokenizer Transfer | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Zero-Shot Transfer of Neural ODEs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Zero-shot Generalizable Incremental Learning for Vision-Language Object Detection | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Zero-shot Image Editing with Reference Imitation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Zero-to-Hero: Enhancing Zero-Shot Novel View Synthesis via Attention Map Filtering | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| ZeroMark: Towards Dataset Ownership Verification without Disclosing Watermark | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Zeroth-Order Sampling Methods for Non-Log-Concave Distributions: Alleviating Metastability by Denoising Diffusion | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Zipfian Whitening | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Zipper: Addressing Degeneracy in Algorithm-Agnostic Inference | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| e-COP : Episodic Constrained Optimization of Policies | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| eXponential FAmily Dynamical Systems (XFADS): Large-scale nonlinear Gaussian state-space modeling | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| einspace: Searching for Neural Architectures from Fundamental Operations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| fMRI predictors based on language models of increasing complexity recover brain left lateralization | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| iVideoGPT: Interactive VideoGPTs are Scalable World Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| pcaGAN: Improving Posterior-Sampling cGANs via Principal Component Regularization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| realSEUDO for real-time calcium imaging analysis | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| xLSTM: Extended Long Short-Term Memory | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| xMIL: Insightful Explanations for Multiple Instance Learning in Histopathology | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |