On ranking via sorting by estimated expected utility
Authors: Clément Calauzènes, Nicolas Usunier
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'By uniformly sampling q ∈ Q, we empirically estimated proportions of distributions q vs number of local minima for (q, ·), for different numbers of items n. The results are plotted in Fig. 2 (left).' ... 'To illustrate the claims of this section, we perform simulations using a non-convex surrogate loss defined by smoothing the task loss' (both steps are sketched in code after this table). |
| Researcher Affiliation | Industry | Clément Calauzènes, Criteo AI Lab, Paris, France (c.calauzenes@criteo.com); Nicolas Usunier, Facebook AI Research, Paris, France (usunier@fb.com) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information for open-source code. |
| Open Datasets | No | The paper describes using sampled theoretical distributions ('By uniformly sampling q ∈ Q, we empirically estimated proportions of distributions q vs number of local minima') for analysis rather than a publicly available dataset for training models. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages or counts) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | 'To illustrate the claims of this section, we perform simulations using a non-convex surrogate loss defined by smoothing the task loss... The distributions q are uniformly sampled over Q, rejecting the distributions q where (q, ·) does not have any local minima... Fig. 3 (right) shows the proportions of runs on these distributions that end up stuck in a bad local valley when using an initialization close to 0 (which empirically was best to avoid bad local valleys).' (A hedged reconstruction of this setup is sketched below.) |
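
The sampling step quoted above ('uniformly sampling q ∈ Q') has a standard implementation: a Dirichlet draw with all concentration parameters equal to 1 is the uniform distribution on the probability simplex. A minimal sketch; the dimension `k` is a placeholder, since the actual space Q in the paper depends on the ranking setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_q_uniform(k):
    """Uniform sample from the (k-1)-dimensional probability simplex:
    Dirichlet with all concentration parameters equal to 1."""
    return rng.dirichlet(np.ones(k))

q = sample_q_uniform(5)
print(q, q.sum())  # non-negative entries summing to 1
```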
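
The simulation protocol itself (smoothing the task loss into a non-convex surrogate, running gradient descent from several initializations per sampled q, counting distinct local minima, and checking whether an initialization close to 0 ends in a bad local valley) can be approximated as below. This is a hedged sketch, not the paper's method: the softmax-smoothed top-1 loss, the finite-difference optimizer, the restart count, and the loss-value clustering used to count minima are all stand-in assumptions for details the report does not extract.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(s, q, R, temp=0.1):
    """Assumed smoothed surrogate: expected top-1 task loss with the argmax
    over scores s replaced by a softmax at temperature temp (non-convex in s).
    The paper smooths its task loss differently; this is a stand-in."""
    z = s / temp
    p = np.exp(z - z.max())
    p /= p.sum()
    return 1.0 - p @ (q @ R)  # q @ R = expected relevance of each item

def grad_descent(q, R, s0, lr=0.5, steps=1500, eps=1e-6):
    """Finite-difference gradient descent from s0 (the optimizer used in the
    paper is not specified; this choice is an assumption)."""
    s, n = s0.copy(), s0.size
    for _ in range(steps):
        g = np.zeros(n)
        for i in range(n):
            e = np.zeros(n)
            e[i] = eps
            g[i] = (surrogate(s + e, q, R) - surrogate(s - e, q, R)) / (2 * eps)
        s -= lr * g
    return s

n_items, k, n_restarts = 4, 5, 50
R = rng.integers(0, 2, size=(k, n_items)).astype(float)  # k relevance outcomes
q = rng.dirichlet(np.ones(k))                            # uniform q on the simplex

# Approximate the number of local minima by clustering final loss values
# over many random restarts (a crude proxy for the paper's count).
finals = []
for _ in range(n_restarts):
    s_end = grad_descent(q, R, rng.normal(scale=2.0, size=n_items))
    finals.append(round(surrogate(s_end, q, R), 3))
best = min(finals)
print("distinct local-minimum values found:", sorted(set(finals)))

# Check whether an initialization close to 0 gets stuck in a bad local valley,
# i.e. converges to a strictly worse loss than the best restart.
s_zero = grad_descent(q, R, rng.normal(scale=1e-3, size=n_items))
print("near-zero init stuck in bad valley:",
      round(surrogate(s_zero, q, R), 3) > best + 1e-3)
```

Counting distinct rounded loss values is only a rough proxy for counting local minima; the paper's Fig. 2 and Fig. 3 statistics aggregate such counts over many sampled distributions q.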