Latent Low-Rank Transfer Subspace Learning for Missing Modality Recognition
Authors: Zhengming Ding, Ming Shao, Yun Fu
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results of multi-modalities knowledge transfer with missing target data demonstrate that our method can successfully inherit knowledge from the auxiliary database to complete the target domain, and therefore enhance the performance when recognizing data from the modality without any training data." From the Experiments section: "We first introduce the databases and experimental settings, then test the proposed algorithm on convergence property. Ultimately, comparisons of several transfer learning algorithms on two groups of multimodal databases are presented." |
| Researcher Affiliation | Academia | Zhengming Ding1, Ming Shao1 and Yun Fu1,2 Department of Electrical & Computer Engineering1, College of Computer & Information Science2, Northeastern University, Boston, MA, USA {allanzmding, shaoming533}@gmail.com, yunfu@ece.neu.edu |
| Pseudocode | Yes | Algorithm 1 (Solving Problem (9) by ALM). Input: X = [X_S, X_T], λ, ϕ, ψ, W, U. Initialize: P (via Eq. (6)), Z = J = 0, L = K = 0, D = I, E = 0, Y_1 = Y_2 = Y_3 = Y_4 = 0, µ = 10^-6. While not converged: (1) fix the others and update J = argmin_J (1/µ)‖J‖_* + (1/2)‖J − (Z + Y_1/µ)‖_F^2; (2) fix the others and update K = argmin_K (1/µ)‖K‖_* + (1/2)‖K − (L + Y_3/µ)‖_F^2; (3) fix the others and update Z = (I + X_S^T P P^T X_S)^{-1}(X_S^T P P^T X_T − X_S^T P E − X_S^T P L P^T X_T + J + (X_S^T P Y_2 − Y_1)/µ); (4) fix the others and update L = (P^T X_T X_T^T P − P^T X_S Z X_T^T P − E X_T^T P + K + (Y_2 X_T^T P − Y_3)/µ)(I + P^T X_T X_T^T P)^{-1}; (5) fix the others and update P by solving (2ψW + ϕXX^T)P + 2UPY_4 = (X_T Z − X_S)Y_2^T + X_S Y_2^T L + ϕ X S^T D^T, then P ← orthogonal(P); (6) with P fixed, update D and S via D(P, D, S); (7) fix the others and update E = argmin_E (λ/µ)‖E‖_{2,1} + (1/2)‖L(P, Z, L, E) − Y_2/µ‖_F^2; (8) update Y_1, Y_2, Y_3, Y_4, µ; (9) check the convergence conditions. Output: Z, L, E, J, K, P, D, S. |
| Open Source Code | No | No explicit statement or link providing concrete access to the source code for the methodology described in this paper was found. |
| Open Datasets | Yes | Experiments are on two sets of multimodal databases: (1) BUAA (Di, Jia, and Yunhong 2012) and OULU VIS-NIR face databases¹; (2) CMU-PIE² and Yale B face databases³. ... ¹http://www.ee.oulu.fi/~gyzhao/ ²http://vasc.ri.cmu.edu/idb/html/face/ ³http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html |
| Dataset Splits | No | No specific details regarding a validation set or explicit training/validation/test splits (e.g., percentages, sample counts, or defined validation procedures) were found, only mentions of training and testing data. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | The paper discusses algorithmic parameters like λ, ϕ, ψ and optimal dimensions, but does not provide specific hyperparameter values or detailed training configurations (e.g., learning rate, batch size, optimizer settings) for reproducibility. |
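Steps 1, 2, and 7 of the extracted Algorithm 1 reduce to standard proximal operators: singular-value thresholding for the nuclear-norm subproblems and column-wise shrinkage for the ℓ2,1-norm subproblem. A minimal NumPy sketch of these two operators, as a reading aid only; the function names (`svt`, `prox_l21`) are mine, not the paper's:

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding:
    argmin_J tau*||J||_* + 0.5*||J - M||_F^2."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_l21(Q, tau):
    """Column-wise shrinkage:
    argmin_E tau*||E||_{2,1} + 0.5*||E - Q||_F^2."""
    norms = np.linalg.norm(Q, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return Q * scale

# In Algorithm 1's notation (assumed correspondence):
#   step 1: J = svt(Z + Y1/mu, 1.0/mu)
#   step 2: K = svt(L + Y3/mu, 1.0/mu)
#   step 7: E = prox_l21(residual_of_L(P, Z, L, E) + Y2/mu, lam/mu)
```

Both operators shrink toward zero: `svt` thresholds singular values, zeroing small ones (which is what drives J and K low-rank), while `prox_l21` zeroes whole columns whose norm falls below the threshold (which is what makes E column-sparse).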