Monolith to Microservices: Representing Application Software through Heterogeneous Graph Neural Network

Authors: Alex Mathai, Sambaran Bandyopadhyay, Utkarsh Desai, Srikanth Tamilselvam

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental studies show that our approach is effective on monoliths of different types." And, from Section 3 (Experimental Evaluation): "To study the efficacy of our approach, we chose four publicly-available monoliths namely Daytrader, Plants by WebSphere (PBW), Acme-Air and Gen App."
Researcher Affiliation | Industry | Alex Mathai (IBM Research), Sambaran Bandyopadhyay (Amazon), Utkarsh Desai (IBM Research), Srikanth Tamilselvam (IBM Research); contact: {alexmathai98, samb.bandyo, utk.is.here, srikanthtamilselvam}@gmail.com
Pseudocode | Yes | The paper provides Algorithm 1 (CHGNN). A hedged sketch of this style of heterogeneous message passing appears after the table.
Open Source Code | No | The paper refers to an extended paper at https://arxiv.org/abs/2112.01317, but it does not provide a direct link to source code for the described methodology, nor does it state that code is included in supplementary materials.
Open Datasets | No | The paper states it uses "four publicly-available monoliths namely Daytrader, Plants by WebSphere (PBW), Acme-Air and Gen App", but it provides no links, DOIs, repositories, or formal citations with authors and year for accessing these datasets.
Dataset Splits | No | The paper mentions pre-training "the heterogeneous GNN encoder and decoder" but gives no training/validation/test split details (percentages, sample counts, or defined subsets) in the main text; such details are deferred to the extended paper.
Hardware Specification | No | The paper provides no details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions the "ADAM optimization technique" but provides no version numbers for any software dependencies, libraries, or solvers used in the experiments.
Experiment Setup | Yes | The paper states "we use 2 message passing layers (l = 1, 2) as encoders... and next 2 message passing layers (l = 3, 4; L = 4) as decoders", specifies that the loss weights "α1, α2, α3 and α4 are non-negative weights" set so that they always sum to one, and mentions "We use ADAM optimization technique". A sketch of this configuration appears after the table.
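
As a reading aid for the Pseudocode row above, the sketch below illustrates one plausible form of the heterogeneous message passing that Algorithm 1 (CHGNN) builds on. This is a minimal sketch, not the authors' implementation: the node types ("program", "resource"), the relation set, the dense adjacency representation, and the use of PyTorch are all assumptions introduced here.

```python
# Minimal, assumption-laden sketch of heterogeneous message passing in
# the spirit of Algorithm 1 (CHGNN). Node types, relations, and tensor
# shapes below are illustrative, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroLayer(nn.Module):
    """One message-passing layer with relation-specific projections."""

    def __init__(self, in_dims, relations, out_dim):
        # in_dims: {node_type: feature_dim}; relations: [(src_type, dst_type)]
        super().__init__()
        self.relations = relations
        # One linear projection per relation, plus a self-loop per node type.
        self.rel_proj = nn.ModuleDict(
            {f"{s}->{d}": nn.Linear(in_dims[s], out_dim) for s, d in relations}
        )
        self.self_proj = nn.ModuleDict(
            {t: nn.Linear(dim, out_dim) for t, dim in in_dims.items()}
        )

    def forward(self, x, adj):
        # x: {node_type: [n_t, d_t]} node features
        # adj: {(src, dst): [n_dst, n_src]} row-normalised adjacency per relation
        out = {t: self.self_proj[t](h) for t, h in x.items()}
        for s, d in self.relations:
            msg = self.rel_proj[f"{s}->{d}"](x[s])  # project neighbour features
            out[d] = out[d] + adj[(s, d)] @ msg     # aggregate into destination type
        return {t: F.relu(h) for t, h in out.items()}
```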
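Similarly, for the Experiment Setup row, the fragment below wires four such layers into the quoted 2-encoder/2-decoder configuration (L = 4), combines four loss terms with non-negative weights α1..α4 constrained to sum to one, and optimises with Adam. Only the layer counts, the weight constraint, and the optimiser come from the quoted text; the dimensions, the toy graph, and the placeholder loss terms are invented for illustration (the paper defers its exact loss definitions to the extended version).

```python
# Continues the HeteroLayer sketch above. Dimensions, the toy graph, and
# the placeholder loss terms are assumptions, not the paper's objectives.
in_dims = {"program": 16, "resource": 16}
relations = [("program", "resource"), ("resource", "program")]

enc1 = HeteroLayer(in_dims, relations, 32)                    # layer l = 1
enc2 = HeteroLayer({t: 32 for t in in_dims}, relations, 32)   # layer l = 2 (embeddings)
dec1 = HeteroLayer({t: 32 for t in in_dims}, relations, 32)   # layer l = 3
dec2 = HeteroLayer({t: 32 for t in in_dims}, relations, 16)   # layer l = 4 = L

params = [p for m in (enc1, enc2, dec1, dec2) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)   # "ADAM optimization technique"

# Non-negative loss weights constrained to sum to one, as quoted.
alphas = torch.tensor([0.4, 0.3, 0.2, 0.1])
assert (alphas >= 0).all() and torch.isclose(alphas.sum(), torch.tensor(1.0))

# A toy heterogeneous graph: 5 program nodes, 3 resource nodes.
x = {"program": torch.rand(5, 16), "resource": torch.rand(3, 16)}
adj = {("program", "resource"): torch.rand(3, 5),
       ("resource", "program"): torch.rand(5, 3)}

z = enc2(enc1(x, adj), adj)          # encoder output: node embeddings
recon = dec2(dec1(z, adj), adj)      # decoder output: reconstructed features

# Four placeholder loss terms standing in for the paper's objectives.
losses = [F.mse_loss(recon["program"], x["program"]),
          F.mse_loss(recon["resource"], x["resource"]),
          z["program"].norm(), z["resource"].norm()]
total = sum(a * l for a, l in zip(alphas, losses))

opt.zero_grad()
total.backward()
opt.step()
```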