Seoul, South Korea

LG AI Research Technographics
Discover the latest software purchases and digital transformation initiatives undertaken by LG AI Research and its business and technology executives. Each quarter our research team identifies the on-prem and cloud applications in use by LG AI Research's 120 employees, drawing on public sources (press releases, customer references, testimonials, case studies, and success stories) as well as proprietary sources.
During our research, we identified that LG AI Research purchased the following application: Amazon FSx for Lustre, for Cloud Storage, in 2022. We also identify the related IT decision-makers and key stakeholders.
Our database provides customer insight and contextual information on which enterprise applications and software systems LG AI Research is running, and on its propensity to invest further and deepen its relationship with Amazon Web Services (AWS), or to identify new suppliers, as part of its overall digital and IT transformation projects to stay competitive, fend off threats from disruptive forces, or comply with internal mandates to improve overall enterprise efficiency.
We have been analyzing LG AI Research's revenues, which grew to $23.0 million in 2024, along with its IT budget and roadmap and its cloud software purchases, aggregating massive amounts of data points that form the basis of our forecast assumptions about LG AI Research's intention to invest in emerging technologies such as AI, Machine Learning, IoT, Blockchain, and Autonomous Database, or in cloud-based ERP, HCM, CRM, EPM, Procurement, or Treasury applications.
IaaS

| Vendor | Previous System | Application | Category | Market | VAR/SI | When | Live |
|---|---|---|---|---|---|---|---|
| Amazon Web Services (AWS) | Legacy | Amazon FSx for Lustre | Cloud Storage | IaaS | n/a | 2022 | 2023 |

Insight
In 2022, LG AI Research deployed Amazon FSx for Lustre as Cloud Storage to support large-scale foundation model training and research. The deployment paired Amazon FSx for Lustre with Amazon SageMaker to train EXAONE, a 300-billion-parameter multimodal foundation model, as part of LG AI Research's work in South Korea.
Amazon FSx for Lustre was provisioned as a high-performance parallel file system, configured to serve training data and data-preparation workflows for distributed SageMaker training instances. The implementation emphasized parallel I/O throughput and the shared-storage semantics typical of Cloud Storage for machine learning, enabling faster data staging and checkpointing across GPU clusters.
Integration was implemented directly with Amazon SageMaker: training jobs consumed data from Amazon FSx for Lustre mounts, and data engineering teams at LG AI Research operated the pipelines. The operational scope covered research and engineering functions focused on large-scale AI model training, with infrastructure hosted on AWS in the South Korea region.
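As a hedged illustration of this kind of integration, a SageMaker training job can mount an FSx for Lustre file system as an input channel through the `FileSystemDataSource` field of the `CreateTrainingJob` request. The sketch below shows the general shape of such a channel; the file system ID, directory path, and channel name are placeholders, not values from LG AI Research's deployment:

```json
{
  "InputDataConfig": [
    {
      "ChannelName": "training",
      "DataSource": {
        "FileSystemDataSource": {
          "FileSystemId": "fs-0123456789abcdef0",
          "FileSystemType": "FSxLustre",
          "DirectoryPath": "/fsx/training-data",
          "FileSystemAccessMode": "ro"
        }
      },
      "InputMode": "File"
    }
  ]
}
```

Mounting the file system this way lets distributed training instances read shared data directly over the Lustre client rather than staging copies from object storage, which is consistent with the faster data staging described above.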
Governance and rollout followed a research-to-production path, with EXAONE developed and deployed to production within one year. The case study reports a reduction in model development costs of about 35 percent and an increase in data-preparation speed of about 60 percent, outcomes attributed to the Amazon FSx for Lustre Cloud Storage deployment integrated with SageMaker.
| First Name | Last Name | Title | Function | Department | Phone |
|---|---|---|---|---|---|
| No data found | | | | | |
| Date | Company | Status | Vendor | Product | Category | Market |
|---|---|---|---|---|---|---|
| No data found | | | | | | |