How South Korea plans to best OpenAI, Google, others with homegrown AI | TechCrunch

"From tech giants to startups, South Korean players are developing large language models tailored to their own language and culture, ready to compete with global heavyweights like OpenAI and Google. Last month, the nation launched its most ambitious sovereign AI initiative to date, pledging ₩530 billion, (about $390 million), to five local companies building large-scale foundational models. The move underscores Seoul's desire to cut reliance on foreign AI technologies, hoping to strengthen national security and keep a tighter control over data in the AI era."
"Every six months, the government will review the first cohort's progress, cut underperformers, and continue funding the frontrunners until just two remain to lead the country's sovereign AI drive. Each player is bringing a different advantage to South Korea's AI race. TechCrunch spoke with several of the selected companies about how they plan to take on OpenAI, Google, Anthropic and the rest on their home turf. NC AI declined to comment."
"LG AI Research, the R&D unit of South Korean giant LG Group, offers Exaone 4.0, a hybrid reasoning AI model. The latest version blends broad language processing with the advanced reasoning features first introduced in the company's earlier Exaone Deep model. Exaone 4.0 (32B) already scores reasonably well against competitors on Artificial Analysis's Intelligence Index benchmark (as does Upstage's Solar Pro2). But it plans to improve and move up the ranks through its deep access to real-world industry data ranging from biotech to advanced materials and manufacturing."
South Korean tech giants and startups are building large language models tailored to the Korean language and culture to compete with global AI firms. The government pledged ₩530 billion (about $390 million) to five companies — LG AI Research, SK Telecom, Naver Cloud, NC AI, and startup Upstage — to develop large-scale foundational models. The program will review progress every six months, cutting underperformers and continuing support until two frontrunners remain. Each company emphasizes a different strength; LG's Exaone 4.0 hybrid reasoning model, for example, leans on refined, industry-specific data rather than sheer scale.
Read at TechCrunch