The MEG Vision X AI is MSI's flagship gaming desktop, combining a 13-inch touchscreen, AI-assisted temperature control, Microsoft Copilot voice commands, and high-end Intel and NVIDIA hardware.
The centerpiece is that 13-inch touchscreen, which MSI calls the "AI HMI." It is deeply integrated with AI-powered features such as Copilot voice control and generative tools like MSI AI Artist.
Leveraging AI-driven thermal management, the system adjusts fan speeds to balance cooling efficiency against noise. The screen also doubles as a secondary monitor, adding welcome flexibility. With current Intel processors, integrated Neural Processing Units (NPUs), and top-tier NVIDIA graphics, the Vision X AI aims to set a new benchmark for AI-ready desktops.
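To make the cooling-versus-noise trade-off concrete, here is a minimal sketch of temperature-aware fan control in Python. It is not MSI's algorithm (which is proprietary); the fan curve, smoothing factor, and deadband are invented for illustration: smooth the sensor signal, map it onto a fan curve, and use a deadband so the fan does not audibly hunt.

```python
# Illustrative sketch of temperature-aware fan control (not MSI's actual
# algorithm): smooth the sensor reading, then map the smoothed temperature
# onto a fan curve, with hysteresis to avoid audible oscillation.

FAN_CURVE = [(40, 20), (55, 35), (70, 60), (85, 100)]  # (deg C, duty %)

def duty_for(temp_c: float) -> float:
    """Linear interpolation over the fan curve."""
    if temp_c <= FAN_CURVE[0][0]:
        return FAN_CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return FAN_CURVE[-1][1]

class FanController:
    def __init__(self, alpha: float = 0.2, deadband: float = 3.0):
        self.alpha = alpha        # smoothing factor for the sensor EMA
        self.deadband = deadband  # minimum duty change worth acting on (%)
        self.ema = None
        self.duty = 0.0

    def update(self, temp_c: float) -> float:
        # Exponential moving average suppresses sensor jitter.
        self.ema = temp_c if self.ema is None else (
            self.alpha * temp_c + (1 - self.alpha) * self.ema)
        target = duty_for(self.ema)
        # Hysteresis: only move the fan when the change is noticeable,
        # trading a little cooling headroom for quieter behavior.
        if abs(target - self.duty) >= self.deadband:
            self.duty = target
        return self.duty

ctrl = FanController()
for t in (42, 48, 63, 71, 69, 70):
    print(f"{t} degC -> fan {ctrl.update(t):.0f}%")
```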
IL1A – AI-based olfactory digital sniffer dog system
IL1A is a system that identifies specific scents
IL1A is a sophisticated device capable of detecting a wide range of odors. By sampling the air around a person, it digitizes olfactory data using multichannel gas-sensor arrays. Integrated AI then compares the resulting signal against extensive reference databases and infers what the device is smelling.
Notably, IL1A can identify the specific scents humans emit during illness, which vary by condition, as well as environmental gases and medication-related aromas.
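How might that database-matching step work? Below is a purely illustrative Python sketch: a multichannel reading is treated as a feature vector and matched against reference signatures by nearest centroid. The channel count, scent classes, and numbers are all invented; IL1A's actual sensors and models are not public.

```python
# Hedged sketch of the matching step: a multichannel gas-sensor reading is
# compared against a reference "scent database" by nearest centroid. All
# values below are invented for illustration.
import numpy as np

# Hypothetical reference database: mean sensor response per known scent.
SCENT_DB = {
    "healthy_baseline": np.array([0.12, 0.30, 0.05, 0.22]),
    "acetone_elevated": np.array([0.55, 0.28, 0.07, 0.20]),
    "ammonia_elevated": np.array([0.10, 0.33, 0.48, 0.25]),
}

def identify(sample: np.ndarray) -> tuple[str, float]:
    """Return the closest database entry and its Euclidean distance."""
    best = min(SCENT_DB.items(), key=lambda kv: np.linalg.norm(sample - kv[1]))
    return best[0], float(np.linalg.norm(sample - best[1]))

reading = np.array([0.50, 0.27, 0.08, 0.21])  # one normalized 4-channel sample
label, dist = identify(reading)
print(f"closest match: {label} (distance {dist:.3f})")
```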
Ballie
In 2025, Ballie advances as a versatile autonomous companion robot, using enhanced AI for personalized assistance with daily activities, entertainment, and health monitoring.
Ballie is an autonomous mobile domestic robot designed for companionship, health monitoring, and entertainment. It offers voice interaction and a video projector for displaying multimedia content, and it integrates with smart home appliances so users can operate them through the robot at their convenience.
Initially unveiled in 2020, the robot has since benefited from rapid advances in artificial intelligence, prompting Samsung to introduce an upgraded version. Enhanced with new Vision AI functionality, the updated model promises stronger performance and versatility, reinforcing its position as a reliable assistant in modern households.
The 2025 iteration takes another leap in intelligence thanks to deeper AI integration, solidifying Ballie's role as an everyday assistant amid hectic daily routines. Beyond the companionship, health-monitoring, and entertainment functions described above, its interactive repertoire combines voice communication with visual projection and high-fidelity audio from the built-in projector and speakers.
Furthermore, it uses voice analysis, facial recognition, and conversational learning algorithms to adapt dynamically to individual preferences, executing tasks tailored to each user.
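As a toy illustration of that adaptation loop (not Samsung's implementation; the profiles, fields, and feedback rule are invented), consider a profile store keyed by the recognized user and nudged by feedback:

```python
# Toy sketch of preference-keyed behavior: once a user is recognized (by
# face or voice), load that user's profile, act on it, and adjust it from
# feedback. Everything here is hypothetical.

PROFILES = {
    "dana": {"wake_routine": "news_briefing", "projector_brightness": 70},
    "sam":  {"wake_routine": "workout_video", "projector_brightness": 90},
}

def run_wake_routine(user_id: str) -> str:
    profile = PROFILES.get(user_id, {"wake_routine": "default_greeting",
                                     "projector_brightness": 50})
    return (f"projecting '{profile['wake_routine']}' at "
            f"{profile['projector_brightness']}% brightness for {user_id}")

def record_feedback(user_id: str, liked: bool) -> None:
    # Crude "conversational learning": dim the projector on negative feedback.
    if user_id in PROFILES and not liked:
        PROFILES[user_id]["projector_brightness"] = max(
            30, PROFILES[user_id]["projector_brightness"] - 10)

print(run_wake_routine("dana"))
record_feedback("dana", liked=False)
print(run_wake_routine("dana"))
```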
Following up on my previous overview, I want to delve into the specific breakthroughs that truly arrested my attention during the show. These are the AI-centric solutions that, in my view, represent the pinnacle of innovation this year.
The “Best of Innovation” Laureates
AI in Education: Woongjin Thinkbig unveiled Booxtory, an AI reading platform that captivated me. It analyzes a book's nuances in real time, seamlessly transforming static text into immersive audio or dynamic reading formats.
Cybersecurity Reimagined: SK Telecom introduced ScamVanguard, an AI-driven shield against mobile financial fraud. By combining AI with advanced cybersecurity protocols, it identifies and neutralizes rapidly evolving global scams with impressive speed.
Embedded Intelligence: I was particularly intrigued by Suprema AI’s Q-Vision Pro. This on-device module leverages facial recognition and behavioral analytics to anticipate and thwart financial fraud at autonomous terminals like ATMs, flagging suspicious conduct before a crime even occurs.
Robotics & Human Augmentation: The Hypershell Carbon-X is a marvel of ergonomics. This all-terrain exoskeleton uses its M-One motor system to deliver 800W of assistive power. What impressed me most was the AI MotionEngine algorithm: it detects your gait and switches between 10 assistance modes in real time, making strenuous physical exertion feel almost effortless (a toy sketch of gait-driven mode switching follows this list).
Health and Human Security: Poskom's AirRay-Mini is a masterclass in portable diagnostics. By integrating AI into a handheld X-ray system, it produces clinical-grade imagery with significantly reduced radiation doses, a vital step in minimizing patient exposure.
The Future of Visuals: Samsung didn't disappoint with The Freestyle AI+. This portable GenAI-enabled projector is incredibly versatile. With features like AI 3D Keystone and Object Avoidance, it dynamically recalibrates the image to fit anything from curved walls to surfaces cluttered with plants or artwork.
The Freestyle AI+ is a portable GenAI-enabled projector
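Since Hypershell has not published the MotionEngine internals, here is only a toy Python sketch of the general idea promised above: derive cadence from recent step intervals, then pick the nearest assistance-mode prototype. The modes, features, and numbers are invented.

```python
# Toy gait-driven mode selection (not Hypershell's algorithm): classify
# recent stride features by nearest prototype, then the exoskeleton would
# blend assistance accordingly. All prototypes below are invented.
import statistics

MODES = {  # hypothetical (cadence steps/min, incline deg) per mode
    "walk":  (100, 0.0),
    "climb": (80, 12.0),
    "run":   (160, 0.0),
}

def pick_mode(cadence: float, incline_deg: float) -> str:
    """Nearest-prototype match on roughly normalized gait features."""
    def dist(proto):
        c, i = proto
        return ((cadence - c) / 60) ** 2 + ((incline_deg - i) / 10) ** 2
    return min(MODES, key=lambda m: dist(MODES[m]))

# A sliding window of step intervals (seconds) stands in for IMU processing.
step_intervals = [0.42, 0.40, 0.41, 0.39]
cadence = 60 / statistics.mean(step_intervals)  # steps per minute
print(pick_mode(cadence, incline_deg=1.0))      # -> "run"
```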
Honoring the Visionaries
The Innovation Award Honorees showcased AI’s versatility across disparate sectors:
Creative Toolkits: Onoma AI debuted a formidable creative suite. It includes Fabulator (story ideation), Artifex (text-to-image), Anima (full-color illustration), and a collaborative Marketplace.
Synthetic Data: GenGenStudio by GenGenAI is solving a massive bottleneck by generating high-fidelity synthetic images and video for model training. Their current focus on the automotive sector, simulating rare "black swan" events like animal crossings or freak weather, is a game-changer for autonomous safety (see the rebalancing sketch after this list).
GenGenStudio by GenGenAI
Experiential AI: L'Oreal's Mood Mirror takes AR further by incorporating Emotion AI. It doesn't just show you how a product looks; it gauges your subconscious emotional reaction to the aesthetic.
Offline Ad-Tech: Triplet's Deep Lounge AD is a sophisticated CMS that brings digital precision to physical retail. By using AI cameras to analyze foot traffic, dwell time, and behavior, like browsing or using fitting rooms, it serves hyper-personalized ads on digital displays in real time.
The camera measures the target audience's attention, delivering advertising effectiveness comparable to online platforms
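Why does synthetic rare-event data matter? A small, self-contained sketch (unrelated to GenGenAI's actual tooling; the labels and counts are invented) shows the rebalancing effect: without synthetic samples, the rare class is a rounding error in the training mix.

```python
# Sketch of rebalancing a training set with synthetic "black swan" samples,
# so the rare event is no longer starved of training signal. All data here
# is fabricated for illustration.
import random

real = [("highway_clear", None)] * 9_800 + [("animal_crossing", None)] * 200
synthetic_rare = [("animal_crossing", "synthetic")] * 1_800

def build_training_set(real, synthetic, target_rare_frac=0.2):
    rare = [s for s in real if s[0] == "animal_crossing"] + synthetic
    common = [s for s in real if s[0] != "animal_crossing"]
    # How many rare samples would hit the target fraction, capped by supply.
    n_rare = int(len(common) * target_rare_frac / (1 - target_rare_frac))
    return common + random.sample(rare, min(n_rare, len(rare)))

train = build_training_set(real, synthetic_rare)
frac = sum(1 for s in train if s[0] == "animal_crossing") / len(train)
print(f"{len(train)} samples, rare-event fraction {frac:.2f}")
```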
The Big Tech Showdown: NVIDIA, Samsung, and LG
NVIDIA: The "ChatGPT Moment" for Robotics
Attending the NVIDIA keynote was a highlight. CEO Jensen Huang dropped a provocative prediction: the "ChatGPT moment" for robotics is arriving sooner than anticipated.
Project DIGITS: A powerhouse AI supercomputer in a form factor small enough for your home office.
Cosmos: Their new “world model” platform. It’s a physical-world simulator with Text2World and Video2World modes designed to train the next generation of robots.
GeForce RTX 50 Series: The RTX 5090 is a behemoth, boasting 4,000 AI TOPS and a staggering 1.8 TB/s bandwidth.
A powerhouse AI supercomputer by NVIDIA: Project DIGITS
Samsung: The Ambient Intelligence Home
Samsung's presence was dominated by "SmartThings" integration. I saw their MICRO LED Beauty Mirror, which provides a dermatological analysis in 30 seconds, and the AI Vision Inside 2.0 fridge that proactively manages your groceries. Their HoloDisplay Floating Screen was a crowd favorite: a distortion-free 3D projection that looks like something out of a sci-fi film, yet functions as a practical hub for home monitoring.
Samsung HoloDisplay
LG: Innovation for Every Family Member
LG took a heartwarming yet high-tech turn with the Pet Care Zone, a smart shelter that monitors your pet's vitals (heart rate, temperature, weight) and connects you to tele-vet services. On the display front, their LG 83G5 Premium OLED remains the gold standard. The new α11 AI processor is reportedly four times faster than the α9, optimizing content with startling precision.
LG 83G5 Premium OLED
This second look at CES 2025 confirms one thing: we have moved past the era of “AI as a gimmick.” We are now witnessing the era of AI as infrastructure.
I’m thrilled to kick off a new series: a personal deep dive into the global tech circuit. Having spent the last few years navigating these halls, I’ll be sharing my firsthand insights into the trends and tech currently reshaping the AI landscape. To get things started, I’m sharing my personal highlights and takeaways from the ground in Las Vegas.
On the Ground at CES 2025
Walking through the International Consumer Electronics Show (CES) this year, it was clear that the event remains the undisputed North Star for the industry. Organized by the Consumer Technology Association (CTA), it's where I go to see the theoretical become tangible.
As I moved between the 12 different venues, the sheer scale was palpable—over 230,000 square meters of innovation. This year felt noticeably more crowded than my last visit; you could really feel the energy of those extra 10,000 attendees. While the themes spanned everything from quantum computing to the energy transition, I spent most of my time focusing on where these tracks intersected with Artificial Intelligence.
The 2025 Experience by the Numbers:
Joining 141,000 fellow enthusiasts and industry leaders.
Navigating booths from 4,500 exhibiting companies.
Rubbing shoulders with 6,000 global media representatives.
Seeing heavy-hitters from 60% of the Fortune 500 in action.
What struck me most was how generative algorithms have moved from the “experimental” phase I saw last year into almost every gadget in sight—from the laptops and smartphones I tested to the latest TWS earbuds. There was also a much more somber and urgent focus on climate change adaptation compared to the general “smart home” buzz of 2024.
The Innovations That Caught My Eye
Artificial Intelligence as Infrastructure: What I observed this year wasn't just "AI for the sake of AI." It has become foundational. I spent some time at the L'Oreal booth testing the Mood Mirror, and it's a brilliant example of how AI is becoming a personalized service. Similarly, seeing Triplet's Deep Lounge AD in action showed me how hyper-targeted advertising is evolving from a nuisance into a high-tech utility. AI is no longer a bolt-on feature; it's the engine.
L’Oreal Mood Mirror with AI
A New Era of Digital Health: The health tech section felt less like a clinic and more like a lifestyle hub. I was particularly impressed by Eli Health's Hormometer. The idea of tracking salivary hormones via AI to provide real-time wellness data is a game-changer for personalized medicine.
Hormometer for tracking salivary hormones via AI
Mobility Reimagined: I saw autonomy moving far beyond passenger cars. The floor was packed with autonomous planes, boats, and even heavy machinery. One highlight for me was Sierra BASE's SIRIUS system; watching a fully autonomous robot generate high-fidelity 3D spatial maps in real time was a glimpse into the future of construction (a toy mapping sketch follows the caption below).
An inspection system that autonomously generates three-dimensional digital spatial maps and independently diagnoses the safety of various structures
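For intuition about one building block of such mapping (this is not Sierra BASE's pipeline; the scan data is synthetic), here is a voxelization sketch: raw 3D points are bucketed into coarse cells, the kind of structure a mapping system refines into a spatial model.

```python
# Minimal voxelization sketch: bucket raw scanner points into a coarse 3D
# grid; the set of occupied voxels is a crude spatial map. The "scan" is
# randomly generated stand-in data.
import numpy as np

def voxelize(points: np.ndarray, cell: float = 0.5) -> set[tuple[int, int, int]]:
    """Map each (x, y, z) point to its voxel index; occupied voxels form the map."""
    idx = np.floor(points / cell).astype(int)
    return {tuple(v) for v in idx}

rng = np.random.default_rng(0)
# Fake scan: a flat floor patch plus a vertical column (stand-in geometry).
floor = np.c_[rng.uniform(0, 5, 400), rng.uniform(0, 5, 400), rng.normal(0, 0.02, 400)]
column = np.c_[rng.normal(2.5, 0.05, 200), rng.normal(2.5, 0.05, 200), rng.uniform(0, 3, 200)]
occupied = voxelize(np.vstack([floor, column]))
print(f"{len(occupied)} occupied voxels from 600 points")
```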
Tangible Sustainability: It was refreshing to see sustainability move beyond marketing jargon into actual materials. I got a close look at HP's Z Captis, a sleek, portable system for digital material capture, essential for the eco-conscious design workflows of tomorrow. It uses a 3D imaging technique called photometric stereo to capture virtually any material and render it digitally. Powered by an embedded NVIDIA Jetson AGX Xavier system-on-module and HP's Capture Management SDK, it lets 3D creators sample and remix reality through direct integration with Adobe Substance 3D Sampler, with applications across architecture, automotive, entertainment, fashion, and gaming (a toy version of the underlying math follows the caption below).
HP bills the Z Captis as a pioneering solution for capturing digital materials, seamlessly integrated with Adobe Substance 3D
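The classic math behind photometric stereo is compact enough to sketch. Assuming a Lambertian surface and known light directions (a simplification; HP's actual capture stack is proprietary), each pixel's intensities across the captures yield its surface normal by least squares:

```python
# Classic photometric stereo on synthetic data: with several images of the
# same surface lit from known directions, solve L @ g = I per pixel for
# g = albedo * normal, then normalize. A simplified textbook version.
import numpy as np

# Known lighting directions (approximately unit vectors), one per image.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714],
              [-0.7, 0.0, 0.714]])

def normals_from_images(images: np.ndarray) -> np.ndarray:
    """images: (n_lights, H, W) Lambertian intensities -> (H, W, 3) unit normals."""
    n, h, w = images.shape
    I = images.reshape(n, -1)                  # (n_lights, H*W)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)  # G = albedo * normal, per pixel
    G = G.T.reshape(h, w, 3)
    norm = np.linalg.norm(G, axis=2, keepdims=True)
    return G / np.clip(norm, 1e-8, None)

# Synthetic test: a flat surface tilted slightly toward +x.
true_n = np.array([0.2, 0.0, 0.98])
true_n /= np.linalg.norm(true_n)
imgs = (L @ true_n).reshape(-1, 1, 1) * np.ones((1, 4, 4))
print(normals_from_images(imgs)[0, 0])  # ~ [0.2, 0.0, 0.98]
```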
The Energy of Eureka Park: I spent a good portion of my afternoon in Eureka Park, weaving through the 1,400 startups. There’s an undeniable grit there; seeing founders from 39 different countries pitching the next generation of electric mobility and green tech is always the most inspiring part of my trip.
This is just my initial overview of the atmosphere and the broad strokes of the event. In my next post, I’ll be drilling down into the specific AI solutions that I believe will truly move the needle this year. Stay tuned!
[Figure] Taxonomy of machine learning in intelligent robotic process automation. Legend: MC = meta-characteristics, M = mentions, # = total, P = practitioner reports, C = conceptions, F = frameworks
Recent developments in process automation have revolutionized business operations, with Robotic Process Automation (RPA) becoming essential for managing repetitive, rule-based tasks. However, traditional RPA is limited to deterministic processes and lacks the flexibility to handle unstructured data or adapt to changing scenarios. The integration of Machine Learning (ML) into RPA—termed intelligent RPA—represents an evolution towards more dynamic and comprehensive automation solutions. This article presents a structured taxonomy to clarify the multifaceted integration of ML with RPA, benefiting both researchers and practitioners.
RPA and Its Limitations
RPA refers to the automation of business processes using software robots that emulate user actions through graphical user interfaces. While suited for automating structured, rule-based tasks (like “swivel-chair” processes where users copy data between systems), traditional RPAs have intrinsic limits:
They depend on structured data.
They cannot handle unanticipated exceptions or unstructured inputs.
They operate using symbolic, rule-based approaches that lack adaptability.
Despite these challenges, RPA remains valuable due to its non-intrusive nature and quick implementation, as it works “outside-in” without altering existing system architectures.
Machine Learning: Capabilities and Relevance
Machine Learning enables systems to autonomously generate actionable knowledge from data, surpassing expert systems that require manual encoding of rules. ML includes supervised, unsupervised, and reinforcement learning, with distinctions between shallow and deep architectures. In intelligent RPA, ML brings capabilities including data analysis, natural language understanding, and pattern recognition, allowing RPAs to handle tasks previously exclusive to humans.
Existing Literature and Conceptual Gaps
Diverse frameworks explore RPA-ML integration, yet many only address specific facets without offering a comprehensive categorization. Competing industry definitions further complicate the field, as terms like “intelligent RPA” and “cognitive automation” are inconsistently used. Recognizing a need for a clear and encompassing taxonomy, this article synthesizes research to create a systematic classification.
Methodology
An integrative literature review was conducted across leading databases (e.g., AIS eLibrary, IEEE Xplore, ACM Digital Library). The research encompassed both conceptual frameworks and practical applications, ultimately analyzing 45 relevant publications. The taxonomy development followed the method proposed by Nickerson et al., emphasizing meta-characteristics of integration (structural aspects) and interaction (use of ML within RPA).
The Taxonomy: Dimensions and Characteristics
The proposed taxonomy is structured around two meta-characteristics—RPA-ML integration and interaction—comprising eight dimensions. Each dimension is further broken down into specific, observable characteristics.
RPA-ML Integration
1. Architecture and Ecosystem
External integration: Users independently develop and integrate ML models via APIs, which requires advanced programming skills (see the sketch after this list).
Integration platform: RPA evolves into a platform embracing third-party or open-source ML modules, increasing flexibility.
Out-of-the-box (OOTB): ML capabilities are embedded within or addable to RPA software, dictated by the vendor’s offering.
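A minimal sketch of the external-integration pattern, using only the Python standard library: the robot extracts text, posts it to a separately hosted model, and branches on the returned label. The endpoint URL and response schema are hypothetical; only the loose coupling via an API is the point.

```python
# Sketch of "external integration": an RPA step calls a separately hosted
# ML model over HTTP. The URL and JSON schema below are hypothetical.
import json
import urllib.request

SCORING_URL = "https://ml.example.internal/classify"  # hypothetical endpoint

def classify_document(text: str) -> str:
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(SCORING_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["label"]  # e.g. "invoice", "complaint", ...

# The RPA robot would branch on the returned label, e.g.:
# label = classify_document(extracted_text)
# robot.route(label)   # hypothetical robot API
```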
2. ML Capabilities in RPA
Computer Vision: Skills like Optical Character Recognition (OCR) for document processing (a minimal example follows this list).
Data Analytics: Classification and pattern recognition, especially for pre-processing data.
Natural Language Processing (NLP): Extraction of meaning from human language, including conversational agents for user interaction.
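As a concrete example of the computer-vision capability above, the sketch below wires OCR into a toy document-routing step. It uses the real pytesseract wrapper (which requires a local Tesseract install); the invoice regex, file path, and routing thresholds are invented.

```python
# Illustrative computer-vision step in an intelligent-RPA flow: OCR an
# invoice image, then apply a simple rule to route it. The regex and
# thresholds are invented for illustration.
import re
from PIL import Image
import pytesseract

def route_invoice(image_path: str) -> str:
    text = pytesseract.image_to_string(Image.open(image_path))
    match = re.search(r"total[:\s]*\$?([\d,]+\.\d{2})", text, re.IGNORECASE)
    if not match:
        return "manual_review"  # unstructured input the rules can't parse
    total = float(match.group(1).replace(",", ""))
    return "auto_approve" if total < 500 else "approval_queue"

print(route_invoice("invoice_scan.png"))  # placeholder path; supply a real scan
```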
3. Data Basis
Structured Data: Well-organized datasets such as spreadsheets.
Unstructured Data: Documents, emails, audio, and video files—most business data falls into this category.
UI Logs: Learning from user interaction logs to automate process discovery or robot improvement.
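A minimal sketch of the UI-log idea: mine frequent action sequences from an interaction log; sequences that recur verbatim are natural candidates for rule-based automation. The log format and events below are invented.

```python
# Toy process discovery from UI logs: count action trigrams; the most
# frequent recurring sequences suggest automation candidates.
from collections import Counter

ui_log = [
    "open_crm", "copy_name", "open_erp", "paste_name", "save",
    "open_crm", "copy_name", "open_erp", "paste_name", "save",
    "open_mail", "open_crm", "copy_name", "open_erp", "paste_name", "save",
]

def frequent_sequences(log, n=3, top=3):
    grams = Counter(tuple(log[i:i + n]) for i in range(len(log) - n + 1))
    return grams.most_common(top)

for seq, count in frequent_sequences(ui_log):
    print(count, "x", " -> ".join(seq))
```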
4. Intelligence Level
Symbolic: Traditional, rule-based RPA with little adaptability.
Intelligent: RPA incorporates specific ML capabilities, handling tasks like natural language processing or unstructured data analysis.
Hyperautomation: Advanced stage where robots can learn, improve, and adapt autonomously.
5. Technical Depth of Integration
High Code: ML integration requires extensive programming, suited to IT professionals.
Low Code: No-code or low-code platforms enable users from various backgrounds to build and integrate RPA-ML workflows.
RPA-ML Interaction
6. Deployment Area
Analytics: ML-enabled RPAs focus on analysis-driven, flexible decision-making processes.
Back Office: RPA traditionally automates back-end tasks, now enhanced for unstructured data.
Front Office: RPA integrates with customer-facing applications via conversational agents and real-time data processing.
7. Lifecycle Phase
Process Selection: ML automates the identification of automation candidates through process and task mining.
Robot Development: ML assists in building robots, potentially through autonomous rule derivation from observed user actions.
Robot Execution: ML enhances the execution phase, allowing robots to handle complex, unstructured data.
Robot Improvement: Continuous learning from interactions or errors to improve robot performance and adapt to new contexts.
8. User-Robot Relation
Attended Automation: Human-in-the-loop, where users trigger and guide RPAs in real time.
Unattended Automation: RPAs operate independently, typically on servers.
Hybrid Approaches: Leverage both human strengths and machine analytics for collaborative automation.
Application to Current RPA Products
The taxonomy was evaluated against leading RPA platforms, including UiPath, Automation Anywhere, and Microsoft Power Automate. Findings revealed that:
All platforms support a wide range of ML capabilities, primarily via integration platforms and marketplaces.
Most ML features target process selection and execution phases.
The trend is toward increased low-code usability and the incorporation of conversational agents (“copilots”).
However, genuine hyperautomation with fully autonomous learning and adaptation remains rare in commercial offerings today.
Limitations and Future Directions
The taxonomy reflects the evolving landscape of RPA-ML integration. Limitations include:
The dynamic nature of ML and RPA technologies, making the taxonomy tentative.
Interdependencies between dimensions, such as architecture influencing integration depth.
The need for more granular capability classifications as technologies mature.
Conclusion
Integrating ML with RPA pushes automation beyond deterministic, rule-based workflows into domains requiring adaptability and cognitive capabilities. The proposed taxonomy offers a framework for understanding, comparing, and advancing intelligent automation solutions. As the field evolves—with trends toward generative AI, smart process selection, and low-code platforms—ongoing revision and expansion of the taxonomy will be needed to keep pace with innovation.