– Unlike general-purpose models, Medisensing’s AI uses contextual reasoning to analyze the frequency, intensity, and duration of sounds relative to a home’s normal noise environment, providing actionable information for elderly care, childcare, and public safety.
– The company has successfully developed a lightweight Edge SDK that runs on devices like smartphones without server connections, and is currently preparing for seed funding and TIPS matching in early 2026 to standardize “hearing intelligence” globally.
Social concerns about caregiving and safety are steadily growing. We have reached a point where there is a rising need for technology that can detect moments such as an elderly person living alone collapsing, a baby crying, or someone calling for help in an alleyway. Especially as non-face-to-face medical treatment and remote care services spread, attempts are being made across various fields to supplement, through technology, situations that humans cannot directly see or hear.
Seong-eun Kim, CEO of Medisensing / source=IT DongA
The startup Medisensing is responding to these social demands. Specifically, it is venturing into areas ranging from healthcare to safety and caregiving in daily life using Artificial Intelligence (AI) technology that listens to sounds and interprets their meaning and context. Beyond simple voice recognition, the company is developing sophisticated AI technology that understands non-verbal sounds such as screams, cries, and calls for help in order to assess the situation. We met with Seong-eun Kim, the CEO of Medisensing, to hear about the motivation, strategy, and vision behind the venture.
Deciding to Start a Business While Researching the AI Field
Founded in 2024, the name Medisensing combines “Medical” and “Sensing.” Kim explained, “I established Medisensing to create technology that can substantially contribute to society by detecting signals generated from the human body and daily life and understanding their meanings.”
Kim is also a researcher and a professor in the Department of Artificial Intelligence at Seoul National University of Science and Technology who has published more than 12 papers in top-tier international journals (within the top 10%) over the past five years in the field of AI for understanding biosignals, brainwaves, and sound signals. While conducting research in the AI field, he decided to start a business to ensure that the technology does not merely remain at the paper level but actually contributes to society.
Kim started a business based on AI research. / source=IT DongA
“Through joint research with hospitals, I applied AI technology and found that it could identify abnormal respiratory sounds with higher accuracy than expected. Beyond the technical achievement at the time, I clearly saw the possibility that ‘this technology could actually help someone.’ I thought that if this technology could be implemented using only a smartphone microphone, it could provide basic stethoscopic assistance even in environments with low medical accessibility,” recalled Seong-eun Kim. He added, “Through these experiences, I realized the importance of realizing research results as practical value rather than letting them remain in papers. That decision led to the founding of the company. I also wanted my students to directly experience the entire process of technology being materialized into actual products and services beyond the laboratory.”
Expanding into a Sound Sensing AI Platform Integrating Core Sounds
The core technology of Medisensing is an AI that accurately recognizes meaningful individual sounds and understands their context even in the complex noise environments of reality. The company is developing SSI (Sound State Intelligence) technology, which analyzes sound data to perceive a state and prompt action. While general-purpose models have strengths in classification, labeling something as ‘this is a cry’ based on learned criteria, they inevitably make many errors. SSI instead uses the normal sound environment of each household as a reference to judge ‘how significant this sound is compared to the usual.’ It essentially answers the ‘why’ and ‘what’ of a sound by combining time-series patterns with LLM-based contextual reasoning. For example, it interprets a baby’s continuous high-pitched crying not simply as a cry, but as a possible sign of pain or abnormality.
Kim said, “When a general-purpose model hears a baby’s cry, it usually outputs a probability like ‘baby cry 0.72.’ From the user’s perspective, it is difficult to connect that to ‘so what should I do now.’ SSI organizes the information into a form that the user can immediately judge, such as when the crying started, how long it lasted, the intensity, and whether it recurred within the last 10 minutes.” He continued, “Even for the same baby’s cry, the volume of the TV differs in every house, and every child has a different usual crying pattern. SSI first learns the normal sounds and patterns of that specific house and child, and organizes it as an event when a change deviating from that standard is detected.”
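To make the contrast Kim draws concrete (a bare probability versus an actionable event), here is a minimal, hypothetical sketch of baseline-relative event organization. The class names, thresholds, and 10-minute recurrence window are illustrative assumptions, not Medisensing’s actual SSI implementation:

```python
import time
from dataclasses import dataclass

@dataclass
class HouseholdBaseline:
    """Illustrative per-home reference: typical ambient level and cry length."""
    mean_loudness_db: float = 55.0   # assumed normal level (TV, conversation)
    typical_cry_secs: float = 20.0   # how long this child usually cries

@dataclass
class SoundEvent:
    label: str
    started_at: float
    duration_s: float
    intensity_db: float
    recurred_in_10min: bool

class EventOrganizer:
    """Turns raw detections into user-actionable events, as SSI is described."""
    def __init__(self, baseline: HouseholdBaseline):
        self.baseline = baseline
        self.history: list[float] = []  # start times of past events

    def organize(self, label, started_at, duration_s, intensity_db):
        # Flag only changes that deviate from this household's own normal.
        deviates = (intensity_db > self.baseline.mean_loudness_db + 10
                    or duration_s > 2 * self.baseline.typical_cry_secs)
        if not deviates:
            return None
        recurred = any(started_at - t < 600 for t in self.history)
        self.history.append(started_at)
        return SoundEvent(label, started_at, duration_s, intensity_db, recurred)

organizer = EventOrganizer(HouseholdBaseline())
event = organizer.organize("baby_cry", time.time(),
                           duration_s=55.0, intensity_db=72.0)
print(event.label, round(event.duration_s), event.recurred_in_10min)
# baby_cry 55 False
```

The point of the structure is the one Kim makes: the output carries start time, duration, intensity, and recurrence, so a caregiver can judge what to do rather than interpret a probability.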
Based on this, Medisensing developed a beta version of iMedic, a self-stethoscope app for AI-based pediatric respiratory analysis, and launched it in 2025. The core of iMedic is measuring a child’s breathing sounds with a smartphone microphone to determine abnormalities, then recording and managing them. The focus was on making it easy for anyone to operate at home and allowing the sound to be transmitted to medical staff if needed.
However, it was confirmed that additional procedures, such as medical device certification, were required for the official launch of iMedic. Consequently, the beta service was terminated. Seong-eun Kim said, “The core technology of iMedic was based on research results already published in academic papers. Although it did not lead to commercialization, the experience of implementing technology that remained in papers into an actual MVP (Minimum Viable Product) form and verifying it in the field was very meaningful.”
Beta version of iMedic developed by Medisensing / source=IT DongA
Medisensing is also focusing on developing technology to recognize five to six core individual sounds, such as baby cries and screams. Kim explained, “We have developed a model that can stably recognize baby cries even in environments where TV sounds, daily noise, and surrounding conversations are mixed. We are expanding this technology into a sound sensing AI platform that integrates various core sounds rather than being limited to specific sounds.”
Above all, Medisensing seeks differentiation by making its core technology lightweight so that it can operate without large-scale server resources. Medisensing provides this technology in the form of an Edge SDK. The company plans to increase its utility by designing it so that it can be installed and used on any device with a microphone, such as a smartphone.
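As a rough illustration of the edge-first idea, an on-device loop might classify short microphone frames locally, with no server round-trip, and raise an alert only after repeated confident detections. Everything below is a hypothetical sketch, not Medisensing’s SDK: the stand-in model, thresholds, and frame sizes are invented for illustration.

```python
from collections import deque

CONFIDENCE_GATE = 0.8   # act only on confident on-device predictions

def tiny_on_device_model(frame: bytes) -> tuple[str, float]:
    """Stand-in for a small quantized sound classifier (assumed interface:
    a real edge model would take a frame and return a label and confidence)."""
    loudness = sum(frame) / max(len(frame), 1)
    return ("scream", 0.91) if loudness > 200 else ("background", 0.99)

def run_on_device(frames):
    """Process a stream of mic frames entirely locally: no server connection."""
    window = deque(maxlen=6)  # a few seconds of recent labels for context
    alerts = []
    for frame in frames:
        label, conf = tiny_on_device_model(frame)
        window.append(label)
        # Require the label to repeat across frames to suppress one-off
        # false positives before alerting.
        if (label != "background" and conf >= CONFIDENCE_GATE
                and window.count(label) >= 2):
            alerts.append(label)
    return alerts

# Simulated mic input: three quiet frames, then two loud ones in a row.
frames = [bytes([50] * 8000)] * 3 + [bytes([230] * 8000)] * 2
print(run_on_device(frames))  # ['scream']
```

Keeping both the model and the decision logic on the device is what lets such a system run on a phone without a network connection, which is the property the Edge SDK is described as targeting.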
Accordingly, the technology’s use cases are diverse. Representative examples include safety systems installed in alleyways or public spaces to detect screams or calls for help at night, as well as caregiving support technology in homes that recognizes everything from a baby’s cry to whimpering.
“What Medisensing wants to create is not just a simple sound recognition function. It is hearing intelligence that assists moments that humans had to listen to and judge directly. Although we started in medicine, we are expanding the technology in a direction that helps human safety and care in daily life and society as a whole,” stated Kim.
Solving Difficulties in Data Acquisition and Real Noise Environments by Changing the Approach
The journey to developing Medisensing’s technology was not easy. The biggest challenge was undoubtedly data acquisition. Kim confessed, “Unlike images or text, public data for sound is very limited. In particular, sound data that includes the context of actual situations hardly exists. In the case of medical sounds, practical constraints on development speed were added because clinical validation and ethical considerations are essential.”
Another problem was that sound recognition technology almost always has to operate ‘inside noise’ in actual usage environments. Kim said, “Models that worked well in a laboratory environment often saw a sharp decline in performance in real environments where daily noises like TV and conversation were mixed. However, it was not easy to secure and reproduce such realistic environmental noise itself.”
That was not the end. There was also a significant gap between research and business. Kim stated, “While research requires sufficient verification and repetition, business requires rapid execution and pivoting. Satisfying these two demands simultaneously was a constant concern during the technology development process.”
Medisensing changed its approach to address a series of problems / source=IT DongA
Nevertheless, they did not give up. They changed their approach to solve the problem. Instead of trying to understand all sounds comprehensively from the start, they established a strategy of defining and recognizing sounds one at a time, starting with those that have clear meanings and high social utility. As a result, they are currently focusing on developing technology that selects five to six core individual sounds, such as baby cries, screams, and calls for help, and defines and recognizes each as an independent unit of meaning.
“After changing the strategy, we were able to clearly limit the scope of the problem and rapidly increase the technical maturity. Also, to create conditions similar to actual environments, we introduced a method of designing environmental noise into various scenarios and reproducing them by combining them with Generative AI. For example, by artificially generating noise conditions assuming actual usage environments such as homes, alleys, and indoor public spaces, and utilizing them for learning and verification, we were able to develop a model strong against noise environments. We lightened the model so that it works properly even in real environments and designed it to operate stably with very few computational resources,” explained Kim.
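The scenario-based noise training Kim describes resembles standard SNR-controlled audio augmentation: mixing clean target recordings with environment noise at varied signal-to-noise ratios. The sketch below shows that general technique with NumPy; the scene mix, SNR range, and synthetic signals are illustrative assumptions, not Medisensing’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so the mixture has the requested signal-to-noise ratio."""
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    # SNR(dB) = 10*log10(P_clean / P_scaled_noise); solve for the scale factor.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

def make_training_batch(clean_clips, noise_clips, n=8, snr_range=(0.0, 20.0)):
    """Build n noisy training examples with per-example random scene and SNR."""
    batch = []
    for _ in range(n):
        clean = clean_clips[rng.integers(len(clean_clips))]
        noise = noise_clips[rng.integers(len(noise_clips))]
        snr = rng.uniform(*snr_range)   # vary difficulty per example
        batch.append(mix_at_snr(clean, noise, snr))
    return batch

# Toy 1-second clips at 16 kHz: a tone standing in for a cry, white noise
# standing in for home / alley / indoor public-space scenes.
t = np.linspace(0, 1, 16000, endpoint=False)
cry = 0.5 * np.sin(2 * np.pi * 440 * t)
scenes = [0.1 * rng.standard_normal(16000) for _ in range(3)]
batch = make_training_batch([cry], scenes)
print(len(batch), batch[0].shape)  # 8 (16000,)
```

In practice the noise clips would come from recorded or generated scene audio rather than white noise; the controlled-SNR mixing is what lets a model be trained and verified against noise conditions it will actually face.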
As a result, through this series of strategies, Medisensing was able to secure a lightweight sound recognition AI module that does not miss meaningful sounds even in realistic noise environments. This has become the core foundation of the sound sensing AI platform that Medisensing is currently developing.
The biggest task for Medisensing at present is to accurately identify what ‘sound-based technology’ the market needs right now. Kim said, “Technically, we have secured a significant portion of AI technology that recognizes individual sounds and operates stably in noisy environments, but we are at a stage of clearly defining what problem this technology solves to become an ‘absolutely necessary technology.’”
In parallel with technology development, Medisensing is making efforts to verify actual needs through interviews and PoC (Proof of Concept) projects with various industry stakeholders. They are confirming, one by one, which sound recognition capabilities are most urgently needed in the field, what form they should be provided in to have value as a service, and what business model should be built on that basis.
Aiming for Step-by-Step Growth Through Parallel Technology and Market Validation
The strength of Medisensing is that it is not based on an idea conceived in a short period, but on research capabilities and achievements accumulated in the laboratory over a long time. Top-tier international journal papers and a patent portfolio are also important assets that are difficult to possess at the early startup stage. In addition, the technical capability to simultaneously develop and distribute a Cloud API and Edge SDK is considered a factor that increases the potential for collaboration with various industrial partners.
Medisensing participated in Next Stage: Global IR for Growth-Stage Startups / source=IT DongA
Based on this business feasibility, Medisensing was selected for the Initial Startup Package in the deep-tech field by the Seoultech Startup Support Foundation in 2025. Kim expressed satisfaction, saying, “Through this project, we received initial funding to stably secure labor costs for about five months and established an environment where we could focus on technology development. Not only the financial support, but the VC meetups, IR material preparation and presentation opportunities, and networking programs were of great practical help. These processes went beyond simple support and helped Medisensing systematically prepare to move to the next stage.”
Furthermore, Medisensing has recently completed an initial small-scale investment contract with Seoultech Holdings. They are also preparing to attract seed funding with the goal of linking with TIPS in the first half of 2026. Kim emphasized, “Once stable funds are secured through investment, we plan to expand core technical personnel who can advance sound recognition and situational understanding technology, as well as product planning personnel, to increase the completeness of products and services applicable to the actual market. Rather than short-term expansion, Medisensing’s goal is to grow step-by-step while performing technology and market validation in parallel.”
According to Medisensing, the mid-to-long-term goal is to build diverse sound databases and secure hearing intelligence technology that can understand multiple sounds simultaneously and perceive situations. Medicine was an important starting point, and the strategy is to develop it into a core technology that can be extended to daily life and industry as a whole in the future.
The vision of Medisensing is “Defining the Standard for Hearing Intelligence.” Kim stated, “Humans can perceive the general situation just by hearing sound, but current AI technology is excellent at voice recognition but still has limits in understanding the situational meanings contained in non-verbal sounds. Our goal is to create a common standard and structure for which sounds should be interpreted in what context and with what meaning.” He predicted, “Once this standard is established, it can be utilized as a basic infrastructure for AI that understands sound in various industries such as robotics, smart homes, autonomous driving, and caregiving services.”
Attention is now focused on what results Kim’s challenge of connecting long-term research achievements to social value will produce in the market.
By Kui-im Park ([email protected])