"With the support of the Microsoft AI Co-Innovation Lab KOBE, we successfully implemented a high-precision voice model for AI Hachiemon using Azure AI Foundry and the Voice Live API. By faithfully reproducing the unique characteristics of the voice actor from existing audio data, we achieved seamless integration into a real-time voice chat system. This breakthrough opens up new possibilities for character AI utilization in the broadcasting industry and lays the foundation for enhancing future viewers engagement."  — Naohiro Nakamichi, Broadcast Promotion Center, Kansai Television Co., Ltd.

Co-Innovation Challenge

Hachiemon is a beloved character created for Japan-based TV broadcaster Kansai TV. To celebrate the 30th anniversary of the character's debut, Kansai TV planned to integrate Hachiemon into commemorative initiatives like pop-up events where viewers could engage with him.

The challenge? The actor who voiced Hachiemon had retired, and Kansai TV wanted a responsive AI capable of real-time conversation.

Seven years ago, the first AI Hachiemon was built from 1,300 gag voice clips: the system analyzed user conversations and selected an appropriate prerecorded response. At the time, this was an impressive implementation of AI technology, but it was limited to playing predefined audio clips and could not enable free-form dialogue.

Since then, AI technology has advanced significantly, and Kansai TV has been working to recreate Hachiemon's voice and build an AI character capable of real-time, responsive conversation with Kansai TV audience members.

From building the voice model to implementing the latest Voice Live API, the technical challenges were diverse. However, through collaboration with the Microsoft AI Co-Innovation Lab, rapid development was achieved, resulting in a character-based AI chatbot with great potential for future expansion.

Audio samples: Original voice · AI voice

In the Lab

To bring AI Hachiemon to life, the Lab team developed a sophisticated solution built on Microsoft technologies. At the core is Azure OpenAI Service (GPT-4o Realtime API), which enables natural, real-time conversations while preserving Hachiemon's unique personality traits. Custom Voice, part of Azure AI Speech, was used to replicate the original voice actor's tone and dialect: the model was trained on those 1,300 existing voice recordings to capture the nuances of the Kansai dialect and emotional intonation. Finally, Microsoft's newest AI voice technology, the Azure Voice Live API, enables interactive audio experiences with real-time voice interaction.

To make these conversations contextually relevant, Retrieval-Augmented Generation (RAG) was integrated, connecting the system to Kansai TV's program database. This allows AI Hachiemon to provide timely responses and guidance based on current programming. The solution is supported by a robust architecture including Azure Blob Storage, Azure AI Search, WebSocket servers, and a GUI for operations, ensuring seamless management of data and workflows. This comprehensive approach not only preserved Hachiemon's legacy but also opened new possibilities for interactive entertainment.
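To illustrate the RAG-grounded flow described above, here is a minimal Python sketch. It is not Kansai TV's implementation: the resource endpoints, keys, index name ("kansai-tv-programs"), index fields, deployment name, and custom voice name ("HachiemonNeural") are all hypothetical placeholders, and for clarity it uses the request/response Chat Completions and Speech SDK path rather than the streaming GPT-4o Realtime / Voice Live WebSocket path used in the actual system.

```python
# Hypothetical sketch: retrieve program context from Azure AI Search,
# generate an in-character reply with Azure OpenAI, and speak it with a
# custom neural voice in Azure AI Speech. All names and keys are placeholders.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI
import azure.cognitiveservices.speech as speechsdk

# 1. Retrieve current programming info from a (hypothetical) program index.
search_client = SearchClient(
    endpoint="https://<search-resource>.search.windows.net",
    index_name="kansai-tv-programs",              # hypothetical index name
    credential=AzureKeyCredential("<search-key>"),
)

def retrieve_program_context(question: str, top: int = 3) -> str:
    results = search_client.search(search_text=question, top=top)
    # Assumes each indexed document exposes 'title' and 'description' fields.
    return "\n".join(f"{doc['title']}: {doc['description']}" for doc in results)

# 2. Generate a reply in character, grounded in the retrieved context.
aoai = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-06-01",
)

def answer_as_hachiemon(question: str) -> str:
    context = retrieve_program_context(question)
    response = aoai.chat.completions.create(
        model="gpt-4o",                            # hypothetical deployment name
        messages=[
            {"role": "system",
             "content": "You are Hachiemon. Reply playfully in Kansai dialect, "
                        "using only the program information below.\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# 3. Speak the reply with a custom neural voice trained on the archive clips.
speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="japaneast")
speech_config.speech_synthesis_voice_name = "HachiemonNeural"  # hypothetical custom voice
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

if __name__ == "__main__":
    reply = answer_as_hachiemon("What's on Kansai TV tonight?")
    synthesizer.speak_text_async(reply).get()
```

The same retrieval-then-generate pattern carries over to the real-time path; the Voice Live / Realtime APIs simply replace the final two request/response steps with a streaming WebSocket session.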

Solution Impact

The collaboration with Microsoft AI Co-Innovation Lab Kobe enabled Kansai TV to transition from AWS to Azure and compress a six-month development timeline into a single week of sprint development. This rapid acceleration was made possible through unified support from Avanade and the Lab, covering infrastructure, AI modeling, and operations.

This innovation will be showcased at the International Broadcast Equipment Exhibition 2025, offering a glimpse into the future of interactive broadcasting. Through this critical partnership, Kansai TV has advanced its AI transformation and is poised to be a pioneer in AI-driven media experiences.

Join Microsoft AI Co-Innovation Labs today!

Apply now to get the opportunity to co-engineer your solutions with Microsoft Technology Experts.