The current system has only Japanese models for speech processing, so the default content is in Japanese only. We already have English speech model assets and are now working on an English version.
An audio input device is required. MMDAgent-EX will open the default audio device for audio input.
About 300 MB of disk space is required to store the system data and the default content. On desktop OSes, all downloaded files are stored under the user's home directory.
A network connection is required to download the system data on first launch. About 200 MB of data will be downloaded at startup.
If you can, move to a quiet room or office environment; speech recognition may suffer from background noise.
Install the beta app from the download page. Follow the link for your OS and proceed with the installation. Only a minimal set of binaries is installed.
On macOS, open the downloaded .dmg file and move the app into the Applications folder.
On Windows, extract the downloaded .zip file and place the files anywhere you like.
On Linux, extract the downloaded .tgz file and place the files anywhere you like.
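The Linux step can be sketched in the shell as below. The archive name `MMDAgent-EX.tgz` is an assumption (use the actual file from the download page); a dummy archive is created first so the commands are runnable as-is:

```shell
# Minimal sketch of the Linux install step. MMDAgent-EX.tgz is a
# placeholder name; a stand-in archive is built here in place of the
# real download so the extraction commands can be tried directly.
set -e
workdir=$(mktemp -d) && cd "$workdir"
mkdir -p stand-in/MMDAgent-EX
touch stand-in/MMDAgent-EX/MMDAgent-EX              # stand-in for the app
tar czf MMDAgent-EX.tgz -C stand-in MMDAgent-EX     # stand-in download
mkdir -p install                                    # any directory you like
tar xzf MMDAgent-EX.tgz -C install                  # unpack the archive
ls install/MMDAgent-EX
```

With the real archive, only the `mkdir -p` and `tar xzf` lines are needed, pointing at wherever you want the files to live.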
On first launch, the app downloads all system data and the default dialogue content from the system server.
Just launch the app to start! On iOS and Android, tap the installed app icon. On desktop OSes, run the executable “MMDAgent-EX” or “MMDAgent-EX.exe”. Wait until all the data has been downloaded.
Try it out!
After a successful start-up, the default dialogue content begins playing. You can talk with the default sample agent. Take a brief look at the following pages to learn the basics.
Play with other content
As an extra, you can try another Web content we have prepared, to experience Web-based content delivery and management. It is a fan-made, real-time rendered dancing content originally built on MikuMikuDance by community creators. Although there is no speech interaction in this content, you can see the power of MMD-based, anime-style rich character expression, rendered in real time at a high frame rate.
Warning: The example is fan-made and will play an anime-style character dancing to a song.
This is a demonstration content that shows the expressive capability and fast rendering of MMDAgent-EX. The content is composed of several materials that have been made open in the MMD community under proper license terms. You can see to what extent a character can be drawn and animated in MMDAgent-EX with the MikuMikuDance platform. Note that no speech interaction is defined in it.
Warning: It shows a Japanese 2D anime-style character dancing to anime songs. It may be embarrassing if you are not familiar with anime.
On iOS, Android, and macOS, just tap the link below to start downloading and playing it. On Windows, start MMDAgent-EX and paste the following URL into the window. On Linux, give the URL as a command-line argument.
- iMarine Project - Welcome to DEEP BLUE TOWN! (modified for MMDAgent-EX)
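On Linux, passing the URL on the command line can be sketched as below. Both the executable path and the URL are placeholders (assumptions for illustration); use the actual link above:

```shell
# Sketch: launching MMDAgent-EX with a content URL on Linux.
# Both the binary path and the URL are placeholders, not real values.
content_url="https://example.com/content/main.mdf"
launch_cmd="./MMDAgent-EX $content_url"   # run this with the real values
echo "$launch_cmd"
```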
Swipe from the right edge or press the `/` key to open the menu. Flick left/right or press the arrow keys to flip through pages, and tap or press Enter to select.
Interact with buttons
You can show the content-provided buttons by:
- Tap and hold at the center of the screen, or
- Open the menu, swipe left to the “[Contents]” page, then select “Show Buttons…”.
In the default content, tapping the shown buttons opens a web page related to MMDAgent-EX in your web browser.
Buttons in the default content, which appear with a long tap on the center of the screen or the `q` key.
Add it to your bookmarks
You can bookmark the content. Bookmarked content can be played again from your menu.
Open the menu, flick to the “[Favorite]” page, then tap [+] to add the current content.
Open the menu and flick to the left to show the page.
Tap [+] to add the current content!
OK, what shall I do next?
- Read the System page for an overview of this system.
- Go through the Dialogue Content pages to learn the details of dialogue content and how to edit or create it.
- See the Deploy page to learn how your content can be distributed on the Web.
And feel free to join the community!