voice-elements

Web Component wrapper to the Web Speech API that allows you to do voice recognition and speech synthesis using Polymer


[ This description is mirrored from README.md at github.com/zenorocha/voice-elements on 2019-05-10 ]

<voice-elements>

Web Component wrapper to the Web Speech API that allows you to do voice recognition (speech to text) and speech synthesis (text to speech) using Polymer.

Demo

Check it live!

Install

Install the component using Bower:

$ bower install voice-elements --save

Or download as ZIP.

Usage

  1. Import Web Components' polyfill:

    <script src="bower_components/webcomponentsjs/webcomponents-lite.min.js"></script>
    
  2. Import Custom Element:

    <link rel="import" href="bower_components/voice-elements/dist/voice-player.html">
    <link rel="import" href="bower_components/voice-elements/dist/voice-recognition.html">
    
  3. Start using it!

    <voice-player></voice-player>
    <voice-recognition></voice-recognition>
    

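Putting the three steps together, a minimal page might look like the sketch below. The file paths assume the default Bower layout from the install step; the attribute values are illustrative.

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- 1. Web Components polyfill -->
    <script src="bower_components/webcomponentsjs/webcomponents-lite.min.js"></script>
    <!-- 2. Custom elements -->
    <link rel="import" href="bower_components/voice-elements/dist/voice-player.html">
    <link rel="import" href="bower_components/voice-elements/dist/voice-recognition.html">
  </head>
  <body>
    <!-- 3. Speaks on page load, in British English -->
    <voice-player autoplay accent="en-GB" text="Hello there"></voice-player>
    <voice-recognition continuous></voice-recognition>
  </body>
</html>
```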
<voice-player>

Provides a simple DOM API for speech synthesis (text to speech).

Options

| Attribute | Options | Default | Description |
| --- | --- | --- | --- |
| autoplay | boolean | false | Specifies if the audio should play when the page loads. |
| accent | en-US, en-GB, es-ES, fr-FR, it-IT, de-DE, ja-JP, ko-KR, zh-CN | en-US | Specifies the language to be synthesized and spoken. |
| text | string | You are awesome | Specifies the text to be synthesized and spoken. |

Methods

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| speak() | None. | Nothing. | Triggers the voice audio to be played. |
| cancel() | None. | Nothing. | Triggers the voice audio to be canceled. |
| pause() | None. | Nothing. | Triggers the voice audio to be paused. |
| resume() | None. | Nothing. | Triggers the voice audio to be resumed. |
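As a sketch of driving the element from script, the helper below speaks a list of phrases one after another. It assumes the element exposes text as a property and dispatches a DOM event named onend after each utterance (per the event table below); in a page you would obtain the element with document.querySelector('voice-player').

```javascript
// Speak each phrase in order through a single <voice-player>.
// Assumption: the element fires an "onend" DOM event per utterance.
function speakQueue(player, phrases) {
  let index = 0;
  const next = () => {
    if (index >= phrases.length) return;
    player.text = phrases[index++]; // set what to say...
    player.speak();                 // ...then trigger playback
  };
  player.addEventListener('onend', next); // advance when one finishes
  next(); // speak the first phrase immediately
}
```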

Events

| Event | Description |
| --- | --- |
| onstart | Triggers when the voice starts being spoken. |
| onend | Triggers when the voice finishes being spoken. |
| onerror | Triggers when the voice player detects an error. |
| onpause | Triggers when the voice player is paused. |
| onresume | Triggers when the voice player is resumed. |
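A small sketch of observing the player lifecycle, assuming the element dispatches plain DOM events under the names listed above. The log parameter is injectable so the helper can be exercised outside a browser.

```javascript
// Attach a listener for every documented <voice-player> event and
// report each one through the supplied log function.
function logPlayerEvents(player, log) {
  ['onstart', 'onend', 'onerror', 'onpause', 'onresume'].forEach((name) => {
    player.addEventListener(name, () => log('voice-player: ' + name));
  });
}
```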

<voice-recognition>

Provides a simple DOM API for voice recognition (speech to text).

Options

| Attribute | Options | Default | Description |
| --- | --- | --- | --- |
| continuous | boolean | true | Specifies if the recognition should continue when the user pauses while speaking. |
| text | string | | Returns the recognized text. |

Methods

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| start() | None. | Nothing. | Starts the voice recognition. |
| stop() | None. | Nothing. | Requests the recognition service to stop listening to more audio. |
| abort() | None. | Nothing. | Requests to immediately stop listening and stop recognizing. |
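The difference between stop() and abort() suggests a simple push-to-talk pattern. The sketch below wraps a recognition element in a toggle; the listening flag is local bookkeeping, not part of the element's API.

```javascript
// Wrap <voice-recognition> in a push-to-talk style controller.
// stop() lets the service finish processing audio already heard;
// abort() drops everything immediately.
function createRecognitionToggle(recognition) {
  let listening = false; // local state, not an element property
  return {
    toggle() {
      if (listening) {
        recognition.stop();
      } else {
        recognition.start();
      }
      listening = !listening;
    },
    cancel() {
      recognition.abort();
      listening = false;
    },
    get listening() { return listening; },
  };
}
```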

Events

| Event | Description |
| --- | --- |
| onstart | Triggers when the recognition begins. |
| onerror | Triggers when there's a recognition error. |
| onend | Triggers when the recognition ends. |
| onresult | Triggers when there's a recognition result. |
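A typical use of onresult is to copy each recognized phrase somewhere visible. The sketch below assumes the element fires an onresult DOM event and exposes the phrase on its text property, per the tables above; output is any node-like object with a textContent property.

```javascript
// Mirror each recognition result into a target node.
function bindTranscript(recognition, output) {
  recognition.addEventListener('onresult', () => {
    output.textContent = recognition.text; // latest recognized phrase
  });
}
```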

Browser Support

Unfortunately, the Web Speech API still has poor support. Check Can I Use for more information.

| IE | Chrome | Firefox | Opera | Safari |
| --- | --- | --- | --- | --- |
| None ✘ | Latest ✔ | None ✘ | None ✘ | Latest (<voice-player> only) ✔ |
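Given the patchy support, a feature check before wiring up the elements is worthwhile. The sketch below takes the global object as a parameter (normally window) so it stays testable; the vendor-prefixed webkitSpeechRecognition is what Chrome actually exposes.

```javascript
// Detect the two halves of the Web Speech API independently:
// synthesis backs <voice-player>, recognition backs <voice-recognition>.
function canUseSpeech(win) {
  return {
    synthesis: 'speechSynthesis' in win,
    recognition: 'SpeechRecognition' in win ||
                 'webkitSpeechRecognition' in win,
  };
}
```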

Development

In order to run it locally you'll need to fetch some dependencies and set up a basic server.

  1. Install Bower & Grunt:

    $ [sudo] npm install -g bower grunt-cli
    
  2. Install local dependencies:

    $ bower install && npm install
    
  3. To test your project, start the development server and open http://localhost:8000.

    $ grunt server
    
  4. To build the distribution files before releasing a new version, run:

    $ grunt build
    
  5. To provide a live demo, push everything to the gh-pages branch:

    $ grunt deploy
    

Contributing

  1. Fork it!
  2. Create your feature branch: git checkout -b my-new-feature
  3. Commit your changes: git commit -m 'Add some feature'
  4. Push to the branch: git push origin my-new-feature
  5. Submit a pull request :D

History

For detailed changelog, check Releases.

License

MIT License © Zeno Rocha


Version

Released: 2018-02-23
Maturity: IMPORTED
License: MIT License

Dependencies

  • polymer#Polymer/polymer#^1.0.0

Compatibility

Framework: Polymer 1.0+
Browser: Browser Independent
