Changelog

2.0.0 (2018-04-30)

(#108)

Features
- The `gtts` module:
  - New logger ("gtts") replaces all occurrences of `print()`
  - Languages list is now obtained automatically (`gtts.lang`) (#91, #94, #106)
  - Added a curated list of language sub-tags that have been observed to provide different dialects or accents (e.g. "en-gb", "fr-ca")
  - New `gTTS()` parameter `lang_check` to disable language checking
  - `gTTS()` now delegates the `text` tokenizing to the API request methods (i.e. `write_to_fp()`, `save()`), allowing `gTTS` instances to be modified/reused
  - Rewrote tokenizing and added pre-processing (see below)
  - New `gTTS()` parameters `pre_processor_funcs` and `tokenizer_func` to configure pre-processing and tokenizing (or use a 3rd party tokenizer)
  - Error handling:
    - Added new exception `gTTSError`, raised on API request errors. It attempts to guess what went wrong based on known information and observed behaviour (#60, #106)
    - `gTTS.write_to_fp()` and `gTTS.save()` also raise `gTTSError` on gtts_token error
    - `gTTS.write_to_fp()` raises `TypeError` when `fp` is not a file-like object or one that doesn't take bytes
    - `gTTS()` raises `ValueError` on unsupported languages (and `lang_check` is `True`)
    - More fine-grained error handling throughout (e.g. request failed vs. request successful with a bad response)
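The "request failed vs. bad response" distinction above can be illustrated with a self-contained sketch. This is not gTTS's actual implementation; the class name `TTSError`, the status-code heuristics, and the message strings are all illustrative stand-ins for the behaviour the changelog describes.

```python
# Illustrative sketch of an exception that "guesses what went wrong",
# in the spirit of the gTTSError described above. Hypothetical names
# and messages; not the actual gTTS code.

class TTSError(Exception):
    """Raised on API request errors; tries to infer a likely cause."""

    def __init__(self, status=None, lang_checked=True):
        self.msg = self._infer_msg(status, lang_checked)
        super().__init__(self.msg)

    @staticmethod
    def _infer_msg(status, lang_checked):
        if status is None:
            # The request itself failed (no response at all).
            return "Connection error: the request failed"
        if status == 403:
            return "Bad token or upstream API change"
        if status == 404 and not lang_checked:
            return "Unsupported language, likely because checking was disabled"
        if status >= 500:
            return "Upstream API error, try again later"
        return "Error %d: unknown cause" % status


def fetch_speech(status=200, lang_checked=True):
    # Stand-in for the HTTP request; only distinguishes outcomes.
    if status != 200:
        raise TTSError(status=status, lang_checked=lang_checked)
    return b"audio-bytes"
```

The point of the design is that a request that never completes, a rejected token, and a bad response each produce a distinct, human-readable diagnosis instead of a generic failure.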
- Tokenizer (and new pre-processors):
  - Rewrote and greatly expanded tokenizer (`gtts.tokenizer`)
  - Smarter token 'cleaning' that will remove tokens that only contain characters that can't be spoken (i.e. punctuation and whitespace)
  - Decoupled token minimizing from tokenizing, making the latter usable in other contexts
  - New flexible speech-centric text pre-processing
  - New flexible full-featured regex-based tokenizer (`gtts.tokenizer.core.Tokenizer`)
  - New `RegexBuilder`, `PreProcessorRegex` and `PreProcessorSub` classes to make writing regex-powered text pre-processors and tokenizer cases easier
  - Pre-processors:
    - Re-form words cut by end-of-line hyphens
    - Remove periods after a (customizable) list of known abbreviations (e.g. "jr", "sr", "dr") that can be spoken the same without a period
    - Perform speech corrections by doing word-for-word replacements from a (customizable) list of tuples
  - Tokenizing:
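The three pre-processing steps listed above can be sketched with plain regexes, in the spirit of the `PreProcessorRegex`/`PreProcessorSub` classes. The patterns and default lists below are illustrative, not gTTS's actual ones.

```python
import re

# Sketch of the listed pre-processors; illustrative patterns only.

def reform_hyphenated(text):
    """Re-join words cut by an end-of-line hyphen, e.g. 'to-\\nday' -> 'today'."""
    return re.sub(r"-\s*\n\s*", "", text)

def drop_abbrev_periods(text, abbrevs=("jr", "sr", "dr")):
    """Remove the period after known abbreviations ('dr.' -> 'dr'),
    since both forms are spoken the same."""
    pattern = r"\b(" + "|".join(map(re.escape, abbrevs)) + r")\."
    return re.sub(pattern, r"\1", text, flags=re.IGNORECASE)

def word_substitute(text, subs=(("M.", "Monsieur"),)):
    """Word-for-word speech corrections from a list of (search, replace) tuples."""
    for search, repl in subs:
        text = re.sub(re.escape(search), repl, text)
    return text
```

Running the abbreviation pass before tokenizing matters: without it, a sentence tokenizer would wrongly split on the period in "dr. Smith".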
- The `gtts-cli` command-line tool:
  - Rewrote cli as a first-class citizen module (`gtts.cli`), powered by Click
  - Windows support using setuptools' entry_points
  - Better support for Unicode I/O in Python 2
  - All arguments are now pre-validated
  - New `--nocheck` flag to skip language pre-checking
  - New `--all` flag to list all available languages
  - Either the `--file` option or the `<text>` argument can be set to "-" to read from `stdin`
  - The `--debug` flag uses logging and doesn't pollute `stdout` anymore
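The "-" convention described above (read the text from `stdin` when either `--file` or `<text>` is "-") can be sketched with stdlib `argparse` rather than Click. The flag names mirror the changelog, but the parser and the `resolve_text()` helper are illustrative, not the actual `gtts.cli` code.

```python
import argparse
import sys

# Illustrative sketch of the '-' stdin convention; not the actual gtts.cli.

def build_parser():
    parser = argparse.ArgumentParser(prog="tts-cli")
    parser.add_argument("text", nargs="?", default=None)
    parser.add_argument("--file", dest="file", default=None)
    parser.add_argument("--nocheck", action="store_true",
                        help="skip language pre-checking")
    parser.add_argument("--output", "-o", default=None)
    return parser

def resolve_text(args, stdin=None):
    """Return the text to speak; '-' in either place means read stdin."""
    stdin = stdin if stdin is not None else sys.stdin
    if args.text == "-" or args.file == "-":
        return stdin.read().strip()
    if args.file:
        with open(args.file, encoding="utf-8") as f:
            return f.read().strip()
    return args.text
```

Supporting "-" lets the tool participate in shell pipelines, e.g. piping another command's output straight into speech synthesis.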
Bugfixes
- `_minimize()`: Fixed an infinite recursion loop that would occur when a token started with the minimizing delimiter (i.e. a space) (#86)
- `_minimize()`: Handle the case where a token of more than 100 characters did not contain a space (e.g. in Chinese)
- Fixed an issue that fused multiline text together if the total number of characters was less than 100
- Fixed `gtts-cli` Unicode errors in Python 2.7 (famous last words) (#78, #93, #96)
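Both `_minimize()` fixes can be seen in one self-contained sketch: stripping a leading delimiter prevents the infinite recursion of #86, and falling back to a hard split handles tokens (e.g. Chinese text) that contain no delimiter at all. This is an illustration of the two fixes, not the actual gTTS `_minimize()` implementation.

```python
# Sketch of a token minimizer illustrating the two fixes above.
# Illustrative only; not the actual gTTS code.

def minimize(token, delim=" ", max_size=100):
    """Recursively split `token` into chunks of at most `max_size`
    characters, preferring to break at `delim`."""
    # Fix #86: a chunk produced by a split starts with the delimiter;
    # without stripping it, rfind() returns 0 forever and recursion
    # never shrinks the token.
    if token.startswith(delim):
        token = token[len(delim):]
    if len(token) <= max_size:
        return [token] if token else []
    # Break at the last delimiter within the limit; if there is none
    # (e.g. text without spaces, such as Chinese), hard-split at max_size.
    idx = token.rfind(delim, 0, max_size)
    if idx < 1:
        idx = max_size
    return [token[:idx]] + minimize(token[idx:], delim, max_size)
```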
Deprecations and Removals
- Dropped Python 3.3 support
- Removed `debug` parameter of `gTTS` (in favour of logger)
- `gtts-cli`: Changed long option name of `-o` to `--output` instead of `--destination`
- `gTTS()` will raise a `ValueError` rather than an `AssertionError` on unsupported language
Improved Documentation
- Rewrote all documentation files as reStructuredText
- Comprehensive documentation written for Sphinx, published to http://gtts.readthedocs.io
- Changelog built with towncrier
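towncrier builds a changelog like this one from per-change news fragments committed alongside each pull request. A minimal configuration uses real towncrier keys, but the paths and values shown are illustrative, not necessarily gTTS's actual setup:

```toml
# Minimal towncrier configuration (illustrative values),
# typically placed in pyproject.toml:
[tool.towncrier]
package = "gtts"
filename = "CHANGELOG.rst"
directory = "news/"
```

Each fragment file (e.g. `news/108.feature`) is named after its issue number, which is how the `(#…)` references in the entries above are generated.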
1.2.1 (2017-08-02)

1.1.6 (2016-07-20)

1.1.5 (2016-05-13)

1.1.4 (2016-02-22)

Features
- Spun-off token calculation to gTTS-Token (#23, #29)