Title page for ETD etd-11112008-151309


Document Type Doctoral Thesis
Author Smit, Willem Jacobus
Email willie.smit@gmail.com
URN etd-11112008-151309
Document Title Sparse coding for speech recognition
Degree PhD
Department Electrical, Electronic and Computer Engineering
Supervisor Prof E Barnard
Keywords
  • mathematical optimization
  • spike train classification
  • spike train
  • speech recognition
  • sparse code
  • linear generative model
  • sparse code measurement
  • dictionary training
  • overcomplete dictionary
  • spectrogram
Date 2008-09-02
Availability unrestricted
Abstract

The brain is a complex organ that is computationally powerful. Recent research in neurobiology helps scientists to better understand the workings of the brain, especially how the brain represents or codes external signals. This research shows that the neural code is sparse: a sparse code is one in which only a few neurons participate in the representation of a signal.

Neurons communicate with each other by sending pulses, or spikes, at certain times. The spikes sent between several neurons over time are called a spike train. A spike train contains all the important information about the signal that it codes. This thesis shows how sparse coding can be used for speech recognition. The recognition process consists of three parts. First, the speech signal is transformed into a spectrogram. Thereafter, a sparse code that represents the spectrogram is found: the spectrogram serves as the input to a linear generative model, whose output is a sparse code that can be interpreted as a spike train. Lastly, a spike train model recognises the words that are encoded in the spike train.
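The linear generative model mentioned above can be sketched as follows. This is a hedged illustration, not the thesis code: the dictionary D, its size, and the active atoms are random placeholders; the key idea is that a spectrogram frame x is modelled as D @ c with a code c that is mostly zeros, and the few nonzero entries play the role of spikes.

```python
import numpy as np

# Illustrative sketch of a linear generative model with a sparse code.
# All sizes and values here are assumptions for demonstration only.

rng = np.random.default_rng(0)
n_freq, n_atoms = 32, 128              # overcomplete: more atoms than frequency bins
D = rng.standard_normal((n_freq, n_atoms))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms

c = np.zeros(n_atoms)                  # sparse code: mostly zeros
c[[7, 42, 99]] = [1.5, -0.8, 0.3]      # only 3 of 128 "neurons" are active
x = D @ c                              # the spectrogram frame the code represents

sparsity = np.count_nonzero(c) / n_atoms
print(f"{np.count_nonzero(c)} active atoms -> sparsity {sparsity:.3f}")
```

The nonzero coefficients (and, across successive frames, the times at which they occur) are what can be read as a spike train for the later recognition stage.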

The algorithms that search for sparse codes to represent signals are computationally expensive. We therefore propose an algorithm that is more efficient than current algorithms; it makes it possible to find sparse codes in reasonable time, provided the spectrogram is fairly coarse.
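The abstract does not detail the proposed algorithm, but a standard greedy approach to the same search problem is matching pursuit, sketched below for orientation (the thesis' own, more efficient method may differ substantially). Each iteration picks the dictionary atom most correlated with the current residual and subtracts its contribution, so the cost per nonzero coefficient is one matrix-vector product.

```python
import numpy as np

def matching_pursuit(x, D, k):
    """Greedy sparse coding sketch: repeat k times -- pick the atom most
    correlated with the residual, record its coefficient, subtract it.
    Assumes the columns of D have unit norm."""
    residual = x.astype(float).copy()
    c = np.zeros(D.shape[1])
    for _ in range(k):
        corr = D.T @ residual          # correlation with each unit-norm atom
        j = np.argmax(np.abs(corr))    # best-matching atom
        c[j] += corr[j]
        residual -= corr[j] * D[:, j]  # remove that atom's contribution
    return c

# Demonstration on a synthetic signal built from two known atoms.
rng = np.random.default_rng(1)
D = rng.standard_normal((32, 128))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 5] - 1.5 * D[:, 40]     # signal composed of two atoms
c = matching_pursuit(x, D, 4)
print(np.count_nonzero(c), "nonzero coefficients")
```

The cost grows with both the dictionary size and the number of spectrogram frames, which is consistent with the remark above that coarse spectrograms are needed for reasonable running times.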

The system achieves a word error rate of 19% with a coarse spectrogram, while a system based on Hidden Markov Models achieves a word error rate of 15% on the same spectrograms.

© University of Pretoria 2008

Files
  Filename            Size       Approximate Download Time (Hours:Minutes:Seconds)
                                 28.8 Modem  56K Modem  ISDN (64 Kb)  ISDN (128 Kb)  Higher-speed Access
  00front.pdf         41.02 Kb   00:00:11    00:00:05   00:00:05      00:00:02       < 00:00:01
  01chapters1-2.pdf   497.31 Kb  00:02:18    00:01:11   00:01:02      00:00:31       00:00:02
  02chapters3-4.pdf   245.65 Kb  00:01:08    00:00:35   00:00:30      00:00:15       00:00:01
  03references.pdf    62.15 Kb   00:00:17    00:00:08   00:00:07      00:00:03       < 00:00:01

