machine learning

New project: auphonic

Currently I am working on the auphonic project, which involves machine learning, audio signal processing, web development, open-source technologies and much more.

So don't expect many updates on my mur.at page; I will write about new things on the auphonic blog. You can subscribe to the auphonic feed or follow @auphonic on Twitter.

Echo State Networks with Filter Neurons and a Delay&Sum Readout

Year: 
2010
Authors: 
Georg Holzmann, Helmut Hauser
Type: 
Journal paper
Publisher: 

Neural Networks

Abstract: 

Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation.

Here we suggest two enhancements of this network model.
First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series.
Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks.

It is shown on commonly used benchmark tasks and real-world examples that this new structure significantly outperforms standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling.
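The filter-neuron idea can be sketched in a few lines: each reservoir neuron's activation is passed through its own second-order IIR (biquad) band-pass filter, so different neurons become sensitive to different timescales. This is only a minimal illustration with made-up resonator coefficients and sizes, not the exact filter design of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # reservoir size (illustrative)

# sparse random reservoir, rescaled to spectral radius 0.9
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1, 1, N)

# one band-pass resonator per neuron, centre frequencies spread out
f = np.linspace(0.05, 0.45, N)           # normalized centre frequencies
r = 0.8                                  # pole radius -> bandwidth
b0 = (1 - r) * np.ones(N)
a1 = -2 * r * np.cos(2 * np.pi * f)
a2 = r * r * np.ones(N)

x = np.zeros(N)                          # neuron outputs
v1 = np.zeros(N)                         # per-neuron filter memory
v2 = np.zeros(N)

for u in np.sin(0.3 * np.arange(200)):   # drive with a sine input
    pre = np.tanh(W @ x + w_in * u)      # standard ESN update ...
    v = pre - a1 * v1 - a2 * v2          # ... then an elementwise biquad
    x = b0 * v                           # (direct form II, b1 = b2 = 0)
    v2, v1 = v1, v
```

After the loop, `x` holds the filtered reservoir state; each neuron now responds most strongly to signal content around its own centre frequency.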

Reservoir Computing: a powerful Black-Box Framework for Nonlinear Audio Processing

Year: 
2009
Authors: 
Georg Holzmann
Type: 
Conference paper
Publisher: 

Proc. of the 12th Int. Conference on Digital Audio Effects (DAFx-09)

Abstract: 

This paper proposes reservoir computing as a general framework for nonlinear audio processing.
Reservoir computing is a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It can in theory approximate arbitrary nonlinear dynamical systems with arbitrary precision, has an inherent temporal processing capability and is therefore well suited for many nonlinear audio processing problems. Whenever nonlinear relationships are present in the data and timing information is crucial, reservoir computing can be applied.

Examples from three application areas are presented: nonlinear system identification of a tube amplifier emulator algorithm, nonlinear audio prediction, as necessary in a wireless transmission of audio where dropouts may occur, and automatic melody transcription out of a polyphonic audio stream, as one example from the big field of music information retrieval.
Reservoir computing was able to outperform state-of-the-art alternative models in all studied tasks.
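The "very simple and linear learning algorithm" amounts to running the fixed reservoir once, collecting its states, and solving a single ridge regression for the output weights. A minimal sketch on a toy delay task (all sizes and scalings here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 1000                         # reservoir size, sequence length

# fixed sparse reservoir, spectral radius 0.95
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.95 / max(abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

u = rng.uniform(-1, 1, T)                # random input signal
d = np.roll(u, 3); d[:3] = 0             # toy target: input delayed by 3 steps

# run the reservoir once and collect its states
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# "training" is one ridge regression on the collected states
washout, lam = 100, 1e-6                 # discard the initial transient
A = X[washout:]
w_out = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ d[washout:])

mse = np.mean((A @ w_out - d[washout:]) ** 2)
```

Because the only trained parameters are `w_out`, there is no gradient descent and no risk of local minima, which is the core appeal stated in the abstract.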

Master Thesis on Echo State Networks

Year: 
2008
Authors: 
Georg Holzmann
Type: 
Master Thesis
Publisher: 

Institute for Theoretical Computer Science, TU Graz, Austria

Abstract: 

Echo State Networks with Filter Neurons and a Delay&Sum Readout with Applications in Audio Signal Processing

Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. They can in theory approximate arbitrary nonlinear dynamical systems with arbitrary precision (universal approximation property), have an inherent temporal processing capability, and are therefore a very powerful enhancement of linear black-box modeling techniques in the nonlinear domain. It has been demonstrated on a number of benchmark tasks that echo state networks outperform other methods for nonlinear dynamical modeling.

This thesis suggests two enhancements of the original network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons, and the ability of such networks to learn multiple attractors is demonstrated. Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks. It is shown in benchmark tasks that this new structure is able to outperform standard ESNs and other models; moreover, no other comparable method for sparse nonlinear system identification with long-term dependencies could be found in the literature.

Finally, real-world applications in the context of audio signal processing are presented and compared to state-of-the-art alternative methods. The first example is a nonlinear system identification task of a tube amplifier; afterwards ESNs are trained for nonlinear audio prediction, as needed in audio restoration or in the wireless transmission of audio where dropouts may occur. Furthermore, an efficient and open-source C++ library for echo state networks was developed and is briefly presented.

The audio examples can be downloaded below.
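The delay&sum idea can be illustrated on synthetic data: for every reservoir state, pick the delay whose shifted version correlates best with the target, then run the usual linear readout regression on the delayed states. This toy sketch uses artificial "states" instead of a real reservoir and is not the thesis's exact learning procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 2000, 30
u = rng.uniform(-1, 1, T)

# synthetic "reservoir states": noisy copies of the input at small delays
X = np.stack([np.roll(u, k % 10) + 0.1 * rng.standard_normal(T)
              for k in range(N)], axis=1)
d = np.roll(u, 20)                   # target needs more delay than any state carries

max_delay = 64
delays = np.zeros(N, dtype=int)
Xd = np.zeros_like(X)
for i in range(N):
    # choose the delay whose shifted state correlates best with the target
    c = [abs(np.dot(np.roll(X[:, i], k), d)) for k in range(max_delay)]
    delays[i] = int(np.argmax(c))
    Xd[:, i] = np.roll(X[:, i], delays[i])

# then the usual linear readout, now on the delayed states
w_out = np.linalg.lstsq(Xd, d, rcond=None)[0]
mse = np.mean((Xd @ w_out - d) ** 2)
```

Without the learned delays, no linear combination of these states could reach 20 steps back; with them, the readout recovers the target, which is the memory-capacity gain the abstract describes.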

Echo State Networks in Audio Processing

Year: 
2007
Authors: 
Georg Holzmann
Type: 
Technical report
Publisher: 

Internet Publication

Abstract: 

In this article echo state networks, a special form of recurrent neural networks, are discussed in the area of nonlinear audio signal processing. Echo state networks are a novel approach to recurrent neural networks with a very simple (linear) training algorithm.
Signal processing examples in nonlinear system identification (valve distortion, clipping), inverse modeling (quality enhancement) and audio prediction are briefly presented and discussed.

Genetische Algorithmen in Komposition und Computermusik

Year: 
2003
Authors: 
Georg Holzmann
Type: 
Technical report
Publisher: 

Internet Publication

Abstract: 

Algorithmic composition usually employs complex systems of algorithms. This produces a multitude of parameters that makes intuitive control of such systems difficult.
With the help of interactive genetic algorithms (IGA), variations of these countless parameters can be "bred" according to one's own aesthetic preferences, without requiring any knowledge of the underlying structure, while still retaining a high degree of control.
This work gives an overview of applications of genetic algorithms in music and presents a new implementation.
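The IGA loop described above can be sketched in a few lines. The user's aesthetic judgment is replaced here by a stand-in rating function, since in a real interactive run the fitness would come from listening to each variation, not from a formula; all sizes and rates are illustrative:

```python
import random

random.seed(0)
N_PARAMS, POP = 8, 12                   # parameters per variation, population size

def random_individual():
    return [random.uniform(0.0, 1.0) for _ in range(N_PARAMS)]

def mutate(ind, rate=0.2, spread=0.1):
    return [min(1.0, max(0.0, p + random.gauss(0.0, spread)))
            if random.random() < rate else p for p in ind]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)  # one-point crossover
    return a[:cut] + b[cut:]

# stand-in for the user's aesthetic rating of one parameter variation
def user_rating(ind):
    return -sum((p - 0.5) ** 2 for p in ind)

population = [random_individual() for _ in range(POP)]
for generation in range(20):
    ranked = sorted(population, key=user_rating, reverse=True)
    parents = ranked[:POP // 2]          # keep the preferred half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=user_rating)
```

The point of the interactive variant is precisely that `user_rating` is a human in the loop, so the system needs no model of the aesthetic goal.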

aureservoir

Started in: 
2007
Authors: 
Georg Holzmann
License: 
GNU Library or "Lesser" General Public License (LGPL)
Programming language: 
C++, Python
Overview: 

Reservoir computing is a recent kind of recurrent neural network computation in which only the output weights are trained. This has the big advantage that training is a simple linear regression task that cannot get stuck in a local minimum. Such a network consists of a randomly created, fixed, sparse recurrent reservoir and a trainable output layer connected to this reservoir. The best-known variants are the "Echo State Network" and the "Liquid State Machine", which have achieved very promising results on various machine learning benchmarks.
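The "randomly created, fixed, sparse reservoir" can be sketched in plain numpy, independently of the library's actual API: build a sparse random weight matrix, rescale its spectral radius below 1 (a common heuristic for the echo state property), and observe that two different initial states driven by the same input converge, i.e. the reservoir echoes the input rather than its initial condition:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 80

# randomly created, fixed, sparse reservoir (about 10% connectivity)
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.8 / max(abs(np.linalg.eigvals(W)))    # spectral radius 0.8

def run(x0, u):
    x = x0
    for ut in u:
        x = np.tanh(W @ x + ut)              # same scalar input to every neuron
    return x

u = rng.uniform(-1, 1, 1000)
xa = run(rng.standard_normal(N), u)          # two different initial states,
xb = run(rng.standard_normal(N), u)          # same input sequence
gap = np.linalg.norm(xa - xb)                # -> (nearly) zero
```

Because `W` stays fixed forever, the only thing left to learn is the output layer, which is what makes the training a linear regression.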

This library is an open-source (LGPL) and very efficient implementation of Echo State Networks with bindings to scientific computation packages for offline and realtime simulations (so far python/numpy; Pure Data and Octave bindings are in progress, and everyone is invited to write a Matlab binding). It can easily be extended with new simulation, training and adaptation algorithms, which are function objects and are automatically used by the main classes.

For a theoretical overview and some papers about Echo State Networks see the Echo State Networks page; for a detailed description, examples, documentation, downloads and installation instructions please visit the project page.

TheBrain

Started in: 
2005
Authors: 
Georg Holzmann
License: 
GNU General Public License (GPL)
Programming language: 
C++
Overview: 

TheBrain is a small C++ library for artificial neural networks.
It currently implements a feedforward and a recurrent neural net, plus wrappers for GEM which calculate audio signals out of video frames.

TheBrain consists of the following two objects:
pix_linNN (with a linear feedforward neural net) and pix_recNN (with a recurrent neural net).

pix_recNN/pix_linNN are intended as an instrument/interface.
This instrument should be useful as a general experimental video interface for generating audio. You can train an artificial neural net in real time by playing audio samples along with specific video frames, so you can produce specific sounds for specific video frames and control the sound by making movements, colors, ... (whatever) in front of the camera.
My main interest was not to train the net to reproduce these samples exactly, but to make experimental sounds which lie "between" all the trained samples.

pix_linNN has one neuron per audio sample: each neuron has three inputs (an RGB signal), a weight for each input, a bias value and a linear output function.

pix_recNN uses a two-layer recurrent neural net (which is much better suited to time-based information like video and music).
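The pix_linNN structure can be sketched as follows. As a simplification, every neuron here is fed the frame's mean RGB colour (how the real object routes pixels to neurons is not shown), and training uses a plain LMS update; block size and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
BLOCK = 64                          # audio samples produced per video frame

# one linear neuron per audio sample: 3 RGB weights plus a bias each
W = rng.uniform(-0.1, 0.1, (BLOCK, 3))
b = np.zeros(BLOCK)

def frame_to_audio(frame):
    """frame: (H, W, 3) RGB array in [0, 1] -> BLOCK audio samples."""
    rgb = frame.reshape(-1, 3).mean(axis=0)   # simplification: mean frame colour
    return W @ rgb + b                        # linear output function

def train_step(frame, target, lr=0.1):
    """LMS update: nudge weights so this frame reproduces the target block."""
    global W, b
    rgb = frame.reshape(-1, 3).mean(axis=0)
    err = (W @ rgb + b) - target
    W -= lr * np.outer(err, rgb)
    b -= lr * err

frame = rng.random((8, 8, 3))                 # a stand-in camera frame
target = np.sin(np.linspace(0.0, 2.0 * np.pi, BLOCK))
for _ in range(300):
    train_step(frame, target)
audio = frame_to_audio(frame)                 # now close to the trained block
```

Once several frame/sample pairs are trained, feeding in-between camera images produces the interpolated, "between the samples" sounds described above.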

GApop

Started in: 
2004
Authors: 
Georg Holzmann
License: 
GNU General Public License (GPL)
Programming language: 
C++
Overview: 

GApop is a genetic algorithm external for Pure Data and Max/MSP, using the flext layer by Thomas Grill.
