History and Objectives

Introduction 

In the early sixties, automatic telephone exchange services and television broadcasting were widely established in the USA and Western Europe. Many long-distance analog telephone transmission systems, however, were badly hampered by noise impairments. It was the invention of Pulse Code Modulation (PCM), a binary representation technique proposed by Alec Reeves as early as 1937, that finally became an indispensable enabler for transmitting voice with higher fidelity over long-distance telecommunication facilities. As the design, implementation and performance of electronic circuit technology progressed, the vision of point-to-point transmission of video signals emerged, and soon afterwards Picturephone™ services, i.e. bundled sound and video connections between dialed-up end-users, became technically feasible.
In the USA of that time, telecommunication services were strongly regulated by the government, and AT&T and its Bell Laboratories held the monopoly to develop and operate new telecommunication services. In the mid sixties, AT&T launched several Picturephone™ service field test experiments. After a while, however, all of these experiments were abandoned for lack of general public acceptance. The main reasons were the poor service quality (small screen, limited resolution, monochrome video) as well as the high estimated installation and operating costs.
The results of the Picturephone™ field tests triggered much interest amongst telecommunication professionals. It was the first time that sound and video had been integrated into a single service, and the term multimedia service was coined. Amongst communications and computer scientists it became clear that the shortcomings and the failure of the Picturephone™ field tests had to be better understood, in particular the basic question of how to increase picture quality and reduce cost by using digital image compression techniques to remove statistical and perceptual signal redundancies. Programmable electronic environments were the right tools for this task, and the development of techniques and procedures to simulate visual information systems on computers was therefore the right approach.


Peter Stucki’s Work and Contributions

From 1965 to 1967, Peter Stucki gained early expertise in the digitization of video signals during his postgraduate stay at the Electrical Engineering Department of the Imperial College of Science, Technology and Medicine in London, where he attended image-processing-related lectures by Profs. Gabor (Nobel Prize laureate) and Cherry. As part of his thesis work for the Diploma of Imperial College (DIC), he developed, in Prof. Cherry’s Telecommunication Research Laboratory, a single-bit-per-sample digitizing system for BBC 405-line broadcast television signals. This work was based on the theory of scanning and on signal statistics for bandwidth reduction of analog television signals. During his stay in England, he also did an internship at the Standard Telecommunication Laboratories in Harlow/Essex, where he worked with Ken Cattermole and Alec Reeves on their vision of applying Pulse Code Modulation (PCM) techniques to television signals and other analog media.


Pioneering work at the IBM Zurich Research Laboratory

In 1967, Peter Stucki returned to Switzerland and joined the IBM Zurich Research Laboratory to start a Digital Image Processing Project. The original R&D objectives set out for this project were to conceive and realize computer simulation facilities for acquiring, storing, processing, transmitting and reproducing digital images, and to develop new algorithms and procedures to optimize digital image quality for systems in which humans constitute the final information receiver.
The project start was very challenging, as no scanners, A/D converters, image displays or image printers were available. Appropriate software also had to be developed, and in the late sixties the processing power of general-purpose computers was very limited as well. Improvements in peripheral device technologies, computing power that doubled every 18 months in accordance with Moore’s Law, and better programming tools made computer simulation more and more attractive. This led to the development of many new digital image processing applications, such as document scanning and archiving, digital halftoning, image editing for print-for-profit, digital color copying, 3D printing, medical imaging and e-learning.
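Digital halftoning is the application most closely associated with Stucki’s own name: the error-diffusion filter he published in 1981 for bilevel image reproduction is still widely cited. As a minimal illustration of the technique (the function name and image handling below are illustrative sketches, not taken from the original work), error diffusion with the Stucki weights might look as follows in Python:

    import numpy as np

    # Stucki error-diffusion weights (normalized by 42); 'X' marks the
    # current pixel, and the weights spread its quantization error to
    # not-yet-processed neighbours:
    #             X   8   4
    #     2   4   8   4   2
    #     1   2   4   2   1
    STUCKI = [(0, 1, 8), (0, 2, 4),
              (1, -2, 2), (1, -1, 4), (1, 0, 8), (1, 1, 4), (1, 2, 2),
              (2, -2, 1), (2, -1, 2), (2, 0, 4), (2, 1, 2), (2, 2, 1)]

    def halftone(gray):
        """Reduce an 8-bit grayscale image (2D numpy array) to 1 bit/pixel."""
        img = gray.astype(float).copy()
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.uint8)
        for y in range(h):
            for x in range(w):
                new = 255.0 if img[y, x] >= 128 else 0.0  # threshold pixel
                out[y, x] = 1 if new else 0
                err = img[y, x] - new  # residual quantization error
                for dy, dx, wt in STUCKI:
                    if 0 <= y + dy < h and 0 <= x + dx < w:
                        img[y + dy, x + dx] += err * wt / 42.0
        return out

The perceptual rationale is exactly the objective stated above: the binary output is optimized for the case where a human observer constitutes the final information receiver.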
The original R&D objectives from those early years of digital image processing and multimedia have basically remained the same to the present day. They still include setting up new and better means of acquiring, storing, processing, transmitting and reproducing digital media elements such as print, audio and video, as well as developing new and more sophisticated algorithms and procedures to optimize functionality and media quality for Information and Communication Technology (ICT) applications in which humans and/or machines constitute the final information receiver.
What has changed substantially over the years, however, is the quality and versatility of peripheral devices (functionality, resolution, speed, lower cost), the manifold advances in computing performance and storage capacity (speed, lower cost), the powerful wired and wireless data transmission facilities available and, last but not least, the multitude of applications on the market and the increased end-user awareness of the many devices available today.
Over the past decades, the characteristic figures of technical ICT specifications, e.g. the number of instructions per second, bytes and bauds, have grown from kilo- to mega- to giga-values, and they will continue to grow into the peta- and even higher ranges.


December 2016