
Yasuo Matsuyama

At the 2017 HPC Connection Workshop
Born: March 23, 1947, Yokohama, Japan
Nationality: Japanese
Alma mater: Waseda University (Dr. Engineering, 1974); Stanford University (PhD, 1978)
Known for: Alpha-EM algorithm
Scientific career
Fields: Machine learning and human-aware information processing
Institutions: Waseda University, Stanford University
Theses: Studies on Stochastic Modeling of Neurons (Dr. Engineering, Waseda University); Process Distortion Measures and Signal Processing (PhD, Stanford University)
Doctoral advisors: Jun'ichi Takagi, Kageo Akizuki, and Katsuhiko Shirai (Dr. Engineering, Waseda University); Robert M. Gray (PhD, Stanford University)
Website: http://www.f.waseda.jp/yasuo2/en/index.html

Yasuo Matsuyama (born March 23, 1947) is a Japanese researcher in machine learning and human-aware information processing.

Matsuyama is a Professor Emeritus and an Honorary Researcher of the Research Institute of Science and Engineering of Waseda University.

Early life and education


Matsuyama received his bachelor’s, master’s, and doctoral degrees in electrical engineering from Waseda University in 1969, 1971, and 1974, respectively. His Doctor of Engineering dissertation was titled Studies on Stochastic Modeling of Neurons.[1] In it, he contributed to the modeling of spiking neurons with stochastic pulse-frequency modulation. His advisors were Jun’ichi Takagi, Kageo Akizuki, and Katsuhiko Shirai.

Upon completing his doctoral work at Waseda University, he was dispatched to the United States as a Japan-U.S. exchange fellow under the joint program of the Japan Society for the Promotion of Science, the Fulbright Program, and the Institute of International Education. Through this exchange program, he completed a Ph.D. at Stanford University in 1978 with a dissertation titled Process Distortion Measures and Signal Processing.[2] In it, he contributed to the theory of probabilistic distortion measures and its applications to speech encoding using spectral clustering, or vector quantization. His advisor was Robert M. Gray.

Career


From 1977 to 1978, Matsuyama was a research assistant at the Information Systems Laboratory of Stanford University.

From 1979 to 1996, he was a faculty member at Ibaraki University, Japan, where his final position was professor and chairperson of the Information and System Sciences Major.

From 1996, he was a professor in the Department of Computer Science and Engineering at Waseda University. From 2011 to 2013, he was the director of the Media Network Center of Waseda University. At the time of the Tōhoku earthquake and tsunami of March 11, 2011, he was in charge of the safety inquiry covering 65,000 students, staff, and faculty members.

Since 2017, Matsuyama has been a Professor Emeritus and an Honorary Researcher of the Research Institute of Science and Engineering of Waseda University. Since 2018, he has served as an acting president of the Waseda Electrical Engineering Society.

Work


Matsuyama’s work on machine learning and human-aware information processing has dual foundations. His Ph.D. studies on competitive learning (vector quantization) at Stanford University led to his subsequent contributions to machine learning. His Dr. Engineering studies on stochastic spiking neurons[3][4] at Waseda University set off applications of biological signals to machine learning. Thus, his work can be grouped according to these dual foundations.

Statistical machine learning algorithms: The use of the alpha-logarithmic likelihood ratio in learning cycles generated the alpha-EM algorithm (alpha-expectation-maximization algorithm).[5] Because the alpha-logarithm includes the usual logarithm as a special case, the alpha-EM algorithm contains the EM algorithm (more precisely, the log-EM algorithm). The speedup of the alpha-EM over the log-EM comes from its ability to utilize past information. This use of messages from past iterations brought about the alpha-HMM estimation algorithm (alpha-hidden Markov model estimation algorithm),[6] a generalized and faster version of the hidden Markov model estimation algorithm (HMM estimation algorithm).
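For orientation, the alpha-logarithm is a one-parameter generalization of the ordinary logarithm. A minimal sketch in one common parameterization (the sign and scaling conventions here are an assumption and may differ from those used in the cited paper) is

\[
  \ln^{(\alpha)}(r) = \frac{2}{1-\alpha}\left( r^{\frac{1-\alpha}{2}} - 1 \right), \qquad \alpha \neq 1,
  \qquad \lim_{\alpha \to 1} \ln^{(\alpha)}(r) = \ln r .
\]

Applied to a likelihood ratio, the limiting case recovers the ordinary log-likelihood ratio, which is why the log-EM algorithm is contained in the alpha-EM family.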

Competitive learning on empirical data: Starting from the speech compression studies at Stanford, Matsuyama developed generalized competitive learning algorithms: harmonic competition[7] and multiple descent cost competition.[8] The former realizes multiple-criteria (multi-objective) optimization. The latter admits deformable centroids. Both algorithms generalize batch-mode vector quantization (simply called vector quantization) and successive-mode vector quantization (also called learning vector quantization).
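As background, a minimal sketch of the two baseline schemes these algorithms generalize, batch-mode VQ (Lloyd/k-means-style alternating updates) and successive-mode VQ (online, sample-by-sample centroid updates), is shown below. The function names, learning rate, and iteration counts are illustrative assumptions, not details from the cited papers.

import numpy as np

def batch_vq(data, codebook, n_iters=50):
    # Batch-mode VQ (Lloyd/k-means style): assign every sample to its
    # nearest codevector, then move each codevector to the mean of its cell.
    codebook = codebook.copy()
    for _ in range(n_iters):
        dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for k in range(len(codebook)):
            members = data[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook

def successive_vq(data, codebook, lr=0.05, n_epochs=10):
    # Successive-mode (online) VQ: each sample nudges only its nearest
    # codevector toward itself, one sample at a time.
    codebook = codebook.copy()
    for _ in range(n_epochs):
        for x in data:
            k = ((codebook - x) ** 2).sum(axis=1).argmin()
            codebook[k] += lr * (x - codebook[k])
    return codebook

# Toy usage: quantize 500 two-dimensional points with a 4-vector codebook.
rng = np.random.default_rng(0)
points = rng.normal(size=(500, 2))
init = points[rng.choice(len(points), size=4, replace=False)]
cb_batch = batch_vq(points, init)
cb_online = successive_vq(points, init)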

A hierarchy from the alpha-EM to vector quantization: Matsuyama contributed to generating and identifying a hierarchy among the above algorithms.

Within the class of vector quantization and competitive learning algorithms, he identified the following hierarchy of VQ methods:

  • VQ ⇔ {batch-mode VQ, learning VQ}[8] ⊂ {harmonic competition}[7] ⊂ {multiple descent cost competition}.[8]

Applications to human-aware information processing: These dual foundations led to applications in human-aware information processing:

  1. Retrieval systems for similar images[9] and videos.[10]
  2. Bipedal humanoid operations via invasive and noninvasive brain signals as well as gestures.[11]
  3. Continuous authentication of users by brain signals.[12]
  4. Self-organization[7] and emotional feature injection based on competitive learning.[8]
  5. Decomposition of DNA sequences by independent component analysis (US Patent: US 8,244,474 B2).
  6. Data compression of speech signals by competitive learning.[13][14][15]

The above theories and applications serve as contributions to the IoCT (Internet of Collaborative Things) and the IoXT (http://www.asc-events.org/ASC17/Workshop.php).

Awards and honors

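  • IEEE Fellow (1998).[16]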

References

  1. ^ Matsuyama, Yasuo (March 1974). "Studies on Stochastic Modeling of Neurons". Dr. Engineering dissertation, Waseda University. http://www.f.waseda.jp/yasuo2/MatsuyamaWasedaDissertation.pdf
  2. ^ Matsuyama, Yasuo (August 1978). "Process Distortion Measures and Signal Processing". PhD dissertation, Stanford University. http://www.f.waseda.jp/yasuo2/MatsuyamaStanfordDissertation.pdf
  3. ^ Matsuyama, Yasuo; Shirai, Katsuhiko; Akizuki, Kageo (1974-09-01). "On some properties of stochastic information processes in neurons and neuron populations". Kybernetik. 15 (3): 127–145. doi:10.1007/BF00274585. ISSN 0023-5946. PMID 4853437. S2CID 31189652.
  4. ^ Matsuyama, Y. (1976-09-01). "A note on stochastic modeling of shunting inhibition". Biological Cybernetics. 24 (3): 139–145. doi:10.1007/BF00364116. ISSN 0340-1200. PMID 999955. S2CID 5211589.
  5. ^ a b Matsuyama, Y. (March 2003). "The α-EM algorithm: surrogate likelihood maximization using α-logarithmic information measures". IEEE Transactions on Information Theory. 49 (3): 692–706. doi:10.1109/tit.2002.808105. ISSN 0018-9448.
  6. ^ Matsuyama, Y. (July 2017). "The Alpha-HMM Estimation Algorithm: Prior Cycle Guides Fast Paths". IEEE Transactions on Signal Processing. 65 (13): 3446–3461. Bibcode:2017ITSP...65.3446M. doi:10.1109/tsp.2017.2692724. ISSN 1053-587X. S2CID 34883770.
  7. ^ a b c Matsuyama, Y. (May 1996). "Harmonic competition: a self-organizing multiple criteria optimization". IEEE Transactions on Neural Networks. 7 (3): 652–668. doi:10.1109/72.501723. ISSN 1045-9227. PMID 18263462.
  8. ^ a b c d Matsuyama, Y. (January 1998). "Multiple descent cost competition: restorable self-organization and multimedia information processing". IEEE Transactions on Neural Networks. 9 (1): 106–122. doi:10.1109/72.655033. ISSN 1045-9227. PMID 18252433.
  9. ^ Katsumata, Naoto; Matsuyama, Yasuo (2005). "Database retrieval for similar images using ICA and PCA bases". Engineering Applications of Artificial Intelligence. 18 (6): 705–717. doi:10.1016/j.engappai.2005.01.002.
  10. ^ Horie, Teruki; Shikano, Akihiro; Iwase, Hiromichi; Matsuyama, Yasuo (2015-11-09). "Learning Algorithms and Frame Signatures for Video Similarity Ranking". Neural Information Processing. Lecture Notes in Computer Science. Vol. 9489. Springer, Cham. pp. 147–157. doi:10.1007/978-3-319-26532-2_17. ISBN 9783319265315.
  11. ^ Matsuyama, Yasuo; Noguchi, Keita; Hatakeyama, Takashi; Ochiai, Nimiko; Hori, Tatsuro (2010-08-28). "Brain Signal Recognition and Conversion towards Symbiosis with Ambulatory Humanoids". Brain Informatics. Lecture Notes in Computer Science. Vol. 6334. Springer, Berlin, Heidelberg. pp. 101–111. doi:10.1007/978-3-642-15314-3_10. ISBN 9783642153136.
  12. ^ Matsuyama, Yasuo; Shozawa, Michitaro; Yokote, Ryota (2015). "Brain signal׳s low-frequency fits the continuous authentication". Neurocomputing. 164: 137–143. doi:10.1016/j.neucom.2014.08.084.
  13. ^ Gray, R.; Buzo, A.; Gray, A.; Matsuyama, Y. (August 1980). "Distortion measures for speech processing". IEEE Transactions on Acoustics, Speech, and Signal Processing. 28 (4): 367–376. doi:10.1109/tassp.1980.1163421. ISSN 0096-3518.
  14. ^ Matsuyama, Y.; Gray, R. (January 1981). "Universal tree encoding for speech". IEEE Transactions on Information Theory. 27 (1): 31–40. doi:10.1109/tit.1981.1056306. ISSN 0018-9448.
  15. ^ Matsuyama, Y.; Gray, R. (April 1982). "Voice Coding and Tree Encoding Speech Compression Systems Based Upon Inverse Filter Matching". IEEE Transactions on Communications. 30 (4): 711–720. doi:10.1109/tcom.1982.1095512. ISSN 0090-6778.
  16. ^ "IEEE Fellows 1998 | IEEE Communications Society".