Felix Sasaki is the Chief Expert for Knowledge Graph and Semantic Technology in the Artificial Intelligence (AI) Unit at SAP, and an invited expert in the W3C Internationalization Activity. He has spent several years at the W3C on Internationalization as well as on Artificial Intelligence work, and has held various leadership roles in software development, publishing and higher education. In this interview we talk about the mechanics of generative artificial intelligence, the difference between the semantic web and artificial neural networks, and the principles and structure of web governance at the W3C.
Felix, you have worked on semantic web technologies and artificial intelligence for decades. Were you surprised, excited or unimpressed by the sudden mainstream success of large language models such as ChatGPT?
I was surprised by the speed of the success. The general technology behind large language models has been around for a while. But the capabilities that are now available even to users who are not AI experts are astonishing. How the success will evolve is an exciting question. We will learn the answer through more and more real-world applications of large language models.
How do you see the potential and limitations of artificial neural networks (ANNs) more generally?
ANNs have the potential to automate many tasks that humans currently spend time on. The limitations of ANNs are the amount of data and the compute power needed for training. With the World Wide Web as a source, data sparseness no longer seems to be an issue. But for enterprise AI applications, internal data is needed and must be brought into the right form for AI. In this process, enterprise knowledge graphs can play a key role. By making the semantics of enterprise data explicit, they contribute to the scalability of data processing for enterprise AI.
Could you contrast the potential and limitations of current large language model AI, based on artificial neural networks and their 'black-box' approach, with the rule-based Semantic Web?
I would not see this as a contrast, but as a virtuous circle. I created this metaphor with my colleagues Johannes Hoffart and Christian Lieske. As mentioned in the answer to the previous question, large language models and other machine learning approaches can benefit from the explicit knowledge provided by knowledge graphs. On the other hand, knowledge graphs can be generated or enhanced by large language models. Even though these graphs may not be perfect, large language models can contribute in an unprecedented way to closing the knowledge graph gap. The knowledge graph community now has an unprecedented chance for widespread knowledge graph adoption, based on the interplay between large language models and knowledge graphs.
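The "virtuous circle" can be illustrated with a minimal sketch: a language model emits candidate facts as text, those facts are parsed into a small knowledge graph, and the graph in turn supplies explicit, verifiable statements to ground a later prompt. The model output, the triple format, and the entity names below are all illustrative assumptions, not part of any specific system discussed in the interview.

```python
# Hypothetical LLM output: candidate facts in a simple
# "subject | predicate | object" line format (an assumption for this sketch).
llm_output = """\
SAP | headquartered_in | Walldorf
SAP | develops | enterprise_software
Walldorf | located_in | Germany
"""

def parse_triples(text):
    """Parse 'subject | predicate | object' lines into (s, p, o) tuples."""
    triples = []
    for line in text.strip().splitlines():
        parts = [part.strip() for part in line.split("|")]
        if len(parts) == 3:  # skip malformed lines the model might produce
            triples.append(tuple(parts))
    return triples

def facts_about(graph, subject):
    """Retrieve explicit facts about a subject from the graph."""
    return [f"{s} {p} {o}" for (s, p, o) in graph if s == subject]

# LLM output -> knowledge graph (one half of the circle).
graph = parse_triples(llm_output)

# Knowledge graph -> grounded prompt for the LLM (the other half).
context = facts_about(graph, "SAP")
prompt = "Answer using only these facts:\n" + "\n".join(context)
```

In practice a real pipeline would use a standard RDF store and validation step rather than tuples in memory, but the two directions of the circle are the same.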
Is your own work still focused on semantic web technologies?
Semantic web technologies are a key part of my daily work. Their relation to large language models is of particular interest to me, as described in the answer to the previous question. I am not sure whether, for example, the interplay between large language models and knowledge graphs will lead to a new research field. Nevertheless, there are the aforementioned opportunities to create mutual benefit.
How do you think AI will influence sectors that you are familiar with – higher education, software development and publishing – over the next 2-3 years?
All these sectors are already changing. A general view of these sectors is that they produce content. AI is already accelerating content production enormously, be it educational material, software code or even books generated by AI. As with other technical changes, new job profiles will emerge. For example, the emergence of machine translation led to the development of pre- and post-editing as integral parts of translation processes. If you are working in an area affected by AI, new questions will arise. For example, how can a well-known designer maintain the originality of her work if anybody can ask a large language model to "Create a visualization in the style of …"? In these situations, regulations can help to some extent. But we must be careful not to limit ourselves and lose competitive ground.
Can you talk about your work at the W3C? What brought you there?
I wrote a PhD thesis in computational linguistics, with a focus on the encoding of linguistic information. At the time of my thesis and still today, standards-based formats like XML or RDF play a key role for linguistic and general data re-use and data integration. However, I joined the W3C not only because of my background in Web technology standards, but also for personal reasons. My wife is Japanese, and the W3C job offer was an opportunity for our family to move back to Japan. Also, I think I was chosen by the W3C because of my multilingual and multicultural background. So, I got this great work opportunity because of my wife, and not only for this reason am I very grateful to her.
My work at the W3C meant becoming a bridge builder, involving Japanese companies that are interested in Web technologies. A key area during my time in Japan was the Japanese printing industry. I coordinated the discussion between local Japanese companies and global Web technology companies about requirements for high-quality Japanese layout on the Web. This is one key example of the W3C's work: to be a mediator between various communities, and in that way move Tim Berners-Lee's vision of the Web forward: "This is for everyone".
Do you use AI tools? Which ones and what for?
I am working and experimenting with many tools. This field is so dynamic that it is hard to settle on a fixed set of tools, and I do not want to highlight a specific one. I use these tools for the purposes highlighted in this interview: generating or changing code, knowledge graphs or other types of content. A fascinating aspect of large language models is that they are not developed with one usage scenario in mind. So new ideas and uses come up every day.
Felix Sasaki is the Chief Expert for Knowledge Graph and Semantic Technology in the Artificial Intelligence (AI) Unit at SAP, and an invited expert in the W3C Internationalization Activity. He has spent several years at the W3C on Internationalization and at the German Research Centre for Artificial Intelligence (DFKI) on AI, and has held various leadership roles in software development, publishing and higher education.