
Literal transcript of the interview conducted by Martin Rost with Andreas Pfitzmann

on 21 June 2010 (11:00 to 12:30)

Sources:
http://www.maroki.de/pub/video/pfitzmann/start_video_pfitzmann.html
https://www.datenschutzzentrum.de/interviews/pfitzmann/transkript-dt.html


English translation (last updated: 27 July 2011)

Martin Rost: Prof. Pfitzmann, thank you for allowing me to interview you. We are sitting here at the Technical University of Dresden, at the Chair of Data Protection and Security. You are the chairholder. Before we get to the content of data protection, which is very exciting from a technical point of view and something you specialise in, I am naturally interested in how you came to be involved in data protection.

Andreas Pfitzmann: Oh, that's a long story, but it could be a short one too. The long story is that since I finished school, I've been very moved by the question: What do I want to do with my life? Do I want to deal with people? At the time, I thought about studying psychology or even theology. Do I want to make something of my mathematical and technical talents? Do I want to work with machines? And then, during my final year at school, I was told that there was a subject that had something to do with both. I'd never heard of it before, it was new at the time. So I decided: yes, that sounds good. It's more technical and mathematical, but it certainly has more to do with people and society than physics or mathematics. I'm going to study that. And it was clear to me throughout my studies that I wanted to learn these technical things. Ultimately, to then somehow do something useful for people or for society. That's the long story.

The short story goes like this: in the spring of ‘83, I had just finished my studies. I finished my studies in the autumn of ‘82, and I was a brand new research assistant at the Chair of Fault Tolerance. Dr. Ruth Leuze, the state commissioner for data protection in Baden-Württemberg, gave a lecture in Karlsruhe on data protection. The students had fought for this for a long time. At the beginning, our professors didn't really like it, because it had something to do with politics, with law, with things outside of computer science. It wasn't advertised very much either, with maybe 20 students at the first event. And then, of course, the census was scheduled for that spring. The first time, there were 20 of us, then there were 40, then the seminar room was bursting at the seams, then we went to one of the larger lecture halls, the next time it was half full and finally it was packed. And it was an eye-opening experience for me to see what strong emotions this census awakened. How some citizens were extremely afraid that the state could spy on them, that something could be done with their data that was against their interest. Perhaps many had the feeling: ‘Gosh, 1983, that's somehow 1984 minus 1; so we're very close to it.’ So it was a huge movement and I sat there with my mouth open. And at around the same time... I went to the faculty colloquium regularly back then... we had a lecture by someone who came from the telecommunications industry. He was working on a project called ‘BIGFON’: a broadband integrated fibre-optic local telecommunications network. The idea was: in the future, we will provide all services, including television and radio, over this one network. The enormous technical charm of this is that you can now choose from an unlimited range. The transmission route itself is no longer the limit: it can still only carry a certain number of television channels at once, but you choose them from an unrestricted selection. And the price, of course, is that this selection can be observed and also recorded in the communication network. With the classic television network, the network doesn't know how many people and which people are watching which channel, which programme, but with BIGFON you would know exactly which films are being watched, when people switch channels, etc. And so you could see exactly which films someone is watching and whether they switch when there is little violence or when there is a lot of violence. That was much more disturbing than the rather boring questions in the census, if you'll excuse the expression. So I asked this telecommunications engineer whether he had noticed what was going on with the census, whether he had ever thought about how his work as an engineer on this new network relates to data protection, and whether he thought that anyone would accept this network at some point. And I was lucky. He was a completely honest engineer who told me that he himself had never thought about it, and he didn't know anyone who had either. That, together with the lecture by Ruth Leuze, led me to start thinking about this problem with my colleagues in the coffee breaks at our institute.

00:05:54

Martin Rost: And the problem has exploded, too. Technical development has exploded since then.

Andreas Pfitzmann: Yes, it was clear to us how many technical innovations would be driven by infrastructure over the next 10, 20, 30 years. Yes, and suddenly I had the feeling: Andreas, this is it; this is how you can combine your technical knowledge with your interest and your desire to really do something for people's needs and not just make money. That's how I got into data protection. I then spent about a year and a half doing technical data protection... that's what we called it back then... that is, data protection through technology.

00:06:41

Martin Rost: In the 80s already?

Andreas Pfitzmann: That was ‘83: technical data protection.

Martin Rost: Data protection by design. There was a great influence from America; a theoretical academic influence.

Andreas Pfitzmann: Yes, so for the first few months I worked completely from nothing, so to speak. I started from scratch, I knew nothing in that direction, and then at the first conference where I had a paper accepted, a professor, Professor Fiedler, said to me, ‘Do you know the name David Chaum?’ I said, no, I don't know him, can you spell it for me? I wrote the name down, went to the library and found an article by David Chaum in the ‘Communications of the ACM’ from 1981... so a few years earlier... in which David Chaum developed the concept of mixes and digital pseudonyms. I found that interesting and exciting, and of course a little bit disappointing: Oh, I'm not the first. I'm an independent second, so to speak. But it simply helped us to advance as a group, because David Chaum, as a cryptographer, had a different approach to the problems; he wanted to solve everything with cryptography. I came more from the field of network technology, from networks; I wanted to solve the problems at the physical level. From today's perspective, one would say: wanting to solve it with cryptography leads to the clearer system design, or the more flexible system design, and that is what has prevailed. So we got to know the first fundamental work by David Chaum from the literature. I tried to write him a letter at the time. I wrote it and sent it, but it probably never arrived. At first I had no way of reaching him, I didn't have a current address, and then in the spring of ‘85 I met him at a conference and approached him. He was surprised that someone had not only read his article, but also understood it, including the more difficult parts. He spontaneously invited me and my group to visit him in Amsterdam. By then he had moved from America to Amsterdam, so the main part of my group... Michael Waidner, Birgit Pfitzmann, sometimes one or two students... travelled to Amsterdam often from then on, and we were able to learn a great deal in a short time from David Chaum, who simply had a few years' head start in the field and is simply a gifted cryptographer.

Yes, that's how I got into the field, and then after a short time others became interested in our work. At first, it was mainly lawyers who took an interest. Of course, there were very different comments, for example from data protection officers at the time: ‘What, you want to encrypt the data? Then we won't be able to control who sends what to whom at all!’ So data protection officers at the time felt: please, data protection should not be done through technology but through law; technology is precisely what endangers it, and we need the law to protect against it. All right, I didn't agree with that too much, but it was feedback, albeit negative, and at least it indicated some interest. Then there was a specialist group in the GI (Gesellschaft für Informatik, the German Informatics Society) that dealt with legal informatics, the legal design of information technology.

00:10:43

Martin Rost: Can you give us some names?

Andreas Pfitzmann: There was Mr Göbel, Mr Fiedler, Mr Redecker, who were also simply interested because it was new to them that we now wanted to directly support some things technically or even build them. They invited us to many small workshops, discussed a lot with us and helped us a lot to begin to understand what makes lawyers tick. I don't want to claim that I've fully understood it yet, but we've started to understand it, and that was, so to speak, the first discipline outside of computer science with which we had very close contacts. The data protection officers themselves didn't react much in the first few years. I remember sending the first papers to Dr Leuze with a thank-you letter about how great I thought it was for me that she brought the topic to my attention after all. I then received a lovely letter in return, but nothing happened. And it was only much, much later that data protection officers really took an interest in what we were doing and got involved.

00:12:02

Martin Rost: Roughly when?

Andreas Pfitzmann: We are now... I guess... around ‘88/89/90/91... when people like Hansjürgen Garstka or Helmut Bäumler in Kiel invited people like me and said: ‘You have to do further training with us. Our office full of lawyers now has to learn about technology and technical possibilities.’ That was a big surprise for me; I enjoyed going there and enjoyed teaching something. And what's nice, of course, is that I also learned a lot in turn, because the questions you get teach you a lot, such as: What are they really interested in? How well did you explain it, etc.?

I learned a lot from lawyers. And then a casual contact arose... probably a few years earlier... with the Provet research group... with Professor Dr Alexander Roßnagel. And I think, when I look back on my life now, that Alexander Roßnagel is probably the lawyer with whom I have had the longest and most constant exchange.

00:13:20

Martin Rost: The Chaum papers... How did you take them on? What did you do with them?

Andreas Pfitzmann: First of all, we tried to write down some of the things in the papers in a more understandable way, from our point of view. We also tried to bring in the engineering perspective, and to some extent the legal perspective, and so to expand David Chaum's very, very good ideas; to consider how they could be implemented and, in later years, how they could be improved. And I think we succeeded in doing that. A whole series of fundamental articles were produced at the time. From today's academic perspective, one has to say that unfortunately a lot of them are in German, because of course our colleagues from America, and colleagues from around the world, practically cannot read them. As a German citizen, I have to say fortunately they are in German, because the lawyers and politicians of the time would have been completely overwhelmed by English articles. So we published a lot in German at the beginning, and it helped that in the German-speaking world this discussion about data protection and technology design was cultivated much earlier and more intensively than in any other language area I know of. So if you want to read the leading literature in the field from, say, 1985 to 1990, it's in German, not in English; only in more recent years has English, as a scientific language, also become established in this field.

00:15:16

Martin Rost: The mixes of David Chaum... where you, together with colleagues, set about actually putting things into practice, really implementing them, showing that these are not just ideas that might be realised at some point in the future, but that they are already feasible.

Andreas Pfitzmann: Now, of course, we are in a much later time period. Multilateral security started.

Martin Rost: What does that mean? Multilateral security; what is the core idea?

Andreas Pfitzmann: The core idea is that we want to take care of the security of all parties involved or, to put it even more generally, of all those affected. The idea is that people should not merely be affected, but should be brought into an active role, so to speak, so that they are involved and can represent and articulate themselves in the process and in the system. You would start by asking: What are the different interests? This should be properly analysed and written down, and then, of course, we will end up with the fact that there are typically clashes of interests. When there are clashes of interests, we have to look at how we conduct negotiations; in other words, how do we resolve the contradictions? Because as an engineer, I can't build a system that meets conflicting requirements. That's not possible. I need a consistent requirements analysis, otherwise I can't build anything. Which of course also means, the other way round: if clashes of interests have to be resolved, then their resolution typically has to be faster than a system can be built; i.e. our infrastructure has to be built in such a way that it can support the most diverse weightings and resolutions of interests. Yes, so after the end of the negotiation it should be clear what applies, what has been agreed. And it should be possible to enforce it against others. So data security and data protection should not just be a declaration of intent or a promise that can later be forgotten, broken and ignored at will. Rather, we would like to have it enforced. And in multilateral security, you would also hope that everyone can enforce their interests against the others. If you want to summarise it, you could say that multilateral security is security with minimal assumptions about others. And any assumption about others can, of course, also be wrong. So the fewer assumptions I have to make about others, the better the chances are that I don't have any false assumptions and that, in the end, what I want is really guaranteed by the system. So much for multilateral security, which was developed in a research project of the Gottlieb Daimler and Karl Benz Foundation, together with Günter Müller, Kai Rannenberg and me, plus of course many other people in related fields and other disciplines. We also had a lot to do with psychologists at the time; they were the second discipline after the lawyers with whom I came into closer contact, where I eventually had the feeling that I was slowly beginning to understand them, and where I was eventually even pleased to feel that I was also understood by them. It probably took them much longer to feel understood by me than it took me to feel that I understood them.

And this concept of ‘multilateral security’ is, in my view, a kind of superstructure or generalisation of traditional security. Traditional security was: the person who builds the system decides how much and what kind of security is in it. So typically, when banks build systems, the bank is only interested in the security of the bank and not in the security of the bank's customers. And then it typically takes a decade and a half before the bank realises that, because it built the system, and because judges gradually understand that it could have been built differently, it suddenly bears the burden of proof, and then loses the lawsuits. And so a system that was initially very secure for the bank and very insecure for the customers has now become very secure for the customers but very insecure for the banks; at least when the customers find a technically competent lawyer. So multilateral security is, on the one hand, a generalisation of classical security; I also think it is a comprehensive concept in which technical data protection finds its place, because of course when I describe my requirements for the system, I sensibly state which data protection properties... which confidentiality properties... I would like to have guaranteed, and, of course, which data avoidance strategies as well. Yes, so much for multilateral security.

00:22:37

Martin Rost: Anonymity?

Andreas Pfitzmann: ...developed... well, multilateral security was developed as a concept, let's say, maybe from ‘95 to ‘98. Anonymity is also a protection goal, so to speak, which can then be covered within multilateral security. It has been on our minds from the very beginning... anonymity, when we build networks, build infrastructures, was a primary goal for us, from ‘83 on. Because we had the feeling, firstly, that if confidentiality is to be hard, then the contents may only be known to those who are supposed to know them. But in communication, that means that at least one other person will get the content. I mean, if I don't want anyone to know, I don't need to communicate. Which of course also means that this other person can always pass on the content. And with anonymity, the big research question for us at the beginning was: Can we build networks in such a way that nobody can tell who is communicating with whom? At first, this seems completely abstruse. Is it even possible to build such a network? From today's perspective, we would say: Yes, it requires a noticeable amount of effort, but it can be built. And of course the next question is: who would want to use it at all? When I communicate, surely I will want to know with whom I am communicating. But the first question is: do you absolutely have to know with whom you are communicating, or is it enough to have a service description or role description, for example, so that I turn to this service, to this service address, and there could be several people behind it? Incidentally, this is not so strange: typically, when someone calls the telephone counselling service, they don't know which person they will be connected to; that's a service description. And the second aspect of anonymity for us was: if we have anonymity at the communication level, then wherever you do not want anonymity you could have very strong accountability, of course by digitally signing all messages; and if you now have a corresponding directory infrastructure, a so-called public-key infrastructure, in which the names, the civil identities, are really listed, then you could also determine with a very high degree of certainty who a message comes from. In short, we said at the time that the ISDN (Integrated Services Digital Network) being built back then was a very poor compromise: ultimately, it does not prove who the messages really come from and whether they are really unchanged, so it is not good enough for integrity and accountability. But it does destroy so much anonymity that it is not good for that either; it is something that does nothing really well.
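To make that combination concrete, here is a minimal sketch in Python (using the ‘cryptography’ package). It is purely illustrative, not a system Pfitzmann's group built: the directory, the name and the message are invented, and an Ed25519 key stands in for whatever signature system a real public-key infrastructure would certify.

```python
# Strong accountability on top of an anonymous network: every message is
# digitally signed, and a directory (PKI) maps civil identities to
# verification keys. Illustrative sketch only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()

# Hypothetical directory entry: civil identity -> verification key.
directory = {"Erika Mustermann": signing_key.public_key()}

message = b"I hereby order 100 widgets."
signature = signing_key.sign(message)  # the sender signs every message

# The recipient got the message over an anonymous channel, yet can check
# with high certainty who signed it and that it is unchanged
# (integrity and accountability).
try:
    directory["Erika Mustermann"].verify(signature, message)
    print("signed by Erika Mustermann, unmodified")
except InvalidSignature:
    print("forged or altered")
```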

00:24:00

Martin Rost: Did you feel pressure from the industry you were criticising, or from the security authorities? Or were you not taken seriously at all?

Andreas Pfitzmann: Well, nobody stepped on my toes in a tangible or harsh way. There were a few professors at the University of Karlsruhe who said: ‘Mr Pfitzmann, you are aware of how much research funding we receive from Deutsche Telekom, Siemens or Alcatel, aren't you? And that some of the things you write, which you perhaps consider pointed, or which you write so pointedly that from the point of view of our partners they appear exaggerated, are not always met with goodwill.’ At that time, we also wrote papers arguing that cryptographic systems should be publicly known and should be standardised. That was also the beginning of the crypto debate.

00:24:59

Martin Rost: Where are we in terms of time back then?

Andreas Pfitzmann: The paper appeared in ‘87, so the discussion probably began in ‘86. At that time, we wrote the first article about it in the German-speaking world. I think we were, compared to the Americans, once again earlier, and we also mentioned the so-called Zentralstelle für das Chiffrierwesen (Central Office for Encryption). At the time, that was a crypto authority, essentially there to develop and ensure cryptography for the diplomatic service and, of course, to break the cryptography of other states. I had heard orally that something like this existed, but I had never spoken to anyone about it; I don't think they were interested in talking to people like me. And the first time I saw the term ‘Zentralstelle für das Chiffrierwesen’ in print was in the article that we wrote ourselves, because at that time they were not seeking publicity at all. And there was also a time when the ZfCh asked another chair at Harbor (?) in Karlsruhe whether there was some way to silence these people. But beyond this request, to which the answer was ‘No’ (I don't know now whether it was ‘Thank God, no’ or ‘Unfortunately, no’, but it was definitely ‘No’), I didn't sense anything more.

00:26:42

Martin Rost: Does that mean you are also allowed to do research freely and without being harassed?

Andreas Pfitzmann: Yes, I think so. As long as we were just doing research, we had, as is completely normal in research, people who said, ‘What you are doing is great, we agree!’ In every field of research there are also people who say, ‘Nope, we don't accept your assumptions, your premises. You should be doing completely different things.’ But what happened back then was just scientific dialogue, sometimes even argument; completely normal.

00:27:14

Martin Rost: But you also have engineers in your team, i.e. you also implemented things.

Andreas Pfitzmann: Not at all at that time.

Martin Rost: And later on, did something change?

Andreas Pfitzmann: ... I'd like to come to that. Well, let's say until 96/97 we mainly did desk work. For me, that was also something I enjoyed doing as a person. We took ideas, either David Chaum's or ones we developed ourselves, and then spelled out in great detail how the technology could be built. What would it cost? Not in terms of dollars or euros, but in terms of transmission volume and delay times; in other words, in the units in which engineers tend to express costs. So it was clear, in the first, let's say, 12 to 13 years, that anonymity is feasible, but of course it has its costs. And if anonymity is not implemented, then that is a political or legal or, for that matter, economic decision not to do it. I was relatively satisfied with that as a person, because I think that is the task of a basic scientist: look at what the possibilities are, what could be implemented. But as a scientist, I am not the one who decides whether it is implemented. Democracy means that society, the community and then its representatives decide what is done. I think that's how my generation dealt with it at first; that was our approach to research.

And then, from around 94/95/96, there was a very strong movement, via the internet, not just to write papers but to say: we can try it out. Basically, the internet's architecture is the big playground for trying things out; at least that was originally the idea. And some of its insecurities can only be commented on as follows: it was simply designed as a playground. And it is still designed as a playground today, so please don't be surprised if it is used as one. So that came to the fore, and some of my colleagues, who are perhaps 8/10/12 years younger than me and grew up in their studies with ‘We'll just try it out’, had the strong feeling that we should try out anonymity, anonymous communication. I thought it was interesting, so I said: Yes, okay, go for it. And then they started in a way that made me sit there shaking my head for quite a while. Namely, they didn't implement what we had come up with in theory at full strength; they first implemented it in such a way that the performance was still halfway decent, and they made their compromises on anonymity. That was very unusual for me, because in my work at my desk, my goal had of course been: ‘Make it as secure as possible!’ Very early on we worked with what we called ‘attacker models’: a description of what the party working against the protective mechanisms can do. And we said very early on: ‘We're going to assume that all channels, all transmission lines, are being intercepted by the attacker.’ That is not something we first said in 2007; back then it was fiction, of course, an estimate, the absolute horror scenario: ‘We're a long way from that; it will never happen.’ Today, we have to say: ‘Well, the secret services and police authorities of the world have now implemented the tapping of all transmission lines.’ And of course the concepts we developed at our desks back then would still be secure against these things. But so, my staff started with attacker models that I found much too weak, and yes... they started, and that naturally brought us into contact with many people and organisations and institutions with whom we had had no contact before. It starts out very harmlessly. Our anonymity service was used because some student, presumably either with justification or as a prank, wrote in their teacher's guestbook: ‘That stupid cow can't teach properly.’ And then, of course, the lady turned to us and said, ‘I want to know which student did this.’ Of course, we said, ‘Sorry, we can't find out, we don't know either.’ I don't know if we wrote it, but we probably all thought it: she would do better to put her energy into talking to her students or changing her teaching rather than trying to find out, through us, who wrote the criticism in her guestbook.

00:33:11

Martin Rost: What just came through: the special thing about this anonymity service is that it also protects against the operator of the service itself. Let's state that again; that was actually the problem to be solved.

Andreas Pfitzmann: Yes. What existed before were so-called anon proxies: a single computer to which you sent your traffic, which then replaced addresses and forwarded it. And this one computer knew exactly what was sent from whom to whom. So that, let me change the perspective, if I were the head of a secret service, it would be obvious for me to offer an anon proxy, because there is no cheaper way for me to get the information of who considers what to be secret and who does not want to be observed communicating with whom... And with our system, traffic didn't just go through one intermediate node, but through several, which also had different operators; an early operator of such a mix was the Independent Centre for Privacy Protection Schleswig-Holstein. So several operators have to cooperate with each other to find out who is communicating with whom. We had that from the very beginning, though I could now report on many weaknesses of at least the first versions of our software, ways this linking could have been done at the time even without the operators cooperating. That was the reason why I shook my head so much at the beginning. But Hannes Federrath, Stefan Köpsell and others said: ‘Andreas, let's give it a try.’ And to some extent, despite the weaknesses that I see, it seems to have worked. Because now not only the teacher but also, of course, the police came to us from time to time with some kind of investigation and said, ‘We would like to know...’ Okay. Our standard answer was, of course, ‘We're sorry, we...’ either ‘...don't know’ or ‘...have no record’, and then, of course, the question often came, ‘Then please record for the future.’ Then the question is: on what legal basis are we obliged to do so, or are we even entitled to do so, etc., etc.? So from the point where we also operated a practical system, we suddenly had a lot of contact with the police, fewer secret services, of course, and then also the media and the press.
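For illustration, here is a minimal Python sketch of the layered encryption behind such a mix cascade (assuming the ‘cryptography’ package). The symmetric Fernet keys are a stand-in for the public-key hybrid encryption real mixes use, and the batching, padding and reordering of messages that real mixes also need are omitted; this is not the AN.ON/JAP code.

```python
from cryptography.fernet import Fernet

class Mix:
    """One mix in a cascade; in reality it would hold a key pair and also
    collect, reorder and pad messages so that timing reveals nothing."""
    def __init__(self):
        self.key = Fernet.generate_key()
    def process(self, message: bytes) -> bytes:
        # Removes exactly one encryption layer; learns only the next hop,
        # nothing about the layers underneath.
        return Fernet(self.key).decrypt(message)

def wrap_for_cascade(plaintext: bytes, cascade: list) -> bytes:
    # Encrypt for the last mix first, so the first mix's layer is outermost.
    message = plaintext
    for mix in reversed(cascade):
        message = Fernet(mix.key).encrypt(message)
    return message

# Three mixes with independent operators, e.g. a university, the ULD,
# and a third party; only if ALL of them collude can sender and
# recipient be linked.
cascade = [Mix(), Mix(), Mix()]
onion = wrap_for_cascade(b"who talks to whom stays hidden", cascade)
for mix in cascade:
    onion = mix.process(onion)
print(onion)  # b'who talks to whom stays hidden'
```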

Because I think that for the media and press, of course, such a running system, with its minor and major scandals, is much more vivid and much more tangible. And I think that this system, from which we have learned a lot, has of course also taught many users a lot about anonymity. So it was, so to speak, a big... and still is a big... awareness campaign.

00:36:25

Martin Rost: The anonymity software of this service was also used, for example, by the police, and probably also by representatives of large industries when they wanted to ensure that their competitors did not learn what they were interested in. This means that there are advocates of anonymity in areas where you would not necessarily expect them, for example in investigating authorities. And yet, at the moment, I still don't have the impression that it is regarded as a matter of course that there must be an anonymity infrastructure for communication; to give just one example, to be able to make political choices over the net.

Andreas Pfitzmann: Well, first of all... I can confirm that there is a need for anonymity, especially for the police, secret services and industry. There was once a time when we were told that not only was child pornography being distributed via our service, but a paedophile ring was also using our service to arrange meetings to abuse children. And that is of course a point where we offered to shut down our service. At this point, I simply have to say: freedom of research is a wonderful thing, but there comes a point where, as a researcher, I no longer want to invoke my freedom, and that is when children are being abused. And the answer we got back then was: ‘For heaven's sake, don't shut down your service! Your service must continue to run. And not just because otherwise we would warn people and they would realise that we are hot on their heels in our investigation and prosecution. But also because we need your service itself! We use your service, so to speak, to search for illegal content on the internet. Because if we arrive with an IP address from the BKA (Federal Criminal Police Office), then it is of course clear that the websites will only show us content that is legally okay.’ So at this point, just a note: what a web server returns in response can also depend on which IP address the query comes from. So we were told pretty clearly: ‘We don't want you to shut down your service. We need your service.’

00:39:02

Martin Rost: I remember the story with China; that people wanted to be able to research the internet from China, and that's why Western companies also accessed AN.ON and the JAP client (Java Anon Proxy).

Andreas Pfitzmann: We can take this even further: I think it was important for companies. And it was important for the freedom movements in Iran. This service brought us into contact with many actors; we sometimes received messages from China and Iran saying, ‘Thank you for running this service.’ So we not only received e-mails from teachers who felt criticised by their students. Of course, we also had to take note of the fact that this service was abused. I just have to say for myself: it gets under your skin when you learn that something you operate or have helped build is used to organise the abuse of children; that does not leave you calm. However, based on what we know statistically, it has to be said that our service does not have a higher crime rate than the internet as a whole. In the early years, we didn't know whether our service would become a gathering place for people who would ultimately come into conflict with the law, and thus also with the police, at some point. That is not the case. We have had some spectacular cases where our service was used for criminal purposes.

But with millions of downloads and probably hundreds of thousands of users (never at the same time, but spread over the decade of its existence), the enquiries from the police are by no means proof that our service frequently comes into conflict with the law. And in this respect, our initial fears... ‘Are we starting a service only to have to stop it half a year later for the sake of our conscience, simply because we essentially only support criminals in their actions?’... have, based on what we know, not come true.

00:41:28

Martin Rost: Now I suggest we broaden the perspective: first you created infrastructure, and then there was a phase on the subject of identity management, which sits a few layers higher.

Andreas Pfitzmann: Yes. So we have now been talking about what I would like to call data avoidance. And not just avoiding storage, but even avoiding data in the sense of avoiding recordability. That was the topic I started with in research. And it continues. It is still an issue from the perspective of: how far can you get? How efficient can you become? But it is also clear that there are simply many services where data avoidance cannot solve the problem completely, because some data has to be communicated, because you also want to be recognised by communication partners, for example to continue a transaction or a dialogue. And that led to work that could now be called a kind of second generation of research activities. In our group, this is mainly associated with the keyword identity management. The idea is that each person does not have just one identity and does not always appear under their entire identity, so to speak, i.e. their ID card number, date of birth, place of residence, interests, educational qualifications, blood type, and who knows what else; so always under their total identity – but that you can say: ‘No! We want to appear in different contexts with different partial identities – that's what we call it!’ So if I am taking part in a forum where we exchange the latest jokes, then my blood type and probably also my highest educational qualification are completely irrelevant. The only thing that matters is: were the last ten jokes I posted publicly funny or not? And with very little personal data, you could create a partial identity that would essentially only have to ensure that no one else could post bad jokes under my name. I would then essentially need a digital pseudonym, that is, a verification key of a digital signature system, so that I could post my jokes anonymously over an anonymous infrastructure, signing them with this digital pseudonym so that it is clear that they come from me; so that no one else can ruin my good name, if my jokes are somehow quite nice and entertaining.
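A small illustrative sketch of this in Python (again with the ‘cryptography’ package). The class, the attributes and the forum are invented for illustration; the point is only that each context sees a few attributes and its own verification key, nothing more.

```python
# Partial identities: one person, several contexts, each context sees
# only the attributes it needs, under its own digital pseudonym.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# My full civil identity; almost none of it is the jokes forum's business.
full_identity = {
    "name": "...", "date_of_birth": "...", "blood_type": "...",
    "degree": "...", "hobby": "jokes",
}

class PartialIdentity:
    """A context-specific view: a few attributes plus an own pseudonym key."""
    def __init__(self, attributes: dict):
        self.attributes = attributes
        self._signing_key = Ed25519PrivateKey.generate()
        # The verification key IS the digital pseudonym: it links my posts
        # to one another, but not to my civil identity or other contexts.
        self.pseudonym = self._signing_key.public_key()
    def sign(self, message: bytes) -> bytes:
        return self._signing_key.sign(message)

# The jokes forum needs neither blood type nor degree.
jokes_forum = PartialIdentity({"nickname": "punster"})
post = b"a reasonably funny joke"
signature = jokes_forum.sign(post)

# Anyone can check that this joke comes from the same pseudonym as the
# last ten, so nobody else can post bad jokes under this name;
# verify() raises InvalidSignature on a forgery.
jokes_forum.pseudonym.verify(signature, post)
```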

Between ‘always doing everything under your full civil identity’ and the other example with the jokes... the latter was, so to speak, the nice other extreme, where you need practically no personal data. This keeping-apart is something that, in the material world at least, does not work well. And I think it is very, very important that it works well in the digital world. Because in the digital world, forgetting is practically impossible to organise. In the opera or in a football club, etc., people will gradually forget my face unless I have behaved terribly or made a huge impression on them. But in the digital world, whatever can be linked together and brought together can no longer be erased from the world. And in this respect, the first generation, anonymous communication, complements the second step, identity management, very well: when communicating with a group, giving the communication partner or partners only the information that is relevant to this situation in this area of life.

00:47:57

Martin Rost: Your work is clearly political. And you also want to have a political impact. Can you give examples of where you have been successful politically?

Andreas Pfitzmann: The first question is: what does successful political work mean? If I take it as a basic researcher, then it means: you have found things in basic research that create new possibilities, and whether these new possibilities are to be realised is discussed and decided politically. Often the decision consists of ‘reacting by doing nothing’... but okay... So in this sense I would say: ‘Yes, very successful.’ And we didn't do it alone. It was a whole group of people: David Chaum, our group, and now many other places, be it in German-speaking countries or internationally; people who do good work. Yes, developments in the field of technical data protection are being noticed and discussed politically. That was the basic researcher's view.

If you now ask me, as a politically minded citizen: ‘To what extent have political decisions been made in the way I would have wanted or do want as a politically minded citizen?’... And if success means: a decision is actually made and it goes in the direction I want? Then our success was very mixed.

There were isolated cases where we were successful. In most cases, nothing happened. And in a few cases, things happened explicitly against our advice. Now, of course, or probably as a politically minded citizen, one can say: That's actually a very healthy situation. Because it would be very surprising in a certain way if an individual could stand up and say, ‘They have always accepted my advice, and everything has turned out the way I would have wished.’ So somewhere I almost have the feeling that if someone says that, then he must be pretty stupid or full of himself.

So when did we as researchers get the result we wanted? The most prominent example we have is the topic of crypto-regulation. From about 1986, there were intensive discussions about whether to regulate the use of cryptography, perhaps even its export and implementation, because cryptography had reached the point where it could enter the mass market. Of course, there was concern that cryptography would then also be used by rogue states, foreign intelligence services and terrorists. And particularly in the USA, but also in other industrial nations, the discussion was: ‘Can't we try to ensure that a key is always stored somewhere in cryptography, so that someone is able to read the plain text?’ The buzzwords coming out of the USA were first key escrow, with the Clipper chip, and later key recovery. Here we... so my group, back in Karlsruhe and then later in other places, in Hildesheim and in Dresden... wrote very early on very fundamental papers explaining why we believe that cryptography, and the promotion of cryptography, especially public-key cryptography, benefits civil society more than the attempt at regulation harms terrorists and criminals. Because terrorists and criminals typically don't need public-key cryptography; they can exchange their keys in other ways. And the one-time pad... that's a cryptographic method that cannot be broken by any supercomputer in the world... has simply already been invented. It is in all the textbooks and known to anyone who wants to know. [Holds up a USB stick.] You could fit the 1983 census data onto a USB stick like this five times over. And for the one-time pad, I can get so much key material onto a stick like this that I can phone someone for a lifetime, even transmit screen content for many hours. So crypto-regulation, I am convinced, and I think there are very good arguments for this, does practically no harm to organised crime and terrorists, but it does harm civil society. This argument is quite old; it is already in the first papers and, between you and me, it has achieved practically nothing, because I have the feeling that in political discussions, arguments achieve very, very little. So then in 92/93, when the key escrow/key recovery debate came over again from the USA, we entered a new field of research: steganography.
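The one-time pad he is holding up key material for fits in a few lines of pure Python; a minimal sketch (with a toy key length; a real pad would be gigabytes, exchanged in person and never reused):

```python
# One-time pad: XOR the message with a truly random key of at least the
# same length, used exactly once. Information-theoretically secure; no
# supercomputer can break it.
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) >= len(plaintext), "key must be at least message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

key = secrets.token_bytes(64)  # exchanged beforehand, e.g. on a USB stick
ciphertext = otp_encrypt(b"meet at the usual place", key)
print(otp_decrypt(ciphertext, key))  # b'meet at the usual place'
```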

Steganography is the ancient art of hiding secret data in larger volumes of data that appear normal. Secret data is embedded in, for example, images, which afterwards look practically the same and are very inconspicuous. We developed this further and advanced research in this area. There were years when I think we had the best research group in the field of steganography, at least in Europe. And we were also able to demonstrate how we can embed data in video conferences. We built a really nice demonstrator where you could actually see things. We demonstrated this, for example, in the Hessian state parliament, where a big meeting of data protection officers takes place every year. They had invited us, I think it was in ‘98, and we wanted to demonstrate this. Then we were told that unfortunately there was no projection screen in the plenary hall. We insisted that we needed one. We then managed to get a dowel actually put into the front wall of the Hessian parliament that afternoon so that a screen could be hung there, and we were able to demonstrate what it looks like [shows images]. The success was resounding. What I would have liked as a basic researcher is for someone to have come afterwards and asked: ‘And what is behind the pictures you are showing? How exactly do you do it?’ We would have been happy to explain it. Nobody came. Nobody wanted to know. They believed everything we said. They could; we weren't bluffing, what could be seen was real. But actually it should have been critically questioned, which no one did. And the pictures were very, very impressive. People felt really touched. A state secretary from the Bavarian Ministry of the Interior called us supporters of terrorism. A federal minister of justice said: ‘Yes, if that's the case, then obviously crypto-regulation makes no sense.’ So the pictures really made an impression.
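A minimal sketch of the simplest embedding technique, least-significant-bit substitution, in pure Python. This is only the textbook idea, far simpler than the group's video-conference demonstrator; random bytes stand in here for image pixel data:

```python
# LSB steganography: hide each bit of the secret in the least significant
# bit of one cover byte (e.g. one colour channel of one pixel). The cover
# looks practically unchanged.
import os

def embed(cover: bytes, secret: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit  # overwrite the LSB only
    return stego

def extract(stego: bytearray, n_bytes: int) -> bytes:
    bits = [stego[pos] & 1 for pos in range(n_bytes * 8)]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = os.urandom(256)            # stand-in for image pixel data
stego = embed(cover, b"hidden")
print(extract(stego, 6))           # b'hidden'
```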

What do I want to learn from this? Well, there are some disciplines where it is very easy to have images. In the field of steganography, when you embed data in images, it is very easy to show. And unfortunately, in many other fields it is not clear, or not easy, how to present the message you have for politics as an image.

Since then, I have been repeatedly asking myself how we can condense and present what is most important to us, our message, in such a way that it can be grasped in a few seconds. Because I have the impression that political attention, and this applies to the mass media but also to the attention of politicians, can basically be measured in seconds; not in minutes, not in hours. So... there was one success: the Federal Republic of Germany decided to vote against the binding guideline proposed by the USA within the OECD, according to which all cryptography built and distributed in the industrialised countries, the club of the OECD, must contain a back door, so to speak, i.e. key recovery. This is probably the greatest success we have had as politically active citizens; given, of course, that we as scientists did the basic research, we had an impact there.

I also want to talk about the biggest failure: the biggest failure is that we have not succeeded in convincing politicians that the so-called data retention of communication data is utter nonsense. In terms of content, something very similar applies here as in the crypto debate. People who want to act in small closed groups, to plan terrorist acts or, for all I care, to exchange individual photos in the area of child pornography, don't need a high-performance communication medium; a relatively modest transmission volume and relatively modest real-time requirements are enough. And that is something that anyone who wants to can still achieve, even if many countries have data retention and it is enforceable there, by using structures and servers in countries where there is no data retention.

Or: we are currently developing a next-generation anonymisation service based on the ‘DC+’ network, where there will simply be no retained data at all, because there is nothing that could be meaningfully stored. So I think we have very good scientific arguments for why data retention doesn't work. We didn't manage to condense them into a single image. And then, quite simply, what the Americans had wanted via the OECD happened here: they would never have got key recovery through domestically, so they wanted to do it externally via the OECD. And probably no nation state would have got data retention through on its own; that's why it was done via an EU directive. And we couldn't prevent that. Now we have to see whether this directive will be reviewed at some point; maybe it's not all over yet. But from my point of view, this is the point where I, as a citizen, say: ‘No, the arguments we put forward, and which we believe are very good arguments, have not been accepted.’ And of course, this may also have to do with images, and I mean images in a different way. Anyone who wants to solve crimes will show policymakers images of abducted and dismembered children, of abused children, and will use these images to build emotional pressure to do something. Understandable. If you can do something, you should do something, that's perfectly clear. But the emotional pressure is so strong that, at least from my point of view, reason simply stops, and these people lose all sense of proportion as to whether the measures now being proposed to combat the abuse and kidnapping of children, etc., will really work.
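For illustration, a sketch of the classic DC-net (‘dining cryptographers’) idea that such a ‘DC+’ service builds on; the details here are assumptions for the sketch (three participants, pre-shared pairwise one-time keys, one sender per round, no collision handling), not the DC+ protocol itself:

```python
# DC-net ("superposed sending"): each pair of participants shares a random
# key. Everyone broadcasts the XOR of their shared keys; the sender
# additionally XORs in the message. The XOR of all broadcasts yields the
# message, but each individual broadcast looks like random noise, so there
# is literally nothing meaningful to retain about who sent it.
import secrets

n, msg_len = 3, 16
keys = [[b""] * n for _ in range(n)]   # keys[i][j] == keys[j][i]
for i in range(n):
    for j in range(i + 1, n):
        keys[i][j] = keys[j][i] = secrets.token_bytes(msg_len)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def broadcast(i: int, message: bytes = bytes(msg_len)) -> bytes:
    out = message
    for j in range(n):
        if j != i:
            out = xor(out, keys[i][j])
    return out

# Participant 1 sends anonymously; 0 and 2 send "nothing" (zero bytes).
announcements = [broadcast(0), broadcast(1, b"meet me at eight"), broadcast(2)]
result = bytes(msg_len)
for a in announcements:
    result = xor(result, a)          # pairwise keys cancel out
print(result)  # b'meet me at eight', with no trace of which station sent it
```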

Our greatest success was: we had pictures and the pictures won. And our greatest failure was: others had pictures and those pictures won. The conclusion is that I almost have the feeling that arguments hardly matter in political discussion. It is the pictures that decide.

And as far as I'm concerned, there's a third class of images: the collapsing Twin Towers. Each of us has stored these images in our heads. They are there. We feel it was a huge catastrophe. If you look at it soberly: 5000-something dead... not even that, fewer than...

01:00:19

Martin Rost: I think 3600.

Andreas Pfitzmann: 3600... a fraction of the number of deaths we have in road traffic every year. That means: if I actually want to protect my population, then I don't have to fight these terrorists; I have to ask myself how we organise our road traffic differently. If I look at why the Twin Towers collapsed: supposedly because of these planes... Of course, without the planes they wouldn't have collapsed that day; in that sense: yes! But according to everything we know today, the Twin Towers would have collapsed many hours later, if at all, if the fire protection on the steel skeletons of the Twin Towers had been as the regulations required. That is to say, the terrorists did not actually bring down well-built, well-maintained buildings. Rather, the buildings were simply not operated and maintained in a condition that would have met the regulations. And this form of criticism, simply saying, ‘Okay, we are not starting a fight against terrorism. Let's take a look at the building fabric and take appropriate fire safety precautions,’ I never heard from the US. It is probably cheaper and more prestigious for politicians to say we are now waging a ‘War on Terror’ than to say ‘We are now taking care of fire protection in our public and commercial buildings’. So in that respect: the power of images is enormous.

And at this point, perhaps another comment: the images we see, the new multimedia world, and also the many images that are now taken by amateurs and then passed on... On the one hand, I think it is good that information can no longer be suppressed. But as a society, we should still be looking at how we can learn to deal with these images in a meaningful way, so that they don't rob us of all rationality and reason and push us to do things that are irrational and not really helpful.

01:02:38

Martin Rost: How would you define the difference between data security and data protection... where would you draw the line? How would you define the relationship between the two?

Andreas Pfitzmann: Well, the flippant answer is, of course, that data protection is, in the first approximation, protection from data, and data security is, in the first approximation, protection of data. But okay, let's take a closer look.

When it comes to data protection, I want to protect people... individuals. At least that is my main motivation. I can also imagine that there are people in data protection who say: ‘Well, it's not just the individuals, I actually want to protect groups too, I also want to protect group interests, etc.’ But for me, at least in my motivation, how I got into the field, protecting the individual from... ... yes, excessive knowledge of others about this individual, with the possibility of then persecuting, manipulating, etc. this individual, is an essential motivation.

Data security relates to a much larger set of data. Data protection is about data that relates to people, their lives, their relationships, while data security can concern all kinds of data. It could be data... I don't know... about the love life of turtles, where I would not see any relevance to data protection; unless we want to grant turtles personal rights and a right to privacy, which would certainly be an interesting legal research question, but maybe there are more important ones. So that would be my demarcation. And it is clear that even if I can initially keep the two neatly apart, when I then build systems, I can only implement data protection reasonably well if I have proper data security. Because for personal data I typically don't build any infrastructure other than the one I use anyway, and I typically don't have any security mechanisms other than those I have for any kind of sensitive, valuable data. In short, I actually need both.

01:05:10

Martin Rost: When you think about systems... You are supposed to analyse existing systems or you want to design new systems... You want to design them well... In which categories do you think about such systems?

Andreas Pfitzmann: First of all, I would like to understand and be told what the needs are. What is the benefit of the system for whom? What should the system do?

Martin Rost: The data protection officers would ask about the purpose. What is the system for?

Andreas Pfitzmann: Yes. And then I would look at what data I don't need for this purpose. Often you are not asked to construct a new system at all; you are actually presented with a system that has already been constructed, or at least a rough concept for one. So when I consider ‘What can I leave out?’, it is usually not in the abstract; rather, there is a specific system design and I ask myself: what can I leave out of that? The next question is: once it is clear what can be left out, where can I prevent the possibility of recording in the first place? Where can I perhaps also shorten the storage period where storage is necessary? Then at some point I have something like a system design, with a minimum of data and a minimum of storage time. And then, of course, the next question is: is it acceptable somewhere that the system can do a little less than what you want? Or, dual to that: should the system we are designing be expandable, and in which direction? So, so to speak: how much leeway do I have upwards and downwards? If I can go down a bit, what other data could I omit? Or, if it is to be expandable, do I have to add further data or interfaces now? That is my way of asking, and probably I have protection goals in the back of my mind. Protection goals would become relevant for me, or I would become aware of them, when we imagine: what do the system's user interfaces look like? In a multilaterally secure system, all parties involved would have their own perspective on the system, typically also their own device with which they interact with the overall system or with the other devices in this system. And for me, this is where the perspective of the protection goals comes into play very strongly, in that I say: ‘Okay, at this interface not only the functionality has to be expressed; I also have to have some kind of selection here: which protection mechanisms are used now? How do I recognise people or not, etc.?’ So that's where it comes into play.

01:08:35

Martin Rost: But now we are dealing with Ambient Assisted Living, with ubiquitous computing. And there the idea is that you actually want to capture everything about the person, especially with AAL. With your suggestion ‘Can we do this in a minimal way and can we maybe leave something out?’ you'll fall on deaf ears. The exact opposite is the case. And what do you do in such situations?

Andreas Pfitzmann: Yes. We now have... if you will... a very old conflict, but taken to the extreme. When I started working on networks in ‘83, the big question that drove me was: how can we minimise or perhaps even exclude the possibility of data collection? Because at that time it was clear, when it comes to the question of what kind of certainty I can create for myself: if data can be collected, then I can never prove afterwards that it was not collected, or that, if it was recorded, it has since been deleted from all existing copies. That was the starting point of our work. And the motivation at the time was: why do we need this? Because it is clear that the exponential growth in storage, processing and also communication capacity means that storing, processing and communicating are becoming so cheap that the costs are becoming irrelevant. This means there is always a temptation to store, within the scope of what data collection makes possible, things for future reference, because it costs practically nothing, and you never know what you might need them for later. Incidentally, in more abstract terms, it can also be said that the understanding of data, the possibility of evaluating it, is also constantly improving for the data that we actually collect. So even where we once accepted ‘we collect it’, you wouldn't normally throw the data away either; not only from the point of view of ‘we don't know what we might need it for’, but also because even at the time of collection, in the first few years, we may not know what information value this data really has, because data mining algorithms, etc. are simply being improved.

This means that our early realisation was: if I really want confidentiality to be a hard property, and I think it has to be in the relevant cases, when it is about power, the exercise of power and the control of power, then it has to be hard. If I am put under pressure by a secret service with claims like ‘We know the following about you: ...’, then I want to know: what do they really know, and where are they bluffing? Or if families are threatened: I want to know what they really know about the family, its whereabouts, etc. So a soft confidentiality... ‘Well, I hope they don't know that, etc.’... will not strengthen my backbone to say ‘No!’ The moment confidentiality is not really hard, not really resilient, people will give in. Which led me very early on to the conviction: we have to try to prevent data collection. And that was a pretty realistic approach back in 1983 and in the years that followed, because the only data that entered the computing systems and data networks was data that had been entered by people, or that arose in the course of switching operations.

01:13:01

Martin Rost: And AAL is the perfect opposite!

Andreas Pfitzmann: And now we are getting exactly the opposite; or you could say: the perfect opposite! We are giving our computers, which have become smaller and smaller and more and more powerful, their own eyes, ears and hands, so to speak, to grasp, understand and evaluate the world. When we equip rooms in the area of ubiquitous computing, we are also building an infrastructure where we don't yet know what will be done in these rooms later on. So in this respect, we endeavour to be universal enough in terms of sensors to make anything possible. Incidentally, we are now also recording things, especially in the area of multimedia, where we know even less about what the actual information content is. So, what do I know... If someone had recorded e-mails from me in 1983, they would certainly have been able to tell: okay, what does he know? What is he communicating? What typical punctuation, grammar or spelling mistakes does he make? Yes, there was more information in the text than just what the text is about; you could draw a few conclusions about, for example, what his school education might have been like, and maybe, if that varies a lot, some conclusions about how important something is to him right now, how pressed for time he is, etc. But now we have rooms that do something that basically corresponds to these video cameras recording right now. What are they recording? They record, of course, what I am saying, how I am saying it, how I emphasise what I am saying with gestures and facial expressions. Depending on the resolution, they might even record the back of my eyes when I look directly into the camera; maybe the temperature distribution on my face, if image processing and colour analysis were used. And maybe a doctor, seeing these videos enlarged and analysed, could say: ‘Okay, I can see from this person that he might be at risk of a heart attack, or maybe he is a high-risk patient for mental illness.’ We don't know. And that means that if we now capture multimedia, capture many modalities, then it is absolutely impossible to know how they might be evaluated later. Some people currently swear blind that they can protect such data, where I simply have to say: believing that you can protect infrastructure data for years or decades is just silly. Just look at how often I have to patch my systems, and then I know for how many days and weeks I can believe that I am protecting my data before the first opportunity arises for someone to hack into my system.

01:16:29

Martin Rost: I would like to come back to this doctor example – a doctor who could perhaps analyse a person's face based on its temperature distribution and determine that this person may be exposed to a particular health risk. Suppose he asserts this, and it is scientifically proven to be the case. An insurance company then sets its policy accordingly, so that the right to informational self-determination is already restricted by this analysis, because through the price of the health policy the insurance company effectively predefines something like a normal life. So there is the prevention problem that this expresses.

Andreas Pfitzmann: Well, that's all going a bit too fast for me now. I would rather hold back from making a final assessment right away. If I were sitting here in front of the camera and assumed that these videos would also be evaluated under this aspect, and the aim of the evaluation were for my GP to say: ‘Mr Pfitzmann, you really need to come to the practice. We need to run a few tests on you. It looks as if you are currently at risk; I would like to see you within the next 24 hours.’ Then, of course, I would say: thank God these evaluations are being done. I'll go there even if it turns out to be a false alarm. Better ten false alarms than only regaining consciousness somewhere in hospital.

01:18:12

Martin Rost: Health insurance?

Andreas Pfitzmann: Of course – and that is the dilemma with this kind of data – if I am not the first to learn of this diagnosis, nor my doctor, who is otherwise bound by a duty of confidentiality. There is a very real danger that health insurance companies will use such data. That perhaps secret services will use such data to consider: ‘Hmm, is he in a stressful situation right now? Can we recruit him particularly easily right now, because he is in a stressful situation and in a life crisis?’ If I knew that nothing negative would be done with the data from this ubiquitous computing, then of course I would also accept that many of the evaluations could really benefit me as a person. But my experience tells me, and basically history tells us, that there have always been conflicting interests in society. And of course the ability to access and analyse data means exercising power. And so something like ubiquitous computing and the corresponding data collections pose, in my view, at least a major stability problem for any society. How would I actually want a society to be? I would want a society in which, when something happens, there is no overreaction. A certain composure. Not laissez-faire – not that we don't care about anything – but please, no overreaction. So, if we now have ubiquitous computing, if we can observe people, if we can run all kinds of evaluations without those affected noticing...

01:20:27

Martin Rost: Automatically.

Andreas Pfitzmann: Automatically, for everyone. Then let's imagine the situation of 11 September 2001. An American president who at first hid away and was not seen at all for 24 hours, but who then, all the more decisively, wanted to demonstrate the ability to act, and who simply said: ‘We are now analysing everything in every respect. And everyone who is in any way different will be monitored or locked up as a preventive measure. And then we will gradually look at all the people who are in the normal range and let them back into normal life.’ I just have to say: disaster!

So what I actually want as an engineer is for the technology I help build to help make society more stable and more robust. And I am very careful not to build technology or infrastructure whose potential for destabilising society is obvious. Because history simply tells me: a society that, in handling complex situations, always keeps calm and always keeps a sense of proportion – that has never existed. And I have no faith that it will now emerge in the next few years or decades in a resilient, reliable way. That's why I would rather be a little more cautious at this point. And in this respect, when we talk about data protection in the sense of anonymity, data avoidance and identity management, ubiquitous computing is of course a huge problem. And possibly, after a thorough analysis, we might come to the conclusion: no, we won't do that!

01:22:37

Martin Rost: You could say that ‘we don't want that’. But there is no authority that could decide this. And secondly: it is happening. Preliminary studies are under way, lots of them, and very well paid – there is a lot of money in it, it's tempting. There is also the idea that the entire care system of this welfare state cannot be maintained as it is; that insurance premiums will have to go up dramatically if things continue like this. This applies to the entire medical system. And that there, too, a greater degree of automation must take place in order to sustain a certain standard at all. With this idea, a great deal of research is currently being done in this area. So at the moment it looks as if these systems are being readied for deployment right now. What can be done?

Andreas Pfitzmann: Yes, something small and maybe something larger. The smaller one is that I can try to design these systems, for example when it comes to care, in such a way that I do not end up with a care system in which all rooms, all our living spaces, and perhaps also our public places, are equipped with sensors, so that everything that happens in this society can be followed. Instead – to take the other extreme – if I potentially need supervision or am in need of care, then there is a care robot in the room with me. Or, assuming I am still able to move about, a robot that follows me, waits for me to fall over, and then takes care of me. So please: if I need help, don't act as if all people need help in every respect. But if ubiquitous computing is being pushed now – what could be the motivation for doing it? Obviously it's not about care, because that could be done with less. Is it thoughtlessness? Is it that the secret services said: ‘Okay, we're now going to make peeping and eavesdropping fully automated and ubiquitous, because personnel are expensive and computers are cheap’? So maybe what is being advertised to us is a care system, but what is really being built is a surveillance system – which, obviously, can also be used to organise care.

I'll jump back to 1983. In 1983, people were talking about videotext – a very early system in which the TV set was basically supposed to become a kind of terminal on which you could interactively retrieve data. If I, as an engineer, were to build this system so as to monitor people as closely as possible, I would build it exactly like this: no local storage, the smallest possible screen memory, so that I can follow exactly every scroll, every action the person in front of the thing performs. Was videotext a system designed to observe people as closely as possible? Or simply a poorly-thought-out design with very imperfect technology? Likewise: is ubiquitous computing a very ill-conceived design that ‘has not been thought through to the end’? Or are we being sold something with very different goals from those stated? I don't know. I would now tend to work in the direction of these mobile devices, the mobile companions. Some of it already exists: we already have mobile phones that can basically detect our heartbeat when we carry them in a breast pocket, and that inform a doctor if something strange happens. All this already exists, at least in theory; there are prototypes that could be used differently. Then, of course, the question arises: if you can't stop it, can it at least be used for other, good purposes? And for that I have to go back a long way again.

When I started with data protection – with data protection through technology – in the early 80s, we naturally asked ourselves: can we make many things much more data-protection-friendly? The answer was: yes! But somehow it was clear to us that there wouldn't be a complete turnaround. At least it was not to be expected that society would completely switch to maximum anonymity and maximum data avoidance. Some people don't want that; some things might become more expensive. Why did we do it anyway? Because we believed that, in addition to showing that it is possible in principle, it strengthens certain technical approaches. And that it buys us time in an incredibly dynamic process. The incredibly dynamic process is this: computing power, storage capacity and communication bandwidth double on average about every 18 months. Sometimes it's a bit faster, sometimes a bit slower. But a doubling every 18 months is a pretty good average value – from today's perspective, looking back over a period of 55 years, a pretty good one.

01:28:27

Martin Rost: … in memory and speed?

Andreas Pfitzmann: Yes, everything at the same price. Which I could, of course, also put the other way round: every 18 months I get the same performance at half the price. To illustrate what a doubling at the same price means, imagine: cars, in one and a half years twice as fast for the same price. At the latest after 20 months, the legislator would step in and say: ‘No, no, no, not like that!’ Or let's do it differently: half the price! The same car, in 18 months, at half the price. You might be able to keep that up for 36 months, and then the federal government would scream: ‘Alarm! How is German industry supposed to make any export profits at all if things get this cheap? We have to intervene.’ It is only in information technology that we believe we can essentially let it run its course, that we don't have to intervene. And not by a factor of 2 or 4, but for about 55 years. To make it even clearer: 15 years means 10 doublings, which is basically a factor of 1,000! 30 years means a factor of 1 million! This means that, from the perspective of '83, we were dealing with a growth process in technology that had already been going on for several decades, and where it was also to be expected that it would continue for at least another two decades. From today's perspective, we know that it has indeed continued for a good two and a half decades since then. I think it will go on like this for at least another one to one and a half decades. This process is something we have never had before in human history at a technical level.
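The back-of-the-envelope arithmetic behind these factors can be checked in a few lines of Python – a minimal sketch, assuming an idealised, perfectly constant 18-month doubling:

```python
# Check of the 18-month doubling arithmetic cited in the interview.
# Assumes a perfectly constant doubling period, which real hardware
# trends only approximate.
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years: float) -> float:
    """Total growth factor after `years` of steady 18-month doubling."""
    doublings = years / DOUBLING_PERIOD_YEARS
    return 2 ** doublings

for years in (15, 30, 55):
    print(f"{years} years -> {years / DOUBLING_PERIOD_YEARS:.0f} doublings "
          f"-> factor {growth_factor(years):,.0f}")

# 15 years -> 10 doublings -> factor 1,024       (Pfitzmann's ~1,000)
# 30 years -> 20 doublings -> factor 1,048,576   (his ~1 million)
# 55 years -> 37 doublings -> factor ~1.1e11
```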

And a very naive idea of mine is: if something happens insanely fast, much too fast, then it is probably a good thing if society gains a little more time to adjust. With growth processes whose factors run into the millions and billions, you cannot expect to stop them, and you cannot expect to keep them completely under control. I think it would be naive to expect that we can get a grip on all of this with data avoidance or identity management. If things go well, we may get a grip on some parts of it, or cushion the whole a little. Society will have a little more time – that would already be a big step.

01:31:36

Martin Rost: That means that your activities have given society time to think about the data protection problem.

Andreas Pfitzmann: Yes. And hopefully also to realise that where confidentiality can no longer be achieved or implemented technically, society needs a lot more tolerance and a lot more ‘we won't exploit this’. Data avoidance and identity management have bought us some time and may remain a tool for some sectors. But I assume that ubiquitous computing, even in the negative version with sensors everywhere or almost everywhere, is likely to come. So the question is: can we still do anything for data protection there?

So I think about what I can still do for data protection when confidentiality is no longer really achievable – when, because of ubiquitous data collection, almost free storage, free communication and ubiquitous copies, I can no longer enforce deletion. What can I still do?

And the only thing I have come up with so far is: maybe I can make a virtue of necessity and say that what I can ensure in this world of computers everywhere is that people's statements – their communicative acts – can no longer be taken out of context. Let me give you an example of what could harm me as a university lecturer. Let's assume I'm in a debating club and, as is usual in debating clubs, I'm given the task of defending a position that may not be my own at all. Now let's make it really interesting politically: I get the task of justifying the Nazi doctrine of race, and the corresponding actions of the Nazis, as well as I possibly can. Everything takes place within the debating club; the sporting question is: how well can you argue this position? So I really throw myself into the matter, find a few good or not-so-good arguments, and in any case present them as convincingly as I can. If I do a good job of it, you will largely be able to tell from the footage: he really is convinced of this. After all, how can I present something convincingly without at least pretending, as well as I can, that I am convinced of it myself? So now you have a plea from me, 15 minutes of argumentation: the Nazi racial doctrine, with all its consequences, is fine. Now cut away the little intro establishing that we are in a debating club, where I am assigned a position I did not choose myself. And also cut the ending, where we say to each other: ‘Yes, okay, that was a great exchange of arguments – but by the way, we all know it's a game and none of us really means it seriously.’ We put the whole thing on YouTube. Forensic experts may analyse it and determine that there is no cut in it; those 15 minutes really are one continuous take. No other voice has been dubbed in, either; it really is the authentic Andreas Pfitzmann, on this topic, speaking like that. That would be the end of my reputation.

And what this ubiquitous world could now do is say: ‘Okay, there are these opening credits. And these closing credits. And the data about where we met and for what purpose we met.’ All of that still exists in x copies. So I would have to be able to ensure, technically, that whoever watches this video also receives the information: this was a debating club; this is not Andreas Pfitzmann giving a speech about what he really thinks, believes, considers right. Let's call that ‘contextual integrity’: the integrity of the context is preserved and can be neither falsified nor eliminated nor suppressed. That is a property which is certainly privacy-friendly and privacy-promoting. Incidentally, it is not new. The idea that information can be misinterpreted by changing its context was known as a data protection problem before I even started reading about the subject. And ubiquitous computing could actually help solve this kind of problem. So when I think about the future, I am not so naive as to believe that these very big trends – computing, storing and transmitting becoming ever cheaper and ever more pervasive – can be stopped. But perhaps it is possible to preserve certain areas, certain sectors, in which truly confidential communication can take place. And, for the community and society as a whole, to secure spaces in which the contexts of pieces of information are truly safeguarded. That, for me, would be ‘technical data protection 3.0’: no longer ‘1.0 avoidance’, no longer ‘2.0 identity management’, but ‘3.0 preserving contexts’.
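One conceivable technical reading of this – purely an illustrative sketch, not a design Pfitzmann specifies – is to bind the context metadata (occasion, purpose, opening and closing credits) cryptographically to the recording, for instance by signing the hash of the footage together with its context, so that the clip no longer verifies once the context is stripped or altered. All names below are hypothetical, and the sketch assumes the third-party `cryptography` package:

```python
# Illustrative sketch: bind context metadata to a recording so that the
# recording no longer verifies once its context is stripped or altered.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def _payload(video_bytes: bytes, context: dict) -> bytes:
    """Canonical encoding of the video hash together with its context."""
    return json.dumps({
        "video_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "context": context,
    }, sort_keys=True).encode()


def sign_with_context(video_bytes: bytes, context: dict,
                      key: ed25519.Ed25519PrivateKey) -> bytes:
    """Sign footage and context as one inseparable unit."""
    return key.sign(_payload(video_bytes, context))


def verify_with_context(video_bytes: bytes, context: dict, signature: bytes,
                        pub: ed25519.Ed25519PublicKey) -> bool:
    """Fails if either the footage or its context was changed or removed."""
    try:
        pub.verify(signature, _payload(video_bytes, context))
        return True
    except InvalidSignature:
        return False


# Usage: the debating-club recording verifies only together with its context.
key = ed25519.Ed25519PrivateKey.generate()
video = b"...15 minutes of footage..."
context = {"occasion": "debating club",
           "note": "assigned position, not the speaker's own view"}
sig = sign_with_context(video, context, key)
assert verify_with_context(video, context, sig, key.public_key())
assert not verify_with_context(video, {"occasion": "public speech"}, sig,
                               key.public_key())
```

The design point is that the signature covers hash and context together: the clip may still circulate in x copies, but a copy with the intro and credits cut away no longer passes verification. Whether viewers demand verified context remains, of course, a social question rather than a technical one.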

01:37:59

Martin Rost: Professor Pfitzmann, thank you very much for the conversation.

Follow-up...

Andreas Pfitzmann: I think Steinmüller did write some more essays, arguing that open networks – in the sense of networks for arbitrary services – can no longer be controlled by law. That was one of his seminal articles, in which he simply said: to judge legal admissibility, I need to know the specific services. As a lawyer, I therefore cannot say anything about an open network for which the services do not yet exist but which is open to arbitrary services. That was his argument at the time. And basically, the Internet is now really the implementation of what was once understood by ISDN. ISDN was supposed to be the vehicle, so to speak, for these open networks – which it became only to a very limited extent. The Internet has become what ISDN was supposed to become: low-cost, very flexible, connecting everything possible. And basically, Steinmüller's resignation on this point can actually be well justified in substance, in that he simply says: with the legal instruments currently available, this can no longer be controlled; we are getting nowhere.

Now I'll add a bit of cultural pessimism of my own: the fact is that with the university reform – evaluations and all that hocus-pocus – we have not done ourselves much good, with the result that the positions and research areas we are talking about here hardly exist any more. Because the good people simply say: I'll become a lawyer and really earn money; I'm not going to waste my time on university administration and annual or triennial evaluations. But if all 50 only do low-risk, mainstream work, then... [shrugs].