<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>On the road to Bandol</title>
	<atom:link href="https://javacard.vetilles.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://javacard.vetilles.com</link>
	<description>A weblog on Java Card, security, and other things personal</description>
	<lastBuildDate>Mon, 18 Aug 2025 06:48:26 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.0.32</generator>
	<item>
		<title>Java Card++ ?</title>
		<link>https://javacard.vetilles.com/2025/08/18/java-card/</link>
		<comments>https://javacard.vetilles.com/2025/08/18/java-card/#comments</comments>
		<pubDate>Mon, 18 Aug 2025 06:47:58 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[Java Card 2.x]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Security]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26403</guid>
		<description><![CDATA[I was part of the team that defined the binary format that has been in use since the end of the 1990&#8217;s. The selected solution was not my preferred one, as I preferred a pre-linked version. At the time, everybody agreed that on-card verification was too ambitious, so this was never considered for Java Card [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I was part of the team that defined the binary format that has been in use since the end of the 1990&#8217;s. The selected solution was not my preferred one, as I preferred a pre-linked version. At the time, everybody agreed that on-card verification was too ambitious, so this was never considered for Java Card 2.1, the ancestor of today&#8217;s Java Card specification.</p>
<p>In the following years, we wasted a lot of time working on a much more ambitious version of Java Card, which never made it, but we never worked on addressing this issue of unverified bytecode, mostly because we had the cryptographic protections provided by GlobalPlatform. If nobody can load unverified bytecode, where is the problem?</p>
<p>As an evaluator, this allowed me to reverse engineer many types of Java Card, as I was allowed to download unverified (and very much illegal) bytecode into cards. Over time, most virtual machines have included runtime countermeasures, and some implementations have been difficult to attack with illegal bytecode for quite a while.</p>
<p>Yet, Java Card has become an industry standard. Strong implementations exist, in which loading unverified bytecode is about impossible, and then, exploiting unverified bytecode is challenging. But on the other hand, weaker platforms also exist, and recently, following a combination of mistakes from various stakeholders, <a href="https://security-explorations.com/esim-security.html" class="liexternal">one of them was broken</a>; luckily, by a researcher, not by a criminal.</p>
<p>The real surprise is not that this has happened. It is that it took over 25 years for it to happen. The lack of on-card verification has always been a weakness in the Java Card story. Over the past 25 years, there have been a few attempts to address the issue, but the industry never adopted them. Isn&#8217;t a solution to this issue a bit overdue?</p>
<p>The smart card industry is a small industry, with few actors, and Java Card has been a great success, its interoperability providing the basis for the deployment of billions of products every year. But the industry has been complacent, and although I am not part of it any more, I bear some of this responsibility since I have headed the Java Card Forum&#8217;s Technical Committee for a few years. I defended off-card bytecode verification many times, for instance <a href="https://javacard.vetilles.com/2011/10/19/the-misuse-of-bytecode-verification/" class="liexternal">in 2011</a> and quite vehemently <a href="https://javacard.vetilles.com/2013/11/21/the-off-card-byte-code-verifier-is-fine-thank-you/" class="liexternal">in 2013</a>.</p>
<p>But that was over 10 years ago, and I would now argue that it&#8217;s time to fix this issue. Despite minor updates, the core Java Card technology is over 25 years old. The latest version of the Java Card Virtual Machine specification is strikingly similar to its first interoperable version. There have been almost no changes. The CAP file format is antiquated, and an update could allow all vendors to ensure that no illegal bytecode is allowed to run on any card.</p>
<p>From a now outsider&#8217;s viewpoint, I believe that after the recent developments and an issue impacting the security of millions of unsuspecting users, the usual denials sound really out of touch with reality, so I am asking the Java Card community a simple question: Since you are pushing for Java Card to be the basis for our personal security, and asking us to trust you with our sensitive data, including our identity, isn&#8217;t it now the right time to put this bytecode issue behind us once and for all?</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2025/08/18/java-card/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Uh oh, Google just stopped updating my kids&#8217; phones</title>
		<link>https://javacard.vetilles.com/2019/05/20/uh-oh-google-just-stopped-updating-my-kids-phones/</link>
		<comments>https://javacard.vetilles.com/2019/05/20/uh-oh-google-just-stopped-updating-my-kids-phones/#comments</comments>
		<pubDate>Mon, 20 May 2019 20:13:32 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[Discussions]]></category>
		<category><![CDATA[Mobile Security]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Android]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26391</guid>
		<description><![CDATA[So, Google has revoked Huawei&#8217;s Android license. Huawei&#8217;s new phones won&#8217;t get any of the nice Google features like Google&#8217;s store, Gmail, and more. But also, all existing Huawei phones will stop receiving updates from Google. What? This includes my kids&#8217; Honor-branded phones, and as far as I know, a significant portion of the kids [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>So, Google has revoked Huawei&#8217;s Android license. Huawei&#8217;s new phones won&#8217;t get any of the nice Google features like Google&#8217;s store, Gmail, and more. But also, all existing Huawei phones will stop receiving updates from Google.</p>
<p>What? This includes my kids&#8217; Honor-branded phones and, as far as I know, those of a significant portion of the kids in their middle school: Honor has been offering great-value phones for a few years, and they are popular among kids, who usually don&#8217;t get the top-of-the-line models.</p>
<p>I could also have titled this blog more provocatively as &#8220;Google denies basic security to their customers,&#8221; &#8220;Donald Trump throws millions of kids in hackers&#8217; hands,&#8221; or &#8220;Evil Americans exercise extra-territorial power over people around the world.&#8221; There are plenty of opportunities here to be angry, but the problem is elsewhere.</p>
<p>There is a trust and liability issue here. When I buy an Android phone, I expect some service from the vendor, but I also expect some services from Google. In my professional life, I am battling to improve IoT security, making updates mandatory and secure, among other things. Until now, this was a battle against slackers and profiteers, but today, politics is getting in the way. If hackers benefit from this, who can be held liable? Is this just Huawei? Doesn&#8217;t Google share some responsibility for stopping their support? My kids have done nothing, for sure.</p>
<p>Most comments seem to emphasize that Google dealt a big blow to Huawei, but Google has also dealt a big blow to themselves: Huawei didn&#8217;t cut my kids&#8217; updates, Google did. This really has consequences for the Android model: When you buy a phone with Android, you introduce a dependency between you and both the device vendor and Google, and you will be a collateral victim if their relationship turns sour. This almost sounds like Apple; when you get an iPhone, you belong to Apple, but at least, only to Apple.</p>
<p>It makes me seriously rethink my dependency on Google, so it&#8217;s time to take some strong decisions. I will switch my family streaming subscription from Google Play Music to Spotify, just to make sure that my kids still enjoy music on their unsupported phones. And if this madness continues, I will move them to Huawei&#8217;s app store as well&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2019/05/20/uh-oh-google-just-stopped-updating-my-kids-phones/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Is the IoT apocalypse coming, or not?</title>
		<link>https://javacard.vetilles.com/2019/01/06/is-the-iot-apocalypse-coming-or-not/</link>
		<comments>https://javacard.vetilles.com/2019/01/06/is-the-iot-apocalypse-coming-or-not/#comments</comments>
		<pubDate>Sun, 06 Jan 2019 19:08:03 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[Economics]]></category>
		<category><![CDATA[IoT Security]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26384</guid>
		<description><![CDATA[There is a wide agreement on the fact that IoT is much more vulnerable to attacks than traditional internet, and even on the fact that IoT attacks could lead to considerable damage to all kinds of assets, logical and physical. But risk is not just about vulnerability level and potential consequences. There is also intent. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>There is a wide agreement on the fact that IoT is much more vulnerable to attacks than traditional internet, and even on the fact that IoT attacks could lead to considerable damage to all kinds of assets, logical and physical. But risk is not just about vulnerability level and potential consequences.</p>
<p>There is also intent. A vulnerability is only dangerous when an attacker actually decides to exploit it. The problem with intent is that it is definitely not obvious to measure, especially for new risks posed by new kinds of attackers. Here we can contrast two theses: Bruce Schneier&#8217;s core theory from <a href="https://amzn.to/2RdaCiz" class="liexternal">Click Here to Kill Everybody</a> and James Andrew Lewis&#8217; theory from his 2016 <a href="https://www.csis.org/analysis/managing-risk-internet-things" class="liexternal">Managing Risk for the Internet of Things</a> CSIS report.</p>
<h4>Terrorists and Enemies</h4>
<p>Lewis&#8217; reasoning is that we have been promised major cyber disruptions on traditional internet for a long time and that we are still waiting to see one. His reasoning about terrorists is interesting, as he explains that terrorists tend to prefer tactics that include &#8220;direct action, bloodshed, and political drama.&#8221; I agree with him, but I still think that a terrorist group with the same financial means as the 9/11 commandos could very well use IoT today as an amplifier of their attacks, for instance by having a botnet contribute to the chaos by attacking key services.</p>
<p>The main difference between Lewis and Schneier, though, is about the likelihood of exploitation of IoT vulnerabilities in the context of war. Here, the assumptions are different, as Lewis considers that a massive cyber attack would be deterred by a potential response from the U.S., whereas Schneier considers that (1) it could be useful in the case of an already started war, and (2) that the difficulty of attributing an attack could lead to misguided retaliation or to the absence of retaliation.</p>
<h4>Consequences</h4>
<p>There are also a few significant differences between Lewis and Schneier on other topics, which I outline below:</p>
<ul>
<li>About consequences, Lewis mentions that &#8220;most vulnerabilities found on IoT devices lead to events that would qualify as pranks.&#8221; He acknowledges that botnets can be created, but he dismisses them by mentioning improved defenses against DDoS attacks. Schneier is much more cautious, and I would be as well. Botnets could be used for other things than traditional DDoS, for instance for attacking other vulnerable devices.</li>
<li>About cyberwar, the same tendency to consider only repetitions of existing attacks leads Lewis to dismiss the potential consequences of a full-scale cyberwar.</li>
<li>Finally, Lewis considers that the risk will decrease as we get more familiar with the technology, and our experience grows. This is partly true, but it is only valid if we build experience fast enough to offset the increase of risk due to continued deployment of new technologies, which is not obvious today.</li>
</ul>
<p>At this level, we are talking about opinions and predictions. Depending on whether you believe that history repeats itself or that we always get interesting new things, the conclusions are different. Well, my motto for 2019 is still &#8220;The times, they are a-changin&#8217;&#8221;, so I believe in the unpredictable.</p>
<h4>Does it matter?</h4>
<p>Note that it doesn&#8217;t matter that much. The conclusion from James Lewis does not differ greatly from Bruce Schneier&#8217;s. In the end, he recommends that the government &#8220;can accelerate risk reduction with the same methods we use for general cybersecurity: research, liability, infrastructure and regulation.&#8221;</p>
<p>The IoT insecurity issue may not be of apocalyptic scale, but it nevertheless remains an issue that needs to be considered by governments.</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2019/01/06/is-the-iot-apocalypse-coming-or-not/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>We&#8217;re back for 2019!</title>
		<link>https://javacard.vetilles.com/2019/01/06/were-back-for-2019/</link>
		<comments>https://javacard.vetilles.com/2019/01/06/were-back-for-2019/#comments</comments>
		<pubDate>Sun, 06 Jan 2019 16:01:01 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[Discussions]]></category>
		<category><![CDATA[Open issues]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26381</guid>
		<description><![CDATA[It&#8217;s 2019, and it took me 2 months (including a great deal of procrastination) to fix a PHP version issue after a site migration. My hate of PHP just grew a bit more&#8230; In this early 2019, the Road to Bandol can be quite dangerous, as exemplified by the video below: Yep, that&#8217;s the Bandol [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>It&#8217;s 2019, and it took me 2 months (including a great deal of procrastination) to fix a PHP version issue after a site migration. My hate of PHP just grew a bit more&#8230;</p>
<p>In this early 2019, the Road to Bandol can be quite dangerous, as exemplified by the video below:</p>
<p><iframe width="560" height="315" src="https://www.youtube.com/embed/N99Po_zbtgc" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p>
<p>Yep, that&#8217;s the Bandol toll barrier burning during a <em>gilets jaunes</em> demonstration. I am not sure I am fully sympathetic to these <em>gilets jaunes</em>, but they represent the French way to signal major unease in the population: rioting.</p>
<p>What they want is not clear, but what they express is very clear: it was better before. And I have to admit that the latest serious books I have read make me feel somewhat the same:</p>
<ul>
<li>Bruce Schneier&#8217;s <a href="https://amzn.to/2SD5CjV" class="liexternal">Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World</a> promises us an IoT apocalypse that sounds just plausible to me.</li>
<li>Yuval Harari&#8217;s <a href="https://amzn.to/2C2eAAf" class="liexternal">Homo Deus: A Brief History of Tomorrow</a> promises us another dystopian future, where AI is the scary thing, and gives a good background to the <em>gilets jaunes</em> by his description of the current failure of the capitalist religion.</li>
<li>Research book <a href="https://amzn.to/2TtOVr6" class="liexternal">Agnotology: The Making and Unmaking of Ignorance</a> provides chilling insights into how fake news is just one of the ways to build ignorance of the masses willfully for the benefit of a few.</li>
</ul>
<p>I guess that the year 2019 is going to be interesting, but it takes me a lot of positive energy to be optimistic about the near future. But then, who knows, because</p>
<blockquote><p><em>The times, they are a-changin&#8217;&#8230;</em></p></blockquote>
<p>So let&#8217;s go for a good change.</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2019/01/06/were-back-for-2019/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Time bombs, from climate to IoT security</title>
		<link>https://javacard.vetilles.com/2018/08/10/time-bombs-from-climate-to-iot-security/</link>
		<comments>https://javacard.vetilles.com/2018/08/10/time-bombs-from-climate-to-iot-security/#comments</comments>
		<pubDate>Fri, 10 Aug 2018 15:23:02 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[IoT Security]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26376</guid>
		<description><![CDATA[The comparison between IoT security and climate change is getting better every single day, and I am not sure that this is good news. A few minutes ago, a tweet on climate change got my attention: This is not the new normal, just a pit stop on the way to decades and decades of deteriorating [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The comparison between IoT security and climate change is getting better every single day, and I am not sure that this is good news. A few minutes ago, a tweet on climate change got my attention:</p>
<blockquote><p><strong>This is not the new normal, just a pit stop on the way to decades and decades of deteriorating conditions.<br />
</strong></p></blockquote>
<p>Nothing that I didn&#8217;t know, but a nice way to remind us that talking about the &#8220;new normal&#8221; is completely wrong: Things will become much worse before they get any better. And the more we wait to act, the more painful the impact of changing climate will be.</p>
<p>As a citizen of a developed country and most likely a member of the world&#8217;s 1% richest, I am not doing much to curb our energy consumption. I proudly bike to work, I try to isolate my house, but I still put my entire family on an intercontinental flight for the vacations. I feel the need to act on the climate, but I don&#8217;t seem to be able to decide what to do. I can blame my government (easy), Donald Trump (easier), I can request action, but in terms of actions, I am most likely not doing my share.</p>
<p>I am not a climate expert, and that may explain my questioning. Maybe I should ask one of these experts: What should I do, myself, very practically? What are you doing yourself?</p>
<p>Beyond climate change, we have other, smaller threats to face, like IoT (in)security. And this time, I am an expert. Interestingly, IoT security shares some characteristics with climate change, including at least:</p>
<ul>
<li>It&#8217;s a time bomb, as the insecure devices that we deploy today will still be around tomorrow, and may come to haunt us in a few years.
</li>
<li>The problem is global; anyone&#8217;s vulnerable device can be used to attack somebody else&#8217;s IT service, just like any person&#8217;s CO2 contributes in the same way to global warming.
</li>
<li>Many citizens understand the risk (security is a top concern for IoT), but very few know what to do to lower that risk.
</li>
</ul>
<p>As an IoT security expert and a user of IoT devices, then I need to ask myself the question: Eric, what do you do about IoT security?</p>
<ul>
<li>I monitor my network. I have a device at home that lets me know when unknown devices come on my network or when strange things happen. I caught a few things with this, so I am happy about it.
</li>
<li>I use diversified passwords, and sometimes, 2FA (two-factor authentication). However, it took me years to move away from bad practices; I am still not using 2FA wherever I can, and I am still not forcing my family members to use 2FA. I haven&#8217;t even changed at least one password that appears on <a href="https://haveibeenpwned.com/" class="liexternal">haveibeenpwned</a>. Overall, I am not too happy about this.
</li>
<li>I have no clue about the security of the devices I use. This is bad, but I am just a customer here: my hacking skills are rusty, so I am not going to pentest all the devices that I buy and deploy at home. I have no other way to know, and I am not happy about it.
</li>
</ul>
<p>Since the beginning of 2018, I moved into a job working on the definition of security certification for IoT. When I started, my perspective was to maximize security vertically, making products more secure; that sounded natural for a chip vendor with a strong security background. After only a few months, my priority is now to maximize security horizontally, reaching as many products as possible; that is just as good for my company because our high-security chipsets are useless in a world full of default passwords and other trivially exploitable vulnerabilities.</p>
<p>We need security certification, we need it to be as simple as possible, we need it to be as mandatory as possible, and we need it as soon as possible. Simple? Because some good people trying to implement IoT security fail at it, and we must help them. Mandatory? Because some people think that IoT security is not their problem, and we must force them to act. Soon? Because as the clock is ticking, vulnerable devices are accumulating.</p>
<p>Finally, what can you do even if you are not an expert? You can try to apply some good practices, and you can also ask your elected representatives to act on behalf of the community. And while you&#8217;re at it, also ask them to act on climate change.</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2018/08/10/time-bombs-from-climate-to-iot-security/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Collective Risk of IoT</title>
		<link>https://javacard.vetilles.com/2018/04/03/the-collective-risk-of-iot/</link>
		<comments>https://javacard.vetilles.com/2018/04/03/the-collective-risk-of-iot/#comments</comments>
		<pubDate>Tue, 03 Apr 2018 15:19:28 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[Economics]]></category>
		<category><![CDATA[IoT Security]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26374</guid>
		<description><![CDATA[One of the favorite activities of certification experts is to define security levels based on risks. Such levels allow us to put the items to be certified in well-defined boxes. Then, we can certify them according to the rules on that box/level. Until recently, life was easy, and we could define levels easily. Since 3 [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>One of the favorite activities of certification experts is to define security levels based on risks. Such levels allow us to put the items to be certified in well-defined boxes. Then, we can certify them according to the rules on that box/level.</p>
<p>Until recently, life was easy, and we could define levels easily. Since 3 is a magic number for levels, here is a definition that I penned myself:</p>
<ul>
<li><strong>Low</strong>. Risks on individual goods and personal assets.</li>
<li><strong>Medium</strong>. Risks on collective good and community assets.</li>
<li><strong>High</strong>. Risk on human lives.</li>
</ul>
<p>This classification can be (and has been) criticized, but it shows the idea. If something can only hurt your belongings, it is less sensitive than a thing that can impact, let&#8217;s say, your city&#8217;s traffic lights, and much less sensitive than a device that can kill you. And of course, you expect to spend less money on certification for anything with low risk.</p>
<p>If you work on IoT devices, then it is easy to apply this classification. My connected toothbrush is low, so is my personal security camera. The school&#8217;s security camera is medium, though, and a hospital&#8217;s connected syringes are high.</p>
<p>But wait. Didn&#8217;t Mirai exploit cameras to take down community assets like OVH servers? That&#8217;s the IoT issue: my camera as a personal security device to protect my house has a low risk level, but the same camera as a member of a botnet has at least a medium risk level, possibly high if the next bad guy decides to attack emergency services instead of Web hosting services.</p>
<p>This is very hard to capture in a 3-level unidimensional classification. Yet, as we move towards certification of IoT devices, we need to include this collective risk. It is not enough today to consider what a device is supposed to do (watch my house), but we must consider what the device could do after being hacked, and even more importantly, what a large number of the same device could do after being hacked.</p>
<p>Here are three examples with different risks:</p>
<ul>
<li>IT risk. Any permanently connected device can be targeted by a Mirai-like malware and end up attacking any part of our digital infrastructure.
</li>
<li>Human risk. Anything with a battery may be led to overheat and possibly explode, and multiplying this could lead to havoc and multiple injuries.
</li>
<li>Economic risk. Sending Brickerbot (which destroys what it infects) to a large number of simple but essential connected parts (for instance, car parts) could lead to a shortage of parts and major economic damage (for instance, if cars can&#8217;t be fixed).
</li>
</ul>
<p>These risks are hard to capture, but they are significant. However, it is just not possible to label every connected object as high risk, because certification constraints are too high.</p>
<p>One solution is to define a Low or Basic level that includes a significant level of protection against hacking and malicious exploitation. Even this apparently simple solution is hard to define, but thinking about the problem in such terms is already a big step ahead.</p>
<p>So, remember: A single connected device is cute, but collectively, they can be very dangerous.</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2018/04/03/the-collective-risk-of-iot/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Should we Protect Cars from Terrorists?</title>
		<link>https://javacard.vetilles.com/2017/09/05/should-we-protect-cars-from-terrorists/</link>
		<comments>https://javacard.vetilles.com/2017/09/05/should-we-protect-cars-from-terrorists/#comments</comments>
		<pubDate>Tue, 05 Sep 2017 15:07:54 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[News]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26372</guid>
		<description><![CDATA[Some days ago, Mark Cuban published on LinkedIn a question about weaponized cars: who has developed solutions to detect/prevent such events? I live close to Nice, so I would definitely extend the question to trucks, and basically to anything heavy that moves faster than humans. Terrorists are not easy to distinguish from normal drivers before [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Some days ago, Mark Cuban published on LinkedIn a question about <a href="https://www.linkedin.com/feed/update/urn:li:activity:6304720723394990080?lipi=urn%3Ali%3Apage%3Ad_flagship3_pulse_read%3Bf6HGjEd8SdGitOKt9Ao60w%3D%3D" class="liexternal">weaponized cars</a>: who has developed solutions to detect/prevent such events? I live close to Nice, so I would definitely extend the question to trucks, and basically to anything heavy that moves faster than humans.</p>
<h4>Terrorists are not easy to distinguish from normal drivers before it&#8217;s too late</h4>
<p>In the real life of a security consultant, terrorists are a problem in a risk analysis. Whatever technology you think about can be somehow abused by terrorists, and terrorists are really an annoying kind of attacker:</p>
<ul>
<li>Terrorists don&#8217;t care about being caught or killed, which greatly limits the efficiency of many countermeasures designed to make sure that the bad guys eventually get caught (like good logs). With terrorists in a car, the only countermeasure that works is the one that stops them immediately.
</li>
<li>Terrorists are often &#8220;bad users&#8221; rather than intruders, which means that countermeasures against them must be applied against users. If a car decides to crash itself following a perceived attack, it better be sure that it is actually driven by a terrorist, not just a careless driver.
</li>
<li>Any drastic countermeasure that is designed against terrorists may be misused by other attackers (or even by terrorists themselves) to create havoc. Self-destructing cars are not a pretty sight.
</li>
</ul>
<p>In the end, because of these and similar issues, in a classical risk analysis, terrorists are not listed among the bad guys, and if they are, they are explicitly ignored. Yet, Mark Cuban&#8217;s question makes sense, so should we do something about it? I have browsed some of the answers to his question, and I am now reaching my own conclusions:</p>
<ul>
<li>First, whatever we do on vehicles now will not affect older vehicles, so terrorists will still have access to a large number of weapons for many years to come.
</li>
<li>The first consequence is that it is necessary to work on the infrastructure. In Nice, the Promenade des Anglais is now protected physically against weaponized vehicles. They have done a rather good job, using small poles and palm trees as obstacles.
</li>
<li>Also, infrastructure has an IoT component, such as the V2I (vehicle-to-infrastructure) communication. The infrastructure can therefore emit an emergency signal to surrounding vehicles when it detects a potential attack.
</li>
<li>Beyond responding to such infrastructure events, adding countermeasures in cars is difficult. The guy who drives on the sidewalk may be escaping from terrorists, so such countermeasures would still be hard to define.
</li>
<li>Any measure based on deep learning and analysis of drivers&#8217; behavior is also very hard to define and enforce without moving straight to a police state.
</li>
<li>In the long term, full and mandatory automation sounds like a good countermeasure, which also addresses many more problems.
</li>
</ul>
<p>But then, none of this is easy. If we consider the emergency signal sent by the infrastructure in case of attack, there are many potential issues:</p>
<ul>
<li>If someone simply crashes into a barrier, who will decide that it is not an attack, and how long will that take?
</li>
<li>Emergency vehicles should not be directly affected by the emergency signal. But then, we need to ensure that terrorists don&#8217;t steal emergency vehicles.
</li>
<li>How can we prevent bad guys (terrorists or not) from sending emergency signals just to stop traffic?
</li>
</ul>
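<p>The spoofing concern in the last bullet is, at its core, an authentication problem. As a minimal sketch, assuming a symmetric key shared between roadside units and vehicles (a simplification: real V2I standards such as IEEE 1609.2 use certificate-based signatures rather than shared keys), a vehicle would only honor emergency messages carrying a valid tag:</p>

```python
import hmac
import hashlib

# Hypothetical shared key provisioned in both the roadside unit and vehicles.
# Real deployments would use per-unit certificates and asymmetric signatures.
INFRA_KEY = b"example-roadside-unit-key"

def sign_emergency(message: bytes, key: bytes = INFRA_KEY) -> bytes:
    """Roadside unit: append an HMAC-SHA256 tag to the emergency broadcast."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message + tag

def verify_emergency(packet: bytes, key: bytes = INFRA_KEY):
    """Vehicle: accept the message only if the 32-byte tag checks out."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return message if hmac.compare_digest(tag, expected) else None

packet = sign_emergency(b"EMERGENCY:stop-zone:promenade")
assert verify_emergency(packet) == b"EMERGENCY:stop-zone:promenade"
# A forged packet with a bogus tag is rejected.
assert verify_emergency(b"EMERGENCY:fake" + b"\x00" * 32) is None
```

<p>Authentication alone does not solve the problem, of course: it only moves the question to key management, stolen units, and revocation, which is exactly the kind of trade-off discussed above.</p>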
<h4>There is always a trade-off between our protection and our freedom</h4>
<p>In the end, I am not sure that we are ready yet to do anything to deter terrorists from weaponizing our vehicles. The main reason is that the security measures required to do so impose strong constraints on us. So far, we have accepted additional constraints in airports and planes, but we balk at a laptop ban on long-haul flights. And I am quite sure that our tolerance threshold in our cars is very low.</p>
<p>So, we should of course protect our cars from terrorists, but I am afraid that we will not do anything about it for now.</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2017/09/05/should-we-protect-cars-from-terrorists/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Is it Reasonable to Own a Connected Car?</title>
		<link>https://javacard.vetilles.com/2017/08/16/is-it-reasonable-to-own-a-connected-car/</link>
		<comments>https://javacard.vetilles.com/2017/08/16/is-it-reasonable-to-own-a-connected-car/#comments</comments>
		<pubDate>Wed, 16 Aug 2017 15:01:47 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[IoT Security]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26370</guid>
		<description><![CDATA[I have been hearing for a while that « cybersecurity is a process » and that one of the issues with executives is that they don&#8217;t understand that: most of them think that cybersecurity is a problem that should be solved by engineering. When you think about an online service&#8217;s lifecycle, it all makes sense. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I have been hearing for a while that « cybersecurity is a process » and that <a href="https://hbr.org/2017/06/the-behavioral-economics-of-why-executives-underinvest-in-cybersecurity" class="liexternal">one of the issues with executives</a> is that they don&#8217;t understand that: most of them think that cybersecurity is a problem that should be solved by engineering.</p>
<p>When you think about an online service&#8217;s lifecycle, it all makes sense. The service is deployed on servers sitting in a secure data center, then to more servers if needed; then, the service is updated to a new version, possibly moved to a new cloud provider, and all of this is quite transparent to users. Basically, security must be part of the normal lifecycle, continuously adapting to the new hardware, new software, and new threat environment.</p>
<p>IoT security is different, of course. IoT is about objects, not services, and securing objects is different. Their hardware is fixed, and people buy an object with a set of features, not a service. Things are evolving, of course: Tesla is selling an Autopilot service, complete with security and functional updates. Many smaller devices also come with online services. For instance, a remote thermostat comes with a service that monitors meteorological data, in-house presence, and many other parameters to optimize target temperatures. For such services, security is, of course, a process, like for any other online service.</p>
<p>Yet, most people buy cars, even Tesla models; people buy connected thermostats, even if they are useless without the accompanying online service. For most consumer connected devices, the online service is free (for life, whatever that means), but the consumer has few guarantees that it will keep working for a long time. Naturally, professional contracts are a bit clearer: companies pay for the connected devices and for the related services, but they get additional guarantees, for instance, that the service will run and support their devices for at least 5 years.</p>
<p>Companies usually have a business view of it: a device is supposed to last for 5 years, so it is financed over 5 years, including support and associated services. After 5 years, the device is evaluated; either it is replaced with a new one, or it still works fine and its support and associated services can be continued for a few more years. The U.S. Senate is <a href="https://www.cnet.com/news/congress-senate-iot-device-makers-your-security-sucks/" class="liexternal">trying to formalize this</a> for federal suppliers.</p>
<h4>What happens when a consumer&#8217;s connected car ages?</h4>
<p>So, what happens when a consumer&#8217;s connected object ages? Let&#8217;s consider a connected car, for example:</p>
<ul>
<li>For a few years, everything goes fine. The manufacturer regularly updates the car software. The features may evolve or not, but security threats are taken care of. The servers also keep working without problems.
</li>
<li>After a few years, things become more difficult. Some services start to disappear, either because they are outdated and the server doesn&#8217;t work anymore, or because they are flawed and cannot be fixed. But the car keeps working.
</li>
<li>Then someday, maybe after 10 years, a key hardware component gets defeated by hackers, to a point that software can&#8217;t fix/mitigate. From then on, the car is accessible to hackers. So, what should be done? Should the manufacturer disable the hardware (i.e. the car)? Should they be forced to design a replacement part based on more robust/recent hardware? But then, for how long?
</li>
</ul>
<p>Basically, the problem comes from the mismatch between hardware and software. Let&#8217;s make a parallel between computers and connected cars: In 2037, using Autopilot on a 2017 Tesla will be like running a 1997 version of Apache on a 200MHz Pentium Pro/Windows 95 machine in 2017: a very risky business.</p>
<p>The difference between consumer and business IoT is the ownership model (or more generally, the business model). Some people may always lease new cars and basically act like businesses, but some people keep their cars for 10 or 20 years, and some people buy used cars.</p>
<h4>IoT security is a process is a service</h4>
<p>Connecting cars is a great idea, but it doesn&#8217;t work well with the current car business model. There are many reasons that would push us not to own cars in the near future, and security is one of them: We shouldn&#8217;t own connected cars; we should simply use a transportation service. Then, things become much clearer: It is the service provider&#8217;s duty to maintain the cars, software, and hardware, and to replace the cars when they are not secure/safe anymore.</p>
<p>More generally, IoT is a service, and IoT security is a process. The IoT « devices » are just a part of the hardware required to implement a given IoT service, even the big ones; they are just like the servers on the backend side: their sourcing and maintenance should be the service provider&#8217;s responsibility.</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2017/08/16/is-it-reasonable-to-own-a-connected-car/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Des contraintes naît la beauté</title>
		<link>https://javacard.vetilles.com/2017/05/16/des-contraintes-nait-la-beaute/</link>
		<comments>https://javacard.vetilles.com/2017/05/16/des-contraintes-nait-la-beaute/#comments</comments>
		<pubDate>Tue, 16 May 2017 21:37:38 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[News]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26354</guid>
		<description><![CDATA[This quote from Leonardo da Vinci &#8220;Beauty is born from constraints&#8221; was chosen by Alain Colmerauer as the motto for Prolog IV, the last iteration (for now) of the Prolog language, developed by Prologia in the early 1990&#8217;s. Alain Colmerauer passed away this week. I have plenty of memories about him, starting from classes with [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>This quote from Leonardo da Vinci &#8220;Beauty is born from constraints&#8221; was chosen by Alain Colmerauer as the motto for Prolog IV, the last iteration (for now) of the Prolog language, developed by Prologia in the early 1990&#8217;s.</p>
<p>Alain Colmerauer passed away this week. I have plenty of memories about him, starting from classes with him in Marseille, where his way of presenting constraint programming was as strange as it was passionate. Even before that, during my studies at Marseille&#8217;s <em>Groupe d&#8217;Intelligence Artificielle</em>, heavy on logic and Prolog, Alain Colmerauer was the name that made us all dream about research and fame.</p>
<p>Constraint programming at Prologia was intended to solve practical problems, in scheduling, optimization, and other NP-complete problems. That was fun for me, but Alain didn&#8217;t care: for him, what counted most was the beauty of the language, the ability to describe it with the simplest possible theory. <a href="http://prolog-heritage.org/en/ph30.html" class="liexternal">Prolog III</a>, the first constraint language he developed, used linear programming on rational numbers, which could not solve all problems, but was mathematically exact.</p>
<p>In <a href="http://prolog-heritage.org/en/ph40.html" class="liexternal">Prolog IV</a>, we wanted to solve more general problems, and we started using intervals on less exact floating-point numbers. Alain was not enthusiastic at first, but things got better when we realized that floating-point numbers are actually rational numbers (<em>i.e.</em>, exact numbers, not some approximation).</p>
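<p>That observation is easy to check: every finite IEEE 754 float equals m &#215; 2<sup>e</sup> for integers m and e, and is therefore an exact rational number. A quick illustration in Python (not Prolog, just to show the arithmetic):</p>

```python
from fractions import Fraction
import math

# Every finite binary float is m * 2**e for integers m and e,
# so it is an exact rational number, not an approximation of one.
exact = Fraction(0.1)
print(exact)  # 3602879701896397/36028797018963968, the true value of float 0.1

# The real number 1/10 is not representable, but it is exactly bracketed
# by two neighbouring floats; interval reasoning on these rational
# endpoints therefore remains mathematically sound.
below = Fraction(math.nextafter(0.1, 0.0))
assert below < Fraction(1, 10) < exact
```

<p>This is precisely what reconciled the interval approach with exactness: the interval bounds are floats, hence rationals, and the interval provably contains the real number being approximated.</p>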
<p>While I was writing my dissertation, I spent some evenings with Alain discussing potential formalizations of our constraints, and we ended up defining a notion of approximation to map a set to a smaller set of approximated values, and building something on it. I was quite proud of it, and I still am, but Alain was disappointed by the fact that the properties defining an approximation were too complex.</p>
<p>For me, that&#8217;s the legacy of Alain Colmerauer: even in the most complex thing, program, or language, a simple and elegant view of it is carefully hidden, and can be uncovered if you look for it carefully enough.</p>
<p>RIP, Alain.</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2017/05/16/des-contraintes-nait-la-beaute/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Think like an attacker with a bottom-up threat analysis</title>
		<link>https://javacard.vetilles.com/2017/03/22/think-like-an-attacker-with-a-bottom-up-threat-analysis/</link>
		<comments>https://javacard.vetilles.com/2017/03/22/think-like-an-attacker-with-a-bottom-up-threat-analysis/#comments</comments>
		<pubDate>Wed, 22 Mar 2017 14:48:59 +0000</pubDate>
		<dc:creator><![CDATA[Eric Vétillard]]></dc:creator>
				<category><![CDATA[IoT Security]]></category>

		<guid isPermaLink="false">http://javacard.vetilles.com/?p=26362</guid>
		<description><![CDATA[A risk analysis is a great tool when planning the security of a product. This is typically done with a top-down methodology: You first define assets, then identify threats or risks on these assets, followed by attack strategies and attack objectives, countermeasures, getting finer and finer. These methodologies present many advantages, and one of the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A risk analysis is a great tool when planning the security of a product. This is typically done with a top-down methodology: You first define assets, then identify threats or risks on these assets, followed by attack strategies and attack objectives, countermeasures, getting finer and finer.</p>
<p>These methodologies present many advantages, and one of the most obvious is their quantitative side: a value can be assigned to each risk, threat, or countermeasure, supporting management decisions.</p>
<h4>Top-down is an owner&#8217;s view, not an attacker&#8217;s view<br />
</h4>
<p>However, the top-down approach has some limits, especially when doing a threat analysis, and when getting closer to implementation. The top-down approach has two big problems, which limit how it models reality.</p>
<p>First, a top-down approach doesn&#8217;t faithfully represent the way in which attackers work, which is often much more opportunistic. An attacker often has only an abstract goal, for instance to break into a system or harm some company&#8217;s reputation. This is because the attacker is in fact an attacker ecosystem, with many actors involved. Some actors perform attacks on actual targets, while others identify vulnerabilities and attack paths by investing in security research. The market for 0-day exploits on major software components is an example: there, the researcher has no idea of the way in which the attack will be exploited, and doesn&#8217;t care.</p>
<p>Second, and maybe more importantly, a top-down analysis privileges intent over reality, and it can&#8217;t represent the blatant bugs that are essential to many vulnerabilities. When a developer assesses the difficulty or cost of an attack, his view is necessarily optimistic: the developer will not consider that he has made a stupid mistake, or that his algorithm is flawed. In real life, though, many vulnerabilities are linked to exactly such situations.</p>
<h4>Attackers are opportunistic<br />
</h4>
<p>Let&#8217;s consider an example. We use an attack graph, because it better matches the reality of an attacker ecosystem, building complete attack paths from smaller attacks and vulnerabilities. Graphs can show how attacks are interconnected. If individual attack edges are weighted with the cost of the attack, then the attacker&#8217;s job is to identify the path with the smallest cost, or at least one of the smallest costs. This is shown on the left, with two paths that are better than others (total costs of 8 and 9, respectively), but the differences are not that great, with the costliest path at 11.</p>
<p><a href="http://javacard.vetilles.com/wp-content/uploads/2019/01/attack0.jpg" class="liimagelink"><img src="http://javacard.vetilles.com/wp-content/uploads/2019/01/attack0-300x120.jpg" alt="attack0" width="300" height="120" class="alignnone size-medium wp-image-26363" /></a></p>
<p>Now, look on the right. The graph has been corrected by simply taking into consideration one single stupid bug that makes one attack much easier than expected. Suddenly, a new path appears that was not initially considered, because it was assumed to be unlikely, with a much lower total cost of 5. This minor change can lead developers to reconsider some of their hypotheses about threats.</p>
<p>And we have been nice, here. In real life, new attack edges and nodes are likely to be added to the attack graph, defining shortcuts to existing attacks, and often leading to completely unexpected attack paths.</p>
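<p>The attacker&#8217;s job described above is just a shortest-path computation. Here is a minimal sketch in Python, with hypothetical nodes and edge costs (not the ones from the figure), showing how lowering a single edge cost reroutes the cheapest attack path:</p>

```python
import heapq

def cheapest_attack(graph, start, goal):
    """Dijkstra's algorithm: lowest-cost attack path from start to goal.

    graph maps each node to a list of (neighbour, cost) edges.
    Returns (total_cost, path).
    """
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, edge_cost in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical attack graph: edge weights are attack costs.
graph = {
    "entry":    [("network", 4), ("physical", 6)],
    "network":  [("firmware", 4)],
    "physical": [("firmware", 3)],
    "firmware": [("control", 2)],
}
cost, path = cheapest_attack(graph, "entry", "control")
print(cost, path)  # 10 ['entry', 'network', 'firmware', 'control']

# One "stupid bug" makes the physical edge almost free, and the
# cheapest path changes entirely.
graph["entry"] = [("network", 4), ("physical", 1)]
print(cheapest_attack(graph, "entry", "control"))  # total cost drops to 6
```

<p>The point is not the algorithm, which is trivial, but the sensitivity: a single mispriced edge, the kind of pricing error a top-down analysis makes when it ignores blatant bugs, silently redefines the optimal attack.</p>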
<h4>Build attacks from low-level vulnerabilities<br />
</h4>
<p>That&#8217;s where a bottom-up analysis can greatly help. The idea is to look at individual components and security measures, and to identify, in the most practical way possible, how they could be abused and which vulnerabilities are likely. Such an analysis can be a bit messy, exploring many directions, so it will not yield a nice set of slides to show to a manager or customer. However, it is likely to identify a few interesting threats and attacks that cannot be found through a top-down approach.</p>
<p>Later in the development cycle, similar results can be achieved through a security evaluation, especially in black-box testing. As the evaluators look at their target, they will have an opportunistic approach by first trying to find a few easy vulnerabilities.</p>
<p><strong>[EDITED]</strong> Eric Diehl published a very good article about white-box testing vs. black-box testing, which I have to agree with. It qualifies my last point about black-box testing: white-box testing is generally more efficient than black-box testing, and if you are in a position to use one, a bounty program is also more efficient. Yet, I still believe that a foray into black-box testing can be useful to get a different insight into a product&#8217;s security.</p>
<p>If you are not familiar with threat analysis or modeling, try Adam Shostack&#8217;s <a href="https://amzn.to/2CSWAdi" class="liexternal">Threat Modeling: Designing for Security</a>. It really helped me when I started working on threats, and it provides good tools for a bottom-up analysis.</p>
]]></content:encoded>
			<wfw:commentRss>https://javacard.vetilles.com/2017/03/22/think-like-an-attacker-with-a-bottom-up-threat-analysis/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
