SAC 2009 Accepted Submissions

A More Compact AES
David Canright and Dag Arne Osvik
We explored ways to reduce the number of bit operations required to implement AES. One way involved optimizing the composite field approach for entire rounds of AES. Another way was integrating the Galois multiplications of MixColumns with the linear transformations of the S-box. Combined with careful optimizations, these reduced the number of bit operations to encrypt one block by 9.0%, compared to earlier work that used the composite field only in the S-box. For decryption, the improvement was 13.5%. This work may be useful both as a starting point for a bit-sliced software implementation, where reducing operations increases speed, and also for hardware with limited resources.

A new approach for FCSRs
François Arnault and Thierry Berger and Cédric Lauradoux and Marine Minier and Benjamin Pousse
Feedback with Carry Shift Registers (FCSRs) have been proposed as an alternative to Linear Feedback Shift Registers (LFSRs) for the design of stream ciphers. FCSRs have good statistical properties and provide a built-in non-linearity. However, two attacks have shown that the current representations of FCSRs can introduce weaknesses into the cipher. We propose a new "ring" representation of FCSRs, based upon a matrix definition, which generalizes the Galois and Fibonacci representations. Our approach preserves the statistical properties and circumvents the weaknesses of the Fibonacci and Galois representations. Moreover, the ring representation leads to automata with faster diffusion and better implementation results. As an application, we describe a new version of the F-FCSR stream ciphers.
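
As background for readers unfamiliar with the automaton being generalized, the sketch below shows a single step of a Fibonacci-style FCSR in Python. It is only an illustration of the carry (memory) that gives FCSRs their built-in non-linearity, not the ring representation proposed in the paper, and the tap/output conventions are chosen for readability.

```python
def fcsr_fibonacci_step(state, mem, taps):
    """One step of a toy Fibonacci FCSR.

    state: list of register bits, taps: list of 0/1 carry coefficients,
    mem: integer carry. The tapped cells and the carry are added as
    integers; the low bit is fed back, the high part becomes the new carry.
    """
    sigma = mem + sum(s & t for s, t in zip(state, taps))
    feedback, mem = sigma & 1, sigma >> 1
    out = state.pop()            # bit shifted out (output bit)
    state.insert(0, feedback)    # feedback bit shifted in
    return out, state, mem
```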

An Efficient Residue Group Multiplication for The $\eta_T$ Pairing Over ${\mathbb F}_{3^m}$
Yuta Sasaki and Satsuki Nishina and Masaaki Shirase and Tsuyoshi Takagi
When we implement the $\eta_T$ pairing, which is one of the fastest pairings, we need multiplications in the base field ${\mathbb F}_{3^m}$ and in a group $G$. To date, implementations of the $\eta_T$ pairing have regarded elements of $G$ as elements of ${\mathbb F}_{3^{6m}}$. Gorla et al. proposed a multiplication algorithm in ${\mathbb F}_{3^{6m}}$ that takes 5 multiplications in ${\mathbb F}_{3^{2m}}$, namely 15 multiplications in ${\mathbb F}_{3^{m}}$, which attains the theoretical lower bound on the number of multiplications. On the other hand, we may also regard elements of $G$ as elements of the residue group ${\mathbb F}_{3^{6m}}^{\,*}\,/\,{\mathbb F}_{3^{m}}^{\,*}$, in which $\beta a$ is equivalent to $a$ for $a \in {\mathbb F}_{3^{6m}}^{\,*}$ and $\beta \in {\mathbb F}_{3^{m}}^{\,*}$. This paper proposes an algorithm for multiplication in the residue group. Its cost is asymptotically 12 multiplications in ${\mathbb F}_{3^{m}}$ as $m \rightarrow \infty$, which goes below the lower bound attained by the algorithm of Gorla et al.
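
For orientation, the following display (our own summary of the definition in the abstract, not taken from the paper) spells out why multiplication is well defined on the residue group $G$: scalars from ${\mathbb F}_{3^{m}}^{\,*}$ simply factor out.

\[
  [a]\cdot[b] := [ab], \qquad (\beta a)(\gamma b) = (\beta\gamma)(ab), \quad \beta\gamma \in {\mathbb F}_{3^{m}}^{\,*},
\]
\[
  |G| = \bigl|{\mathbb F}_{3^{6m}}^{\,*}\,/\,{\mathbb F}_{3^{m}}^{\,*}\bigr| = \frac{3^{6m}-1}{3^{m}-1}.
\]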

An Improved Recovery Algorithm for Decayed AES Key Schedule Images
Alex Tsow
A practical algorithm that recovers AES key schedules from decayed memory images is presented. Halderman et al. [9] established this recovery capability, dubbed the cold-boot attack, as a serious vulnerability for several widespread software-based encryption packages. Our algorithm recovers AES-128 key schedules tens of millions of times faster than the original proof-of-concept release. In practice, it enables reliable recovery of key schedules at 70% decay, well over twice the decay capacity of previous methods. The algorithm is generalized to AES-256 and is empirically shown to recover 256-bit key schedules that have suffered 65% decay. When solutions are unique, the algorithm efficiently validates this property and outputs the solution for memory images decayed up to 60%.
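
The recovery algorithm itself is not reproduced in the abstract; what cold-boot recovery of this kind exploits is the redundancy of the AES key expansion, where every round-key word is determined by two earlier words. The Python sketch below checks these standard AES-128 relations on a candidate 44-word schedule; it is only a consistency check, not the paper's error-correcting search.

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return r

def build_sbox():
    """Derive the AES S-box (inverse in GF(2^8) followed by the affine map)."""
    inv = [0] * 256
    for x in range(1, 256):
        for y in range(1, 256):
            if gf_mul(x, y) == 1:
                inv[x] = y
                break
    rotl8 = lambda b, r: ((b << r) | (b >> (8 - r))) & 0xFF
    return [inv[x] ^ rotl8(inv[x], 1) ^ rotl8(inv[x], 2)
            ^ rotl8(inv[x], 3) ^ rotl8(inv[x], 4) ^ 0x63 for x in range(256)]

SBOX = build_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def sub_word(w):
    return sum(SBOX[(w >> s) & 0xFF] << s for s in (24, 16, 8, 0))

def rot_word(w):
    return ((w << 8) | (w >> 24)) & 0xFFFFFFFF

def schedule_consistent(words):
    """Return the indices of the AES-128 key-expansion equations that fail.

    words: 44 big-endian 32-bit words of a candidate expanded key. A decayed
    image violates many of these equations, which is what guides recovery.
    """
    bad = []
    for i in range(4, 44):
        if i % 4 == 0:
            expect = (words[i - 4] ^ sub_word(rot_word(words[i - 1]))
                      ^ (RCON[i // 4 - 1] << 24))
        else:
            expect = words[i - 4] ^ words[i - 1]
        if words[i] != expect:
            bad.append(i)
    return bad
```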

BTM: A Single-Key, Inverse-Cipher-Free Mode for Deterministic Authenticated Encryption
Tetsu Iwata and Kan Yasuda
We present a new blockcipher mode of operation named BTM, which stands for Bivariate Tag Mixing. BTM falls into the category of Deterministic Authenticated Encryption, which we call DAE for short. BTM makes all-around improvements over the previous two DAE constructions, SIV (Eurocrypt 2006) and HBS (FSE 2009). Specifically, our BTM requires just one blockcipher key, whereas SIV requires two. Our BTM does not require the decryption algorithm of the underlying blockcipher, whereas HBS does. The BTM mode utilizes bivariate polynomial hashing for authentication, which enables us to handle vectorial inputs of dynamic dimensions. BTM then generates an initial value for its counter mode of encryption by mixing the resulting tag with one of the two variables (hash keys), which avoids the need for an implementation of the inverse cipher.

Compact McEliece Keys from Goppa Codes
Rafael Misoczki and Paulo S. L. M. Barreto
The classical McEliece cryptosystem is built upon the class of Goppa codes, which remains secure to this date in contrast to many other families of codes, but it leads to very large public keys. Previous proposals to obtain short McEliece keys have primarily centered around replacing that class with other families of codes, most of which were shown to contain weaknesses, and which came at the cost of halving the error-correction capability. In this paper we describe a simple way to significantly reduce the key size in McEliece and related cryptosystems using a subclass of Goppa codes, keeping the capability of correcting the full designed number of errors while also improving the efficiency of cryptographic operations to subquadratic time.

Cryptanalysis of Dynamic SHA(2)
Jean-Philippe Aumasson and Orr Dunkelman and Sebastiaan Indesteege and Bart Preneel
In this paper, we analyze the hash functions Dynamic SHA and Dynamic SHA2, which have been selected as first round candidates in the NIST Hash Competition. These hash functions rely heavily on data-dependent rotations, similar to certain block ciphers, e.g., RC5. Our analysis suggests that in the case of hash functions, where the attacker has more control over the rotations, this approach is less favorable. We present practical, or close to practical, collision attacks on both Dynamic SHA and Dynamic SHA2. Moreover, we present a preimage attack on Dynamic SHA that is faster than exhaustive search.

Cryptanalysis of hash functions with structures
Dmitry Khovratovich
Affiliations: University of Luxembourg
Hash function cryptanalysis has acquired many methods, tools and tricks from other areas, mostly block ciphers. In this paper another trick from block cipher cryptanalysis, structures, is used for speeding up the collision search. We investigate the memory and time complexities of this approach under different assumptions on the round functions. The power of the new attack is illustrated with the cryptanalysis of the hash function Grindahl and the analysis of the SHA-3 candidate Fugue (both functions in their 256- and 512-bit versions). The collision attack on Grindahl-512 is the first collision attack on this function.

Cryptanalyses of Narrow-Pipe Mode of Operation in AURORA-512 Hash Function
Yu Sasaki
We present cryptanalyses of the AURORA-512 hash function, which is a SHA-3 candidate. We first describe a collision attack on AURORA-512. We then show a second-preimage attack on AURORA-512/-384 and explain that randomized hashing can also be attacked. We finally show a full key-recovery attack on HMAC-AURORA-512 and a universal forgery on HMAC-AURORA-384. Our attack exploits weaknesses in a narrow-pipe mode of operation of AURORA-512 named ``Double-Mix Merkle-Damg\aa{}rd (DMMD)'', which produces a 512-bit output by updating two 256-bit chaining variables in parallel. We do not look inside the compression function; hence, our attack works even if the compression function is regarded as a random oracle. The time complexity of our collision attack is approximately $2^{236}$ AURORA-512 operations, and $2^{236}\times 512$ bits of memory are required. Our second-preimage attack works on any given message. The time complexity is approximately $2^{290}$ AURORA-512 operations, and $2^{288}\times 512$ bits of memory are required. Our key-recovery attack on HMAC-AURORA-512, which uses 512-bit secret keys, requires $2^{257}$ queries, $2^{259}$ off-line AURORA-512 operations, and a negligible amount of memory. The universal forgery on HMAC-AURORA-384 is also possible by combining the second-preimage and key-recovery attacks.

Cryptanalysis of the full MMB Block Cipher
Meiqin Wang and Jorge Nakahara Jr and Yue Sun
The block cipher MMB was designed by Daemen, Govaerts and Vandewalle in 1993 as an alternative to the IDEA block cipher. We exploit and describe unusual properties of the modular multiplication in ${\mathbb Z}_{2^{32}-1}$, but the main contributions of this paper are detailed differential, square and linear cryptanalyses of MMB. Concerning differential cryptanalysis, we can break the full 6-round MMB with $2^{118}$ chosen plaintexts, $2^{95.91}$ full 6-round MMB encryptions and $2^{64}$ counters, effectively bypassing the cipher's countermeasures against DC. With the square attack, we can recover the 128-bit user key of a 4-round variant of MMB with $2^{34}$ chosen plaintexts, $2^{126.32}$ 4-round encryptions and $2^{64}$ memory. Concerning linear cryptanalysis, we present a key-recovery attack on a 3-round variant of MMB requiring $2^{114.56}$ known plaintexts and $2^{126}$ encryptions. Moreover, we detail a ciphertext-only attack on 2-round MMB using $2^{93.6}$ ciphertexts and $2^{93.6}$ parity computations. These attacks do not depend on weak-key or weak-subkey assumptions, and are thus independent of the (redesigned) key schedule algorithm.
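
As one illustration of arithmetic in ${\mathbb Z}_{2^{32}-1}$ (a well-known property, not necessarily the specific ones exploited in the paper), multiplication by a power of two acts as a cyclic rotation of the 32-bit word, because $2^{32} \equiv 1 \pmod{2^{32}-1}$. A minimal Python check:

```python
MOD = 2**32 - 1

def mul_mod(a, b):
    # multiplication in Z_{2^32 - 1}
    return (a * b) % MOD

def rotl32(x, k):
    return ((x << k) | (x >> (32 - k))) & 0xFFFFFFFF

# since 2^32 = 1 (mod 2^32 - 1), multiplying by 2^k rotates the 32-bit word
# (for representatives x < 2^32 - 1)
x = 0x0F1E2D3C
assert all(mul_mod(x, 2**k) == rotl32(x, k) for k in range(1, 32))
```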

Cryptanalysis of the LANE Hash Function
Shuang Wu and Dengguo Feng and Wenling Wu
The LANE hash function is designed by Sebastiaan Indesteege and Bart Preneel. It is now a first-round candidate in NIST's SHA-3 competition. LANE comprises four concrete designs with digest lengths of 224, 256, 384 and 512 bits.

The LANE hash function uses two permutations P and Q, which consist of different numbers of AES-like rounds. LANE-224/256 uses a 6-round P and a 3-round Q; LANE-384/512 uses an 8-round P and a 4-round Q. We will use LANE-n-(a,b) to denote a variant of LANE with an a-round P, a b-round Q and digest length n.

We have found a semi-free-start collision attack on reduced-round LANE-256-(3,3) with a complexity of 2^62 compression function evaluations and 2^69 memory. The same technique applies to LANE-512-(3,4), giving a semi-free-start collision attack with the same complexity of 2^62 and 2^69 memory. We also propose a collision attack on LANE-512-(3,4) with a complexity of 2^94 and 2^133 memory.

Differential Fault Analysis of Rabbit
Aleksander Kircanski and Amr Youssef
Rabbit is a high-speed, scalable stream cipher with a 128-bit key and a 64-bit initialization vector. It has passed all three stages of the ECRYPT stream cipher project and is a member of the eSTREAM software portfolio. In this paper, we present a practical fault analysis attack on Rabbit. The fault model in which we analyze the cipher assumes that the attacker is able to fault a random bit of the internal state of the cipher but cannot control the exact location of the injected faults. Our attack requires around 128 to 256 faults and a precomputed table of size $2^{41.6}$ bytes, and recovers the complete internal state of Rabbit in about $2^{38}$ steps.

Format-Preserving Encryption
Mihir Bellare and Thomas Ristenpart
In the encryption of credit-card data as well as other applications, we may want to encipher in such a way that a certain property of the plaintext is preserved in the ciphertext. This paper initiates a treatment of this type of format-preserving encryption. We introduce a primitive that we call a general cipher, which allows us to capture encryption preserving arbitrary formats. We specify an as-strong-as-possible notion of security for it, which says that nothing beyond the desired property is leaked. We then provide an efficient construction of a general cipher that we call FPF.
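
The construction FPF itself is not described in the abstract. Purely as an illustration of the format-preserving idea, the hypothetical sketch below enciphers 16-digit numbers onto 16-digit numbers with a toy balanced Feistel network keyed through HMAC-SHA256; it is a generic textbook-style construction, not the paper's FPF, and the round count and round function are arbitrary choices.

```python
import hmac, hashlib

HALF = 10**8  # a 16-digit number is split into two 8-digit halves

def fpe_encrypt(x, key, rounds=10):
    """Toy format-preserving encryption of an integer in [0, 10^16)."""
    assert 0 <= x < HALF * HALF
    L, R = divmod(x, HALF)
    for r in range(rounds):
        mac = hmac.new(key, f"{r}:{R}".encode(), hashlib.sha256).digest()
        F = int.from_bytes(mac[:8], "big") % HALF
        L, R = R, (L + F) % HALF
    return L * HALF + R

def fpe_decrypt(y, key, rounds=10):
    L, R = divmod(y, HALF)
    for r in reversed(range(rounds)):
        mac = hmac.new(key, f"{r}:{L}".encode(), hashlib.sha256).digest()
        F = int.from_bytes(mac[:8], "big") % HALF
        L, R = (R - F) % HALF, L
    return L * HALF + R

# the ciphertext has the same 16-digit format as the plaintext
key = b"sixteen byte key"
ct = fpe_encrypt(4111111111111111, key)
assert 0 <= ct < 10**16 and fpe_decrypt(ct, key) == 4111111111111111
```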

Herding, Second Preimage and Trojan Message Attacks Beyond Merkle-Damgaard
Elena Andreeva and Charles Bouillaguet and Orr Dunkelman and John Kelsey
In this paper we present new techniques to analyze the structure of hash functions other than the Merkle-Damgaard construction. We extend the herding attack to concatenated hashes, zipper hashes, and hash functions which iterate the message blocks several times. We then introduce the herding attack on tree hashes, showing how this attack can be applied in a completely different way. Furthermore, we show some new second preimage attacks (a herding-based attack on hash-twice and a time-memory-data tradeoff attack based on tree hashes). Finally, we present a new type of attack, the trojan message attack, which allows for producing second preimages of unknown messages (from a small space) when they are appended with a fixed suffix.

Highly Regular m-ary Powering Ladders
Marc Joye
This paper describes new exponentiation algorithms with applications to cryptography. The proposed algorithms can be seen as m-ary generalizations of the so-called Montgomery ladder. Both left-to-right and right-to-left versions are presented. Like the Montgomery ladder, the proposed algorithms always repeat the same instructions in the same order and so offer a natural protection against certain implementation attacks. Moreover, as they are available in any radix m and in either scan direction, the proposed algorithms offer improved performance and greater flexibility.
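
For reference, here is the classical binary (radix-2) Montgomery ladder for modular exponentiation, which the paper generalizes to arbitrary radix m and to both scan directions; this sketch is the standard textbook version, not one of the new algorithms. Every iteration performs exactly one multiplication and one squaring regardless of the exponent bit, which is the regularity property referred to above.

```python
def montgomery_ladder(x, e, n):
    """Compute x**e % n with the binary Montgomery ladder (invariant: r1 == x*r0)."""
    r0, r1 = 1, x % n
    for bit in bin(e)[2:]:                 # scan exponent bits left to right
        if bit == '1':
            r0, r1 = (r0 * r1) % n, (r1 * r1) % n
        else:
            r1, r0 = (r0 * r1) % n, (r0 * r0) % n
    return r0

assert montgomery_ladder(3, 5, 1000) == pow(3, 5, 1000)
```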

Improved Cryptanalysis of the Reduced Grøstl Compression Function, ECHO Permutation and AES
Florian Mendel and Thomas Peyrin and Christian Rechberger and Martin Schläffer
In this paper, we propose two new ways to mount attacks on the SHA-3 candidates Grøstl and ECHO, and apply these attacks also to the AES. Our results improve upon the original rebound attack. Using two new techniques, we are able to extend the number of rounds in which the available degrees of freedom can be used. As a result, we present the first attack on 7 rounds of the Grøstl-256 compression function, as well as an improved known-key distinguisher for 7 rounds of the AES and of the internal permutation used in ECHO.

Improved Integral Attacks on MISTY1
Xiaorui Sun and Xuejia Lai
We present several integral attacks on MISTY1 using the $FO$ Relation. The $FO$ Relation is a more precise form of the Sakurai-Zheng Property: the functions in the $FO$ Relation depend on 16-bit inputs instead of the 32-bit inputs used in previous attacks, and they do not change for different keys, whereas previous works used different functions.

We use the $FO$ Relation to improve the 5-round integral attack. The data complexity of our attack, $2^{34}$ chosen plaintexts, is the same as in the previous attack, but the running time is reduced from $2^{48}$ encryptions to $2^{29.58}$ encryptions. The attack is then extended by one more round with a data complexity of $2^{34}$ chosen plaintexts and a time complexity of $2^{107.26}$ encryptions. By exploiting a key schedule weakness of the cipher, we also present a chosen-ciphertext attack on 6-round MISTY1 with all the $FL$ layers, with a data complexity of $2^{32}$ chosen ciphertexts and a time complexity of $2^{126.09}$ encryptions. Compared with other attacks on 6-round MISTY1 with all the $FL$ layers, our attack has the lowest data complexity.

Information Theoretically Secure Multi Party Set Intersection Re-Visited
Arpita Patra and Ashish Choudhary and C. Pandu Rangan
We revisit the problem of secure multiparty set intersection in the information theoretic setting. In \cite{LiSetMPCACNS07}, Li et al. proposed a protocol for the multiparty set intersection problem with $n$ parties that provides information theoretic security when $t < \frac{n}{3}$ parties are corrupted by an active adversary having {\it unbounded computing power}. In \cite{LiSetMPCACNS07}, the authors claimed that their protocol takes six rounds of communication and communicates ${\cal O}(n^4m^2)$ field elements, where each party has a set containing $m$ field elements. However, we show that the round and communication complexity of the protocol in \cite{LiSetMPCACNS07} is much higher than claimed. We then propose a {\it novel} information theoretically secure protocol for multiparty set intersection with $n > 3t$, which significantly improves on the "actual" round and communication complexity (as shown in this paper) of the protocol given in \cite{LiSetMPCACNS07}. To design our protocol, we use several tools which are of independent interest.

More On Key Wrapping
Rosario Gennaro and Shai Halevi
We address the practice of key-wrapping, where one symmetric cryptographic key is used to encrypt another. This practice is used extensively in key-management architectures, often to create an "adapter layer" between incompatible legacy systems. Although in principle any secure encryption scheme can be used for key wrapping, practical constraints (which are commonplace when dealing with legacy systems) may severely limit the possible implementations, sometimes to the point of ruling out any "secure general-purpose encryption." It is therefore desirable to identify the security requirements that are "really needed" for the key-wrapping application, and have a large variety of implementations that satisfy these requirements.

This approach was developed in a work by Rogaway and Shrimpton at EUROCRYPT 2006. They focused on allowing deterministic encryption, and defined a notion of deterministic authenticated encryption (DAE), which roughly formalizes "the strongest security that one can get without randomness." Although DAE is weaker than full blown authenticated encryption, it seems to suffice for the case of key wrapping (since keys are random and therefore the encryption itself can be deterministic). Rogaway and Shrimpton also described a mode of operation for block ciphers (called SIV) that realizes this notion.

We continue in the direction initiated by Rogaway and Shrimpton. We first observe that the notion of DAE still rules out many practical and ``seemingly secure'' implementations. We thus look for even weaker notions of security that may still suffice. Specifically, we consider notions that mirror the usual security requirements for symmetric encryption, except that the inputs to be encrypted are random rather than adversarially chosen. These notions are all strictly weaker than DAE, yet we argue that they suffice for most applications of key wrapping.

As for implementations, we consider the key-wrapping notion that mirrors authenticated encryption, and investigate a template of Hash-then-Encrypt (HtE), which seems practically appealing: In this method the key is first "hashed" into a short nonce, and then the nonce and key are encrypted using some standard encryption mode. We consider a wide array of "hash functions", ranging from a simple XOR to collision-resistant hashing, and examine what "hash function" can be used with what encryption mode.
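
As a concrete (hypothetical) instantiation of the HtE template, the sketch below derives the nonce by truncating SHA-256 of the payload key and then encrypts under AES in counter mode, using the pyca/cryptography package. Whether this particular pairing of "hash function" and encryption mode is adequate is exactly the kind of question the paper studies, so treat it as an illustration of the template, not as a recommended key wrap.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def hte_wrap(kek: bytes, payload_key: bytes) -> bytes:
    # "hash" the payload key into a 16-byte nonce, then encrypt in CTR mode
    nonce = hashlib.sha256(payload_key).digest()[:16]
    enc = Cipher(algorithms.AES(kek), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(payload_key) + enc.finalize()

def hte_unwrap(kek: bytes, wrapped: bytes) -> bytes:
    nonce, ct = wrapped[:16], wrapped[16:]
    dec = Cipher(algorithms.AES(kek), modes.CTR(nonce)).decryptor()
    payload_key = dec.update(ct) + dec.finalize()
    # the integrity check recomputes the "hash" and compares it to the nonce
    if hashlib.sha256(payload_key).digest()[:16] != nonce:
        raise ValueError("key unwrap failed")
    return payload_key
```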

More on the Security of Linear RFID Authentication Protocols
Matthias Krause and Dirk Stegemann
The limited computational resources available in RFID tags have prompted an intensive search for lightweight authentication protocols in recent years. The most promising suggestions were those of the $\textsf{HB}$-family ($\textsf{HB}^+$, $\textsf{HB}^{\#}$, Trusted$\textsf{HB}$, ...) initially introduced by Juels and Weis, which are provably secure (via reduction to the Learning Parity with Noise (LPN) problem) against passive and some kinds of active attacks. Their main drawbacks are the large amounts of communicated bits and the fact that all known $\textsf{HB}$-type protocols have been proven insecure with respect to certain types of active attacks. As a possible alternative, authentication protocols based on choosing random elements from $L$ secret linear $n$-dimensional subspaces of $GF(2)^{n+k}$ (so-called CKK-protocols) were introduced by Cicho\'{n}, Klonowski, and Kuty\l owski. These protocols are special cases of the (linear) $(n,k,L)$-protocols which we investigate in this paper. We present several active and passive attacks against $(n,k,L)$-protocols, thereby giving some evidence that the security of $(n,k,L)$-protocols can be reduced to the hardness of the {\it learning unions of linear subspaces} (LULS) problem. We then present a learning algorithm for LULS based on solving overdefined systems of degree $L$ in $Ln$ variables. Under the hardness assumption that LULS problems cannot be solved significantly faster, linear $(n,k,L)$-protocols (with properly chosen $n,k,L$) could be interesting for practical applications.
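
To make the object in question concrete, the sketch below generates the kind of response a tag in a linear $(n,k,L)$-protocol would send: a random element of one of $L$ secret $n$-dimensional subspaces of $GF(2)^{n+k}$, with vectors packed into Python integers. It is a schematic illustration of the abstract's description only; the actual CKK-protocols and the attacks in the paper involve further details not shown here, and the random bases below are not forced to be full rank.

```python
import secrets

def keygen(n, k, L):
    # L secret subspaces, each spanned by n random (n+k)-bit vectors
    return [[secrets.randbits(n + k) for _ in range(n)] for _ in range(L)]

def tag_response(subspaces, n):
    # pick one secret subspace and return a random GF(2)-linear combination
    basis = secrets.choice(subspaces)
    coeffs = secrets.randbits(n)
    v = 0
    for i, b in enumerate(basis):
        if (coeffs >> i) & 1:
            v ^= b          # addition in GF(2)^(n+k) is bitwise XOR
    return v

key = keygen(n=128, k=30, L=4)
response = tag_response(key, n=128)
```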

New Cryptanalysis of Irregularly Decimated Stream Ciphers
Bin Zhang
In this paper we investigate the security of irregularly decimated stream ciphers. We present an improved correlation analysis of various irregular decimation mechanisms, which allows us to obtain much larger correlation probabilities than previously known methods. New correlation attacks are then launched against the shrinking generator with Krawczyk's parameters, LILI-II, DECIM$^{\textit{v2}}$ and DECIM-$128$ to assess the security margin of these ciphers. We show that the shrinking generator with Krawczyk's parameters is practically insecure; the initial internal state of LILI-II can be recovered reliably in $2^{72.5}$ operations if a $2^{24.1}$-bit keystream and $2^{74.1}$ bits of memory are available. This disproves the designers' conjecture that the complexity of any divide-and-conquer attack on LILI-II is in excess of $2^{128}$ operations and requires a large amount of keystream. We also examine the main design idea behind DECIM, i.e., to filter and then decimate the output using the ABSG algorithm, by exhibiting a class of correlations in the ABSG mechanism and mounting attacks faster than exhaustive search on a $160$-bit (out of $192$-bit) reduced version of DECIM$^{\textit{v2}}$ and on a $256$-bit (out of $288$-bit) reduced version of DECIM-$128$. Our result on DECIM is the first nontrivial cryptanalytic result besides time/memory/data tradeoffs. While it confirms the underlying design idea, it shows the interesting fact that the security of DECIM relies more on the length of the involved LFSR than on the ABSG algorithm.
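
For readers unfamiliar with the decimation mechanisms being attacked, here is a minimal sketch of the shrinking generator: two LFSR streams are produced, and an output bit of the first is kept exactly when the corresponding bit of the second equals 1. The toy LFSR and parameters are arbitrary illustrations, not Krawczyk's parameters, and the ABSG-based DECIM mechanism is not shown.

```python
def lfsr(seed_bits, taps, n):
    # simple Fibonacci LFSR over GF(2); taps are indices into the state
    state, out = list(seed_bits), []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def shrinking_generator(a_seed, a_taps, s_seed, s_taps, n):
    a = lfsr(a_seed, a_taps, n)   # data sequence
    s = lfsr(s_seed, s_taps, n)   # selection sequence
    return [ai for ai, si in zip(a, s) if si == 1]   # irregular decimation

keystream = shrinking_generator([1, 0, 1, 1, 0], [0, 2], [0, 1, 1, 0, 1], [1, 4], 64)
```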

New Results on Impossible Differential Cryptanalysis of Reduced Round Camellia-128
Hamid Mala and Mohsen Shakiba and Mohammad Dakhil-alian
Camellia, a 128-bit block cipher which has been accepted by ISO/IEC as an international standard, is increasingly being used in many cryptographic applications. In this paper, using the redundancy in the key schedule and accelerating the filtration of wrong pairs, we present a new impossible differential attack on reduced-round Camellia. With this attack, 12-round Camellia-128 without the FL/FL^{-1} functions and whitening is breakable with a total complexity of about 2^116.6 encryptions and 2^116.3 chosen plaintexts. In terms of the number of attacked rounds, our attack is better than any previously known attack on Camellia-128.

On Repeated Squarings in Binary Fields
Kimmo U. Järvinen
In this paper, we discuss the problem of computing repeated squarings (exponentiations to a power of 2) in finite fields with a polynomial basis. Repeated squarings are especially important in elliptic curve cryptography, where they are used in computing inversions in the field and scalar multiplications on Koblitz curves. We explore the problem specifically from the perspective of efficient implementation on field-programmable gate arrays (FPGAs), where the look-up table structure helps to reduce both area and delay overheads. We propose several repeated squarer architectures and demonstrate their practicability for FPGA-based implementations. Finally, we show that the proposed repeated squarers can offer significant speedups and even improve resistance against side-channel attacks.
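
In a polynomial basis, squaring an element of ${\mathbb F}_{2^m}$ amounts to spreading its coefficient bits (bit $i$ goes to position $2i$) followed by reduction modulo the irreducible polynomial, which is why dedicated repeated-squaring circuits are attractive. The Python sketch below is a bit-level software model of that operation on a toy field, not one of the paper's FPGA architectures.

```python
def gf2m_square(a, f, m):
    """Square a in GF(2^m) with reduction polynomial f (degree m, as an int)."""
    s = 0
    for i in range(m):                      # spread: bit i -> bit 2i
        if (a >> i) & 1:
            s |= 1 << (2 * i)
    for i in range(2 * m - 2, m - 1, -1):   # reduce modulo f
        if (s >> i) & 1:
            s ^= f << (i - m)
    return s

def repeated_square(a, f, m, k):
    """Compute a^(2^k), i.e. k squarings in sequence."""
    for _ in range(k):
        a = gf2m_square(a, f, m)
    return a

# toy example in GF(2^4) with f = x^4 + x + 1
assert gf2m_square(0b1000, 0b10011, 4) == 0b1100   # (x^3)^2 = x^3 + x^2
```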

Optimization strategies for hardware-based cofactorization
Daniel Loebenberger and Jens Putzka
We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) to optimize the runtime of this step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup of between 17% and 33% for the cofactorization step of the GNFS compared to the runtime of an unoptimized cluster.

Practical Collisions for SHAMATA
Sebastiaan Indesteege and Florian Mendel and Bart Preneel and Martin Schlaeffer
In this paper we present a practical collision attack on the SHA-3 submission SHAMATA. SHAMATA is a stream cipher-like hash function design with components of the AES, and it is one of the fastest submitted hash functions. In our attack we show weaknesses in the message injection and state update of SHAMATA. It is possible to find certain message differences that do not get changed by the message expansion and non-linear part of the state update function. This allows us to find a differential path with a complexity of about $2^{96}$ for SHAMATA-256 and about $2^{110}$ for SHAMATA-512, using a linear low-weight codeword search. Using an efficient guess-and-determine technique we can significantly improve the complexity of this differential path for SHAMATA-256. With a complexity of about $2^{40}$ we are even able to construct practical collisions for the full hash function SHAMATA-256.

Practical pseudo-collisions for hash functions ARIRANG-224/384
Jian Guo and Krystian Matusiewicz and Lars R. Knudsen and San Ling and Huaxiong Wang
In this paper we analyse the security of the SHA-3 candidate ARIRANG. We show that bitwise complementation of whole registers turns out to be very useful for constructing high-probability differential characteristics in the function. We use this approach to find near-collisions with Hamming weight 32 for the full compression function as well as collisions for the compression function of ARIRANG reduced to 26 rounds, both with complexity close to $2^0$ and memory requirements of only a few words. We use the near-collisions for the compression function to construct pseudo-collisions for the complete hash functions ARIRANG-224 and ARIRANG-384 with complexity $2^{23}$ and close to $2^0$, respectively. We implemented the attacks and provide examples of appropriate pairs of $H,M$ values. We also provide possible configurations which may give collisions for step-reduced and full ARIRANG.

Real Traceable Signatures
Sherman S.M. Chow
A traceable signature scheme extends a group signature scheme with an enhanced anonymity management mechanism. The group manager can compute a tracing trapdoor which enables anyone to test whether a signature was signed by a given misbehaving user, whereas the only way to do so for group signatures requires revealing the signer of all signatures. Nevertheless, it is not tracing in a strict sense. In all existing schemes, $T$ tracing agents need to collect all $N'$ signatures ever produced and perform $RN'$ ``checks'' for $R$ revoked users. This involves a high volume of transfer and computation. Increasing $T$ maximizes the degree of parallelism for tracing but also the probability of ``missing'' some signatures.

We propose a new and efficient way of tracing: the tracing trapdoor allows the reconstruction of tags such that each of them uniquely identifies a signature of a misbehaving user. Identifying $N$ signatures out of the total of $N'$ signatures ($N \ll N'$) just requires the agent to construct $N$ small tags and send them to the signature holders. Here $N$ gives a trade-off between the number of unlinkable signatures a member can produce and the effort required for the agents to trace the signatures. We present schemes with a simple design borrowed from anonymous credential systems, which are secure in the random oracle model and in the common reference string model, respectively.

Weak Keys of the Block Cipher PRESENT for Linear Cryptanalysis
Kenji Ohkuma
The block cipher PRESENT, designed as an ultra-lightweight cipher, has a 31-round SPN structure with S-box layers of 16 parallel 4-bit S-boxes and diffusion layers consisting of a bit permutation. The designers evaluated that the maximum linear characteristic deviation is not more than $2^{-43}$ for 28 rounds and concluded that PRESENT is not vulnerable to linear cryptanalysis. However, we have found that 32% of PRESENT keys are weak for linear cryptanalysis, and that the linear deviation can be much larger than the linear characteristic value due to a multi-path effect. We evaluated a 28-round path with a linear deviation of $2^{-39.3}$ for the weak keys. Furthermore, we found that linear cryptanalysis is applicable to a 24-round reduced version of PRESENT for the weak keys.
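
For concreteness, a simplified Python model of one PRESENT-style round (key addition, the 4-bit S-box layer, and the bit permutation) is sketched below, using the S-box and permutation as they are commonly published; it only illustrates the SPN structure discussed above and plays no role in the weak-key analysis.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    out = 0
    for i in range(16):                     # 16 parallel 4-bit S-boxes
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    out = 0
    for i in range(64):                     # bit i moves to position 16*i mod 63
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

def present_round(state, round_key):
    return p_layer(sbox_layer(state ^ round_key))
```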