Sunday, October 31, 2010

Section 7.3-7.5 Due 1 Nov 2010

1. There were a lot of calculations that bogged me down in tonight's readings. It was hard to follow what Bob was calculating, what Alice was calculating, what Bob needed to calculate and when, and so on. I also didn't understand the final proposition of 7.5; that is, I didn't quite understand the seemingly inverse relationship between Decision Diffie-Hellman and ElGamal ciphertexts. There was also a part about the exchange of keys. I understood how the keys were determined by each individual party, but I didn't quite understand the logistics of how this kept the key out of the hands of other parties.
2. One additional question I've been thinking about throughout the readings on discrete logs involves the differences between RSA and discrete-log systems. In other words, when is one system more desirable than the other? When are the two equally desirable?
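To sort out who computes what in the key exchange, here's a toy sketch of Diffie-Hellman. The numbers (p = 101, g = 2, and both secret exponents) are my own, not the book's example. Each party sends only g^secret mod p, so an eavesdropper who wants the shared key is stuck solving a discrete log problem:

```python
# Toy Diffie-Hellman key exchange (all numbers made up for illustration).
# Only A = g^a and B = g^b travel over the wire; the shared key never does.

p, g = 101, 2          # public prime and generator (toy sizes)

a = 37                 # Alice's secret exponent, never sent
b = 53                 # Bob's secret exponent, never sent

A = pow(g, a, p)       # Alice sends A = g^a mod p
B = pow(g, b, p)       # Bob sends B = g^b mod p

alice_key = pow(B, a, p)   # Alice computes (g^b)^a mod p
bob_key = pow(A, b, p)     # Bob computes (g^a)^b mod p

assert alice_key == bob_key    # both equal g^(ab) mod p
```

Both parties land on the same key because (g^a)^b = (g^b)^a, but neither a nor b ever crosses the wire; recovering a from g^a is exactly the discrete log problem.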

Thursday, October 28, 2010

Section 7.2 Due Oct 29 2010

1. The processes that the text lays out aren't bad in themselves. However, I didn't understand HOW they were doing the processes. That is to say, I didn't understand where the text was pulling the numbers from. In some cases I saw the patterns and such, but in others I wasn't quite clear on the method behind the madness. For example, the second line on p. 205, where they raise 12 to the 20th power; and this was after a second reading of the processes. Also, why do they write x = x0 + 2x1 + 4x2 (mod 8)? The choices of how many xi's to use and of mod 8 seem arbitrary.
2. The other issue I've been having with the reading is more of a mild curiosity. All the processes, especially in tomorrow's reading, work (mod p). I thought that in cryptography (primarily in RSA) we would be working (mod n), where n is the product of two primes. My question is, when will we be working ONLY mod p as opposed to mod n?
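The x = x0 + 2x1 + 4x2 (mod 8) expansion isn't arbitrary: when 2^3 divides p - 1, you can recover x mod 8 one binary digit at a time, which is the point of that calculation (Pohlig-Hellman). Here's a sketch with numbers of my own choosing (p = 41, so 8 divides p - 1 = 40, and alpha = 6 is a primitive root), not the book's p. 205 example:

```python
# Recovering x mod 8 bit by bit, in the spirit of x = x0 + 2*x1 + 4*x2 (mod 8).
# Toy choices of my own: p = 41 (so 8 divides p - 1 = 40), alpha = 6 a
# primitive root mod 41. Each test value is +1 or -1 because
# alpha^((p-1)/2) = -1 for a primitive root, so it reveals one bit of x.

p, alpha = 41, 6
x_secret = 11                    # pretend this is unknown
beta = pow(alpha, x_secret, p)   # the public value alpha^x mod p

x = 0
beta_i = beta
for i in range(3):                                 # recover bits x0, x1, x2
    test = pow(beta_i, (p - 1) // 2**(i + 1), p)   # equals +1 or -1 (= p-1)
    bit = 0 if test == 1 else 1
    x += bit * 2**i
    # strip off the recovered bit: beta_i <- beta_i * alpha^(-bit * 2^i)
    beta_i = (beta_i * pow(alpha, -(bit * 2**i), p)) % p  # Python 3.8+ inverse

assert x == x_secret % 8     # x0 + 2*x1 + 4*x2 is exactly x mod 8
```

So the number of xi's is forced by the power of 2 dividing p - 1, and each xi costs only one modular exponentiation, which is why the book bothers with the expansion at all.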

Tuesday, October 26, 2010

Section 6.5-6.7, 7.1 Due 27 Oct 2010

1. I didn't understand discrete logarithms. I thought that log functions were continuous. I'm guessing the log becomes discrete when we use it as such: we are only concerned with integers and with the log taken modulo something. I don't understand the process of finding discrete logs, and consequently I don't understand why finding them is hard. Not to say that I see it as easy. Is there a reason that they are hard other than the fact that in cryptography we will probably be working with large primes (as in RSA)?
2. Section 6.7 was interesting: it showed public key cryptosystems essentially being reduced to something akin to group theory, which made clear how and why RSA is just one example of a public key cryptosystem. The text also gave an interesting modern-day example of turning RSA backward and using it (treaties between nations).
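On why discrete logs are hard: the "discrete" part just means we only ask for integer exponents, and the obvious algorithm is to try every exponent in turn. A sketch with a toy prime of my own (p = 101):

```python
# A naive discrete log: try every exponent until alpha^x hits beta.
# Fine for a toy prime; hopeless when p has hundreds of digits, which is
# the entire source of security. (p = 101 and the target are my own toys.)

def discrete_log(alpha, beta, p):
    """Smallest x >= 0 with alpha^x = beta (mod p), by brute force."""
    value = 1
    for x in range(p - 1):
        if value == beta:
            return x
        value = (value * alpha) % p    # step from alpha^x to alpha^(x+1)
    return None                        # beta not a power of alpha

p = 101
x = discrete_log(2, pow(2, 70, 101), p)   # recover the exponent 70
```

For a prime of cryptographic size this loop would take longer than the age of the universe; the cleverer algorithms in the chapter are faster than brute force, but still not fast enough for well-chosen large p, which is the point.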

Sunday, October 24, 2010

Section 6.4.1 Due 25 Oct 2010

1. I got lost at the part where the book says to find the linear dependencies mod 2 among the rows produced from the exponents of the prime factors in each congruence relation. After further analysis, it seems like these are just sets of rows whose entries add up to 0 mod 2. However, the book has '1st + 5th + 6th' on the left; I'm guessing those are the rows that were added up, but I can't tell. I also don't understand why they were looking for these specific characteristics among the rows.
2. I sort of understand the process behind the quadratic sieve (except for small details, as mentioned). I see the basic reason behind the sieve (trying to find a nontrivial factor of n). It just seems a little arbitrary still; that is to say, it seems like some of the steps are being pulled out of nowhere. I think that's part of what makes some of this material hard to understand (not just for me, but for everyone in general): we spend so much thinking power trying to determine where the author is coming from and the reasons behind certain steps.
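On the "linear dependencies mod 2" step: each row is the vector of exponents of the factor-base primes in one relation, and rows summing to the zero vector mod 2 mean that multiplying those relations together gives all-even exponents, i.e., a perfect square. That yields x^2 = y^2 (mod n), and gcd(x - y, n) then has a chance of being a nontrivial factor. A tiny version with two classic toy relations (n = 1649 = 17 * 97), not the book's numbers:

```python
# The "dependency mod 2" step in miniature. Relations (classic toy example):
#   41^2 = 32  = 2^5        (mod 1649)  -> exponent vector (5, 0)
#   43^2 = 200 = 2^3 * 5^2  (mod 1649)  -> exponent vector (3, 2)
# The vectors sum to (0, 0) mod 2, so the product of the right sides has
# all-even exponents: it is a perfect square.

from math import gcd, isqrt

n = 1649
rows = [(5, 0), (3, 2)]                       # exponents of primes 2 and 5
col_sums = [sum(col) % 2 for col in zip(*rows)]
assert col_sums == [0, 0]                     # the rows are dependent mod 2

x = (41 * 43) % n                  # combine the left-hand sides
y = isqrt(32 * 200)                # square root of the combined right side
assert y * y == 32 * 200           # really is a perfect square (80^2)
factor = gcd(x - y, n)             # hopefully a nontrivial factor of n
```

Here gcd(114 - 80, 1649) = 17, a nontrivial factor. So the "specific characteristic" being hunted is exactly squareness of the combined relation, which is what makes the difference-of-squares factoring trick fire.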

Friday, October 22, 2010

Section 6.4 Due 22 Oct 2010

1. The p-1 Factoring Algorithm is the hardest part for me to understand. It seems somewhat arbitrary, and I think that's why it's hard to understand exactly how to apply it. It says to choose a bound B, but that seems like such an arbitrary number. For example, if I choose a=2, then a bound (as far as I understand it) could be 3? But it seems like those two choices won't do me any good. It feels like there should be more guidelines for choosing B. Other than that, the algorithm is just a loop of calculations.
2. The most interesting part was the very first part of the reading assignment. It was nice to see something reduced so simply back to fundamental algebra (in this case, reducing factoring to a difference-of-squares idea). Then all that's left to do is run the loop computing n + 1^2, n + 2^2, and so forth, checking whether each result is a perfect square. However, as the text points out, this could take a long time for large numbers.
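Here is the p-1 algorithm as a sketch, with toy choices of my own (n = 29 * 53 = 1537, a = 2, B = 7). The choice of B really is a gamble: the method works when some prime p dividing n has p - 1 built only from primes up to B (here 29 - 1 = 28 = 2^2 * 7), and misses primes where it doesn't (53 - 1 = 52 has the factor 13 > B), which is why the gcd singles out 29:

```python
# The p-1 method: one loop and a gcd. Toy choices of my own:
# n = 29 * 53, a = 2, B = 7. It succeeds because 29 - 1 = 28 = 2^2 * 7
# is "B-smooth" (all prime factors <= 7), while 53 - 1 = 52 has the
# factor 13 > B, so only the prime 29 gets caught.

from math import gcd

def pminus1(n, a=2, B=7):
    for j in range(2, B + 1):      # builds a <- a^(B!) mod n, step by step
        a = pow(a, j, n)
    d = gcd(a - 1, n)              # p | a - 1 whenever (p-1) divides B!
    return d if 1 < d < n else None

factor = pminus1(29 * 53)
```

If the gcd comes back as 1, the usual advice is to raise B and try again; if it comes back as n, try a different a. That's about all the guideline there is, which is why B feels arbitrary.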
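And the difference-of-squares loop, as I understand it: test whether n + 1^2, n + 2^2, ... is ever a perfect square, since n + b^2 = a^2 gives n = (a - b)(a + b). The n below (295927 = 541 * 547) is a toy example of my own; the loop finishes quickly here because the two factors are close together, and drags on when they are far apart:

```python
# Fermat's difference-of-squares factoring: look for b with n + b^2 a
# perfect square. Then n = a^2 - b^2 = (a - b)(a + b).
# (n = 295927 = 541 * 547 is my own toy example.)

from math import isqrt

def fermat_factor(n):
    b = 1
    while True:
        a = isqrt(n + b * b)
        if a * a == n + b * b:       # found n + b^2 = a^2
            return a - b, a + b      # the two factors of n
        b += 1

p, q = fermat_factor(295927)         # hits at b = 3: n + 9 = 544^2
```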

Tuesday, October 19, 2010

Section 6.3 Due 20 Oct 2010

1. I understand the first two subheadings just fine (The Basic Principle and Fermat's Primality Test). I don't understand the Miller-Rabin Primality Test very well, though. From what I understand, however, it's basically a repeated-squaring algorithm: you take some random integer a, raise it to the odd part of n-1, and keep squaring the result. You keep working with the new congruences until you arrive at some number that is in fact congruent to 1 (mod n); having arrived thus (without hitting -1 first?), n can be declared composite. Or you will eventually get to something congruent to -1? My question is, will you always arrive at 1 or -1?
2. On a minor note, what are "pseudoprimes" and "strong pseudoprimes" and the like? Also, these tests always seem to result in vague statements as to the probability that a number is prime. Are there any deterministic tests for large primes?
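Here's the Miller-Rabin test as a minimal sketch of my own (not the book's exact presentation), to pin down the sequence of squarings: write n - 1 = 2^k * m with m odd, start from a^m, and square repeatedly. If the sequence never passes through -1 on its way to 1 (and doesn't start at 1), n is definitely composite; otherwise n is only "probably prime" for that witness a:

```python
# Miller-Rabin for a single witness a (my own minimal version).
# True  = n is probably prime for this witness
# False = n is definitely composite

def miller_rabin(n, a):
    m, k = n - 1, 0
    while m % 2 == 0:           # write n - 1 = 2^k * m with m odd
        m //= 2
        k += 1
    x = pow(a, m, n)            # the starting value a^m mod n
    if x == 1 or x == n - 1:    # already 1, or -1: looks prime
        return True
    for _ in range(k - 1):      # square repeatedly: a^(2m), a^(4m), ...
        x = (x * x) % n
        if x == n - 1:          # passed through -1: looks prime
            return True
    return False                # reached 1 (or the end) without -1: composite
```

On the terminology question: the Carmichael number 561 fools Fermat's test base 2 but not this one; "pseudoprimes" and "strong pseudoprimes" are exactly the composites that slip past the Fermat test and the Miller-Rabin test, respectively, for a given base. (And yes, deterministic tests exist, though the probabilistic ones are what's used in practice for speed.)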

Monday, October 18, 2010

Section 3.10 Due 18 Oct 2010

1. The last example given in the assigned section was the hardest part for me to understand. This is primarily because of property (2) on page 91. I don't understand how n can be congruent to 3 or 5 or 1 or 7 (mod n). It seems to me that n would always be congruent to 0. I imagine that there is some typo in the book, but I don't know what it would be or how to fix it.
2. Also, it is interesting (and difficult) to understand how you can mathematically equate a symbol like (2/3211) to +1 or -1. At first I thought they were just symbols used to stand for one another, but then they started being used in mathematical calculations. I'm just not quite sure I understand how (2/3211) can legitimately be said to equal plus or minus 1.
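The resolution is that (2/3211) isn't the fraction two over 3211: it's the Jacobi symbol, a function of the pair (a, n) that is defined to take only the values 1 and -1 (or 0 when gcd(a, n) > 1), which is why it can legitimately sit inside equations. A standard sketch of computing it (my own code, using the (2/n) rule and quadratic reciprocity):

```python
# Jacobi symbol (a/n) for odd n > 0, computed without any factoring,
# using the (2/n) rule and the quadratic reciprocity flip.

def jacobi(a, n):
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2:
            a //= 2                # (2/n) = -1 exactly when n = 3, 5 (mod 8)
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # quadratic reciprocity: flip the symbol,
        if a % 4 == 3 and n % 4 == 3:
            result = -result       # with a sign change if both are 3 (mod 4)
        a %= n
    return result if n == 1 else 0   # 0 means gcd(a, n) > 1
```

For instance jacobi(2, 3211) comes out to -1, since 3211 = 3 (mod 8): the "number" (2/3211) is just shorthand for that computed value.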

Friday, October 15, 2010

Section 3.9 Due 15 Oct 2010

1. The section was pretty straightforward. It seemed to basically give a straightforward algorithm for finding square roots given a specific modulus. The hard part came toward the very end, when it started to tie everything back to cryptography, specifically to a large modulus n and its prime factors. I guess I'm having trouble connecting all the pieces, especially just seeing the big picture of where each number is coming from.
2. With all the methods being introduced through the reading that make it easier to factor n and so forth, I must admit that I'm questioning the security of RSA. That is to say, I know full well that it IS VERY secure, but with all the ways we are discussing to crack RSA, I'm wondering what precautions are taken to ensure security. For instance, do the designers of the keys just ensure that the conditions necessary to find roots mod n are not met, so that an attacker can't find a root and consequently can't factor n?
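The simplest case of the root-finding algorithm, as a sanity check with toy numbers of my own (p = 23, which is 3 mod 4): a square root of a quadratic residue y mod p is just y^((p+1)/4), a single modular exponentiation.

```python
# Square roots mod a prime p with p = 3 (mod 4): root = y^((p+1)/4) mod p.
# Toy numbers of my own: p = 23, y = 7^2 mod 23.

p = 23                      # 23 = 3 (mod 4)
y = pow(7, 2, p)            # y = 49 mod 23 = 3, a known square
root = pow(y, (p + 1) // 4, p)

assert pow(root, 2, p) == y     # root^2 = y (mod p)
assert root in (7, p - 7)       # the two square roots are +-7
```

The cryptographic tie-in is that doing the same thing modulo a composite n = pq seems to require knowing p and q separately; in fact, being able to extract square roots mod n turns out to be as hard as factoring n, which is the connection the end of the section is driving at.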

Friedlander Brief History of Primes

1. The lecture was very interesting. The best part, I think, was that Dr. Friedlander catered his lecture to undergraduates, especially those with little mathematical background. That was a very wise move on the part of everyone involved. The most difficult part of the lecture was his use of the pi function. I couldn't tell whether that was the function that takes a number as input and outputs the number of primes up to that number. I say this because later I swear he said that the number of primes up to x is equal to pi(x) - pi(x^(1/2)) + 1.
2. The most interesting parts for me were when he briefly went over some of the interesting characteristics of prime numbers as a whole: for instance, the fact that there are pairs of primes occurring as consecutive odd numbers, and that there is no real established pattern as to when that happens. The categories of primes he mentioned, such as Mersenne primes, were also interesting. The method of singling out the primes (the Sieve of Eratosthenes) was also very intriguing.
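For reference, the pi function and the sieve from the lecture in a few lines (my own minimal version): pi(x) counts the primes up to x, and the Sieve of Eratosthenes finds them by crossing off multiples of each prime up to sqrt(x).

```python
# Sieve of Eratosthenes, then pi(x) = number of primes <= x.

def primes_up_to(x):
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(x**0.5) + 1):
        if is_prime[n]:
            for m in range(n * n, x + 1, n):   # cross off multiples of n
                is_prime[m] = False
    return [n for n in range(2, x + 1) if is_prime[n]]

def pi(x):
    return len(primes_up_to(x))
```

With these definitions, pi(100) - pi(100^(1/2)) + 1 = 25 - 4 + 1 = 22, which counts the primes between 10 and 100 together with the number 1, i.e., the integers up to 100 that survive sieving by the primes up to 10. So the formula from the lecture is plausibly a sieve statement of that kind rather than pi(x) itself (this reading is my guess, not something from the lecture notes).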

Tuesday, October 12, 2010

Section 6.2 Due 13 Oct 2010

1. I don't know how important it is to understand the very first part of the assigned section, but I didn't really understand how knowing m/4 characters (beginning or ending) would help to find factors of n. The text referred the reader to further reading. As I say, I don't know if we need to worry about that, but either way I didn't understand the reasoning. Another difficult part of the reading for me wasn't necessarily arithmetic related; it was more conceptual. A theorem says that if d < (1/3)n^(1/4), then d can be calculated. However, I ask myself: if we know that d is less than some number, it sounds like we already know the number. I obviously know there is a mistake in my understanding, but I just don't know what it is.
2. Reflecting on the reading material, I focus primarily on the sub-section on Timing Attacks. The differences in calculation times on a computer have got to be so minuscule as to call into question the mechanics of the attack. Obviously it works, but I'm just now realizing how precise the measurements of the time intervals must be.
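On the d < (1/3)n^(1/4) theorem: the point is not that we know d, but that the bound guarantees the fraction k/d (where ed = 1 + k*phi) shows up among the continued fraction convergents of the public number e/n, so an attacker can simply try each convergent in turn (Wiener's attack). A sketch with a deliberately weak key I constructed myself (p = 10007, q = 10009, d = 23, comfortably below (1/3)n^(1/4) = 33):

```python
# Wiener's continued-fraction attack on RSA with a small d.
# The weak key here is entirely my own construction for illustration.

from math import isqrt

p, q, d = 10007, 10009, 23
n, phi = p * q, (p - 1) * (q - 1)
e = pow(d, -1, phi)              # the public exponent paired with small d

def convergents(num, den):
    """Continued fraction convergents h/k of num/den, in order."""
    h0, h1, k0, k1 = 0, 1, 1, 0
    while den:
        a, num, den = num // den, den, num % den
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        yield h1, k1

recovered = None
for k, d_cand in convergents(e, n):        # try each convergent k/d_cand
    if k == 0 or (e * d_cand - 1) % k:
        continue
    phi_cand = (e * d_cand - 1) // k       # implied phi, from e*d = 1 + k*phi
    if not 0 < phi_cand < n:
        continue
    b = n - phi_cand + 1                   # equals p + q if the guess is right
    disc = b * b - 4 * n                   # discriminant of x^2 - b*x + n
    if disc >= 0 and isqrt(disc) ** 2 == disc:
        recovered = d_cand                 # quadratic has integer roots p, q
        break
```

Each candidate is checked by asking whether the implied phi makes x^2 - (n - phi + 1)x + n factor with integer roots; when it does, those roots are p and q, and the private d falls out. So the theorem is a warning to key designers, not information an attacker already has.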

Sunday, October 10, 2010

Section 3.12 Due 11 Oct 2010

1. At first the difficult part for me was understanding the initial example of approximating pi using continued fractions, but after a second reading and closer examination, it came together just fine. However, there is still this part toward the end of the section where they approximate 12345/11111 using continued fractions. They have a list of fractions obtained by applying the same procedure, and I'm not sure exactly how they get that list. I'm assuming they get it by computing 1, 1+1/9, 1+1/(9+1/246), etc.
2. My remaining question is: if we are given a continued fraction, is there a quick way of determining what rational number it represents without having to arduously type the whole thing into a calculator?
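Going backward turns out to be the easy direction: just fold the continued fraction up from the right. Here's a sketch of both directions (my own code), applied to the 12345/11111 example:

```python
# Continued fractions both ways: expand a rational into its digit list,
# and collapse a digit list back into a single rational.

from fractions import Fraction

def to_cf(num, den):
    """Continued fraction digits [a0; a1, a2, ...] of num/den."""
    digits = []
    while den:
        a, (num, den) = num // den, (den, num % den)   # Euclid's algorithm
        digits.append(a)
    return digits

def from_cf(digits):
    """Fold the digits back up from the right into one fraction."""
    x = Fraction(digits[-1])
    for a in reversed(digits[:-1]):
        x = a + 1 / x
    return x

cf = to_cf(12345, 11111)    # [1, 9, 246, 1, 4]
```

So the book's list really is 1, 1 + 1/9, 1 + 1/(9 + 1/246), and so on (the successive truncations of [1; 9, 246, 1, 4]), and from_cf recovers the exact rational from any such list without retyping it term by term.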

Thursday, October 7, 2010

Section 6.1 Due 8 Oct 2010

1. The only really difficult thing about this section was seeing the arithmetic done so quickly with such big numbers. I think if I were to see someone do it in person it might make a little more sense. Also, the explanation of why m = c^d (mod n) wasn't too bad, but it was still a little hard to understand.
2. The other thing I thought about a lot was how this can still work even when we are working modulo n. My initial instinct says that this fact would open up the possibilities for what values variables like c and d can hold, and it seems like there could be a few ways the encryption could come back to haunt you because of working modulo n. Also, the text states (but leaves as an exercise) that even if gcd(m,n) does not equal 1, Bob can still recover the message. How does that work?
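A toy RSA round trip, with small numbers of my own (nothing like real key sizes), just to watch m = c^d (mod n) undo c = m^e (mod n):

```python
# Toy RSA: keys, encryption, decryption. All numbers chosen for illustration.

p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)   # n = 3233, phi = 3120
e = 17                              # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)                 # private exponent: e*d = 1 (mod phi)

m = 65                              # the message, encoded as a number < n
c = pow(m, e, n)                    # encrypt: c = m^e mod n
assert pow(c, d, n) == m            # decrypt: c^d = m (mod n)
```

As for gcd(m, n) not equal to 1: decryption still works (try m = 61 above), because m^(ed) = m holds separately mod p and mod q, even when p divides m, and the Chinese Remainder Theorem glues the two congruences back into one mod n; that is the idea behind the exercise the text mentions.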

Tuesday, October 5, 2010

Section 3.6-3.7 Due 6 October 2010

1. The concepts covered aren't terribly new to me; I've encountered them before in previous classes. But we didn't cover them in much depth then; we were given more of a taste of things to come. With that in mind, I have the hardest time just understanding the proofs of the concepts. As far as concepts go, though, I did have a hard time understanding the ones that tie the smaller concepts together, for example the "Basic Principle."
2. Upon reflection I can see how this modular arithmetic can come in handy in cryptography. The tools explained in these two sections can be adequately categorized as tools for dealing with very large numbers by bringing them down to a smaller scale. It's almost as if we are finding loopholes in large numbers by using modular arithmetic.

Sunday, October 3, 2010

Section 3.4-3.5 Due 4 October 2010

1. These sections actually aren't terribly difficult, as I've taken Math 371 (Abstract Algebra). Now that I've said that, let me clarify by saying that it will be good to review some of this material. The most difficult part will be dealing with large numbers; however, the Euclidean Algorithm and other similar tools will make that process easier. There is one part that stood out to me in difficulty: "Multiplication by i gives k=(a-b)i (mod n). Substituting back into x=(b+nk), then reducing mod mn, gives the answer." I would like to see more about that last step, the substituting back in part.
2. Upon further reflection, I see more and more why computers work in binary. Modular exponentiation works really fast simply by using powers of 2. Although, it is pretty funny to read that they never had to work with a number larger than 788^2. This is true, but they say it as if 788^2 isn't a big number; in anyone's book, I think it is. But I understand the context: it isn't as big as some of the numbers they would have been working with had they not taken this detour through binary.
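The substitution step from the quote, spelled out with toy numbers of my own: to solve x = a (mod m) and x = b (mod n), write x = b + nk; then b + nk = a (mod m) gives k = (a - b) * n^(-1) (mod m), and substituting that k back in and reducing mod mn gives the answer.

```python
# Chinese Remainder Theorem substitution, with toy numbers of my own:
# solve x = 3 (mod 7) and x = 5 (mod 9).

m, n = 7, 9          # the two moduli, gcd(m, n) = 1
a, b = 3, 5          # want x = a (mod m) and x = b (mod n)

inv = pow(n, -1, m)              # n^(-1) mod m (Python 3.8+)
k = ((a - b) * inv) % m          # from b + n*k = a (mod m)
x = (b + n * k) % (m * n)        # substitute k back in, reduce mod m*n

assert x % m == a and x % n == b
```

Here inv = 4, k = 6, and x = 5 + 9*6 = 59, which indeed satisfies both congruences; "substituting back in" is just that last line.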
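And the binary exponentiation itself, as a minimal sketch of my own. If I remember the book's example right, it was 2^1234 mod 789, which is why nothing ever exceeds 788^2: every intermediate value is reduced below 789, so every product of two of them is at most 788^2.

```python
# Square-and-multiply: compute base^exp mod m using the binary digits
# of exp, with only about log2(exp) squarings.

def power_mod(base, exp, mod):
    result = 1
    square = base % mod
    while exp:
        if exp & 1:                        # current binary digit of exp is 1
            result = (result * square) % mod
        square = (square * square) % mod   # next power: base^(2^i)
        exp >>= 1
    return result
```

Every multiplication involves two numbers already reduced mod 789, so the biggest product ever formed is 788^2, exactly the book's point about why the binary detour pays off.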