Tuesday, December 7, 2010

Clay Math Institute Lecture 7 Dec 2010

1. The lecture was primarily on prime numbers. There wasn't anything TOO new in it. He talked mostly about the mystery behind some patterns contained within prime numbers and then spent the remainder of the time on the RSA cryptosystem. One thing that is hard for me to fathom is how simple prime numbers are, yet how hard the research on them is. For instance, why is it so hard to find the next prime number? Isn't it just the next non-crossed-off number in the Sieve of Eratosthenes? (I tried to answer my own question with a little experiment below.) And why is it so hard to explain why factoring large numbers is hard? I know it's hard, but why is it hard to explain?
2. Something rather interesting about the presentation was contained in one sentence the presenter used: "Prime numbers are like atoms." He was saying that every number that isn't prime can be broken down into its key component numbers that ARE prime. I had never thought about prime numbers like this, but now that it's been pointed out, I like the comparison.
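
About my sieve question: the sketch below (mine, not the lecturer's) works great up to a few million, and that seems to be the catch. Crossing off multiples requires a list of every number up to the target, which is hopeless when the "next prime" is supposed to have hundreds of digits.

```python
def sieve(limit):
    """Sieve of Eratosthenes: all primes up to limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False   # cross off multiples of p
    return [n for n, flag in enumerate(is_prime) if flag]

print(sieve(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```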

Section 16.5 Due 8 Dec 2010

1. The reading was simply an application of elliptic curves combined with cryptosystems we've already covered in the course. There wasn't anything TOO difficult about it. There was one thing, however: I didn't really understand WHY the verification step works in the elliptic curve version of ElGamal digital signatures. (I tried to work out the algebra below.) Also, I think I'm just over-analyzing the reading, but it is very explicit in pointing out that k^-1*k is not 1 but is only congruent to 1. I don't understand why they're pointing that out here; I thought that was made clear at the beginning of the book. As such, I'm worried I'm missing something.
2. The thing I thought about most in relation to this reading was why we would use elliptic curves in a situation rather than one of the original cryptosystems we've discussed. That is, why use the elliptic curve version of ElGamal rather than just plain old ElGamal? The question is, what advantages does one have over the other?
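
Here is my attempt at the verification algebra, using made-up notation that may differ from the book's: a public point $A$ of order $N$ on the curve, secret integer $a$, public key $B = aA$; a signature on $m$ is $(R, s)$ with $R = kA$ and $s \equiv k^{-1}(m - a x_R) \pmod{N}$, where $x_R$ is the x-coordinate of $R$. The check is whether $x_R B + sR = mA$:

$$sR = (sk)A \quad\text{with}\quad sk \equiv m - a x_R \pmod{N},$$

and because $A$ has order $N$ (that is, $NA = \infty$), scalars that agree mod $N$ give the same point. Hence

$$x_R B + sR = a x_R A + (m - a x_R)A = mA.$$

I think this also answers my own second question: $k^{-1}k$ is some huge integer, not 1, but multiplying $A$ by it gives the same point as multiplying by 1 precisely because multiples of $A$ only matter mod $N$.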

Thursday, December 2, 2010

Section 16.3 Due 3 Dec 2010

1. Two things I don't really understand. Toward the beginning of the reading, it uses the denominator of the slope between two points to find the gcd between it and the number being factored. I don't think I really understand WHY they use the denominator of the slope. After reading it through a few times, it seems like they did this because they were trying to find the inverse of 1770. Nonetheless, I'm not sure how this leads to finding the factorization of 2773. (I tried to reproduce the idea in the sketch below.) Then later in the reading, toward the end, they mention that this method is similar to computing gcd(1,n), gcd(2,n), and so forth. It's probably not a big deal, but I don't see how they are related.
2. It's not too difficult, but I'm interested in knowing more about the difference between "smooth" and "B-smooth." Again, it's probably not a big deal, but I have this thing with somewhat ambiguous definitions. Who's to say what counts as "only small prime factors" and what doesn't? Where is the line drawn?
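
After typing this up, I tried to reproduce the denominator trick for myself. My understanding, which may be off: the point-addition formula needs to invert the slope's denominator mod n, the inverse exists only when the denominator is relatively prime to n, and the "failure" case is the whole payoff, because the gcd that blocks the inversion is a factor of n.

```python
from math import gcd

n = 2773   # the number being factored in the reading (2773 = 47 * 59)

# Adding curve points mod n means a slope (y2 - y1)/(x2 - x1), so we
# need the denominator's inverse mod n, which exists iff gcd = 1.
for denom in (5, 1770):
    g = gcd(denom, n)
    if g == 1:
        inv = pow(denom, -1, n)    # fine; the computation moves on
        print(denom, "is invertible mod", n)
    else:
        print(denom, "is NOT invertible; gcd =", g, "divides", n)
# gcd(1770, 2773) = 59: the failed inversion of 1770 is exactly
# what hands over a nontrivial factor of 2773.
```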

Tuesday, November 30, 2010

Section 16.2 Due 1 Dec 2010

1. The thing I don't quite understand is why we would want to mod an elliptic curve. It seems to me that gives us fewer points on the curve to work with. In cryptography, don't we want as many options as possible, leaving more room for error on the part of the attacker? In short, what advantage does mod-ing an elliptic curve give us? Perhaps this won't be made clear until we start examining the cryptographic applications of elliptic curves. (I did count the points on a tiny example below.)
2. I'm starting to guess that, as far as cryptography applications go, elliptic curves may be useful once we DO mod a point, because then it will be harder to undo the mod, thus increasing the security. One other thing: I got lost on why they used 2773 as an example when it isn't prime. How important is it for p to be prime when modding? What goes wrong when p is not prime?
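
My tiny counting experiment, with a curve I made up (not the book's). My takeaway, possibly premature: the mod makes the set of points finite, which seems to be exactly what a cryptosystem needs for a discrete-log-style problem.

```python
# Points of y^2 = x^3 + 2x + 3 over Z_7 (a made-up toy example).
p, a, b = 7, 2, 3
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + a * x + b)) % p == 0]
print(points)                 # [(2, 1), (2, 6), (3, 1), (3, 6), (6, 0)]
print(len(points), "affine points, plus a point at infinity")
```

As for 2773: my guess is that the addition formulas divide by things, and division mod a composite number can fail, which I suspect is the very feature 16.3 is going to exploit.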

Sunday, November 28, 2010

Section 16.1 Due 29 Nov 2010

1. The hardest part for me to understand was the first example they give us to examine. It was hard to follow where all the numbers were coming from and why they were substituting certain expressions for specific variables in various other expressions. I also didn't understand why they could say that the sum of the three roots is minus the coefficient of x^2 (though I've since worked that one out; see below). Then later in the reading it suggests that -4 is a double root, yet earlier it said, as a "technical point," that we wouldn't worry about multiple roots until 16.3.
2. When doing these readings, I like to predict how the new ideas and techniques being introduced are going to be used in cryptography. I'm guessing that somehow the author of a certain key is going to use a made-up equation (made up according to certain guidelines). Then the author is going to pick two (possibly random?) points and use them to find the third point on the line through P1 and P2, as described in the text. The coordinates of the third point will then be used to create an encryption and decryption key. I don't know HOW the coordinates will be used (unless the original two points are chosen so that the third point is composed of two large random numbers, in which case we would be back in the realm of RSA), but I'm pretty sure they will be used to create E[k] and D[k]. I'm guessing all this purely from the ideas we've discussed on public cryptosystems. The third point is easy to find only for the author of the equation of the elliptic curve; it would be nigh impossible for a random citizen to find the original two points based purely on the third point. This leaves me with the question, "What components will be made public? P3? P2 and P1?" My guess is P3; then the author can use P1 and P2 to create a decryption key.
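
The one piece I did work out on scratch paper afterward is the "sum of the roots is minus the coefficient of x^2" claim; it's just expanding the factored form:

$$(x - r_1)(x - r_2)(x - r_3) = x^3 - (r_1 + r_2 + r_3)x^2 + (r_1 r_2 + r_1 r_3 + r_2 r_3)x - r_1 r_2 r_3,$$

so matching against $x^3 + ax^2 + bx + c$ forces $r_1 + r_2 + r_3 = -a$.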

Sunday, November 21, 2010

Shor's Algorithm and Section 9.3 Due 22 Nov 2010

1. It was a very well written article. It hardly used any math (except periods and mods), but there were still a few issues I had with it. Finding phi(n) corresponds to finding a multiple of the period mod some N. However, how does one choose N? I'm still having trouble figuring that out. Furthermore, once one DOES choose N and finds the period, how can one be sure of the corresponding phi(n)? For example, in the article they let N=15, found the period to be 6 and phi(n)=12, and explained that, sure enough, 6 divides 12. When I read this, I thought, "but 6 also divides 24 and 36 and 66, etc." How can I find phi(n) based solely on the period?
2. There was a part explaining something called repeated squaring. It seems to me that's the same as the exponentiation of a number mod n that we've learned (I checked this below). On another note, or maybe this would best fit under the previous item, it seems like this article would fit better under the section that examined RSA than under quantum cryptosystems.
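
To check my hunch about repeated squaring, and to poke at the period question, I sketched the classical side myself. The article's numbers didn't line up for me, so these are toy values I can verify by hand. The surprise was that you don't seem to need phi(n) at all once you have the period:

```python
from math import gcd

def power_mod(a, e, n):
    """Repeated squaring: square and reduce mod n, multiplying in the
    squares that match the 1-bits of e. As far as I can tell, this is
    exactly the exponentiation-mod-n trick we already learned."""
    result, a = 1, a % n
    while e:
        if e & 1:
            result = (result * a) % n
        a = (a * a) % n
        e >>= 1
    return result

# Shor's classical post-processing: given the period r of a^x mod N
# (the quantum computer's job), an even r factors N directly.
N, a = 15, 7
r = next(r for r in range(1, N) if power_mod(a, r, N) == 1)   # r = 4
half = power_mod(a, r // 2, N)                                # 7^2 = 4 mod 15
print(gcd(half - 1, N), gcd(half + 1, N))                     # 3 5
```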

Tuesday, November 16, 2010

Section 14.1, 14.2 Due 17 Nov 2010

1. I've read the sections twice over and am still having trouble understanding how exactly Peggy can use square roots mod n to show Victor that she has factored n. How is showing Victor x[1] or x[2] the same as showing him that she has factored n? Isn't she giving him enough information to actually deduce what s is, or for that matter what the factors of n are? I think I'm getting lost in the labeling of variables somehow. (The one piece I could verify numerically is below.)
2. If Peggy were to mis-choose which numbers to send to Victor as proof, wouldn't there be worse consequences than Victor knowing that Peggy didn't find a factor of n? Wouldn't Victor know that for some reason Peggy was just shooting in the dark, hoping to find some secret information or get something out of Victor? How does this apply to the example of crooks trying to steal sensitive information?
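
The one piece I did manage to verify numerically is the underlying principle, which I take to be: two square roots of the same number mod n that aren't negatives of each other hand you a factor of n. If that's right, it would explain why producing square roots counts as knowing the factorization. Toy numbers below are mine, not the book's:

```python
from math import gcd

n = 77                    # 7 * 11, standing in for the book's n
x1, x2 = 9, 2             # 9^2 = 81 = 4 (mod 77) and 2^2 = 4, yet 9 != +/-2
assert x1 * x1 % n == x2 * x2 % n

# x1^2 - x2^2 = (x1 - x2)(x1 + x2) = 0 (mod n), but neither factor
# is 0 mod n, so each one captures just one prime of n.
print(gcd(x1 - x2, n), gcd(x1 + x2, n))   # -> 7 11
```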

Sunday, November 14, 2010

Section 12.1, 12.2 Due 15 Nov 2010

1. I'm going to go back and re-read the sections. In general, the material is new enough that I'm still trying to sort it all out. As for things that were harder to understand than others, I don't really understand the algorithm given as an example of a threshold scheme. I think that, as usual, once I see it a second and third time (in class) and have a chance to sort it out as I'M writing it down, it will start to come to me. Specifically, I don't understand how the pairs/points (t,w) are combined in such a way as to reconstruct a message. (I took a stab at reconstructing one below.)
2. One thing that I'm becoming more and more interested in is applications. I'd like to see more applications of the threshold schemes. The book gives some really good examples (banks, inheritances, etc.), but I'm wondering if there are any others.
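
Since I learn these things by writing them out, here is my attempt at the (t, w) reconstruction in miniature. Everything here is my own toy setup (a tiny prime field and a made-up polynomial), not the book's example: the shares are points on a secret polynomial, and Lagrange interpolation at x = 0 recovers the constant term, which is the secret.

```python
p = 101                    # tiny prime field, purely for illustration
# Dealer's secret polynomial f(x) = 42 + 7x + 3x^2 (degree t-1 = 2):
# the secret is f(0) = 42, and any t = 3 shares rebuild it.
f = lambda x: (42 + 7 * x + 3 * x * x) % p
shares = [(x, f(x)) for x in (1, 2, 3, 4, 5)]

def reconstruct(three_shares):
    """Lagrange interpolation at x = 0 over Z_p (Python 3.8+ for pow)."""
    total = 0
    for xi, yi in three_shares:
        num = den = 1
        for xj, _ in three_shares:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

print(reconstruct(shares[:3]), reconstruct(shares[2:]))  # 42 both times
```

Any three of the five points give back 42, while two points alone pin down nothing, since many parabolas pass through them.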

Thursday, November 11, 2010

Due 12 Nov 2010

I think the most important ideas to understand are the basics behind RSA (since we spent so much time on it, and RSA-related problems can still be found on the homework). I also think it will be important to know the ideas behind the different methods of factoring large numbers that are the product of two large primes; cracking RSA messages centers on factoring n = pq. Related to these are the discrete log problems.

I need to have a better understanding of the most recent material we've covered in class. It's harder to understand because I haven't had as many opportunities to put it into practice. Near the top of the list of things I'm having trouble remembering is the ElGamal cryptosystem.

Also, I'm wondering how exactly we're going to be tested on some of these things. I know we won't need a computer, but there are some things on the list of things to know that I can't see how we can be tested on without a computer. For example, continued fractions.

Tuesday, November 9, 2010

Section 8.3, 9.5 Due 10 Nov 2010

1. The SHA-1 algorithm is unlike anything I've encountered in the course thus far. I thought DES was hard, but that was nothing! To get myself through the algorithm, I have to read it really slowly, oftentimes reading and re-reading portions of it. I guess this means not so much that the algorithm is hard to understand as that it is hard to follow. One thing in the back of my mind is the test and its relation to this algorithm. I'm just barely seeing it for the first time, and I'm wondering how I'm going to be expected to show my understanding of it in a testing situation.
2. One point of interest is the values held by H[0], H[1], ..., H[4]. They all seem to be loosely related; some are just reversals of others, for example. Were these values chosen for a reason? (I found a partial answer; see below.) Also, how do the specific values fit into the actual algorithm?
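
The partial answer, assuming I've copied the constants correctly from the standard:

```python
# SHA-1's initial chaining values. The first four just run the hex
# digits 0-f up and down (H[3] is H[0] written backward, and H[1]
# adds 8 to each hex digit of H[0]): conventional "nothing up my
# sleeve" numbers rather than anything mathematically deep.
H = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
print(" ".join(f"{h:08x}" for h in H))
```

As for how they fit in, my understanding is that they seed the five-word state that each message block then scrambles in turn.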

Sunday, November 7, 2010

Section 9.1-9.4 Due 8 Nov 2010

1. I don't quite understand how the added signature idea fits into RSA. That is to say, I understand RSA just fine, but I don't understand using RSA with the additional signature. I don't understand where all the different numbers are coming from (how they were computed, why they were computed, where they go, their significance). It seems to me that the signature is simply added onto the end or the beginning of the message before/after encryption, but I still don't understand exactly how this verifies the sender/recipient. (The toy run below helped me a little.)
2. The end of the reading describes how someone could make slight changes to an electronic document and how this could foil either or both parties involved in the process. I'd be more interested in the details behind this. Why exactly does this change the legal status of the contract? Doesn't the document still say essentially the same thing, except with a few spaces added (or whatever the case may be)? Also, if someone signs one document, how does changing the document mean that they essentially signed another one?
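
A toy run that helped me (tiny primes, my own numbers; I gather real signatures also hash the message first, which I'm glossing over). The signature isn't just pasted on: it's computed FROM the message with the private key, so anyone can check it with the public key, and tampering breaks it:

```python
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                       # Alice's public exponent
d = pow(e, -1, phi)          # Alice's private exponent

m = 1234
sig = pow(m, d, n)               # only Alice can produce this
print(pow(sig, e, n) == m)       # True: anyone can verify with (n, e)
print(pow(sig, e, n) == m + 1)   # False: an altered message fails
```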

Tuesday, November 2, 2010

Section 8.1-8.2 Due 2 Nov 2010

1. Two things I don't understand. I don't understand the proposition about finding discrete logs given only two messages whose hashes are equal. I got bogged down in some of those steps, especially because it says "Suppose . . ." Whenever it says something like that, I think, "I may suppose that, but does that make it true?" Also, the simple hash example they give involves XORing columns of a matrix. I don't understand how this is going to make h(m) unique. It seems to me that, working with a mod that is so small, there would be many different messages that map to the same XORed value. (I tested that suspicion below.)
2. Really the only thing I found interesting was that hash functions are used a lot in digital signatures. I would like to know more about how this works or how hash functions fit into that idea.
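
My collision test, using a miniature XOR hash of my own invention (far smaller than the book's example). With more messages than possible digests, the pigeonhole principle guarantees collisions:

```python
# Toy hash: split an 8-bit message into two 4-bit columns and XOR
# them, imitating the XOR-the-columns idea at doll-house scale.
# 256 messages, only 16 possible digests: collisions are unavoidable.
def toy_hash(m):
    return (m >> 4) ^ (m & 0xF)

buckets = {}
for m in range(256):
    buckets.setdefault(toy_hash(m), []).append(m)
print(max(len(v) for v in buckets.values()), "messages share one digest")  # 16
```

So my instinct seems right, at least for the toy version; I gather the book's example is meant to illustrate the structure, not to be collision-resistant.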

Sunday, October 31, 2010

Section 7.3-7.5 Due 1 Nov 2010

1. There were a lot of calculations that bogged me down in tonight's reading. It was hard to follow what Bob was calculating, what Alice was calculating, what Bob needed to calculate and when, etc. I also didn't understand the final proposition of 7.5; that is, I didn't quite understand the seemingly inverse relationship between the Decision Diffie-Hellman problem and ElGamal ciphertexts. There was also a part about the exchange of keys. I understood how the keys were determined by each individual party, but I didn't quite understand the logistics of how this keeps the key out of the hands of other parties. (Writing out who computes what, below, helped.)
2. One additional question I've thought about throughout the readings on discrete logs involves the differences between RSA and discrete-log systems. In other words, when is one system more desirable than the other? When are both systems equally desirable?
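
My trace of the key exchange, with tiny numbers of my own choosing: only g^a and g^b ever travel over the wire, and turning those into g^(ab) without holding a or b is exactly the hard problem.

```python
p, g = 23, 5          # public parameters everyone, including Eve, sees
a, b = 6, 15          # Alice's and Bob's private exponents, never sent

A = pow(g, a, p)      # Alice transmits g^a mod p
B = pow(g, b, p)      # Bob transmits g^b mod p

# Each side raises the other's public value to its own secret;
# both arrive at g^(ab) mod p. Eve holds only A and B.
assert pow(B, a, p) == pow(A, b, p)
print(pow(B, a, p))   # the shared key (2 with these toy numbers)
```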

Thursday, October 28, 2010

Section 7.2 Due 29 Oct 2010

1. The processes the text lays out aren't bad in themselves. However, I didn't understand HOW they were doing them; that is to say, I didn't understand where the text was pulling the numbers from. In some cases I saw the patterns, but in others I wasn't quite clear on the method behind the madness. For example, the second line on pg 205, where they raise 12 to the 20th power, and this was after a second reading of the processes. Also, why do they write x = x0 + 2x1 + 4x2 (mod 8)? The choices of the number of xi's to use and of the mod 8 seem arbitrary. (My best guess at the reason is below.)
2. The other issue I've been having with the reading is more of a mild curiosity. In all the processes for tomorrow's reading especially, they work (mod p). I thought that in cryptography (primarily in RSA) we would be working (mod n), where n is the product of two primes. My question is, when will we be working ONLY mod p as opposed to mod n?
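
My best guess on the x = x0 + 2x1 + 4x2 (mod 8) question, which could be wrong: the 8 isn't arbitrary but is the power of 2 dividing p - 1, and the xi are the binary digits of x mod 8, recovered one at a time. For the first bit, if $\beta \equiv \alpha^x \pmod{p}$ with $\alpha$ a primitive root, then $\alpha^{(p-1)/2} \equiv -1$, so

$$\beta^{(p-1)/2} \equiv \alpha^{x(p-1)/2} \equiv (-1)^{x} \equiv (-1)^{x_0} \pmod{p},$$

and one exponentiation reads off $x_0$. Replacing $\beta$ by $\beta\alpha^{-x_0}$ and using exponent $(p-1)/4$ gives $x_1$, then $(p-1)/8$ gives $x_2$. Only $x \bmod 8$ comes out of this, which would explain why the congruence is written mod 8.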

Tuesday, October 26, 2010

Section 6.5-6.7, 7.1 Due 27 Oct 2010

1. I didn't understand discrete logarithms. I thought that log functions were continuous. I'm guessing the log becomes discrete when we use it as such: we are only concerned with integers, and with powers taken modulo some number. I don't understand the process of finding discrete logs; consequently, I don't understand why finding them is hard. Not that I see it as easy. Is there a reason they are hard other than the fact that in cryptography we will probably be working with large primes (as in RSA)? (A tiny example that helped me is below.)
2. Section 6.7 was interesting: it read almost like public key cryptosystems being reduced to something akin to group theory. It nicely showed how and why RSA is just one example of a public key cryptosystem. The text also gave an interesting modern example of turning RSA backward and using it that way (treaties between nations).
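
The tiny example that helped me see the "discrete" part: asking for $L_2(9)$ mod 11 means solving $2^x \equiv 9 \pmod{11}$, and walking through the powers,

$$2^1, 2^2, 2^3, 2^4, 2^5, 2^6 \equiv 2, 4, 8, 5, 10, 9 \pmod{11},$$

so $L_2(9) = 6$. Easy at this size; as I understand it, the claim is that nothing dramatically better than this walk through the powers is known once p has hundreds of digits.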

Sunday, October 24, 2010

Section 6.4.1 Due 25 Oct 2010

1. I got lost at the part where the book says to find the linear dependencies mod 2 among the rows produced from the exponents of the prime factors in each congruence relation. After further analysis, it seems like these are just sets of rows whose entries add up to 0 mod 2. The book has "1st + 5th + 6th" on the left; I'm guessing those are the rows that were added up, but I can't tell. I also don't understand why they were looking for this specific characteristic among the rows. (My working theory is sketched below.)
2. I sort of understand the process behind the quadratic sieve (except for small details, as mentioned). I see the basic reason behind the sieve (trying to find a nontrivial factor of n). It just seems a little arbitrary still; that is to say, it seems like some of the steps are being pulled out of nowhere. I think that's part of what makes some of this material hard to understand (not just for me, but for everyone in general). We spend so much thinking power trying to determine where the author is coming from and the reasons behind certain steps.
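
My working theory on the rows, which may be off: each row records, mod 2, the exponents in one relation $x_i^2 \equiv 2^{a} 3^{b} 5^{c} \cdots \pmod{n}$, and a set of rows summing to the zero vector mod 2 means the product of those relations has all even exponents, i.e. is a perfect square. That would be why the book hunts for exactly that combination. In miniature (vectors made up by me):

```python
# Exponent vectors mod 2 for the primes (2, 3, 5), one per relation.
# The labels mimic the book's "1st + 5th + 6th" notation.
rows = {"1st": (1, 0, 1), "5th": (1, 1, 0), "6th": (0, 1, 1)}

total = [sum(col) % 2 for col in zip(*rows.values())]
print(total)  # [0, 0, 0]: multiplying these relations gives a square,
              # setting up the x^2 = y^2 (mod n) factoring trick
```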

Friday, October 22, 2010

Section 6.4 Due 22 Oct 2010

1. The p-1 factoring algorithm is the hardest part for me to understand. It seems somewhat arbitrary, and I think that's why it's hard to see exactly how to apply it. It says to choose a bound B, but that seems like such an arbitrary number. For example, if I choose a=2, then a bound (as far as I understand it) could be 3? But it seems like those two choices won't do me any good. It feels like there should be more guidelines for choosing B. Other than that, the algorithm is just a loop of calculations. (I tested my suspicion about small bounds below.)
2. The most interesting part was the very first part of the reading assignment. It was nice to see something reduced so simply back to fundamental algebra (in this case, reducing factoring down to a difference of squares). Then all that's left to do is run the loop computing n + 1^2, n + 2^2, and so forth, until a perfect square turns up. However, as the text points out, this could take a long time for large primes.
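
To get a feel for how the bound B enters, I coded the loop with toy numbers. My takeaway, which may be naive: B has to be at least as large as every prime power in p - 1 for some prime factor p of n, so a = 2 with B = 3 really is useless most of the time, just as I suspected. This is my own sketch, not the book's pseudocode:

```python
from math import gcd

def p_minus_1(n, B, a=2):
    """Pollard's p-1: compute a^(B!) mod n, then gcd(a^(B!) - 1, n).
    If some prime p dividing n has p - 1 built only from primes <= B
    (so p - 1 divides B!), then a^(B!) = 1 (mod p) and the gcd finds p."""
    for j in range(2, B + 1):
        a = pow(a, j, n)           # exponent accumulates to B!
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

n = 1537                  # 29 * 53; note 29 - 1 = 2^2 * 7 is 7-smooth
print(p_minus_1(n, 3))    # None: B = 3 is too small, as I guessed
print(p_minus_1(n, 7))    # 29: with B = 7, 28 divides 7! and out it pops
```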

Tuesday, October 19, 2010

Section 6.3 Due 20 Oct 2010

1. I understand the first two subheadings just fine (The Basic Principle and Fermat's Primality Test). I don't understand the Miller-Rabin Primality Test very well, though. From what I understand, it's basically a repeated-squaring algorithm: you write n - 1 as an odd number times a power of 2, raise a random integer a to that odd power mod n, and then keep squaring the result. If the chain reaches 1 without ever passing through -1, n can be declared composite; if it hits -1, the test is inconclusive. (I wrote the loop out below to check myself.) My question is, will you always arrive at 1 or -1?
2. On a minor note, what are "pseudoprimes" and "strong pseudoprimes" and the like? Also, these tests always seem to end in vague statements about the probability that a number is prime. Are there any deterministic tests for large primes?
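
Here is the test as I currently understand it, written as a loop so I could at least answer my own question experimentally. The squaring chain either hits -1 somewhere (inconclusive, "probably prime") or it doesn't, in which case n is definitely composite; so no, you don't always arrive at -1:

```python
def miller_rabin(n, a):
    """One round of the Miller-Rabin test with base a (my sketch)."""
    d, s = n - 1, 0
    while d % 2 == 0:              # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return "probably prime"
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:             # hit -1: consistent with n prime
            return "probably prime"
    # Never hit -1: either we reached 1 via a nontrivial square root
    # of 1, or a^(n-1) != 1 at all. Either way, n is composite.
    return "composite"

print(miller_rabin(561, 2))   # composite, even though 561 fools Fermat
print(miller_rabin(569, 2))   # probably prime
```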

Monday, October 18, 2010

Section 3.10 Due 18 Oct 2010

1. The last example given in the assigned section was the hardest part for me to understand, primarily because of property (2) on page 91. I don't understand how n can be congruent to 3 or 5 or 1 or 7 (mod n); it seems to me that n would always be congruent to 0 mod itself. I imagine there is a typo in the book, but I don't know what it would be or how to fix it. (My guess at the intended statement is below.)
2. Also, it is interesting/difficult to understand how you can mathematically equate a number like (2/3211) to ±1. At first I thought these were just symbols used to replace one another, but then they started being used in mathematical calculations. I'm just not quite sure I understand how (2/3211) can legitimately be said to equal plus or minus one.
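
For what it's worth, the version of property (2) I've seen elsewhere is stated mod 8, which would resolve my confusion if the book has a typo. Assuming that's what was meant, for odd n:

$$\left(\frac{2}{n}\right) = \begin{cases} +1 & \text{if } n \equiv 1, 7 \pmod{8},\\ -1 & \text{if } n \equiv 3, 5 \pmod{8},\end{cases}$$

and since $3211 = 8 \cdot 401 + 3$, this would give $\left(\frac{2}{3211}\right) = -1$.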

Friday, October 15, 2010

Section 3.9 Due 15 Oct 2010

1. The section was pretty straightforward. It basically gives a straightforward algorithm for finding square roots with respect to a given modulus. The hard part came toward the very end, when it started to tie everything back to cryptography, specifically to a large modulus n used in RSA and its prime factors. I guess I'm having trouble connecting all the pieces, especially seeing the big picture of where each number comes from.
2. With all the methods being introduced through the reading that make it easier to factor n and so forth, I must admit that I'm questioning the security of RSA. That is to say, I know full well that it IS VERY secure, but with all the ways that we are discussing as to how to crack RSA, I'm wondering what precautions are taken to ensure security. For instance, do the designers of the keys just ensure that the conditions necessary to find roots mod n are not met, therefore the attacker can't find a root and consequently can't factor n?

Friedlander Brief History of Primes

1. The lecture was very interesting. The best part, I think, was that Dr. Friedlander catered his lecture to undergraduates, especially those with little mathematical background. That was a very wise move on the part of everyone involved. The most difficult part of the lecture was his use of the pi function. I couldn't tell whether it was the function that takes a number as input and outputs the number of primes up to that number. I say this because later I swear he said that the number of primes up to x is equal to pi(x) - pi(x^(1/2)) + 1.
2. The most interesting parts for me were when he briefly went over some of the interesting characteristics of prime numbers as a whole. For instance, the fact that pairs of primes occur as consecutive odd numbers (twin primes) and that there is no real established pattern as to why that happens. The categories of primes he mentioned, such as Mersenne primes, were also interesting. The method of singling out the primes (the Sieve of Eratosthenes) was also very intriguing.

Tuesday, October 12, 2010

Section 6.2 Due 13 Oct 2010

1. I don't know how important it is to understand the very first part of the assigned section, but I didn't really understand how knowing m/4 characters (beginning or ending) would help to find factors of n. The text referred the reader to further reading. Like I say, I don't know if we need to worry about that, but either way I didn't understand the reasoning. Another difficult part of the reading for me wasn't arithmetic so much as conceptual. A theorem says that if d < (1/3)n^(1/4), then d can be calculated. However, I ask myself: if we know that d is less than some number, it sounds like we already know the number. I obviously know there is a mistake in my understanding; I just don't know what it is.
2. Reflecting on the reading material, I focus primarily on the subsection on timing attacks. The differences in calculation times on a computer have got to be so minuscule that I wonder about the mechanics of the attack. Obviously it works; I'm just realizing how precise the measurements of the time intervals must be.

Sunday, October 10, 2010

Section 3.12 Due 11 Oct 2010

1. At first the difficult part for me was understanding the initial example of approximating pi using continued fractions, but after a second reading and closer examination, it came just fine. However, there is still the part toward the end of the section where they expand 12345/11111 using continued fractions. They have a list of fractions obtained by applying the same procedure, and I'm not sure exactly how they get that list. I'm assuming they get it by computing 1, 1 + 1/9, 1 + 1/(9 + 1/246), etc.
2. My remaining question is: is there a quick way to do this backward? That is, if we are given a continued fraction, is there a quick way of determining what rational number it represents without having to arduously type the whole thing into a calculator? (I think I found one; see below.)
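
I think I answered my own question: collapsing from the inside out works, and it's quick. Assuming I ran the Euclidean algorithm correctly, 12345/11111 = [1; 9, 246, 1, 4], and a few lines of code undo it:

```python
from fractions import Fraction

def from_cf(terms):
    """Collapse a continued fraction [a0; a1, ..., ak] from the
    inside out: start with the last term and repeatedly form
    a + 1/(previous)."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

print(from_cf([1, 9, 246, 1, 4]))   # 12345/11111
```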

Thursday, October 7, 2010

Section 6.1 Due 8 Oct 2010

1. The only really difficult thing about this section was seeing the arithmetic done so quickly with such big numbers. I think if I were to see someone do it in person, it might make a little more sense. Also, the explanation of why m = c^d (mod n) wasn't too bad, but it was still a little hard to understand. (The toy example below helped me.)
2. The other thing I thought about a lot was how this stuff can still work even when we are working modulo n. My initial instinct says that this fact would open up the possibilities for what values variables like c and d can hold. It seems like there could be a few ways the encryption could come back to haunt you because of working modulo n. Also, the text states (but leaves it as an exercise) that even if gcd(m,n) does not equal 1, Bob can still recover the message. How does that work?
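
A toy run at a human scale helped me follow the arithmetic (tiny primes, mine; real keys have hundreds of digits, and this ignores all the padding issues):

```python
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)        # private exponent: e*d = 1 (mod phi); d = 2753

m = 1234                   # the "message"
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt: c^d = m^(ed) = m (mod n)
print(c, pow(c, d, n))
```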

Tuesday, October 5, 2010

Section 3.6-3.7 Due 6 October 2010

1. The concepts covered aren't terribly new to me; I've encountered them before in previous classes, though we didn't cover them terribly in depth there. We were given more of a taste of things to come. With that in mind, I have the hardest time just understanding the proofs of the concepts. As far as concepts go, though, I did have a hard time understanding the ones that tie the smaller concepts together, for example, the "Basic Principle."
2. Upon reflection I can see how this modular arithmetic can come in handy in cryptography. The tools explained in these two sections can, I think, be adequately categorized as tools for dealing with very large numbers by bringing them down to a smaller scale. It's almost as if we are finding loopholes in large numbers by using modular arithmetic.

Sunday, October 3, 2010

Section 3.4-3.5 Due 4 October 2010

1. These sections actually aren't terribly difficult, as I've taken Math 371 (Abstract Algebra). Now that I've said that, let me clarify by saying that it will be good to review some of this material. The most difficult part will be dealing with large numbers; however, the Euclidean algorithm and other similar tools will make that process easier. There is one part that stood out to me in difficulty: "Multiplication by i gives k=(a-b)i (mod n). Substituting back into x=(b+nk), then reducing mod mn, gives the answer." I would like to see more about that last step, the substituting back in. (I worked a small example below.)
2. Upon further reflection, I see more and more why computers work in binary. The modular exponentiation works really fast by simply using powers of 2. Although, it is pretty funny to read where it says that they never had to work with a number larger than 788^2. This is true, but they say it as if 788^2 isn't a big number. In anyone's book, I think it's big. But I understand the context, meaning that it isn't as big as some of the numbers they would be working with had they not taken this detour through binary.
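
I worked the substitution step with small numbers (mine, not the book's) to see it happen. Take $x \equiv 3 \pmod 5$ and $x \equiv 2 \pmod 7$. Write $x = 2 + 7k$; then

$$2 + 7k \equiv 3 \pmod 5 \;\Longrightarrow\; 2k \equiv 1 \pmod 5 \;\Longrightarrow\; k \equiv 3 \pmod 5,$$

using $2^{-1} \equiv 3 \pmod 5$ (that's the "multiplication by i" step, if I'm reading the book right). Substituting back in: $x = 2 + 7(3 + 5j) = 23 + 35j$, i.e. $x \equiv 23 \pmod{35}$, and indeed $23 \equiv 3 \pmod 5$ and $23 \equiv 2 \pmod 7$.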

Thursday, September 30, 2010

Test Review Due 1 Oct 2010

-Which topics and ideas do you think are the most important out of those we have studied?
It seems that the most important ideas are the more math-related concepts. This is good, because I don't think it would be very fair to have a closed-book exam on some of the more complex processes (DES, AES, etc.). I think it is important to understand the ideas behind the different ciphers, such as their strengths and weaknesses compared to other ciphers.
-What kinds of questions do you expect to see on the exam?
I expect we'll be asked a number of mathematical calculations (Euclidean algorithm, extended Euclidean algorithm, gcd, solving ax+by=d, congruences, etc.) as well as some basic definition questions, including some illustrations of different cipher systems.
-What do you need to work on understanding better before the exam?
I need to have a solid understanding of the different modes of operation (ECB, CBC, CTR) and to be able to differentiate between the different systems.

Section 5.1-5.4 Due 29 Sept 2010

1. The most difficult thing about these sections is the whole process. I think mostly this is because I have the test on my mind; I'm concerned about having to keep all these processes straight. They are so complex! If a computer is required to use them, how can I expect to keep them straight in my head?! Also, the key schedule really threw me for a loop! If you expand a 4x4 matrix by 40 columns, you are going to have a very non-square matrix, and I'm not even sure whether that's the right picture. I feel like what we are learning is much more computer science than math.
2. My only reflection is in regard to the test. How in the world are we going to be tested on AES, DES, etc.?! They are so hard to keep track of and separate in my brain! How can we be expected to commit all this information to memory? This is material that is hard to derive, as opposed to most other math classes, in which the concepts can be derived from one another. This, however, is just a bunch of 1s and 0s in what looks like an arbitrary pattern ("pattern" used very loosely). I feel like my brain is turning to mush just trying to keep it all straight in my head.

Sunday, September 26, 2010

Due Monday 27 Sept 2010

How long have you spent on the homework assignments? Did lecture and the reading prepare you for them?

I've probably been spending at least 6 hours (give or take one hour) on the assignments each week. I typically start each assignment at the beginning of the week. This gives me time to converse with the TA, Prof. Jenkins, or other students about any concepts that I don't understand that appear on the homework. Also, in general, it keeps down the pressure to finish the assignment quickly. Lecture helps the most on the homework, especially the demonstrations of the applets that we've needed to use. The book really helps when it comes to the Maple examples in the back as well as being a reference to better understand difficult concepts.

What has contributed most to your learning in this class thus far?

By far, the lectures have contributed the most to my learning. However, that isn't to say the reading doesn't help; I think the lectures would be worse if I didn't do the reading. By reading the book first, lecture becomes my second pass through most of the material, and the lecture's simpler view is easier to understand precisely because it's the second time through. That said, the lectures are excellent regardless of whether I do the reading.

What do you think would help you learn more effectively or make the class better for you?

The class is already going well for me. However, I feel like there is so much information to process that it's hard to sort it all out. On the forefront of my mind are the exams; I process the information with the exams and assignments in mind. I think it would help to know a little more about what exactly to expect on them. Also, reviewing the material would really help; I need to go back and re-examine each chapter and section.

Thursday, September 23, 2010

Section 3.11 Due 24 Sept 2010

1. I must admit I was a little nervous when I read the disclaimer at the beginning of the chapter mentioning the difficulty of the material. However, I've taken abstract algebra, so the material on fields was just review (although a timely review of fields was needed). The only hard part of the chapter was the later sections dealing with the various fields we will be looking at. For example, GF(2^8) is a field with which I'm unfamiliar. I'm a little confused about how to work "mod the irreducible polynomial ...". How does one work "mod" ANY polynomial? I think with a little thought it might come to me, but I do need a little direction. (I experimented with this below.)

2. The most interesting part was actually the same as the most difficult part. I think because I've taken so many math classes up to this point, I'm now somewhat curious about how to mod things with respect to some polynomial. When I figure this out, it will be as if a new area of exploration and operation has opened up. I'm naturally drawn to this part of the reading because it is so much like the other reading I've been doing for the last umpteen years. I guess I need to become accustomed to looking at new cipher systems and terms the same way, so as to become equally curious about how they work and fit together.
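
I think I answered my own question by experimenting: working mod a polynomial is just like working mod an integer, with polynomial long division supplying the remainder. Below, bytes stand in for polynomials over GF(2) (bit i is the coefficient of x^i), and the modulus is the degree-8 polynomial x^8 + x^4 + x^3 + x + 1 used in AES:

```python
def gf256_mul(a, b):
    """Multiply in GF(2^8): shift-and-XOR multiplication, reducing by
    the AES polynomial 0x11b whenever the degree reaches 8. XOR is
    addition here because coefficients live mod 2."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add a * x^(current bit of b)
        b >>= 1
        a <<= 1
        if a & 0x100:            # degree hit 8: subtract (XOR) the modulus
            a ^= 0x11B
    return result

print(hex(gf256_mul(0x57, 0x83)))   # 0xc1, matching the worked
                                    # example in the AES standard
```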

Section 4.5-4.8 Due 22 Sept 2010

1. I really had a hard time understanding the various "modes of operation" discussed in 4.5. It was interesting, however, to read them in the order they were presented: each successive mode was described in a manner showing how it improved on the previous one. Still, I didn't feel like I understood a single one of them. Mostly I think this is because I still have trouble with a lot of the common vocabulary words and symbols that get thrown around.
2. I thoroughly enjoyed the sections on the history of DES, password protection, and the like. They were a little more relatable to me. It was amazing to think about how decrypting technology improves right along with encrypting technology; it's almost as if the two are growing exponentially at the same rate. If anything, decrypting technology seems to be growing a little faster, given the drastic drop the text mentions in the time it took to break DES on two occasions about a year apart.

Sunday, September 19, 2010

Section 4.1, 4.2, 4.4 Due 20 Sept 2010

1. Whoa boy! There was so much technical material in these sections that most of it went over my head. Even on a second and third reading I was still having trouble following where they were going. I think I need to go back and really understand block ciphers. I understand bits just fine as far as binary arithmetic goes; however, I'm having trouble following all of the arithmetic described in the assigned sections. I also get lost in some of the vocabulary that gets thrown around. "Rounds," for example, is a new word for me in this context. I also have a hard time with the idea behind encrypting the bits, using the key, and decrypting the bits.
2. One thing I was really thinking about during the reading, which I've thought about before, is error detection. There are things built into a system to detect whether data has been garbled; ISBNs use this, for example. (The check, as I understand it, is sketched below.) However, the thought occurred to me: what if there is an error in the error detecting? Shouldn't there be an error detector for the error detector? And where do we stop with the error detecting?
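
The ISBN check I had in mind, as I understand it: a weighted sum mod 11. The answer to "who checks the checker" seems to be that you don't aim for certainty, just for good odds against the likely errors.

```python
def isbn10_ok(digits):
    """ISBN-10 check: 1*d1 + 2*d2 + ... + 10*d10 must be 0 mod 11.
    A single wrong digit shifts the sum by weight * (error), which is
    never 0 mod 11 (11 is prime), so one-digit errors are always caught."""
    return sum(i * d for i, d in enumerate(digits, start=1)) % 11 == 0

good = [1, 2, 3, 4, 5, 6, 7, 8, 0, 6]     # a made-up valid ISBN-10
bad = good[:]; bad[3] = 7                  # corrupt one digit
print(isbn10_ok(good), isbn10_ok(bad))     # True False
```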

Thursday, September 16, 2010

Section 2.9-2.11 due Sept 16 2010

1. The reading seemed pretty straightforward, but I always struggle with how to attack the various ciphers we look at. For the one-time pad, it seems the way of attacking it, as explained, is to generate random strings of bits of various lengths. My thought was that this seems like a very arbitrary way of attacking ANY sort of code. In addition, if this is the way of attacking, isn't it going to take FOREVER to randomly produce the key? Even with a computer, it seems like this method would take a long time.
2. Upon reflection of the cryptosystems introduced in this reading, I thought about the high applicability in the world of computers. Encrypting bits is perfect for sending lots of information across cyber networks. Not only because that is how computers communicate information, but because, as mentioned above, it seems like computers would be the only practical way of attempting to crack the codes.

Wednesday, September 15, 2010

Breaking the code on Homework 3

This post is not required. However, I wanted to catalog my experience breaking the code given in Homework 3 and I figured, "What better place to do this than on my blog?"

I was given the following CODE to break:


TIFSYCUG YIVSCYIT B XBYUCFSLBY KCPJTIU TIFSYCUG
XYEHITTCEPBLT BU LIBTU URI ZEEJ EPIT TII URI MEYLJ JCHHIYIPULG URIG
FBPU MBLO CPUE B TUEYI MCURESU PEUCFCPZ REM URIG KCZRU TREXLCHU URIG
FBPU STI B FEKXSUIY MCURESU MEPJIYCPZ BNESU URI TIFSYCUG
DSLPIYBNCLCUCIT URIG FBPU DEUI MCURESU UYGCPZ UE HCZSYI ESU REM UE
DEUI UMCFI URIG QSTU FBPU RILX CU URCT OCPJ EH URCPOCPZ CT PEU
PBUSYBL HEY KETU XIEXLI CUT PEU PBUSYBL HEY IPZCPIIYT ZEEJ
IPZCPIIYCPZ CPDELDIT URCPOCPZ BNESU REM URCPZT FBP NI KBJI UE MEYO
URI TIFSYCUG KCPJTIU CPDELDIT URCPOCPZ BNESU REM URCPZT FBP NI KBJI
UE HBCL CU CPDELDIT URCPOCPZ LCOI BP BUUBFOIY BP BJDIYTBYG EY B
FYCKCPBL GES JEPU RBDI UE IAXLECU URI DSLPIYBNCLCUCIT GES HCPJ NSU
CH GES JEPU TII URI MEYLJ URBU MBG GESLL PIDIY PEUCFI KETU TIFSYCUG
XYENLIKT

As you can see, it makes little sense. I started examining it on paper, looking for multiple appearances of the same word, for example. I first noticed that the most frequent letter was U. I tried substituting the letter E in for U, but discovered that U wouldn't work because then UE would have to translate to a word with E as the first letter. There aren't very many good candidates for a word that begins with E. So I ruled out E going to U. Then I tried E going to the second most frequent letter: I. Making that substitution led me to notice that URI and URIG appeared pretty frequently. I figured that whatever words they represented, they had the first three letters in common. I looked up words that are four letters long, had E in the fourth position (and only in the fourth position), and in which the first three letters were also a word. I came up with a list of about 12 words, among them THEN, THEY, and THEM. I tried putting in the appropriate substitutions with THEN but found words that ended with TN. So I switched and tried THEY. It fixed the TN problem. I then noticed that TIFSYCUG would then translate back to TeFSYCty. I looked up words that were eight letters long and had E and TY in the appropriate positions. I found a list of six words, among them SECURITY. I tried the appropriate substitutions and started to see other words appear. ATTACKER became apparent, followed by a long list of other words. As new words became apparent, I finished deciphering them. The results led to more words. Eventually I deciphered the whole message, resulting in:

SECURITY REQUIRES A PARTICULAR MINDSET SECURITY
PROFESSIONALS AT LEAST THE GOOD ONES SEE THE WORLD DIFFERENTLY THEY
CANT WALK INTO A STORE WITHOUT NOTICING HOW THEY MIGHT SHOPLIFT THEY
CANT USE A COMPUTER WITHOUT WONDERING ABOUT THE SECURITY
VULNERABILITIES THEY CANT VOTE WITHOUT TRYING TO FIGURE OUT HOW TO
VOTE TWICE THEY JUST CANT HELP IT THIS KIND OF THINKING IS NOT
NATURAL FOR MOST PEOPLE ITS NOT NATURAL FOR ENGINEERS GOOD
ENGINEERING INVOLVES THINKING ABOUT HOW THINGS CAN BE MADE TO WORK
THE SECURITY MINDSET INVOLVES THINKING ABOUT HOW THINGS CAN BE MADE
TO FAIL IT INVOLVES THINKING LIKE AN ATTACKER AN ADVERSARY OR A
CRIMINAL YOU DONT HAVE TO EXPLOIT THE VULNERABILITIES YOU FIND BUT
IF YOU DONT SEE THE WORLD THAT WAY YOULL NEVER NOTICE MOST SECURITY
PROBLEMS
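
A postscript now that it's solved: the first counting step is easy to automate. A few lines like these (with the whole scrambled message pasted in; it's truncated here) reproduce my hand tally, with U and I on top:

```python
from collections import Counter

ciphertext = "TIFSYCUG YIVSCYIT B XBYUCFSLBY KCPJTIU ..."  # full text above
counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(5))   # letter frequencies, highest first
```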

Tuesday, September 14, 2010

Section 3.8, 2.5-2.8 due 15 Sept 2010

1. Again, the most difficult part of today's reading was following the mathematical steps, specifically those involving matrix and vector multiplication. I know I know this stuff; it just takes a little longer to process than it used to. The part that really gets me is when the text says things like "try various values [of n] until we find the right one." The problem I have is that this would not be an efficient use of code-cracking time. At first thought, it seems like there are an awful lot of integers that could be tried until we "find the right one." What I'd like to know is how exactly, and efficiently, one tries new values of n until the right one is found.
2. I'm starting to worry about some of the future codes that we will have to break using some of these methods. As mentioned above, it seems as if the only method we'll have is to essentially shoot in the dark until we find a solution.

Sunday, September 12, 2010

Section 2.3 due 13 Sept 2010

1. The first part of the explanation concerning the cracking of Vigenere makes a lot of sense. The part that was confusing was the mathematical explanation as to why it works. I understand the mathematical concepts behind the explanation. I think it was just all being thrown at me in quick succession, causing me to have a perplexed expression. However, I think a second, maybe a third reading with some scratch paper would solve this.
2. It's interesting to see how much math there actually is behind cipher breaking even when it comes to simple substitution methods and variations thereto. I knew that there was a lot of math behind ciphers, but I had no idea there would be so much behind substitution methods and solving them.

Saturday, September 11, 2010

Section 2.1-2.2, 2.4, Due 10 Sept 2010

1. The only really difficult thing about today's reading was the description of the last two attacks in each section: chosen plaintext and chosen ciphertext. I didn't quite understand what it meant when it said "Choose the letter a as the plaintext" and "Choose the letter A as ciphertext." Does this mean the decoder simply chooses a letter and by doing so is able to crack the cipher? Or does it mean the attacker only needs to know one letter of the corresponding text? The wording is throwing me off, I guess. This was similar for both the substitution and affine cipher sections.
2. The most interesting thing about the two ciphers is their different levels of difficulty. As the text points out, the simple shift cipher has only 26 possible keys, so a brute force attack wouldn't be too hard. In contrast, the affine cipher has 312 possible keys (counted below), making a brute force attack much more unrealistic. However, it occurred to me that an attacker still only needs a letter or two of plaintext with the corresponding ciphertext: one matched pair pins down a shift cipher, and two pairs set up the linear equations that pin down an affine cipher. With that little information, the rest of the text in either cipher is easy to obtain through simple mathematical operations.
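
The 312, counted: a must be invertible mod 26, which gives 12 choices, and b is unrestricted, which gives 26 more.

```python
from math import gcd

# Affine cipher x -> a*x + b (mod 26): decryption needs a^(-1) mod 26,
# so 'a' must be relatively prime to 26.
units = [a for a in range(26) if gcd(a, 26) == 1]
print(len(units), len(units) * 26)   # 12 valid a's, 312 keys total
```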

Wednesday, September 8, 2010

Guest Speaker: "Codes and Ciphers in Mormon History," 8 Sept 2010

1. I didn't find anything overall about the guest speaker's material to be difficult. The only difficult thing I can imagine was not being able to view more of her slides and not being able to hear her commentary.
2. The most interesting part of the presentation was learning about the various things that can be considered codes or ciphers. Many of the things presented I have had prior experience with; it wasn't until the presentation that I considered them to be codes or ciphers. For example, the "unusual names" used in the Doctrine and Covenants. I have known about those for a while (a lifetime). It does make sense, though, to consider those to be a cipher of sorts: they achieve the same purpose, and they fit under the "Code Words" category of cryptosystems, similar to the code words used for various operations of the U.S. Military. Also, the Deseret Alphabet is another system I'm familiar with but had never considered to be an authentic cryptosystem. This is somewhat ironic when I consider that my friends and I used to use it to communicate when I was younger.

Another thing I noted upon further reflection was that most, if not all, of the codes she presented came AFTER the time of Joseph Smith; specifically, they seemed to appear right around the time of Brigham Young. I would like to learn more about the historical context of the various ciphers mentioned. The Church had enemies in both periods. My immediate guess would be that the Saints in Brigham Young's day needed to communicate over longer distances, entrusting messages to carriers not necessarily friendly to the Church to a greater extent than was needed in the earlier days.

Thursday, September 2, 2010

Section 3.2, 3.3 Due on 3 Sept 2010

1. The most difficult part of the reading was grasping the idea of division with congruences. The actual act of dividing makes sense; the trouble is remembering that you can divide both sides of the congruence only if a (the number being divided out) and n (the modulus) are relatively prime. This I must remember. I also must remember that if d = gcd(a,n) is greater than 1, I have to step back and examine a few things. If d does not divide b, then there is no solution. If d does divide b, then you can divide a and b by d, but you must remember to divide n by d as well. (I keep the worked example below handy for this.) Addition, subtraction, and multiplication all work just fine with congruences; it's division I must watch. Even more so, fractions seem to be a hairy deal to mess with when working with congruences.

2. I think about congruences and how they relate to equalities. You can do some similar operations (as mentioned above) on both. It seems as if congruences modulo n package larger numbers down. I've taken Abstract Algebra, but we haven't really dealt with a whole lot of applications. I'm not a huge fan of applications, but I would be interested in pursuing a career as some sort of information analyst. I recognize that by using congruences, you can simplify a lot of information in such a way so as to make it easier to spot patterns and the like. I don't know exactly how, though.
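
The worked example I keep handy for the division rules: solve $6x \equiv 4 \pmod 8$. Here $d = \gcd(6, 8) = 2$ and $2 \mid 4$, so divide everything through, including the modulus:

$$3x \equiv 2 \pmod 4 \;\Longrightarrow\; x \equiv 3^{-1} \cdot 2 \equiv 3 \cdot 2 \equiv 2 \pmod 4,$$

and indeed $6 \cdot 2 = 12 \equiv 4 \pmod 8$. Had the right-hand side been 3 instead of 4, there would have been no solution, since $2 \nmid 3$.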

Tuesday, August 31, 2010

Sections 1.1-1.2, 3.1

1. The most difficult thing I'm seeing right now is keeping all the terminology and vocabulary sorted out. There are so many variations on the word "text" (plaintext, ciphertext, known plaintext, chosen ciphertext, etc.). I'm also having difficulty understanding Kerckhoffs's principle. What does it mean for the security of a system to be based on the key and not on the obscurity of the algorithm? If we assume that Eve knows the algorithm used to perform the encryption, why doesn't that defeat the purpose of encrypting?

2. The most interesting part of the reading for me was the historical examples given. I understand there is a book related to this subject, The Code Book; I'm going to look into it.
I've taken Math 371 (Abstract Algebra), so I'm familiar with the Euclidean algorithm. The thing I wondered during the reading was, "How is this going to fit into ciphering?" I've known that math plays a huge part in programming and encryption, but how does it specifically play in? I imagine that since algorithms are used to encrypt and decrypt, perhaps the Euclidean algorithm is a good candidate for the job. (A first experiment with that idea is below.)
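
My guess gets some support from a quick experiment: the extended Euclidean algorithm is what produces inverses mod n, and I suspect (without having read ahead) that decryption keys will depend on exactly that.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# gcd(17, 3120) = 1, so 17 is invertible mod 3120; x below is that
# inverse. (The numbers are arbitrary; any coprime pair works.)
g, x, y = extended_gcd(17, 3120)
print(g, x % 3120)   # 1 2753
```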

Introduction, due Sept 1

I'm a mathematics major in my fifth (and final) year at BYU.

I've taken MATH 214, 190, 315, 342, 343, 334, 351, 362, 371. 

I'm taking this class because I find the subject material interesting. I think it will stand out on an application amidst the other math courses I've taken. I want to see if I might be interested in pursuing a career field related to cryptanalysis.

I have only a little experience with Maple; I worry that might be a major struggle during this course.

As far as programming experience, the only experience I have is through taking CS 142.

The first math professor I had at BYU was Prof. Villamazar. He remains one of my favorite professors. I took 119 from him. He explained the concepts thoroughly. More than that, he inspired the students to take the material seriously and to strive for understanding.

I'm unique among others because I'm a math major. There aren't that many of us. Among my peers in math, I'm unique because I like to branch out and study other subjects, mostly history.