(back to academic signature homepage)

Academic Signature

A tool for public key cryptography using elliptic curves,
critical questions experts may ask the developer

Naturally as the developer of a cryptography tool I occasionally get into heated debates with colleagues about security and cryptography.
There are some obvious critical questions that are definitely justified and that I am confronted with on a regular basis.
See them below and click on them to get my answers.
(If you have a question not yet included in the list, feel free to mail me "an@fh-wedel.de" and I shall address it too on this page.)



1.) How can an amateur like me dare to deal with cryptography at all?

2.) GnuPG is already there. Why does anyone need something else?

3.) Why use elliptic curve cryptography and not approved RSA?

4.) Why do I not rely on a PKI?

5.) Why did I introduce new symmetric algorithms?

6.) Why did I develop my own longnumber arithmetic?

7.) Why did I interweave a GUI?

8.) Why do I make symmetric cryptography directly accessible?

9.) Why did I develop a new PRNG and new hash algorithms?

10.) Why did I not support/rely on smartcards?

11.) Is it not too dangerous to make a strong cipher publicly available?

12.) When will there be a critical evaluation of the program's safety by an independent committee?

13.) Why did I not strengthen my program against side channel attacks?

14.) Why do I not consistently use authenticated symmetric ciphers in Academic Signature?

15.) The NSA is encouraging the replacement of RSA by elliptic curve cryptography. Is this not a strong indication that they can crack it and that it is insecure?

16.) It is good practice not to invent crypto algorithms and to stick to scientifically scrutinized and approved methods. Why do I not stick to this rule?

17.) Academic Signature uses the standard addition formulae (in Jacobian projective coordinates). Bernstein and Lange state on their SafeCurves website that this addition is unlikely to be implemented correctly and can only be implemented correctly at the price of decreased efficiency. Is Academic Signature's ECC implementation slow and complicated, or insecure?

18.) Why are the websites of Academic Signature not secured by https?

19.) Why did I not automate deleting the plain text after encryption to allow for convenient locking of files on your storage medium?

20.) Will the advent of quantum computing make Academic Signature obsolete?









*************************************************************************************************

1.) Why deal with cryptography at all?

Mantra: It is soo hard to get it right!!
Don't do it if you are not a top expert!!
You won't get it right!
Never ever write an algorithm yourself!
Don't even mess around with the cryptographic primitives!
Take something "out of the box".
Leave it to the professionals.

In a slightly different context an older writing puts it this way: "Of every tree of the garden thou mayest freely eat: But of the tree of the knowledge of good and evil, thou shalt not eat of it"

Well, Eve seems to have had a physicist's heart. For me, being a physicist, the mantra is neither a threat nor good advice but an outright provocation.

Once you start to think about IT-security and cryptography as an educated amateur, you will easily and early on arrive at the following conclusion:
If you do as you are told, you get exactly the level of privacy and security "the experts" (= the US American NSA) decided you should have. While this may be a formidable level of security against anyone who is not the NSA, it should be up to you to decide whether this is enough.
Please do not misunderstand me, I still consider the US government to be the good guys (well, sort of... I concede that loved ones of victims of the evil US torture program may disagree). But some other governments may be worse. Think of Russia, China, Iran... I would definitely not want to leave it to any of them to decide what level of privacy and security I ought to have.
However, the US has a mixed track record of respecting the privacy of its computer users despite laws that forbid large scale spying on its citizens. The track record regarding respect for the IT privacy of non-US citizens is not even mixed, to say the least. Someone in some agency just has to murmur "National Security" and all your data are open to whichever agent murmured it. (To get it correct I should say all your data they can get hold of, but this may well be all your data.)
Being a citizen of the EU, I feel continental Europe respects my right to privacy and IT security much better than the US administration does. (Let me explicitly exclude the UK and their louts from GCHQ here.) Yet most IT security tools on the market are from the US, or are at least based on US standards or use US-designed elements. Let's not forget that your OS, if it is not Linux, is most likely MS Windows or Apple's OS, is under US control and, as we know today, is a "very friendly habitat" for US agencies.

The program Academic Signature is a maverick and I designed it to be a maverick. Just casting a GUI around OpenSSL or GnuPG ceased to be an option early on. I felt the need for a US/NSA-independent, self-contained, open source, free, easy to use (= GUI-based) tool for strong cryptography. The current affair around NSA whistleblower Edward Snowden and the disconcerting stubbornness of the unmasked administrations strongly justifies this desire in retrospect. Since I didn't see any tool that would satisfy my needs (GnuPG comes close), I had to write it myself.
And it was fun.
(back to question list)

*******************************************************************************************************




*************************************************************************************************

2.) Why don't I simply use GnuPG?

GnuPG is a formidable tool which is usable in the way I need it (mainly for digital signatures on letters of recommendation). So why do I feel the need to create an additional tool?

At the time of the decision to create Academic Signature, there was a good package for Windows (Gpg4win) that featured a GUI, but there was no good GUI solution for Linux (which I use).
I do not consider it acceptable to access keys via the command line, and I simply refuse to learn structureless hex strings by heart in order to type them in on the command line. So as a Linux user I had to have access to a stably working GUI in Linux as well. I had tested numerous GUIs and, frankly, none worked properly. Some even messed up my GnuPG installation so thoroughly that I needed to reinstall GnuPG to get it working again. The mailer plugins seemed to work nicely but the GUIs were unbearable. I had to dig out an old version of GnuPG Shell and compile it myself to get something that was operational. The current binaries I found and tried were broken. Thus a sound GUI was as necessary as it was lacking, and I would have had to develop one myself anyway.
This was the prime reason for entering into the effort to write the crypto tool "academic signature".

GnuPG is a wonderful tool. I use it frequently as a complement to Academic Signature for e-mail security and privacy. It solves many problems elegantly, and I devoted some time to including a graphical interface to GnuPG in my tool Academic Signature. Yet I found aspects of GnuPG which I do not like.

1.) Inconsistent choice of return channel
Depending on which function of GnuPG I called, the return comment was sometimes given on the error channel, sometimes on stdout. I could not find any comprehensible pattern in the choice of the return channel. Well, it all ends up in the console anyway. But if you e.g. want to write a GUI for GnuPG (which I did), this inconsistency drives you nuts. You eventually have to try out several channels so as not to miss info, in order to filter the result appropriately for incorporation into your GUI dialogs. This made me wonder about the internal consistency of GnuPG's code.

2.) Any good signature will do... (this is a bad one!)
A console based program cannot give much guidance to fulfill protocols properly. Yet there is still a feature in GnuPG I consider to be a protocol flaw. In verifying a signature, GnuPG automatically picks a public key from internal storage, tests the signature for correctness against this public key and, if it is correct, informs the user about a "good signature from..." whoever is assigned to that public key in GnuPG's database. If we are lucky, the verifier will invest the extra time to compare the name of the "good signer" to the name of the presumed signer and catch a discrepancy. Unfortunately a sloppy verifier will easily miss that.
It is definitely not enough to find any good signature-key pair for a document. It has to be made absolutely sure that the signature is from precisely the person the verifier presumes it is from. In my opinion this has to be achieved by explicitly prompting the verifier for the name/key of the presumed signer before even touching the signature. Picking a public key for verification (from the signature file?) is none of GnuPG's business; this is a core responsibility of the human verifier. In fact, for logical cleanliness the reference to the signer's name and/or key does not even belong in the signature file. It belongs in the document, to be read and eventually used for verification by the human recipient. I presumed it would be hard to convince GnuPG's developer (Werner Koch) to change that and didn't even try.
This problem can be healed by proper GUI-design.
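The protocol argued for above can be sketched in a few lines of Python. This is a hypothetical illustration, not Academic Signature's or GnuPG's actual API; check_signature() stands in for the real cryptographic verification primitive:

```python
# Hypothetical sketch of the protocol argued for above: the verifier must
# name the presumed signer BEFORE the signature is even touched, and the
# signature is checked against that one key only. All names here are
# illustrative; check_signature() stands in for the real crypto primitive.
def verify_against_presumed_signer(document, signature, presumed_signer,
                                   keyring, check_signature):
    if presumed_signer not in keyring:
        raise KeyError("no public key on record for the presumed signer")
    # Only the presumed signer's key is tried. A valid signature from
    # anyone else thus comes out as a failure, never as a "good signature".
    return check_signature(document, signature, keyring[presumed_signer])
```

A signature that happens to verify under some other key in the keyring is reported as a failure here, which is exactly the point.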

3.) Only first generation asymmetric algorithms can be used easily and stably as yet.
As of now, only first generation asymmetric algorithms, i.e. elGamal and RSA, are supported by GnuPG. It is well known that subexponential attacks exist for these algorithms and that in the near future ever longer keys are going to be necessary to ensure safety. For this reason the US administration e.g. no longer allows these algorithms to be used for the level "TOP SECRET". If you want safety for a decade and use these cryptosystems, you should stick to asymmetric key lengths of more than 3072 bit.
Elliptic curve cryptography is a modern cryptosystem. No subexponential attack is presently known for it, and an ECC key length of 256 bit is considered safe for decades. I found an experimental patch for using ECC with GnuPG. When I tested it on my system, it crashed. So I couldn't tell how close Werner Koch was to having usable elliptic curve cryptography.
Recently, he released a new version of GnuPG 2 that is said to support ECC. I downloaded it and tried to compile it. The building process is complicated and consistently failed on my system. I will try again some time later. At any rate GnuPG will only offer domains of up to 512 bit length (and possibly the 521-bit NSA domain?) and will not allow the import of freely created domains. This would not be acceptable to me.

4.) Restrictive specifications for future ECC use in GnuPG (boring reading, for that matter)
In GnuPG's mainstream development, the introduction of ECC has been in progress for some years now. Two years ago I stumbled on a paper on the Internet with specifications for OpenPGP ECC (which I couldn't find anymore now). I didn't like what I read. There seemed to be plans to allow solely three NIST curves (256, 384 and 521 bit) for use in GnuPG. They are all of a special form, and I presume using new and larger ECC domains of a general form will be excessively difficult. (Update: the new GnuPG version also seems to support the ECC-Brainpool domains and, in a specific setting, a 256-bit Edwards curve created by Bernstein and Lange.)

5.) Diffuse state regarding certificates, web of trust and self published public keys (a subtle weakness)
GnuPG can be used in the context of different trust models. You can use a PKI and X.509 certificates (I presume that, didn't try it out). You can use it in the decentralized context of the "web of trust", and you can use it with self published keys (published on your website), which I call the "mosaic of trust". In principle, I like this.
Yet the different modes are, to my knowledge, not strictly separated in GnuPG, and you can simultaneously use e.g. keys retrieved from your partner's website and keys fetched from a keyserver. The different trust models require different protocols to ensure the trustworthiness of the keys. The user may easily slip from one mode into the other and forget about the differences in evaluating a key's trustworthiness. This may facilitate attacks.
I favor the sole use of self published keys, with the inherent obligation for users of such public keys to explicitly and independently decide about the trustworthiness of the source of the respective keys. I admit that there may be valuable objections to this preference of mine, and many business models of IT security companies are based on the service of doing this assignment for their customers. I do not like to delegate the assignment of trustworthiness to an external entity and insist on doing this myself.
At any rate, compared to commercial products, GnuPG already goes quite far in the direction I favor. Yet I like it even more extreme.

6.) Diffuse use of time stamp like info (cryptographic ambiguity)

GnuPG obviously documents the time a signature was produced. This sort of "time stamp" has a poorly defined significance. I presume the system time of the signer's computer is used as the source of the time, but I didn't bother to find out exactly. If so, the time is basically a statement by the author (and signer), which may not be clear on the side of the receiver. I like extreme clarity in the status of security relevant information. A statement by the signer/author belongs in the document to be signed and not in the signature. In my eyes this is an abuse of the signature file as a message board.
There is such a thing as a cryptographic time stamp. But this time stamp makes sense only in a three party protocol: a client (who gets the time stamp for a document), a witness (the notary-like person/entity issuing the time stamp) and a judge (the person deciding about the validity of the time stamp). Done properly, this requires a different protocol than signing/verifying a document. It is not possible to use such a protocol with GnuPG's time comment included in the signature.
In my opinion, unnecessary information of undefined cryptographic status should be avoided in a signature.

7.) Need to dig for information on the PRNG (made me uneasy)
The PRNG in GnuPG seems to take entropy from the system (i.e. Linux, Windows etc.). I do not know how good the PRNG is and whether it uses state of the art components. It takes more to be a good PRNG than to be an efficient collector of as much entropy as possible. The internal state has to be protected securely. I do not know how GnuPG achieves that. From screening some discussion fora I got the information that it is done differently according to some "security level" set in some way. I did not bother to look into the code.
The quality of the PRNG is extremely important, and it is seemingly not described in the standard manual (http://www.gnupg.org/gph/en/manual.html) or other easily attainable documents. That did not give me a good impression at the time of the decision to work on a crypto program.
One must not forget that under windows, the OS is fully controllable by US-Agencies.
If I were the NSA and wanted to spy on you, I would slightly "update" your system's PRNG. If you used GnuPG with elGamal keys, getting your private key would be a piece of cake: I'd just need one of your signatures made with a known ephemeral "random" number, and you wouldn't even notice. If you used RSA, recovering your private key would be more difficult (via fabricating the "randomly" filled padding in signature generation to my advantage). At any rate, deciphering your secrets would still be a piece of cake, since I could short circuit any asymmetric crypto altogether and reproduce the "randomly selected" symmetric keys directly.
I dug out more information on the GnuPG-PRNG lately which satisfies me more or less.
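To illustrate why a rigged PRNG is fatal for elGamal-type signatures, here is a toy-sized sketch with textbook elGamal and tiny made-up numbers (nothing to do with GnuPG's actual code): once the attacker knows the ephemeral k of a single signature, the private key x drops out of one modular equation.

```python
# Textbook elGamal signature with toy numbers (illustration only):
#   r = g^k mod p,  s = k^-1 * (h - x*r) mod (p-1)
# If the "random" k is known, the private key follows directly:
#   x = (h - k*s) * r^-1 mod (p-1)   (works whenever gcd(r, p-1) = 1)
p, g = 23, 5   # tiny public group parameters
x = 7          # the victim's private key
h = 3          # hash of the signed message
k = 13         # ephemeral "random" number - rigged, known to the attacker

r = pow(g, k, p)                                  # signature part 1
s = (pow(k, -1, p - 1) * (h - x * r)) % (p - 1)   # signature part 2

# Attacker's side: only the public h, r, s plus the rigged k are needed.
x_recovered = ((h - k * s) * pow(r, -1, p - 1)) % (p - 1)
```

With these numbers the published signature is (r, s) = (21, 16), and the attacker's single line recovers x = 7 exactly.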

8.) Excessive use of a radix64 format (a nuisance)
GnuPG uses a special type of radix64 code for all kinds of information. While this may be a good choice for transporting info through text oriented channels like e-mail, it bothers me anywhere else.
In my opinion, non-secret information has to be readable by humans; secret information is then easily recognizable as such (and unreadable to anyone except the intended recipient). As a user of GnuPG I may encounter text like: "SgFnvwJRvZk3foUvTXMWFnhVfJ/X6BmYbEjq/ZKH7ob50q49wV89wMjCaSQ6pFmJr3l6/bDhkuWMJ/KVpYMKrvGqMP0FQ/A+/w9owARAQABtDJQcm9mLkRyLk1pY2hhZWwgQW5kZXJzIChrZXlfMjAxMikgPGFuQGZoLXdlZGVsLmRlPokCPgQTAQIAKAUCTwWAYQIbL" Most certainly I will want to know what this means, if it is not secret. Yet I do not even know whether it is secret at all, and in order to stay in control of what I am doing I would need to convert the stuff back to normal radix256 ASCII. In one particular situation I figured I had to manually copy and paste stuff like that. Thus I felt kept in the dark about what information I was handling. This did not feel good!
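For the record, radix64 is just an encoding, not encryption; anyone can flip it back with two lines of standard library Python. (The sample string below is made up; the blob quoted above is only a fragment and would need its full context to decode.)

```python
import base64

# radix64/base64 merely re-encodes bytes as text - it hides nothing.
# The sample below is made up, not the key material quoted above.
blob = base64.b64encode(b"non-secret metadata, e.g. a key comment").decode("ascii")
recovered = base64.b64decode(blob)   # back to plain radix256 bytes
```

This is exactly why the format is a mere nuisance rather than a protection: the opacity is cosmetic.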

9.) GnuPG closely trails the RFC documents. Surely it is to be expected that the RFCs are influenced by some NSA moles. At least we know today that the NSA has substantial funding for such activities. GnuPG, to my knowledge, still sticks to unsafe defaults (e.g. SHA1 as hash) and keeps the option to change that concealed from non-experts - why? Werner Koch (whom I tend to trust) presumably knows about that but does not change this deplorable state, presumably because he feels bound to the "standard". Furthermore the standards seem to enforce a jungle-like hierarchy of subkeys, may require such bullshit as having to sign your own key with itself in order to be able to use it, and many more features that make me pull my hair out.
In my opinion there is absolutely no cryptographic need for complicated subkey structures, self signatures, key expiration dates or other such features, other than to disconcert bright but inexperienced users and to create circles of pundits proud of knowing how to navigate the maze. The truth is that asymmetric cryptography is crisp and simple on the surface and should look crisp and simple to the user: select the file you want to encipher, select the public key you want to encipher it to, do the enciphering and send the cipher to the recipient!
Sometimes the OpenPGP maze and lack of transparency is justified by the perceived need to make the usage of GnuPG idiot-proof. The attempt is futile anyway, and it only makes the tool impenetrable for non-idiot lay people.
I used to be very humble about Academic Signature, but during the last weeks, in which I held some "crypto parties" on how to use GnuPG and Academic Signature, it turned out to be way easier to get inexperienced users to install and use Academic Signature than to install and use GnuPG. (Installing Academic Signature can be done in a few seconds; compiling Academic Signature from source on a standard notebook can be done in a few minutes. Compare that to GnuPG...)


Despite my critical remarks I greatly appreciate the availability of GnuPG, and I admire Werner Koch for the creation of this wonderful and widely accepted tool. The points mentioned above are not meant as criticism of GnuPG. If they were, I would have to dig much deeper into the code of GnuPG and other documents to substantiate and/or prove the "allegations". Since I did not do that, they cannot be understood as allegations at all. Points 1 through 9 are merely my personal impressions. However, these impressions made me decide to devote part of my time to creating an additional open source cryptographic tool that thoroughly fulfills my personal conceptions and my personal wishes for such a tool.
(back to question list)

*******************************************************************************************************







*************************************************************************************************

3.) Why does Academic Signature use elliptic curve cryptography?

This is an easy question. It is a new tool and as such should use the best available algorithms. It is a well established opinion in the cryptographic community that elliptic curve cryptography is the most future proof asymmetric cryptosystem (if quantum computing is not included in the threat model). For the first generation systems - RSA and elGamal - subexponential attacks do exist; for elliptic curve cryptography they don't.
Up to fall 2015 the NSA strongly advised migrating to ECC and posted a web page explaining the recommendation:
 http://www.nsa.gov/business/programs/elliptic_curve.shtml
(The page has been removed in fall 2015 and I truncated the link to one layer above.) There this point was competently elaborated on.
Whereas the benevolence of the NSA is well within question, its competence is well beyond it.
The NSA recently changed its mind and switched to discouraging American companies from transitioning to ECC, allegedly to prevent unnecessary investment. I suspect ulterior motives for that move: they simply lost their grip on controlling ECC, i.e. on limiting its strength and distribution.

At last: It was fun to code elliptic curve algebraic operations for fast execution with long numbers. (back to question list)
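For readers who want a feel for what coding elliptic curve operations means, here is a minimal affine-coordinate sketch over a small textbook curve. Academic Signature's real code works in Jacobian projective coordinates with long numbers; this toy version only shows the group law and double-and-add scalar multiplication:

```python
# Toy elliptic curve group law over F_p in affine coordinates, with
# double-and-add scalar multiplication. Textbook example curve:
# y^2 = x^3 + 2x + 2 over F_17, generator G = (5, 1) of order 19.
# (Academic Signature's real code uses Jacobian projective coordinates
# and multi-precision integers; this sketch only shows the principle.)
def ec_add(P, Q, a, p):
    if P is None: return Q                 # None is the point at infinity
    if Q is None: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:    # P + (-P) = infinity
        return None
    if P == Q:                             # tangent slope for doubling
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                  # chord slope for addition
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def ec_mul(k, P, a, p):                    # double-and-add
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```

On this curve, ec_mul(5, (5, 1), 2, 17) yields (9, 16), and ec_mul(19, (5, 1), 2, 17) returns the point at infinity, reflecting the generator's order of 19.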


*******************************************************************************************************






*************************************************************************************************

4.) Why doesn't academic signature rely on a Public Key Infrastructure?

I need the crypto tool primarily to sign letters of reference or testimonials for students. There is no PKI that keeps a record of professional status to the degree I need it and that is free as well as easy to use. I prefer the consistent use of self published keys. In this case it is obvious that, before accepting an external public key, the user has to thoroughly evaluate the trustworthiness of the source of the public key - no ambiguity. Self reliance is allowed and required. The tool is not meant to be used in a fully anonymous community. If in doubt about the authenticity of a new public key, you pick up the phone and call!



(back to question list)

*******************************************************************************************************






*************************************************************************************************

5.) Why did I introduce new symmetric algorithms?

It is a well established dogma that you should never ever invent a symmetric algorithm yourself. I seem to be one of those dummies who think they are smarter than anyone else and blatantly violate this dogma...
1.) The National Security Agency (NSA) of the United States of America clearly has the mandate to be able to eavesdrop on all communication of non-US citizens.
2.) The NSA has a strong influence on standards and requirements for cryptographic algorithms and on selecting and publicly recommending cryptographic algorithms for use in the United States. It is well aware of the fact that foreign civilian organizations tend to adopt its (and NIST's) recommendations.
3.) It has a history of consistently using its influence to lessen the key size and limit the block size of recommended algorithms (DES: 8 byte, AES: 16 byte) without giving convincing evidence of how this serves security for users of the algorithms.
It thus seems likely that this is done to facilitate eavesdropping. This inclined me to develop new symmetric algorithms which greatly exceed the NSA's limits on key size and block size. As a side effect, dropping these restrictions greatly facilitates designing safe ciphers. Furthermore, using large block sizes (>≈ 64 byte) allows the creation of ciphers whose safety is stable against marginal algorithm changes.
Generally, my crypto package "Academic Signature" is the result of a less conservative and more gutsy approach than other security packages. I systematically transgress criteria and guidelines that I cannot identify as serving security. This resulted in substantially more development work and contemplation than might be considered adequate by more traditional developers. In a related setting I felt the need to extend the range of elliptic curve domain parameters from the former maximum of an NSA recommended 521 bit curve to 1033 bit in two newly designed curves (I wanted to definitely trespass the legal milestone of 1024 bit). The search for safe elliptic curve domain parameters of high bit length is (rightfully) considered a difficult task, and developers are generally discouraged from attempting it (why? Verification of safety is not hard!).
Users who are worried about this dashing approach of Academic Signature and prefer a more submissive approach can opt to use the traditional algorithms SHA(hash), AES(symmetric cipher) and publicly endorsed elliptic curve domain parameters from ECC-brainpool for their keys, signatures and ciphers.
Lately I added the Threefish cipher and the Skein hash algorithm. This is sort of an intermediate offer:
Threefish and Skein were created by renowned cryptographers and reviewed by other competent cryptographers, so their social background is presumably much more comfortable to users. Yet some mainstream IT security people, not including the NSA in their threat model, still warn against using them "in production" because they are supposedly not yet mature enough. Threefish offers 1024 bit encryption - a substantial improvement over AES. So the NSA may not be amused about that. It is also very fast, way faster than my "fleas" ciphers and even faster than AES. So I recommend these algorithms as the default when you regularly handle big files at or beyond GB size.
Yet Threefish was created according to the criteria of the NSA controlled SHA3 competition, it uses a fixed mixing pattern, the majority of the authors live under US jurisdiction and at least one of the authors is affiliated with Microsoft. So I did not set it as the "default default".
From version 53 on I set the primary "default defaults" to Chimera and Skein. Individual defaults replacing my default selection can be set in a respective dialog.

Paranoiacs who trust neither my competence nor the NSA's benevolence nor the Schneier group's product Threefish alone should stick to the default and use my recently (fall 2015) introduced cipher "Chimera". It is a cascade of Threefish and a variant of my flight_x cipher. This would yield full safety if just one of the two components were safe, or if both had only limited vulnerability (and your system were clean, and there were no spy cameras in your office and... and... and... it is so hard to be a paranoiac ;-).

(back to question list)

*******************************************************************************************************






*************************************************************************************************

6.) Why develop my own longnumber arithmetic module?

Again there is plenty of good advice to the contrary: use an existing longnumber library! It is unprofessional, difficult and superfluous to do it yourself. If you do it yourself, it is going to be slow and deficient.
So why did I do it myself ?

1.) I expected it to be fun and it was fun.
When developing, you frequently must "walk in the mud", i.e. use defective tools, e.g. for creating a GUI.
So if you code pure mathematics, it feels really good to be in a clean setting at last. The compiler's basic mathematics libraries are (hopefully) error free, and almost all you have to rely on is yourself.

2.) It is easy to check correctness, and despite the crypto-croakers' omnipresent melody I got it right and it is fast.
To my knowledge, mine is the only open source longnumber module around that is truly scalable. The longnumbers grow and shrink as necessary, and there is no need to assign a special longnumber size at allocation time. Let me add the remark that the majority of commercial products do not use freely scalable arithmetic. Instead of fixing this deficiency, they influenced standards setting bodies into adding regulations to catch overflow attacks targeting this flaw. Most naturally, I refuse to obey these stupid parts of the standards in Academic Signature.
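As a sketch of what "truly scalable" means, here is a toy limb-based long-number multiplication in Python. Academic Signature's actual module is written in C++ and is far more complete; this fragment only shows numbers growing to exactly the size they need and shrinking back, with no size fixed at allocation time:

```python
# Toy scalable long-number representation: a list of base-2^16 limbs,
# least significant first, that grows and shrinks as needed.
# (Illustration only - not Academic Signature's actual C++ module.)
BASE = 1 << 16

def to_limbs(n):
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs                      # the empty list represents zero

def from_limbs(limbs):
    n = 0
    for limb in reversed(limbs):
        n = n * BASE + limb
    return n

def limb_mul(a, b):
    # Schoolbook multiplication; the result is allocated to exactly the
    # size the product can need, then trimmed back ("shrink on demand").
    res = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = res[i + j] + ai * bj + carry
            res[i + j] = t % BASE
            carry = t // BASE
        res[i + len(b)] += carry
    while res and res[-1] == 0:       # drop leading zero limbs
        res.pop()
    return res
```

Because the limb list is resized dynamically, no input can overflow a fixed buffer - the class of overflow attack mentioned above simply has no target.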

3.) It is desirable that a security package be as self reliant as possible.
It has happened to me more than once that a library or environment I used was changed and confronted me with newly introduced errors. This usually triggered a search for patches and workarounds in a flurry that can at best be described as displeasing. On top of everything else, this reveals an attack path via altering external libraries used in the security package...
So I learned my lesson and did as much as possible myself.

For the GUI (wxWidgets) I had to rely on others' libraries that were not error free. The wxWidgets source code is not digitally signed, and the maintainers refused my request to sign it; the checksum was supposed to suffice. Furthermore, a check revealed e.g. that wxWidgets (2.8) introduces memory leaks (yuck!), but I can't work myself into the ground trying to debug other people's software - and who knows whether it is wxWidgets' fault at all. Maybe the leaks are introduced by even more basic routines of the OS. At least I linked the wxWidgets libraries statically and thus conserved the state that works reliably.
(Don't misunderstand me: wxWidgets is a great tool. The developers of this tool deserve praise. I don't know of any other tool that gets even close in aiding multiplatform GUI-Development. Yet I wish they would get their act together and employ modern authentication means.)
(back to question list)

*******************************************************************************************************






*************************************************************************************************

7.) Why interweave the GUI in academic signature?

GnuPG, the other open source tool for asymmetric cryptography, is to be used via the console. Werner Koch had good reasons not to incorporate a graphical user interface (GUI) in his GnuPG.
The modules needed for a GUI are usually not written with the level of care necessary for a security package and may (and will) contain bugs, memory leaks and the like. They are maintained by people who are content if functionality is given. Their job is hard enough! Yet the GUI may open attack paths and handles sensitive information like passphrases and key info. So why do I interweave the wxWidgets GUI and not stick to the console, which is a transparent interface?
1.) While some people may feel comfortable typing cryptic commands and long hex numbers into the console, for me this is simply not acceptable. I refuse to learn complicated strings and commands by heart (I am a physicist!). I probably share this disgust with the vast majority of computer users.
So a GUI is necessary after all. Accepting this, it is safer to develop the GUI in conjunction with the cryptographic routines and not leave it to other developers who may or may not apply due diligence. At least the protection of software integrity can then include the GUI as well as the cryptographic primitives, if the GUI routines are linked statically.
2.) In cryptographic procedures, certain protocols are to be obeyed. Being in charge of the GUI, I can use it to remind users of necessary actions at the right time, guide their actions without bossing them around, give help in correctly interpreting the outcome of cryptographic operations, warn if they are about to do something risky and give explicit error messages. I want to take this responsibility myself and don't want users to be at the mercy of future GUI developers of unknown expertise.
(back to question list)

*******************************************************************************************************






*************************************************************************************************

8.) Why do I offer separate symmetric cryptography?

Academic signature is a tool for asymmetric cryptography. Asymmetric cryptography can accomplish en-/deciphering and signing/verifying. Why would I need old-fashioned direct use of symmetric cryptography?
Some people may suddenly and unexpectedly feel the need to communicate with you confidentially. This has happened to me once in a while. Most of these normal people unfortunately refuse outright to strain their brains. This is sad, but it is not up to me to criticize it.
The furthest you may get is to tap their childhood knowledge of password usage and of ciphers in agent movies. If you are very, very lucky, you may be able to talk them into installing a crypto tool, e.g. academic signature (technically this is easy). In this case you may be able to convince them to exchange a password by phone and send an enciphered file that can be deciphered using this password. This is the reason for including “Hardened Symmetric Crypto” in academic signature's menu.
You would most certainly have no chance whatsoever to talk “normal people” into using asymmetric cryptography, creating a public/private key pair and telling you their public key. Again, this is very, very sad. Since the basics are so easy, it is beyond my mental horizon to understand this. Yet this is the way it is....
With more than state of the art stretching and salting, block and key sizes up to 4096 bit, protection of cipher integrity and an explicitness that allows users to understand what is done in the respective dialog, academic signature surely offers the highest security you can get with symmetric cryptography anywhere.
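To illustrate what "stretching and salting" means mechanically, here is a minimal sketch in C++ (the language academic signature itself is written in). The mixing function is a toy placeholder, not the actual primitive used in academic signature, and all names and constants are made up for illustration.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Toy one-way mixing step (FNV-1a style). A placeholder only --
// academic signature uses its own, far stronger primitives here.
uint64_t toy_mix(uint64_t state, uint8_t byte) {
    return (state ^ byte) * 0x100000001b3ULL;
}

// Derive a key from passphrase and salt. The salt makes precomputed
// dictionaries useless; iterating the mixer many times ("stretching")
// makes every brute-force password guess cost 'rounds' extra passes.
uint64_t stretch_key(const std::string& pass,
                     const std::vector<uint8_t>& salt, int rounds) {
    uint64_t state = 0xcbf29ce484222325ULL;   // arbitrary initial value
    for (uint8_t b : salt) state = toy_mix(state, b);
    for (char c : pass)    state = toy_mix(state, static_cast<uint8_t>(c));
    for (int r = 0; r < rounds; ++r)          // the stretching loop
        state = toy_mix(state, static_cast<uint8_t>(r));
    return state;
}
```

The same passphrase with a different salt yields an unrelated key, which is the whole point of salting.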

And there may be Quantum Computing:
Recently, concern has been expressed about the advent of quantum computing within the next decades, which - using a modified Shor's algorithm for the discrete log problem in elliptic curve algebra - might render the ECDL problem solvable for powerful organizations (and break RSA and ElGamal as well).
The NSA is now publishing documents advising the companies under its guidance not to migrate from RSA to ECC, so as not to burn money on a possibly unnecessary project.
The threat to ECC is real, but I do not believe in the NSA's altruistic motives. A year ago, when it seemed they could control and govern ECC, they strongly urged the commercial sector (their groupies) to migrate to ECC because of imminent threats to the security of RSA. Now that they are losing their grip on ECC, it doesn't seem so attractive any more and RSA, surprisingly, is seen as less endangered than before.......
At any rate, they promise to work on a transparently selected future suite of algorithms and protocols which will be resistant to quantum computing.
They had better do! Who has the most dangerous dirty secrets (assassinations, bribery, blackmail etc.) to keep confidential? It's not you and me - I bet it is them :-))

A cheap evasion for us, should quantum computing (QC) suddenly arrive tomorrow, is to resort to symmetric enciphering. There is Grover's algorithm for applying QC to database searching or to breaking symmetric ciphers, but it is not nearly as effective against symmetric ciphers as Shor's algorithm is against public key crypto. Simply doubling key length and block length will suffice to keep symmetric crypto safe. So as an insurance for day X, users of academic signature can immediately switch to symmetric crypto with huge block and key lengths, should ECC be rendered insecure against the NSA on day X. (Another easy solution would be to keep different confidential "public keys" for different circles of friends according to different levels of trust.)
We don't need to worry about QC!
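The Grover argument above amounts to two lines of arithmetic: searching a space of 2^k keys takes roughly 2^(k/2) quantum steps, so a k-bit key retains only about k/2 bits of quantum security, and doubling the key length restores the original classical level. A back-of-the-envelope sketch:

```cpp
// Grover's algorithm halves the effective bit strength of a symmetric key:
// 2^k keys are searched in about 2^(k/2) quantum steps.
int grover_security_bits(int key_bits) { return key_bits / 2; }

// So doubling the key length restores the original security level.
int key_bits_for(int target_security_bits) { return 2 * target_security_bits; }
```

A 256-bit key thus still offers 128-bit security against a quantum adversary, and a 512-bit key restores the full 256-bit level.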


(back to question list)

*******************************************************************************************************






*************************************************************************************************

9.) Why did I design a new Pseudo Random Number Generator and new cryptographic hash functions for academic signature?

PRNG:
1.) While there may be several established versions of PRNGs around, and several recommendations on how to build them from symmetric ciphers, I could not find one that is clearly earmarked as a standard. Furthermore, the constructions from small-blocksize ciphers are awkward and I can do better (excuse my hubris ;-). So I felt free to set one up myself.
2.) I favor the concept of maximal self-containment in a crypto package and do not want to involve entropy harvesters from the system. They may be an easily visible target for attacks by high-budget organizations trying to break your privacy and security.
3.) It is fairly easy to create a safe PRNG if you have a cryptographically secure one-way function with good statistical properties. I do have such one-way functions in several variants and already use them for the “Fleas” ciphers. So I take a large state vector (2011 bytes), derive a consumable pool of 2011 random bytes by applying one variant of such a one-way function, and subsequently propagate the state vector with another variant. When the consumable pool is used up, the process is repeated. This results in a fully protected internal state.
The degree of difficulty is comparable to that of designing a cipher (i.e. easy for large block sizes). The adversary doesn't know the internal state or key, respectively, but may well know the random output or the cipher. The adversary's goal is to find the key of the cipher or the internal state of the PRNG. Thus the adversary fundamentally cannot know what was happening in your system during state vector propagation or enciphering, respectively.
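The pool-and-state construction described above can be sketched roughly as follows. This is a structural illustration only: the state size matches the 2011 bytes mentioned, but the toy mixing function merely stands in for the actual Fleas-based one-way functions, and all names are made up.

```cpp
#include <cstddef>
#include <cstdint>

constexpr size_t STATE_BYTES = 2011;   // state size mentioned in the text

struct SketchPRNG {
    uint8_t state[STATE_BYTES] = {};   // protected internal state
    uint8_t pool[STATE_BYTES]  = {};   // consumable output pool
    size_t  used = STATE_BYTES;        // pool starts empty; refill on first use

    // Toy stand-in for a keyed one-way function; the 'tweak' selects the
    // variant, mirroring the two distinct variants described above.
    static void toy_oneway(const uint8_t* in, uint8_t* out, uint8_t tweak) {
        uint8_t acc = tweak;
        for (size_t i = 0; i < STATE_BYTES; ++i) {
            acc = static_cast<uint8_t>(acc * 131u + in[i] + 7u);
            out[i] = acc;
        }
    }

    uint8_t next_byte() {
        if (used == STATE_BYTES) {
            toy_oneway(state, pool, 1);    // variant 1: derive the output pool
            toy_oneway(state, state, 2);   // variant 2: propagate the state
            used = 0;
        }
        return pool[used++];
    }
};
```

The key property is that the output pool and the next state are derived by two different one-way functions from the same state, so observed output reveals nothing about the state's evolution.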

Hash function:
Designing presumably secure hash functions (which I also did) is significantly harder. In fact this is by far the hardest task. In this case the adversary knows, and can reconstruct, each and every single bit flip that happens during hash generation. There are no secrets. Finding one collision, e.g. by finding one input bit difference that cancels in this process, would be enough to consider the hash algorithm broken.
You might argue that, in the framework of provable security, you can derive a provably secure hash function from a secure cipher, e.g. via a Merkle-Damgård construction, and be content and secure. This, however, is just the official truth..... It is the argument I would present in a hearing or a hostile discussion.
The real truth is that there is no such thing as a perfect cipher (indistinguishable from noise to the adversary): every real, practical cipher is e.g. brute-forceable and thus distinguishable from noise. So the premise of “provable security” is never fully true; you have to deal with “almost secure” ciphers and need special additional provisions to treat the difference between secure and almost secure in the hash generation process.
Designing secure hash functions posed a real challenge. Addressing challenges is fun ;-)
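For readers unfamiliar with the Merkle-Damgård pattern mentioned above, here is a toy sketch: a compression function is iterated over fixed-size message blocks, with the message length appended at the end (the classic "Merkle-Damgård strengthening"). The compression function here is a deliberately weak stand-in, useful only to show the structure, not a real cryptographic primitive.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr size_t BLOCK = 8;   // toy block size in bytes

// Toy compression function: absorbs one block into the chaining value.
uint64_t toy_compress(uint64_t chain, const uint8_t* block) {
    for (size_t i = 0; i < BLOCK; ++i)
        chain = (chain ^ block[i]) * 0x100000001b3ULL + i;
    return chain;
}

// Merkle-Damgard iteration: pad, append the message length, then
// fold the blocks into the chaining value one after another.
uint64_t md_hash(const std::vector<uint8_t>& msg) {
    std::vector<uint8_t> m = msg;
    m.push_back(0x80);                           // unambiguous padding marker
    while (m.size() % BLOCK != 0) m.push_back(0);
    uint64_t bits = static_cast<uint64_t>(msg.size()) * 8u;
    for (int i = 7; i >= 0; --i)                 // length block (strengthening)
        m.push_back(static_cast<uint8_t>(bits >> (8 * i)));
    uint64_t chain = 0xcbf29ce484222325ULL;      // fixed initial value (IV)
    for (size_t off = 0; off < m.size(); off += BLOCK)
        chain = toy_compress(chain, m.data() + off);
    return chain;
}
```

The security of the whole construction reduces to the compression function, which is exactly why the "almost secure cipher" caveat above matters.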

(back to question list)

*******************************************************************************************************






*************************************************************************************************

10.) Why does academic signature not use or at least support smart card solutions?

Many experts claim that only a smart card solution can be safe. The private key sits safely on the smart card, is supposed never to leave the card and is concealed even from the holder of the card. Using a smart card is the only way to produce electronic signatures that are considered legally equivalent to traditional ink signatures in Germany.
The advantages claimed by smart card proponents are:
a) Handling a smart card is comparable to handling a traditional key. So users will intuitively treat it right.
b) The user cannot copy the smart card. If the smart card is lost or stolen, the user cannot sign any more and is forced to get a new one. He/she cannot go on using a compromised key.
c) The user is protected from blackmail, because he cannot give away his private key information even if he wanted to.
d) Even if a smart card is used on a compromised system (virus, trojan, rootkit or the like), the key stays safe.

My reply:
Yes, using a smart card and a secure card reader is a safety improvement. If I could, I would give a smart card and a smart card reader to every user of academic signature for free and support that in the software.
Yet I think the advantage of using a smart card has been greatly exaggerated by their proponents and disadvantages have been played down.
a) I agree with statement a).
b) I also accept this point as an advantage. However, I would like to be able to make copies for my personal convenience. I also like to have duplicate car keys, don't you? And I don't need some organization to boss me around about admitting to the loss of a key - do you? So it is certainly good to impose this on other people, but I'd rather not have it myself ......hmmmm.
c) This is outright ridiculous. There is no need to comment on this.
d) This is an advantage for the card-issuing trust center because it limits their liability. To the user it barely makes a difference. The compromised system can pass the card a hash of an order for 1000 overpriced electric blankets, or of your offer to sell your kidney for transplantation, while telling you it is signing your income tax declaration. Your signed organ donation will of course be forwarded to the mafia and not to the revenue office. The signature is legally binding. You cannot sue the trust center, because the key never left the card .....hmmm.
I haven't even spoken about the disadvantages yet: you must feed yet another administration for certifying the security of the products and for producing and distributing cards and card readers. You can bet these administrations will be cumbersome and expensive.
You can be sure to get a "special" card using a weak key if your government does not like you. I would expect that even from my administration in Germany (they did buy FinSpy lately in order to eavesdrop on people they don't like!), let alone the US or even Russia. You have no chance to check, because you cannot access the key information.
You are fully liable but have to use means that were created beyond your control and are in critical parts concealed from you. This is not good.

(back to question list)

*******************************************************************************************************






*************************************************************************************************

11.) Is it not dangerous to make a safe cipher publicly available?

This is a hard one.
As much as I dislike government agencies' tendency to eavesdrop, I do acknowledge that I may profit from their uncovering of terrorist plots. Most naturally, I do not want me or my family to be blown to pieces by some crazed extremist, nor do I want to give protection to child pornography distributors or lovers of other repellent vices.
From a certain perspective it may be justified to view strong cryptography as a weapon, and I like the European way of strictly limiting access to guns. Yet strong cryptography is mainly a defensive weapon and may be more comparable to a bullet-proof vest than to a gun. I wouldn't mind distributing such protective gear to anyone, even at the risk of giving it to the mafia and making it harder for law enforcement to shoot mafiosi.
It is my deep belief that the benefit of distributing defensive digital gear to empower people harassed by powerful organizations greatly outweighs the risk of giving a free advantage to the bad guys. Benevolent governments should concentrate on seizing swords (i.e. digital bed bugs), not restrict access to shields (free crypto software). My own government lately used my tax money to buy the digital vermin "FinSpy" and joined the illustrious customer list: Mubarak's Egypt, Turkmenistan, Bahrain, .......
(back to question list)

*******************************************************************************************************






*******************************************************************************************************

12.) When will there be a critical evaluation of the program's safety by an independent committee?

Never (probably).
I am an amateur, and as such I have a natural interest of my own not to recklessly cut corners in order to "bring something to the market soon". So I don't need to be pushed to try hard to do things right. Yet I know well the value of a true evaluation by bright people and regret not being able to discuss safety aspects with such people more frequently.
The program academic signature is intended for bright people who read manuals and try to understand what they are using. Thus I have some hope that, should a user find a potential weakness in the program, the user will talk or mail to me about it. I will then do my best to strengthen the respective target of attack.
Allow me to state here that I am confronted with committees and hearings frequently in my professional life (teaching and developing curricula for students of Engineering and Business Administration). Sadly, most of the time the ordinances resulting from such hearings are purely formal in nature and hardly ever improve the subject at hand.
So it is my deep belief that, in general, we need more personal commitment from the people doing things, and fewer committees vainly trying to force people to do things "right" - things they do not like and which both sides may not understand.
Whoever wants to review the code is more than welcome and will get my full support.

(back to question list)

*******************************************************************************************************







*******************************************************************************************************

13.) Why did you not strengthen your program against side channel attacks?

Strengthening elliptic curve arithmetics against side channel attacks is a popular current activity. OpenSSL introduced it lately, and the OpenSSL developers obviously regarded it as a necessary and urgent security fix.
You cannot fully prevent side channel attacks; all you can do is impede them. The fix usually involves changing the arithmetics and comes at a cost in execution time. It is necessary in systems where the adversary may have access to high-resolution power consumption data or electromagnetic stray radiation, may run concurrent processes on other cores of a multicore system, or may have high-precision timing information on execution times.
You may find these conditions
A) in crypto-smartcards which may get into the hands of the adversary.
B) in servers, where adversaries may run concurrent processes or may precisely determine the timing of (many) crypto calculations.

None of this applies to the single-user computer sitting in the office of a lecturer who manually signs letters of reference. If an adversary manages to run processes on this computer, it has been hacked and a side channel attack is the least of the user's problems. So it would be unwise to sacrifice performance to encumber side channel attacks.

Let me add some further remarks regarding the smart card case (A). The awkward security concept of smart cards relies on the owner/handler of the card being considered the adversary, who may under no circumstances gain access to key information. If you have the card in your possession, you can get as close to it as you want and can e.g. monitor power consumption or electromagnetic stray radiation with as much resolution as you desire. Trying to prevent this by whatever barrier is an uphill battle that cannot be won. Whatever the engineers claim, be suspicious! They cannot prevent it. Believe me, I am a physicist, and in my main research field I used to record the hiccups of single atoms and measure magnetic fields close to the quantum limit ;-). To state it uncouthly: you cannot keep secret what you are eating from someone sitting on your lap.

Another short remark about the server case (B).
The EC-OpenSSL developers deserve respect for hampering side channel attacks.
Yet a system producing digital signatures on external demand should not be configured to be able to run processes owned by adversaries.
It should also not allow the adversary to trigger (many) signing processes with precise timing. (I presume a study on exploiting this to reconstruct a private ECC key triggered the side channel strengthening activity in OpenSSL.)
So the strengthening just attenuates the consequences of reckless behavior that should not happen in the first place. In my opinion, a random time delay for signature handout would be a more reasonable fix, since it does not tax the processor (rather than altering the arithmetics).
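As a hedged sketch of that random-delay idea (with illustrative names, not taken from academic signature or OpenSSL): pause a random number of milliseconds before handing out a signature, so an external client cannot measure the precise duration of the signing computation. A real deployment should draw the delay from a cryptographically strong RNG rather than the plain Mersenne Twister used here for brevity.

```cpp
#include <chrono>
#include <random>
#include <thread>

// Sleep for a random interval in [0, max_ms] milliseconds before
// handing out the signature, masking the true computation time.
// NOTE: std::mt19937 is NOT cryptographically secure; it is used here
// only to keep the sketch short.
void randomized_handout_delay(int max_ms) {
    static std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> dist(0, max_ms);
    std::this_thread::sleep_for(std::chrono::milliseconds(dist(gen)));
}
```

The delay hides timing information from a remote observer at essentially zero CPU cost, in contrast to constant-time arithmetic, which taxes every single operation.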

For me it all boils down to the following principle: Keep the adversary at a good distance instead of trying to succeed in dirty infighting on your own territory.

(back to question list)

*******************************************************************************************************






*******************************************************************************************************

14.) Why do you not consistently use authenticated symmetric ciphers in academic signature?

Well, I claim to use authentication consistently - but that is precisely why I don't always use it.
The dogma says: "Never ever use an unauthenticated cipher!"

It is sad to see dogma invade cryptography like mold invades noodles forgotten on a plate in summer. Dogma belongs in the religious realm, not in science.
Fact one is: the authentication of a cipher (or plain message) comes at a price in performance. It takes roughly twice as long as enciphering only. (There is a somewhat faster one-pass mode, "OCB", which even made it into some standard. I don't use it because I could not find any convincing logical explanation, on an abstract level, of how it works and why it is secure. The mathematical proof of its security given in the primary peer-reviewed paper is so laden with details of practical bit shuffling that I refused to follow it through. I am used to mathematical proofs of a different style and plainly don't trust it.)
Fact two is: authentication is needed to ward off the adversary's tampering with the cipher (bit-flipping attack). This is possible only if the adversary already knows part or all of the message and is in control of the transmission channel. Thus it is necessary only for direct usage of symmetric encryption to protect the cipher en route. This is where I use it in academic signature, and almost nowhere else.

a) Transfer of a private key from your office PC to your notebook?
You carry it in your pocket on a memory stick. The adversary has no access to the channel. Consequently I use "encryption only" here.

b) Protecting an elliptic curve cipher en route?
Authentication is done by digital signatures, which are much stronger anyway. Furthermore, the symmetric key used with an ECC public key cipher is created on the fly, and its knowledge by the sender proves nothing other than that the sender is the sender. (The ECIES "proposed standard" recommends this bullshit.) Consequently, for the symmetric part I use "encryption only".

c) Storage of your private key on disk?
The adversary must have read/write access to your disk AND must already know your private key to launch the attack. If these prerequisites are met, a possible bit-flipping attack is the least of your problems. Consequently I use "encryption only" here as well.

d) Storage of your trusted public keys on disk?
This is the only exception. One might argue that, again, the adversary has to have read/write access to your disk and has to know the public keys already to launch the attack. But the public keys might well be known to the adversary. And if you use a Windows or an Apple OS, the NSA at least - and god knows who else - has read/write access to your disk ;-)
So in this case there might be a need to protect the integrity of the public keys' cipher. Since I don't want to take any chances, academic signature protects this cipher with "encryption and authenticity", using an HMAC ("encrypt then authenticate" pattern).
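The "encrypt then authenticate" pattern looks roughly like this. The XOR cipher and the ad-hoc MAC below are toy placeholders (academic signature uses a real cipher and an HMAC); only the order of operations matters here: compute the MAC over the ciphertext, and verify the MAC before deciphering anything.

```cpp
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Toy cipher: XOR with a one-byte key (placeholder only!).
Bytes toy_encrypt(const Bytes& plain, uint8_t key) {
    Bytes c(plain);
    for (auto& b : c) b ^= key;
    return c;
}

// Toy MAC: keyed polynomial accumulator (placeholder for HMAC).
uint64_t toy_mac(const Bytes& data, uint64_t mac_key) {
    uint64_t tag = mac_key;
    for (uint8_t b : data) tag = tag * 131u + b;
    return tag;
}

struct Sealed { Bytes cipher; uint64_t tag; };

// Encrypt first, then authenticate the ciphertext.
Sealed seal(const Bytes& plain, uint8_t enc_key, uint64_t mac_key) {
    Bytes c = toy_encrypt(plain, enc_key);
    return {c, toy_mac(c, mac_key)};   // the MAC covers the ciphertext
}

// Verify the MAC before touching the ciphertext; reject on mismatch.
bool unseal(const Sealed& s, uint8_t enc_key, uint64_t mac_key, Bytes& out) {
    if (toy_mac(s.cipher, mac_key) != s.tag) return false;
    out = toy_encrypt(s.cipher, enc_key);  // XOR is its own inverse
    return true;
}
```

Any bit flipped in the stored ciphertext changes the tag, so tampering is detected before decryption.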

Let me stress again: submission to dogma is the death of thinking and the death of progress. Our society is all too ready to accept dogma and to install agencies and institutes that make money from certifying submission to dogma as indispensable "seals of quality". Dogma is so popular because it protects stupid people from having to think.

(back to question list)






*******************************************************************************************************

15.) Until recently the NSA had been encouraging the replacement of RSA by elliptic curve cryptography. Is this not a strong indication that they can crack it and that it is insecure?

In the light of Edward Snowden's disclosures this is indeed a significant allegation. Even the most outlandish conspiracy theories regarding the NSA seem to be surpassed by reality. However, the newly publicized disclosures rather seem to indicate that RSA and/or DH have been rendered less secure by some progress achieved by this infamous agency. RSA and DH are still used for enciphering internet traffic, and the migration to ECC is sluggish, so "groundbreaking progress" most likely refers to these older cryptosystems. In fact, a recent paper convincingly demonstrated how the NSA might have achieved "groundbreaking progress" in breaking into encrypted communications by subverting the standard DH key agreement.
Now let's turn the paranoia switch to maximum and comb through crypto publications. It seems to me that the alleged NSA spin doctoring is rather directed towards locking users to the NIST elliptic curves - gladly the small ones - and discouraging the use of others, let alone larger (i.e. higher bit length) ones.
Let me give you some examples:

a) Standards regularly advise not to develop new domain parameters, but rather to use the domain parameters published by NIST. While it is true that creating new secure domain parameters is a difficult task, the verification of their security is not hard. Thus there is nothing to be lost in an attempt to find new domains. I smell a rat in this advice and sense spin doctors at work.

b) There is plenty of (NSA funded?) scientific work available in which specific shortcuts are developed to speed up elliptic curve calculations for one specific (usually small) elliptic curve from the NIST set. I suppose a decent developer shudders at the thought of spending endless hours optimizing code for one specific elliptic curve that might not be used any more tomorrow and that is not applicable to any other curve. Again I smell a rat.

c) Standards - known to be influenced by the NSA - show other NIST domain lock-in features. On page 22 of the BSI document TR-03111, a German standard for the usage of elliptic curve cryptography, it is stated:
quote:
"Note: It is RECOMMENDED to use a hash function H() (cf. Section 4.1.2) with an output
length ℓ = τ, i.e. the output length of the hash function and the bit length of the order of the
base point G SHOULD be equal. If for any reason the hash function has to be chosen such
that ℓ > τ, the hash value SHALL be truncated to Hτ(M), the τ leftmost bits of H(M)."
Truncating the hash, instead of reducing it modulo the group order of the curve, is only well defined if NIST curves are used. They have the special property that the upper half of the group order consists of 0xff bytes. Thus truncating the hash to that bit length will, with near certainty, result in a number smaller than the group order, giving a unique and well-defined result. Any other domains, e.g. the ECC brainpool domains or my domains, will lead to ambiguity if used according to this "standard": truncate to the same bit length and then mod-reduce, or not mod-reduce? Thus there will be compatibility problems if non-NIST domains are used. I can smell a rat again.
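The ambiguity can be demonstrated with tiny made-up numbers. The toy "orders" below mimic the structural point: one has all upper bits set (NIST-style), the other does not. All values are illustrative, not real curve parameters.

```cpp
#include <cstdint>

// Keep the tau leftmost bits of a hash of 'hash_bits' bits,
// as the TR-03111 truncation rule prescribes.
uint32_t leftmost_bits(uint32_t hash, int hash_bits, int tau) {
    return hash >> (hash_bits - tau);
}

// Toy 8-bit group orders:
constexpr uint32_t n_nist_style = 0xF7;  // upper bits all ones (NIST-like)
constexpr uint32_t n_general    = 0x9D;  // a general order without that property
```

For a 16-bit toy hash 0xABCD truncated to 8 bits we get 0xAB = 171. Against the NIST-style order 0xF7 = 247 the truncated value is already below the order, so "truncate" and "truncate then mod-reduce" agree; against the general order 0x9D = 157 they disagree (171 vs 171 mod 157 = 14), which is exactly the compatibility problem described above.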

So with the paranoia level set to maximum I smell rats all over the place - I just gave you some examples of NIST domain lock-in attempts.
Turning the paranoia level back to normal, I tend to go with Occam's razor and attribute these "suspicious facts" to plain ignorance or lack of competence. It doesn't take an evil NSA to have committees crank out flawed or even stupid documents, you know!
In Academic Signature I just quietly disregarded the smelly parts of standards without making a fuss about it.
To sum it up: I would avoid the NIST domains so as not to take any chances - academic signature doesn't offer them by default. But most probably they are secure. As long as my life didn't depend on it, I could happily use them.

(back to question list)

*******************************************************************************************************

16.) It is good practice not to invent your own crypto algorithms and rather to stick to scientifically challenged and approved methods.
Why do you not follow this rule?
 
Yes, there is good reason for this rule. The highway of IT security is littered with broken, self-invented crypto. That's why I offer AES and SHA (and JH, Skein and Threefish) as scientifically challenged and officially approved algorithms alongside Fleas in Academic Signature.
A truth generally forgotten with good grace in this context, however, is that the highway of IT security is also littered with many good algorithms that were never used seriously. How about the finalists of the contest that finally led to the selection of Rijndael as AES.....?
Some are doubtlessly still to be considered safe, the main drawback being the NSA-enforced small key/block size. Especially if you disregard the restriction to small block sizes (one of the hoops the NSA makes us jump through), I no longer consider it hard to develop a secure cipher. But for some reason uniformity seems to have been set as an implicit goal. And let's not forget that the official scientific process itself has been targeted by the NSA in a despicable way and needs to be regarded with caution in the field of cryptography.

For signatures it is a necessity that they be acceptable to other entities. You may have noticed that for signatures the default hash algorithm in Academic Signature is therefore indeed SHA512.
I give myself more freedom in the field of ciphers and offer Chimera as the default. First of all, ciphers have to be safe - period.
If business people consulted me on what cipher to use, I would have to recommend AES and advise them not to use e.g. my Fleas or Chimera, or even the Threefish algorithm.

However, I feel deeply obliged to be honest, even more so in a conversation with students or other private individuals. In this case the advice I am supposed to give would simply not coincide with my personal belief.
While AES is the US-endorsed standard I have the least problem with, I am still not fully convinced that AES remains safe if a three-letter US agency decides to be your adversary. In my mind I personally and honestly rate this risk low, but still higher than the risk that my Fleas cipher might be insecure.
So if my life depended on it, especially in the face of a threat by US agencies, I would prefer my own algorithms and would encipher using Fleas_d in counter mode ("F_cnt_ld"). Others may consider this bad advice; I am only being honest. If you are extremely cautious, you may, in matters of life and death, use Chimera, a cascade of flight_x and Threefish.
Very conservative super-paranoiacs may manually cascade a newer, larger key and block size algorithm like F_cnt_ld or Threefish with AES - first encipher with F_cnt_ld or Threefish and then again with AES. (Paranoiacs should also always use the option to apply a NADA-cap to asymmetric ciphers in order to achieve zero adversary advantage.)

Some people may claim that I am advertising snake oil by not disabling the "Fleas" option - I have had to defend against such allegations in the past. But keep in mind that:
1. I offer you 1024 bit elliptic curve domains; they are undoubtedly very hard to calculate but easy to verify. The NSA wants you to have no more than 521 bit domains.
2. I am a physicist, not an engineer; territory free of any standards is my natural habitat ;-).
3. You find more than state of the art salting and stretching, state of the art integrity protection of ciphers and more than state of the art elliptic curve domain sizes. I managed to write a scalable, darn fast long number arithmetics module. Maybe I am not a complete idiot after all and may indeed be able to come up with a secure algorithm.

Decide for yourself.
The fact is that no other GUI-based open source elliptic curve cryptography tool for Windows and Linux exists so far. Thus I can be quite relaxed about the sporadic but sometimes surprisingly ferocious hostilities towards my project.

(back to question list)

*******************************************************************************************************


17.) Academic Signature uses the standard addition formulae (in jacobian projective coordinates).
Bernstein and Lange state on their SafeCurves website (http://safecurves.cr.yp.to/complete.html) that it is unlikely that this addition will be implemented correctly, and that it can only be implemented correctly at the price of decreased efficiency, quote: "it produces a slower and more complicated implementation." Is Academic Signature's ECC implementation slow and complicated, or insecure?

I am somewhat surprised by this statement from the renowned experts Bernstein & Lange. As I understand it, Bernstein is an academic and a versed developer..... I find solving this "problem" neither complicated nor inefficient.
Let's have a structured look at the problem:

a) The general point addition P + Q in jacobian projective coordinates requires about 10 multiplications, which scale with bit size n as n^2 (the Karatsuba break-even in my implementation lies somewhere around n = 4000-6000 bit, depending on processor make and speed). During the addition you need one long number comparison to decide whether Q equals P or -P (and if so, another one to decide about the sign). Adding two chained if statements doesn't seem like complexity fireworks to me.
Let me show you the relevant code excerpt from Academic Signature (in elliptic1.cpp):

int proj_jacobian::add(proj_jacobian* summand, ellipse* ewp)
{
    ......

    ......
    //check for accidental equality: check if x-values are equal
    if(u1.compare(&u2)==0)
    {
        //check if y-values are also equal
        if(s1.compare(&s2)==0)
        {
            //do point doubling
            return dopp(ewp);
        }
        else
        {
            //result is the neutral element
            x.storlong(0); y.storlong(1); z.storlong(0);
            neutral=true;
            return 0;
        }
    }
    ......
    ......
}

With all due respect - if this were too complicated for our professional developers, boy we'd have a problem! Do we?

b) Extending 10 long number multiplications that scale as n^2 by a comparison that is usually decided after comparing a single byte is definitely negligible (and easily recognizable as such, for that matter.....).

So I claim that Bernstein and Lange erred on this point.

(back to question list)

******************************************************************************************************


18.) Why are the websites of academic signature not secured by https?

(A) I apologize for that, yet (B) I think there is little to be gained by it.

A) These pages of Academic Signature are hosted by my employer, the University of Applied Sciences Wedel, and I am quite grateful for that. There is no provision yet for https-secured websites of staff members, and I didn't want to make big waves about my "special pages" and "special needs". (As a matter of fact, my university thankfully moved my crypto pages into https-secured space recently; my commercially supplied mirror domain "academic-signature.org" still remains in plain http.)

B) My employer is a German university, situated close to Hamburg and having close ties to the aviation industry (Airbus, Lufthansa Technik). Thus, as surely as a dog has ticks and fleas, we have NSA implants in our internal network. We are certainly high up on the target list of our "friends".
Securing the pages with standard https would be of little value, since these guys could probably access our webmaster's certificate anyway, or even break standard https (1024 bit RSA, lol). And I can hardly think of any other party with a substantial interest in disrupting or spoofing the Academic Signature pages whose capabilities are limited enough to be kept out by standard https.
Thus it is all the more important that I secure the files on the university network by nonstandard means. I secured the webpages and downloadable files with aca_sig ECDSA signatures. I strongly urge you to always check the digital signatures of your downloads (at least against my GnuPG signatures, better against my ECDSA signatures) before installing, or even before unzipping and compiling the source.
Furthermore, I secured my public key by reading it aloud on a video, which should be hard to fake. Up to now I have not registered any manipulation of the secured files on our network. In one incident in the past there were some strange changes to unsecured files; I believe this was due to an ordinary bug or fault that may happen accidentally, yet I responded by securing the files more systematically with my digital signature.
I frequently download my own files via Tor and check that the signatures are valid. So far there has never been a faulty signature or any other indication of tampering with the secured files.


(back to question list)

******************************************************************************************************

 19.) Why did I not automate deleting the plain text after encryption to allow for convenient locking of files on your storage medium ?

It has been pointed out to me that users want to use Academic Signature to secure files on their own storage media by enciphering them to a key pair they own themselves. It is somewhat bothersome that the user always has to delete the plain text manually after producing the cipher.
I agree on the apparent usefulness of this feature, and I even use this pattern myself in some cases. Furthermore, it would be easy to implement: add a tick box in the encipher dialog to have aca_sig delete the plain file after a quick check that the corresponding private key is indeed in the user's protected repository. Yet there are some intricate "difficulties" with this option:

Truly deleting a file is very difficult! A plain delete command merely marks the file's space as usable space and does not erase anything by itself. Thus the content can usually be recovered easily by a hostile intruder (or a nosy British customs officer, for that matter). At most this may be a suitable safeguard against your mother finding your girlie pics.
True deletion can be approximated by repeatedly overwriting the exact file space with random bits, and there are specialized software tools for this. I would not want to write such a program, since the task is highly dependent on the system hardware and of course also on the operating system - in short, "this task is a mess"!
It can probably be achieved, but I was told that where it really matters, professionals do not rely on any software tool but rather BURN! the hard disk (or other medium) under the surveillance of a trained, loyal security person. So if I added that option, I might lure users of Academic Signature into believing the deleted file were gone, which it most probably is not.
You could indeed use aca_sig ciphers for safe storage, if you moved the cipher, and only the cipher, to a long-term storage medium. For recovery, you first move the cipher back to temporary storage and decipher it there. In any case, leftover parts of the plain file must always be assumed to still reside on the temporary storage. So if you really have content on your system which is dangerous to you under the jurisdiction you live in (e.g. gay literature in Nigeria), for heaven's sake do not rely on any "delete function".
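For illustration, here is a minimal sketch of the "overwrite before delete" idea discussed above. This is hypothetical code, not part of aca_sig, and the function name is my own invention. On journaling filesystems, copy-on-write storage, or SSDs with wear leveling, overwriting through the file API does NOT guarantee the old bits are physically gone - which is exactly why I did not add such a tick box:

```cpp
#include <cstdio>
#include <cstdlib>
#include <sys/stat.h>  // POSIX stat() for the file size

// Hedged sketch of overwrite-then-delete (hypothetical, not aca_sig code).
// Even when this succeeds, the filesystem or the drive firmware may have
// kept copies of the old blocks elsewhere, so do not mistake it for true
// deletion.
bool overwrite_and_remove(const char* path, int passes)
{
    struct stat st;
    if (stat(path, &st) != 0) return false;   // file must exist
    for (int p = 0; p < passes; ++p)
    {
        FILE* f = std::fopen(path, "r+b");    // update in place, no truncation
        if (!f) return false;
        for (off_t i = 0; i < st.st_size; ++i)
            std::fputc(std::rand() & 0xFF, f); // fill with pseudo-random bytes
        std::fflush(f);
        std::fclose(f);
    }
    return std::remove(path) == 0;            // finally unlink the file
}
```

Note that even this sketch only scrubs the blocks currently mapped to the file; editor backups, swap space, and relocated blocks remain untouched.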

I am under accusation of offering snake oil by not always sticking to fishy, NSA-endorsed standards. In good faith I call this accusation false and short-sighted. Yet were I to offer the "pseudo-delete" tick box, my good faith could indeed erode.....


(back to question list)

******************************************************************************************************




 20.) Will the advent of quantum computing make Academic Signature obsolete ?

Recently, concern has been expressed about the advent of quantum computing (QC) within the next decades. A quantum computer running a modified Shor's algorithm for the discrete log problem in elliptic curve algebra might render the ECDL problem (and RSA and ElGamal as well) solvable for powerful organizations with access to such a machine.
The NSA is now publishing documents advising companies not to migrate from RSA to ECC if they haven't done so yet. The alleged reason is to avoid burning money on a possibly unnecessary migration project.

The threat to ECC is real, but I do not believe in the NSA's altruistic motives. A year ago, when it seemed the NSA could control and govern ECC, they strongly urged the affiliated commercial sector to migrate to ECC because of imminent threats to the security of RSA. Now that they are losing their grip on ECC - e.g. this webpage doubles their limit and offers 1024-bit ECC - it doesn't seem so attractive to them any more, and RSA, surprisingly, is now seen as less endangered than before....... (Remember the press releases about the imminent cryptocalypse?)
At any rate, they promise to work on a transparently selected future suite of algorithms and protocols, which will allegedly be resistant to quantum computing.
They had better! Who has the most devastating dirty secrets (murder, torture, perversion of justice, bribery, blackmail, perjury etc.) to keep? It's certainly not you and me - I bet it is US (and other states') agencies, which consistently see themselves as above the law.

There is a cheap way for us law-abiding mortals to evade the breakdown of privacy, should QC become available to the NSA and its cronies tomorrow: resort to symmetric ciphers. Grover's algorithm applies QC to database searching and to breaking symmetric ciphers, but it yields only a square-root speedup and is thus far less effective than Shor's algorithm is against public key crypto. Doubling key length and block length will suffice to keep symmetric crypto safe.
Academic Signature already offers block ciphers with block sizes of up to 4096 bit (the US-endorsed AES is fixed at a meager 128 bit block and at most a 256 bit key) and arbitrary key size. So as an insurance for day X, users of Academic Signature can immediately switch to symmetric crypto with huge block and key lengths, should ECC be rendered insecure against the NSA.
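The Grover arithmetic behind this claim can be spelled out in two lines. This is a back-of-the-envelope sketch of the reasoning, not code from Academic Signature: Grover's algorithm finds a k-bit key in roughly 2^(k/2) quantum steps instead of the classical 2^k, i.e. it halves the effective security level, so doubling the key length restores it.

```cpp
// Sketch of the Grover key-length argument (illustration only):
// effective post-quantum security of a k-bit symmetric key is ~k/2 bits,
// because brute force drops from 2^k to about 2^(k/2) steps.
int grover_effective_bits(int key_bits)
{
    return key_bits / 2;       // square-root speedup of exhaustive search
}

// Key length needed to keep a desired security level against Grover.
int key_bits_needed(int security_bits)
{
    return 2 * security_bits;  // double the key to compensate
}
```

So a 256-bit key retains roughly 128-bit security against a quantum adversary, and a 512-bit key restores the full 256-bit level - key sizes Academic Signature already supports.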

There would be another solution for users of Academic Signature on day X:
Keep a hierarchy of confidentiality for your "public" keys: an outer shell for the public, with a "publicly published" public key for everyone. This shell would be accessible to state agencies with access to quantum computing and thus, at the price of some effort on their part, be insecure against them.
Keep a second-level key for your trusted environment, say the 50 colleagues in your company, and share the public component of this key only within this circle. Regard this key as a corporate secret.
Then add one or more primary-level keys for very select groups of friends whom you ultimately trust. Again, share the corresponding public keys only within this group and keep them secret from everyone else. Paranoiacs may assign one key pair to each friend and arrive at the logical equivalent of symmetric encryption. You can easily do that with Academic Signature. QC cannot crack your ciphers if the public key is unknown!
Let me draw the following conclusion: there is not just a binary set of two options, symmetric or asymmetric encryption. In fact, there is a continuum of combinations of the two at our disposal. On day X, let's use this continuum intelligently. It is my gut feeling that this continuum will be the solution against QC. Using NADA capped ECC ciphers in Academic Signature is one option to go QC-safe today already.
It is also my gut feeling that the candidates for QC-safe standard-pattern asymmetric crypto (e.g. lattice-based crypto or the seasoned McEliece) will succumb to other quantum algorithms yet to be found. As a seasoned physicist, I have reasonable trust in my gut feeling regarding quantum physics.

So let's not worry about QC! On the political side, the advent of QC and the subsequent exposure of some more of the nation-state agencies' dirty secrets will give us the next chance to muck out the hog house of democracy's secret agencies. Unfortunately, we missed the last chance, which Edward Snowden gave us.



(back to question list)

******************************************************************************************************