Sunday, 1 February 2015

Solving the code delivery problem by hashing the GET'd source



TLS gives you confidence that you're talking to the servers you trust, right? But why should I give any confidence to a page (today, most likely a full application) whose source code I have never read, and which nobody else has studied either?


Yes, you can read the source code, but you will very likely be dealing with the output of a template framework: indentation completely messed up, no comments, no documentation, no license, ... things you would normally have when using a local application.


Of course, not everyone is interested in giving clients the ability to read and study the source, but my impression is that even free software advocates have no choice when writing web applications, ending up shipping what is effectively obfuscated, closed-source code.


What if the application contained a reference to, say, a git repository?


This way, the browser could reproduce the build process itself, then compare a digest of the application as served against a digest of the build it just produced.


Say that inside the response the browser can read:



X-Git-Repository: git@...
X-Latest-Commit: d6c1e864e...
X-Task-Runner: ...
X-Build-Command: ...
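A minimal sketch of the client side of this idea, in Python. The header names are the ones proposed above; none of them is a standard, and the values are made up:

```python
# Extract the proposed verification headers from a response's header map.
# These X-* headers are hypothetical -- this post is proposing them.

def parse_build_headers(headers: dict) -> dict:
    """Map the proposed X-* headers to a small build spec."""
    wanted = {
        "X-Git-Repository": "repository",
        "X-Latest-Commit": "commit",
        "X-Task-Runner": "runner",
        "X-Build-Command": "build_command",
    }
    return {key: headers[name] for name, key in wanted.items() if name in headers}

# Example with invented values:
spec = parse_build_headers({
    "Content-Type": "text/html",
    "X-Git-Repository": "git@example.org:app.git",
    "X-Latest-Commit": "d6c1e864e0000000000000000000000000000000",
    "X-Build-Command": "make dist",
})
print(spec["commit"][:7])  # → d6c1e86
```

With that spec in hand, a verifier would clone the repository, check out the commit, and run the build command.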


It can now pass the built output through a (fast) digest function, and do the same for the source it GET'd. If the digests match, you are using software that could in principle be easily audited -- without having to give up uglifying the source for performance in production.
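A sketch of that comparison step, with stand-in byte strings instead of a real clone-and-build (and assuming, crucially, that the build is reproducible, i.e. byte-identical given the same commit and toolchain):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """The 'fast digest function' from the post, here SHA-256."""
    return hashlib.sha256(data).hexdigest()

def delivery_matches(served: bytes, built: bytes) -> bool:
    """True when the bytes the server delivered equal the bytes the
    local build produced. Only meaningful if the build is deterministic."""
    return sha256_hex(served) == sha256_hex(built)

served = b"!function(){/*minified app*/}();"  # what the server delivered
built = b"!function(){/*minified app*/}();"   # what the local build produced
print(delivery_matches(served, built))        # → True
print(delivery_matches(served, b"tampered"))  # → False
```

Note that minification is not a problem here: as long as the build command produces the exact same minified bytes, the digests agree.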


Would this solve the code delivery problem? Or is it impractical, useless, something nobody needs?




