Friday, March 27, 2015

VeraCrypt secure in public?



I have made a VeraCrypt container that uses AES to encrypt everything inside. My question is: can I safely make the container file public (so everybody can access it) without anybody being able to access the files inside it?


P.S. Some information below may help answer the question:



  1. I use VeraCrypt to encrypt files.

  2. It uses AES-256.

  3. My password length is 32 characters.

  4. The password contains printable ASCII characters (a rough keyspace estimate is sketched below).
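
For a rough sense of what those parameters imply, here is a back-of-the-envelope keyspace estimate in Python (assuming, on my part, a uniformly random password drawn from the 95 printable ASCII characters):

import math

CHARSET = 95   # printable ASCII characters (assumes a uniformly random password)
LENGTH = 32

combinations = CHARSET ** LENGTH            # ~1.9e63 candidate passwords
entropy_bits = LENGTH * math.log2(CHARSET)  # ~210 bits

print('{:.2e} passwords, ~{:.0f} bits of entropy'.format(combinations, entropy_bits))

At roughly 210 bits of entropy, the password, not the 256-bit AES key, is the weaker of the two, yet it is still far beyond any feasible brute-force search.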





OpenPGP (RFC4880) - do you agree with my SimpleS2K (string-to-key) implementation?



Background: I'm writing a GPL Python OpenPGP to JSON parser which I'm testing on files generated with GPG 1.4.16.


If given a passphrase, the parser will generate keys using the string-to-key methods and ultimately decrypt messages.


I'm starting off with symmetric encryption messages:



echo "hello" | gpg --s2k-mode=0 --symmetric > symmetric.simples2k.gpg


... and using "foo" as the passphrase.


This generates a SymmetricKeyEncryptedSessionKeyPacket and a SymmetricEncryptedAndIntegrityProtectedDataPacket, as expected.


The S2K parameters that GPG created are: Simple S2K (http://ift.tt/1ocZkzF) with the SHA1 hash and the AES256 symmetric cipher.


Problem: When I derive the key from the passphrase foo using SimpleS2K and then attempt to decrypt with AES256, it doesn't decrypt correctly. So part 1 of my investigation is verifying that I'm doing the S2K correctly.


Here's my understanding of how to generate the key from the passphrase foo using SimpleS2K:



  1. Create two SHA1 hashers (because AES256 needs a 32-byte key and SHA1 produces a 20-byte hash)

  2. Don't preload hashers[0]

  3. Update hashers[1] with 0x00

  4. Update hashers[0] with UTF-8 encoded foo

  5. Update hashers[1] with UTF-8 encoded foo

  6. Concatenate hashers[0].digest plus hashers[1].digest

  7. Take the first 32 bytes of the result (i.e., drop the last 8 bytes)


Here's a minimal implementation in Python 3:



import hashlib

# Simple S2K: hasher N is preloaded with N zero octets before hashing
# the passphrase; the digests are concatenated to reach the key length.
hasher_0 = hashlib.sha1()
hasher_1 = hashlib.sha1()

hasher_1.update(bytes([0x0]))  # preload hasher 1 with a single zero octet

hasher_0.update('foo'.encode('utf-8'))
hasher_1.update('foo'.encode('utf-8'))

# Concatenate the two 20-byte digests and keep the first 32 bytes (the AES256 key size)
key = (hasher_0.digest() + hasher_1.digest())[0:32]
print(' '.join(['{:02x}'.format(x) for x in key]))


Which outputs



0b ee c7 b5 ea 3f 0f db c9 5d 0d d4 7f 3c 5b c2 75 da 8a 33 5a 8c aa 40 39 fd bc 02 c0 1a 64 9c
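
For reference, the same logic generalizes to any hash and key length; this is just my own sketch (the function name and defaults are mine, not from the RFC):

import hashlib

def simple_s2k(passphrase, hash_name='sha1', key_len=32):
    # Simple S2K (RFC 4880, section 3.7.1.1): hasher N is preloaded with
    # N zero octets, every hasher hashes the passphrase, and the digests
    # are concatenated and truncated to the key length.
    digest_len = hashlib.new(hash_name).digest_size
    n_hashers = -(-key_len // digest_len)  # ceiling division
    out = b''
    for n in range(n_hashers):
        h = hashlib.new(hash_name)
        h.update(b'\x00' * n)                  # preload for hasher N
        h.update(passphrase.encode('utf-8'))
        out += h.digest()
    return out[:key_len]

print(simple_s2k('foo').hex())  # prints the same 32 bytes as above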


The full JSON output is here: http://ift.tt/1IDM8Au


Hopefully we can rule out the S2K part and get onto the AES part :)


Thanks!





Will gmail close my well intentioned botnet account? [on hold]



I'm part of a company that has at least 1,000 PCs distributed across different buildings within a 25-mile radius.


I've been asked to make a program to survey technical information on each PC.


Since not all of them share a LAN connection but all of them have internet access, my solution was to use a Gmail account to share the encrypted data.


I've created client/server-like services that send the data and retrieve it into the database through emails, all using the same Gmail account (to send and receive).
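
As a sketch of what one of those client services might look like, assuming plain smtplib over SSL (the account name, app password, and message format here are hypothetical, not the actual implementation):

import smtplib
import ssl
from email.message import EmailMessage

ACCOUNT = 'inventory-bot@gmail.com'      # hypothetical shared account
APP_PASSWORD = 'app-specific-password'   # hypothetical app password

def report(encrypted_payload):
    # Send one machine's encrypted survey data through the shared account;
    # the server side would poll the same mailbox and store the data.
    msg = EmailMessage()
    msg['From'] = ACCOUNT
    msg['To'] = ACCOUNT
    msg['Subject'] = 'pc-survey'
    msg.set_content('see attachment')
    msg.add_attachment(encrypted_payload, maintype='application',
                       subtype='octet-stream', filename='survey.bin')
    with smtplib.SMTP_SSL('smtp.gmail.com', 465,
                          context=ssl.create_default_context()) as smtp:
        smtp.login(ACCOUNT, APP_PASSWORD)
        smtp.send_message(msg)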


I know that this looks a lot like a botnet... actually I think it is one, since each client would be able to receive configuration mails from the server requesting certain registry entries.


All external IPs are dynamic and the server is internal, so establishing a trusted unattended connection would require publishing the IPs somewhere. Dynamic DNS has been suggested, but in terms of reliability it's no better than Gmail.


Anyway, my question is: will Gmail detect all this traffic on the same account as a botnet and close it? If so, should I use several accounts?





How can a very small company handle PCI-DSS requirement 6.4.2?



PCI-DSS 3 requirement 6.4.2 calls for



Separation of duties between development/test and production environments.



Based on the guidance text and this answer to another question, it appears that the purpose of this requirement is to ensure that no single person holds all the access.


While this is easy enough in a large company, does this automatically mean that a one-person company (or a company too small to afford separate DBAs and sysadmins for each environment) cannot possibly be PCI-DSS compliant?





External websites in logs



I have a website, let's call it www.good.com.


I've been getting a lot of requests to my server for completely different URLs than www.good.com. I suspect this traffic is also causing some site performance issues. For reference, I'm running a .NET solution on IIS.


I have a logger that is constantly picking up 404 errors for external hosts. Below are examples of some of the log data:




Original URL: http://ift.tt/1EaQ0tY
Request URL: http://ift.tt/1EaPYlN %911 h%8D%BAX '%C3x5%F0 %DF%E8&peer_id=-SD0100-%E6%B2 Ql%C0 ]=x %8C&ip=192.168.2.23&port=8956&uploaded=1019809319&downloaded=1019809319&left=192985&numwant=200&key=9135&compact=1
Request Path: /announce
Referrer URL: None
User host address: 222.210.108.246
Server: WWW-GOOD-COM-SERVER
User:
IsAuthenticated: False
Authentication Type:
Thread account name: NT AUTHORITY\NETWORK SERVICE
User Agent: Bittorrent




I also see other weird requests from all kinds of other domains, like



  • vl.ff.avast.com

  • graph.facebook.com

  • eztv.tracker.thepiratebay.org

  • trackhub.appspot.com


Almost always the IP involved is from outside the US.


What I don't understand is why my server is trying to fulfill requests for any of these URLs when it is obviously not the host.


I need to know:



  1. Why this could be happening

  2. If this activity seems dangerous

  3. How I should attempt to prevent it, if possible (one mitigation idea is sketched below).
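
On point 3, one approach, assuming the IIS URL Rewrite module is available, is to reject any request whose Host header doesn't match your own hostnames. A web.config sketch (the hostname pattern is a placeholder for www.good.com):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Abort requests whose Host header isn't one of ours;
           assumes the IIS URL Rewrite module is installed. -->
      <rule name="Reject foreign Host headers" stopProcessing="true">
        <match url=".*" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^(www\.)?good\.com$" negate="true" />
        </conditions>
        <action type="AbortRequest" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

Alternatively, binding the IIS site to specific host names rather than a catch-all binding stops IIS from answering for hostnames it doesn't serve.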





How to configure uTorrent for a VPN?



I am using a VPN connection successfully for browsing, but I'm having trouble with torrents. I am sure that P2P is enabled by the VPN company, so I must be missing some configuration. I know the host IP address and the username and password, nothing else. Please let me know the settings.





Why does rfc6797 say "An HSTS Host MUST NOT include the STS header field in HTTP responses over non-secure transport."



Why does the RFC prohibit the server from sending HSTS to the client over HTTP?


I can see that if an HTTP client acts on that non-secure HTTP response it might cause the site to become inaccessible to the client, but I don't see any reason for the server to have a MUST in the protocol.


Rather, "the client MUST NOT honor HSTS in non-secure HTTP responses" seems like the correct approach to me. What am I missing?



7.2. HTTP Request Type


If an HSTS Host receives an HTTP request message over a non-secure transport, it SHOULD send an HTTP response message containing a status code indicating a permanent redirect, such as status code 301 (Section 10.3.2 of [RFC2616]), and a Location header field value containing either the HTTP request's original Effective Request URI (see Section 9 ("Constructing an Effective Request URI")) altered as necessary to have a URI scheme of "https", or a URI generated according to local policy with a URI scheme of "https".


NOTE: The above behavior is a "SHOULD" rather than a "MUST" due to:



* Risks in server-side non-secure-to-secure redirects
[OWASP-TLSGuide].

* Site deployment characteristics. For example, a site that
incorporates third-party components may not behave correctly
when doing server-side non-secure-to-secure redirects in the
case of being accessed over non-secure transport but does
behave correctly when accessed uniformly over secure transport.
The latter is the case given an HSTS-capable UA that has
already noted the site as a Known HSTS Host (by whatever means,
e.g., prior interaction or UA configuration).


An HSTS Host MUST NOT include the STS header field in HTTP responses conveyed over non-secure transport.
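
To make the quoted behavior concrete, here is a minimal sketch of the two sides of an HSTS Host using only Python's standard library (the hostname and max-age value are placeholder assumptions):

from http.server import BaseHTTPRequestHandler, HTTPServer

HOST = 'example.com'  # placeholder HSTS Host name

class NonSecureHandler(BaseHTTPRequestHandler):
    # Plain-HTTP side: SHOULD redirect permanently to https,
    # and MUST NOT send the STS header over non-secure transport.
    def do_GET(self):
        self.send_response(301)
        self.send_header('Location', 'https://' + HOST + self.path)
        self.end_headers()

class SecureHandler(BaseHTTPRequestHandler):
    # HTTPS side (served behind TLS): the only place the
    # Strict-Transport-Security header belongs.
    def do_GET(self):
        self.send_response(200)
        self.send_header('Strict-Transport-Security', 'max-age=31536000')
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(b'hello over TLS\n')

if __name__ == '__main__':
    # In a real deployment, SecureHandler would sit behind TLS on port 443
    # (e.g., the listening socket wrapped via ssl.SSLContext.wrap_socket).
    HTTPServer(('', 8080), NonSecureHandler).serve_forever()

Since the header is only trustworthy over an authenticated channel, a client would ignore it over plain HTTP anyway; presumably the RFC pins the behavior down on both sides so that neither endpoint trains the other to act on unauthenticated data.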