ASP.NET MVC: Secure Data Storage

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe. In this post, Taras will review secure data storage in the ASP.NET MVC framework.

Secure Logging
The system should not log any sensitive data (e.g. PCI, PHI, PII) to unprotected log storage. Let’s look at an example from a healthcare application. The following snippet shows the log statement written when the application fails to store updates to a particular patient’s record. In this case, the logger writes the patient’s Social Security number and prescription, both PHI, to an unprotected log file.

[sourcecode]
log.Info("Save failed for user: " + user.SSN + "," + user.Prescription);
[/sourcecode]

Instead, the application can write the surrogate key (i.e. the ID of the patient’s record) to the log file, which can be used to look up the patient’s information.

[sourcecode]
log.Info("Save failed for user: " + user.Identifier);
[/sourcecode]

Secure Password Storage
Passwords must be salted and hashed for storage. For additional protection, it is recommended to use an adaptive hashing algorithm with a configurable number of iterations, or “work factor”, to slow down the computation. If the password hashes are ever exposed to the public, adaptive hashing slows down password-cracking attacks because each guess requires many more computations. By default, the ASP.NET Identity framework uses the Crypto class shown below to hash its passwords. Notice it uses PBKDF2 with the HMAC-SHA1 algorithm, a 128-bit salt, and 1,000 iterations to produce a 256-bit password hash.

[sourcecode]
using System;
using System.Security.Cryptography;

internal static class Crypto
{
    private const int PBKDF2IterCount = 1000; // default for Rfc2898DeriveBytes
    private const int PBKDF2SubkeyLength = 256 / 8; // 256 bits
    private const int SaltSize = 128 / 8; // 128 bits

    public static string HashPassword(string password)
    {
        // Derive a random salt and a 256-bit subkey using PBKDF2 (HMAC-SHA1).
        byte[] salt;
        byte[] subkey;
        using (var deriveBytes = new Rfc2898DeriveBytes(password, SaltSize, PBKDF2IterCount))
        {
            salt = deriveBytes.Salt;
            subkey = deriveBytes.GetBytes(PBKDF2SubkeyLength);
        }

        // Combine a leading format marker (0x00, the array's default value),
        // the salt, and the subkey, and return the result as Base64 for storage.
        byte[] outputBytes = new byte[1 + SaltSize + PBKDF2SubkeyLength];
        Buffer.BlockCopy(salt, 0, outputBytes, 1, SaltSize);
        Buffer.BlockCopy(subkey, 0, outputBytes, 1 + SaltSize, PBKDF2SubkeyLength);
        return Convert.ToBase64String(outputBytes);
    }
}
[/sourcecode]

Secure Application Configuration
System credentials providing access to data storage areas should be accessible to authorized personnel only. All connection strings, system accounts, and encryption keys in the configuration files must be encrypted. Luckily, ASP.NET provides an easy way to protect configuration settings using the aspnet_regiis.exe utility. The following example shows the connection strings section of a configuration file with cleartext system passwords:

[sourcecode]
<connectionStrings>
<add name="sqlDb" connectionString="data source=10.100.1.1;user id=username;
password=$secretvalue!"/>
</connectionStrings>
[/sourcecode]

After deploying the file, run the following command on the web server to encrypt the connection strings section of the file using the Windows Data Protection API (DPAPI) machine key:

[sourcecode]
aspnet_regiis.exe -pef "connectionStrings" . -prov "DataProtectionConfigurationProvider"
[/sourcecode]

After the command completes, you will see the following snippet in the connection strings section.

[sourcecode]
<connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">
  <EncryptedData>
    <CipherData>
      <CipherValue>OpWQgQbq2wBZEGYAeV8WF82yz6q5WNFIj3rcuQ8gT0MP97aO9S…</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>
[/sourcecode]

The best part about this solution is that you don’t need to change any source code to read the original connection string value. The framework automatically decrypts the connection strings section on application start, and places the values in memory for the application to read.
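
In practice, nothing changes on the consuming side. A minimal sketch, assuming the “sqlDb” connection string name from the example above:

[sourcecode]
using System.Configuration;

// The framework decrypts the protected section transparently at runtime;
// this returns the original cleartext connection string.
string connStr = ConfigurationManager.ConnectionStrings["sqlDb"].ConnectionString;
[/sourcecode]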

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked in the industry for more than nine years, including extensive experience in the US IT market. He is responsible for software architectural design and development of enterprise and SaaS solutions, including healthcare solutions.

Cloud Encryption Options – Good for Compliance, Not Great for Security

Guest Editor: Today’s post is from David Hazar. David is a security engineer focusing on cloud security architecture, application security, and security training. In this post, David will take a look at the encryption options for applications hosted in the cloud.

Over the last decade, due to new compliance requirements or contractual obligations, many, if not most, companies have been implementing encryption to better protect the sensitive data they store and to avoid having to report a breach if an employee loses a laptop or backup media is lost in the mail. One of the more popular ways of adding this protection is to implement some form of volume-based, container-based, or whole-disk encryption. It would be difficult to argue that there is an easier, more cost-effective method of achieving compliance than this type of encryption. And although there are potential weaknesses in some implementations of the technology, it is quite effective at protecting data from incidental exposure or non-targeted attacks such as those just mentioned. So why does the steady rise of this type of encryption concern me?

As referenced above, there are many legitimate uses for this type of encryption, and if you are looking to protect against those types of threats, it is a good choice. However, many companies and cloud vendors are now utilizing this type of encryption in the datacenter. Let’s review what threat we are actually protecting against when we use this type of encryption in the datacenter. If someone were to walk into a datacenter and walk out with some of the equipment, they wouldn’t be able to read the data on the disks stored within that equipment without breaking the encryption or brute forcing the key (this assumes they were not able to walk out with the equipment still running and the encryption unlocked). More realistically, if the datacenter decommissioned some older equipment and did not properly erase the data before selling or disposing of it, this type of encryption would provide some additional protection. Even without encryption, though, in most datacenters the data is probably striped across so many disks that mining it would be painful.

I am sure there are a few other examples of threats that would be mitigated by this type of encryption, but the point I am trying to make is that when the system is running and the disk, volume, or container is unlocked, the data is unprotected. So, while you may be able to check “Yes” on your next audit, you would have a tough time claiming data was not exposed if unauthorized parties were able to get access to that system while it was running. The most interesting use of this encryption technology I have seen is to satisfy the need to encrypt database management systems (DBMS) that don’t offer some form of Transparent Data Encryption (TDE). This is especially true in the cloud.

I will discuss TDE in more detail later, but first let’s focus on encrypting the database using one of the techniques we have already discussed. To shorten the discussion, I will focus on the encryption offered by Amazon RDS for their MySQL and PostgreSQL RDS instances (technically it is offered for Oracle and MS SQL as well, but you can also choose TDE for these types of instances). Here is some text from the Amazon Relational Database Service documentation:

“Amazon RDS encrypted instances provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can use Amazon RDS encryption to increase data protection of your applications deployed in the cloud, and to fulfill compliance requirements for data-at-rest encryption.”

Now, the concern here is that the statement is vague enough that most individuals and smaller companies would read it, follow the simple enablement guide, and sleep soundly at night assuming their data is now unreadable. But if the data really were unreadable to everyone, how would your application keep working? Well, here is another statement from Amazon’s website:

“You don’t need to make any changes to your code or to your operating model in order to benefit from this important data protection feature.”

Great, I don’t have to do anything and now I am protected. Sounds great, right? However, the documentation fails to explain the extent of that protection. The reason your application is not totally broken is that, while the system is running, you have no additional protection. You are therefore not protected if someone gains access to the server, or, in the more likely scenario, if you have a SQL injection flaw in your application.

With TDE, you get some additional protection because the DBMS will only decrypt the data as it is being used. Both Microsoft and Oracle offer the ability to encrypt the entire database or tablespace or to implement column-level encryption. However, at some point this data is rendered readable in memory and possibly on disk. While this may be a small concern for some, the larger issue is that, while the database is running, the application has access to all of the plain-text data. The process of decrypting the data is by name and definition transparent to the application. Therefore, we gain no additional protection from injection flaws unless we are following the principle of least privilege when establishing and using database connections within our applications.

Historically, we have used one connection per database and granted this connection all of the rights the application needs to function. In some cases, for ease of setup and maintenance, we have granted these connections even more privileges than necessary. This concept of having a single database connection per application is very common and almost universally accepted. One example of this is the recommended “Integrated Security” model of Microsoft SQL Server. This method of connecting to the database is architected in such a way that making multiple connections to the same database is difficult, since the connection to the database is made using the credentials assigned to the “Application Pool”. With this model, if the application must read and decrypt sensitive, encrypted data, then that data must be available to be read and decrypted for any database call using that connection.

To solve this problem, there are a few options. You can either manage the decryption of sensitive data at the application layer (more to come on the complexities of this at a later date), you can eliminate all current and future SQL injection vulnerabilities, or you can start following the principle of least privilege when connecting to the database (in a future post and upcoming webcast I will give some specific examples of this technique). While eliminating all current and future SQL injection vulnerabilities would be ideal, it is not always best to put all your eggs in one basket. Therefore, being able to reduce the access provided to the database by the application when making specific database calls can provide additional protection and additional benefits.

For example, if a SQL injection flaw is found in one of your endpoints using a connection string that does not have the rights to transparently decrypt data or that does not have rights to the sensitive columns or tables you are protecting with whole-disk or volume encryption, an attacker exploiting the injection flaw would not be able to access this data. Another benefit of this technique is that it will allow you to focus your code reviews and development efforts on securing the code which is using the connections that provide access to your most sensitive data. In some cases, this may only be usernames and password hashes. In other cases, your database may contain credit card numbers, national identifiers, personal health information, or any number of other sensitive data types. Maybe you are not as concerned about data disclosure as you are about data integrity. In that case, you could limit access to state changing SQL commands and focus your efforts on securing the connections that allow these types of operations.
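
A rough sketch of this least-privilege approach (the connection string names and database logins below are hypothetical, not a specific vendor recommendation): the application exposes separate connection factories bound to differently privileged logins and picks the least powerful one for each call site.

[sourcecode]
using System.Configuration;
using System.Data.SqlClient;

public static class Db
{
    // "appReadOnly" maps to a login limited to non-sensitive tables;
    // "appSensitive" maps to a login that can reach protected columns
    // or trigger transparent decryption. Both names are hypothetical.
    public static SqlConnection OpenReadOnly()
    {
        return Open("appReadOnly");
    }

    public static SqlConnection OpenSensitive()
    {
        return Open("appSensitive");
    }

    private static SqlConnection Open(string name)
    {
        var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings[name].ConnectionString);
        conn.Open();
        return conn;
    }
}
[/sourcecode]

With this split, a SQL injection flaw in code holding the read-only connection cannot reach the sensitive columns at all.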

I am not saying we should stop using the forms of encryption provided by cloud providers or built into the DBMS. They provide value, they do not require a lot of effort to implement and maintain, and they are an essential part of your defense-in-depth strategy. However, as we are increasingly removed from the responsibility of implementing these controls ourselves through the use of cloud services, let’s take some time to focus on other, potentially more realistic threats to our data and ensure we have protections in place for those as well.

References
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
https://aws.amazon.com/blogs/aws/new-encryption-options-for-amazon-rds/

About the author
David Hazar has been involved in information security since he began his career in 2000. In 2007, he took over the infrastructure and security teams for a business process outsourcing firm in Utah to help them meet their own compliance requirements and contractual obligations, as well as those of their customers. This is where he began dedicating his time to security, and he has not looked back. David has worked in a wide variety of industries including government, healthcare, technology, banking, retail, and transportation. Over the last few years he has focused on cloud security architecture, application security, and security training while working for the cloud services divisions within Aetna and Oracle. David also moonlights as an application security consultant, providing web and mobile application penetration testing.

David holds a master’s degree from Brigham Young University and the following certifications: CISSP, GCIH, GWAPT, GMOB, GCIA, GCUX, Certified FAIR Risk Analyst, ITIL v3 Foundation, and Microsoft Certified Database Administrator (MCDBA).

Ask the Expert – Johannes Ullrich

Johannes Ullrich is the Chief Research Officer for the SANS Institute, where he is responsible for the SANS Internet Storm Center (ISC) and the GIAC Gold program. Prior to working for SANS, Johannes worked as a lead support engineer for a Web development company and as a research physicist. Johannes holds a PhD in Physics from SUNY Albany and is located in Jacksonville, Florida.

1. There have been so many reports of passwords being stolen lately. What is going on? Is the password system that everyone is using broken?

Passwords are broken. A password is supposed to be a secret you share with a site to authenticate yourself. In order for this to work, the secret may only be known to you and that particular site. This is no longer true if you use the same password with more than one site. Also, the password has to be hard to guess but easy to remember. It is virtually impossible to come up with numerous hard to guess but easy to remember passwords. As a result, users will reuse passwords, or use simple passwords that are easy to guess. Most users would have to remember several hundred hard to guess passwords to make the password system work.

2. What are the best practices today for managing and storing passwords for applications?

As recent breaches have shown, passwords may be leaked via various vulnerabilities. As a first step, you need to ensure that your applications are well written and as secure as possible. Code with access to passwords needs to be reviewed particularly carefully, and only a limited set of database users should have access to the password data, so that a SQL injection flaw in a part of your code that doesn’t need password data cannot be escalated to reach it.

Password data should not be stored in clear text, but encrypted or hashed. Most applications never need to know the clear-text form of the password, making cryptographic one-way hashes an ideal storage mechanism for passwords. But it is important to recognize that any encryption or hashing scheme is still vulnerable to offline brute-force attacks once the data is leaked. After a password database is leaked, you can only delay the attack; you cannot prevent it.

A proper hash algorithm should delay the attacker as much as possible, but should not be so expensive that it causes performance problems for the application. Any modern hash algorithm, like SHA-256, will work, but you may need to apply it multiple times in order to slow down the attacker sufficiently. Algorithms like PBKDF2 and bcrypt are deliberately slower and may be preferred over fast, efficient hashes like the SHA family. But it is important to use an algorithm that is well supported by your development environment. Applying a fast algorithm like SHA-256 many times can, in the end, cost an attacker as much as applying a more expensive algorithm once (or fewer times).

In addition, data should be added to the password before it is hashed. This data is frequently referred to as “salt”. If possible, each password should use a different salt. The salt serves two purposes. First of all, multiple users tend to pick the same password. If these passwords are hashed as-is, the hashes will be identical, and an attacker breaking one hash will be able to access multiple accounts. If a unique salt is added first, the password hashes will differ. In addition, a sufficiently large salt makes brute forcing the password harder. One of the most efficient brute-force methods is the use of “rainbow tables”: pre-computed hash databases that are compressed to allow very efficient storage and queries. Currently, these tables are widely available for strings up to 8 characters long; if one restricts the table to more common dictionary terms, larger tables are available as well. By adding a long, unique salt, the string to be hashed will exceed the length commonly found in rainbow tables, and by picking a non-dictionary salt, an attacker would have to create specific rainbow tables for each salt value, making them impractical. The salt is stored in the clear with the account information; sometimes it is just prepended to the hash. A random salt is ideal, but in many cases a simple salt may be derived from information like the user ID or the timestamp at which the account was created.

As a basic summary describing secure password storage: take the user’s password, prepend a salt unique to this account, and hash the result as many times as you can afford using a commonly available hashing algorithm.

Over time, you should expect CPUs to become more powerful, and as a result, the number of iterations required to protect your password will increase. Design your application in such a way as to make it easy to change the number of iterations.
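
One hedged sketch of these recommendations in C#: store the work factor alongside the salt and hash, so that old records still verify with the iteration count they were created with, while new records automatically pick up a higher count. The storage format here is illustrative, not a standard.

[sourcecode]
using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    // Raise this over time as CPUs get faster; it is recorded per hash,
    // so existing records remain verifiable.
    private const int Iterations = 10000;
    private const int SaltSize = 16;  // 128-bit random salt, unique per account
    private const int HashSize = 32;  // 256-bit PBKDF2 subkey

    public static string Hash(string password)
    {
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
        {
            string salt = Convert.ToBase64String(pbkdf2.Salt);
            string hash = Convert.ToBase64String(pbkdf2.GetBytes(HashSize));
            return Iterations + ":" + salt + ":" + hash;
        }
    }

    public static bool Verify(string password, string stored)
    {
        string[] parts = stored.Split(':');
        int iterations = int.Parse(parts[0]);
        byte[] salt = Convert.FromBase64String(parts[1]);
        byte[] expected = Convert.FromBase64String(parts[2]);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            byte[] actual = pbkdf2.GetBytes(expected.Length);
            // Compare without early exit to avoid leaking timing information.
            int diff = 0;
            for (int i = 0; i < actual.Length; i++) diff |= actual[i] ^ expected[i];
            return diff == 0;
        }
    }
}
[/sourcecode]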

References:
1. https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet
2. http://blog.zoller.lu/2012/06/storing-password-securely-hashses-salts.html

3. How can people test or review their apps to make sure that they are handling passwords in a secure way? What should they look for?

Quick Answer: When reviewing an application, make sure that authentication is implemented only once and can easily be changed later to adopt a more complex hashing scheme. Developers should be instructed to use these authentication libraries and refrain from writing code that accesses the password database directly.

ASP.NET Padding Oracle Vulnerability

A very serious vulnerability in ASP.NET was revealed this past month that allows attackers to completely compromise ASP.NET Forms Authentication, among other things. When things like this happen, as developers it’s important to see what lessons can be learned in order to improve the defensibility of our software.

Source: ‘Padding Oracle’ Crypto Attack Affects Millions of ASP.NET Apps

This vulnerability illustrates a couple of important lessons that all developers should take note of:

  • Doing cryptography correctly is challenging
  • Don’t store sensitive information on the client

Microsoft fixed the first issue with an out-of-band patch, which shows it treated this vulnerability as a very real and serious threat to ASP.NET. If you’re not convinced of the seriousness of this attack, watch the YouTube video of POET (the Padding Oracle Exploit Tool) versus DotNetNuke, which uses Forms Authentication.

Doing Cryptography Correctly Is Challenging

As developers, we must be masters of many different domains, and it’s impossible to stay on top of all the different technologies, platforms, frameworks, and security issues that apply to each one. Case in point: cryptography and this Padding Oracle Vulnerability. How many developers know what a side-channel attack is, let alone how to protect against it?

This is why we rely on frameworks: we stand on the shoulders of others who were able to sit down, focus on a problem, and come up with a safe, stable, reusable way of doing a common task. This helps us save time on development and focus more specifically on the functional requirements. The other benefit of using a widely adopted framework is that in the event of a foible or huge screw-up, there will be a fix. Using obscure libraries doesn’t actually provide more protection; it just means that the problems that do exist may be exploited by attackers without ever coming to light (though for some, ignorance is bliss).

A good example of this is authentication. Home-grown web authentication mechanisms are typically fraught with mistakes that often lead to the compromise of web applications; there are many things that can be done wrong in web authentication. With ASP.NET Forms Authentication, the common task of authenticating users is abstracted into reusable objects, controls, datastores, and configuration settings. In theory, developers don’t need to think about authentication very much and can instead rely on someone else who took the time to worry about the intricacies of doing it correctly. That doesn’t mean developers shouldn’t understand Forms Authentication, because there are still many ways to misconfigure and misuse it that can leave sites vulnerable to attack. Using frameworks is a good thing, since it’s impractical for every developer to keep re-inventing the authentication ‘wheel’ every time they build a web site that needs to authenticate users. However, the benefit of having someone else implement a feature such as authentication can also be to the detriment of the system if it’s done incorrectly. ASP.NET Forms Authentication has handled this job pretty well for many years, and Microsoft should be commended for that, even though it hasn’t been without mistakes. The latest misstep isn’t limited to ASP.NET: JavaServer Faces, Ruby on Rails, and OWASP’s Java ESAPI were affected as well, and Forms Authentication is just one of many places where this vulnerability manifests itself.

ASP.NET encrypts information and stores it in several places on the client: in the URL, in a cookie, or in a hidden form field, when using features such as WebResources, ViewState, Forms Authentication, Anonymous Identification, and the Role Provider. ASP.NET typically uses AES (or 3DES, depending on configuration) to encrypt this data with a common key called the Machine Key. AES and 3DES are block ciphers: symmetric-key ciphers that transform a fixed number of bytes of input (the plaintext) and output encrypted data (the ciphertext) of the same length. Since the block size is fixed (64 or 128 bits) and the plaintext size can vary, the cipher needs a way to encrypt data longer than a single block. This is accomplished using a ‘mode of operation’ (ASP.NET uses cipher-block chaining, or CBC) combined with a padding scheme (PKCS5 padding) that is applied before the transformation so that the input length is a multiple of the block size.
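
To make those moving parts concrete, here is a minimal sketch (not ASP.NET’s internal code) of AES in CBC mode in .NET. Note that .NET names the padding scheme PKCS7, the block-size generalization of PKCS5:

[sourcecode]
using System.IO;
using System.Security.Cryptography;

static byte[] EncryptCbc(byte[] plaintext, byte[] key, byte[] iv)
{
    using (var aes = Aes.Create())
    {
        aes.Mode = CipherMode.CBC;        // chain each block into the next
        aes.Padding = PaddingMode.PKCS7;  // pad the plaintext up to a full block
        aes.Key = key;
        aes.IV = iv;

        using (var encryptor = aes.CreateEncryptor())
        using (var ms = new MemoryStream())
        {
            using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
            {
                cs.Write(plaintext, 0, plaintext.Length);
            }
            return ms.ToArray();
        }
    }
}
[/sourcecode]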

Because of the way these block ciphers are designed, if an attacker can change the encrypted data, send it back up to the server, and learn whether the padding was correct (hence “padding oracle”), they can use this information to decrypt the data. It turns out they can also encrypt new data using a related technique.

If a crypto system leaks just one bit of information about the encryption/decryption process – that’s all that may be needed for an attacker to beat the crypto system.

What’s interesting about this vulnerability is that it can easily occur through very legitimate use of the .NET Crypto API – meaning if you aren’t aware of this sort of vulnerability and you allow an untrusted client to bounce data that’s been encrypted with a block cipher against a service of some sort and the service returns error messages, then this vulnerability exists.

One solution you may arrive at is to use a single generic custom error page that doesn’t reveal the .NET cryptographic exception, but the attacker only needs to INFER that a padding error occurred; they don’t require the actual padding error message. The HTTP status code may also give this away. So what if the web site just returns a 404 or 200 status code and an error page? An attacker can then use a timing attack to determine whether a padding error occurred in the decryption process. This is why one of Microsoft’s recommended workarounds was to put a random timer loop in the error page, to throw off the timing attacks used to detect whether a padding oracle exists.
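
Microsoft’s stop-gap guidance was along these lines: in the custom error page, sleep for a small random interval so that response timing no longer distinguishes a padding failure from any other error. A sketch of the idea:

[sourcecode]
using System;
using System.Security.Cryptography;
using System.Threading;

// In the custom error page's Page_Load: random jitter so response timing
// no longer reveals whether a padding error occurred.
void Page_Load(object sender, EventArgs e)
{
    byte[] delay = new byte[1];
    using (var prng = new RNGCryptoServiceProvider())
    {
        prng.GetBytes(delay);
    }
    Thread.Sleep((int)delay[0]);
}
[/sourcecode]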

To cryptographers, this was an obvious mistake: verifying a MAC before decrypting the data, or verifying a digital signature over a hash of the encrypted data, would have prevented tampering with the data and thus silenced the padding oracle. This is exactly what Microsoft has done in their patch, so the vulnerability has been fixed. The other important thing to consider is that encryption algorithms have a limited shelf life; it’s always only a matter of time before they can be circumvented. I would prefer to see ASP.NET put a large, cryptographically pseudo-random ID in a cookie or URL and keep the actual sensitive data related to authentication and roles on the server. As it turns out, ASP.NET does apply a MAC in almost every case: for ViewState, Forms Authentication tickets, Anonymous Identification, and Role cookies. However, the MAC was checked AFTER decryption, which is too late to protect against this vulnerability; it doesn’t silence the padding oracle. As for the WebResource handler, which carries encrypted data in the URL query string, no MAC is used at all.
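
For illustration, a hedged sketch of that encrypt-then-MAC pattern (the general technique, not Microsoft’s actual patch code): append an HMAC over the ciphertext, and verify it before any decryption or padding check runs, so a tampered message is rejected without leaking padding information.

[sourcecode]
using System;
using System.Security.Cryptography;

static byte[] Protect(byte[] ciphertext, byte[] macKey)
{
    using (var hmac = new HMACSHA256(macKey))
    {
        byte[] tag = hmac.ComputeHash(ciphertext);
        byte[] output = new byte[ciphertext.Length + tag.Length];
        Buffer.BlockCopy(ciphertext, 0, output, 0, ciphertext.Length);
        Buffer.BlockCopy(tag, 0, output, ciphertext.Length, tag.Length);
        return output;
    }
}

// Returns the ciphertext only if the MAC verifies; decryption never runs
// on tampered input, so no padding oracle is exposed.
static byte[] Unprotect(byte[] input, byte[] macKey)
{
    const int TagLength = 32; // HMAC-SHA256 output size
    if (input.Length < TagLength)
        throw new CryptographicException("Message too short.");

    byte[] ciphertext = new byte[input.Length - TagLength];
    byte[] tag = new byte[TagLength];
    Buffer.BlockCopy(input, 0, ciphertext, 0, ciphertext.Length);
    Buffer.BlockCopy(input, ciphertext.Length, tag, 0, TagLength);

    using (var hmac = new HMACSHA256(macKey))
    {
        byte[] expected = hmac.ComputeHash(ciphertext);
        // Compare without early exit to avoid a timing side channel.
        int diff = 0;
        for (int i = 0; i < expected.Length; i++) diff |= expected[i] ^ tag[i];
        if (diff != 0)
            throw new CryptographicException("MAC verification failed.");
    }
    return ciphertext; // now safe to hand to the decryption routine
}
[/sourcecode]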

This is why it’s important to never store sensitive information on the client to begin with – because doing cryptography correctly is not a trivial task.

Top 25 Series – Rank 20 – Download of Code Without Integrity Check

Teaching SEC503, our intrusion detection class, last week, we yet again wrote a signature for a CVS exploit from a few years back. Sure, it is old news by now, but I think it is very timely if you are concerned about the integrity of your software. If you are not familiar with it: CVS is software used to manage source code repositories. A compromise of your CVS server means that you can no longer trust the software maintained by this server. Hardly anybody installs software from a CD anymore; most software today is downloaded and installed. But how do you know that the download has not been tampered with?

There are two main methods to verify file (and thereby software) integrity: cryptographic hashes and digital signatures. Cryptographic hashes are essentially fancy checksums. Unlike a simple checksum, it is very hard to find two documents with the same hash. An attacker will not, for example, be able to add a backdoor to your code and then add comments or dead code to make the hash come out the same. This may no longer be fully true for MD5, but even MD5 is still “pretty good”, and the known attacks only work if certain conditions are met.
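
For example, verifying a download against a published SHA-256 value might look like this sketch (the expected hash string is supplied by the caller):

[sourcecode]
using System;
using System.IO;
using System.Security.Cryptography;

static bool VerifyDownload(string path, string expectedHexHash)
{
    using (var sha256 = SHA256.Create())
    using (var stream = File.OpenRead(path))
    {
        byte[] actual = sha256.ComputeHash(stream);
        string actualHex = BitConverter.ToString(actual)
            .Replace("-", "").ToLowerInvariant();
        return actualHex == expectedHexHash.ToLowerInvariant();
    }
}
[/sourcecode]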

But there is a fundamental weakness in using a hash to ensure file integrity: how do you make sure the hash itself didn’t get changed? In many cases, the hash is stored in the same directory, with the same permissions, as the original code. An attacker could just replace both the code and the hash in the same attack.

This problem is solved in part by using digital signatures. A digital signature starts out with a cryptographic hash of the software, but this hash is then signed using a private key. Nobody else needs to know this private key, and it should be kept tucked away and out of reach. An attacker will no longer be able to modify the hash undetected. However, in order to verify the signature, a user needs a copy of the public key. How do we make sure the public key is correct? This is easy if you already have the public key, or are able to validate it out of band. But in many cases, you obtain the public key at the same time you obtain the signature and the code.
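
A minimal sketch of verifying a detached RSA signature over a downloaded file, assuming the public key was obtained from a trusted, out-of-band source:

[sourcecode]
using System.IO;
using System.Security.Cryptography;

// publicKeyXml must come from a trusted, out-of-band channel; fetching it
// alongside the download defeats the purpose.
static bool VerifySignature(string path, byte[] signature, string publicKeyXml)
{
    byte[] data = File.ReadAllBytes(path);
    using (var rsa = new RSACryptoServiceProvider())
    {
        rsa.FromXmlString(publicKeyXml);
        // Verifies the RSA signature over the SHA-256 hash of the file.
        return rsa.VerifyData(data, CryptoConfig.MapNameToOID("SHA256"), signature);
    }
}
[/sourcecode]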

One solution to this problem is the hierarchical certificate system, familiar from SSL, applied to code signing. In this case, the code is signed by a key which has itself been signed by a trusted entity. Your operating system will usually trust a number of these certificate authorities by default, and will then trust every key signed by one of them. This works very well, as long as the certificate authorities are careful about how they hand out signed certificates. Sadly, bad certificates have been handed out in the past.

So we got our software, verified that the signature is correct, and installed it. We are not in the clear yet. The next thing you will want to do is download additional components or updates. Updates in particular are typically verified by the application itself. There are a number of pitfalls that can cause problems:

– the application validates the signature, but does not ensure that the signature was created using a valid certificate.
– the application allows downgrades. An attacker can offer an older version (for which the attacker of course has a valid signature) and have the application downgrade itself so an old vulnerability can be exploited.
– a badly implemented update mechanism can lead to a denial-of-service issue if the upgrade fails halfway and does not provide a simple “undo”.
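
Putting those pitfalls together, an updater’s acceptance check might look roughly like this sketch (the Update type and the signature-verification delegate are hypothetical):

[sourcecode]
using System;

class Update
{
    public Version Version;   // version the package claims to be
    public byte[] Package;    // the update payload
    public byte[] Signature;  // detached signature over the payload
}

static bool ShouldInstall(Update update, Version installed,
                          Func<byte[], byte[], bool> verifySignature)
{
    // Pitfall 1: verify the signature AND that it chains to a valid,
    // trusted certificate (delegated to verifySignature here).
    if (!verifySignature(update.Package, update.Signature))
        return false;

    // Pitfall 2: refuse downgrades; an older, validly signed version may
    // carry a known vulnerability.
    if (update.Version <= installed)
        return false;

    // Pitfall 3: perform the install transactionally elsewhere so a
    // half-finished upgrade can be rolled back.
    return true;
}
[/sourcecode]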

So, in short, here is a quick checklist of what to look for:

– if you use regular hashes, store them on a different system than the original software, and secure them well.
– try not to use MD5; SHA-256 is probably the best algorithm to use at this point. Offer multiple hashes if you can.
– if at all possible, use proper code-signing certificates.

Argument for Database Encryption in Web Apps

I regularly get consulted on various web application security issues and defensive strategies. One of the recent “frequently asked questions” is about database encryption for web applications. My answers to these kinds of questions usually draw awkward looks. I always start off asking more questions about the requirements: “Who are you trying to protect the data from?” and “What data are you trying to protect?” The answers to those questions are usually a good indicator of whether the person is on the right path or not.

In most cases, database encryption does not prevent a hacker from accessing the backend database via an application compromise. The reasoning is very simple: if the web application needs to be able to access the data for normal operation, and hackers are able to compromise the application, the hackers can access the same data by controlling the application (the attacker owns the application). Encryption does not keep a hacker out of the backend database if the hacker can control the application.

Some may argue that if the database encryption is per-user, then one user’s compromise does not lead to the compromise of another user’s data. That is very valid, but it usually requires authenticating the user to some backend component (or the database itself). Maybe authentication alone is sufficient for the security requirement: the database can provide access control based on the user’s identity, so that one user’s compromise does not grant access to other users’ data.

You might think I am an anti-encryption person. The fact is, I am not. Encryption absolutely has its place in web application security. Using asymmetric encryption to protect data that is “one way” in nature is a good example: a retail store storing a user’s credit card details for recurring monthly payments can encrypt them with an asymmetric algorithm so that the web frontend can never read the data back, while another party holding the related private key can. There are also numerous other scenarios where encryption should be used, such as backups, protecting against rogue database administrators, and guarding against physical theft of the database machine.
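
A rough sketch of that “one way” pattern: the web frontend holds only the public key, so it can store new card details but never read them back; the billing process holds the private key. Key management is deliberately simplified here.

[sourcecode]
using System.Security.Cryptography;
using System.Text;

// Frontend: holds only the public key, so it can write but never read back.
static byte[] EncryptCardNumber(string cardNumber, string publicKeyXml)
{
    using (var rsa = new RSACryptoServiceProvider())
    {
        rsa.FromXmlString(publicKeyXml);
        return rsa.Encrypt(Encoding.UTF8.GetBytes(cardNumber), true); // OAEP padding
    }
}

// Billing system: holds the private key and performs the recurring charge.
static string DecryptCardNumber(byte[] ciphertext, string privateKeyXml)
{
    using (var rsa = new RSACryptoServiceProvider())
    {
        rsa.FromXmlString(privateKeyXml);
        return Encoding.UTF8.GetString(rsa.Decrypt(ciphertext, true)); // OAEP padding
    }
}
[/sourcecode]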

I often recommend a threat risk assessment before deciding on a database encryption solution. That allows you to quickly understand the threats that affect the web application, and the countermeasures that can help protect against them. Due to the cost of deploying and maintaining database encryption, I am seeing very limited deployment of it; most folks tend to turn to other alternatives for risk mitigation.

Encryption is a useful defensive technology that can help protect a web application, but it is not a silver bullet; blindly deploying it may not address any of the threats you are trying to mitigate. Managers and auditors tend to request these technologies to defend against web application compromise after seeing a competitor get compromised through a web application security flaw (e.g. SQL injection). In that scenario, encryption is likely not the best choice of defensive technology.