Review Minimum SQL Server Security Practices
In the recent webinar, Development Tips for SQL Server, we looked at some minimum security practices. In this tip, we review some of the minimum security practices with a few code examples. Due to the changing nature of digital security, minimum security practices along with best practices do not guarantee that a company will be immune to attacks, breaches or infiltrations. As I mentioned when presenting, the best way to think about this is captured by what Dale Meredith says - "We can slow an attacker, but we can't stop an attacker." These techniques can assist with slowing an attack and companies should have a strategy for handling what steps come next once an attack is detected.
Least permissions with clear demarcations
In some environments, we may only manage a few users, while in others we may manage hundreds. We can assign permissions to users directly, or we can use roles and assign users to those roles. We can also restrict access further by routing some operations exclusively through stored procedures, so that users have access to only part of an object, such as certain columns of a table, rather than the full object itself. Different companies may face legal compliance requirements, but even if you don't, I suggest being as strict as possible with permissions. Developers will generally have full access to a development environment (which may be their own or shared), but higher environments should carry stricter limits, with senior staff having more access than junior staff for troubleshooting, and production and preproduction being the most restricted of all.
Here is a script to set up the security objects used in this tip.
USE [master]
GO

CREATE LOGIN uReadLimited WITH PASSWORD = 'ExampleOnly:NotAnActual,PasswordDoNotUse.'
CREATE LOGIN uMinExecute WITH PASSWORD = 'ExampleOnly:NotAnActual,PasswordDoNotUse.'
/*
DROP LOGIN uReadLimited
DROP LOGIN uMinExecute
*/

USE [DatabaseOne]
GO

CREATE USER uReadLimited FROM LOGIN uReadLimited
CREATE ROLE ReadLimited
ALTER ROLE ReadLimited ADD MEMBER uReadLimited
GRANT SELECT ON OBJECT::dbo.etlImport TO ReadLimited
/*
DROP USER uReadLimited
DROP ROLE ReadLimited
*/

USE [DatabaseTwo]
GO

CREATE USER uMinExecute FROM LOGIN uMinExecute
GRANT EXECUTE ON OBJECT::dbo.stpViewData TO uMinExecute
/*
DROP USER uMinExecute
*/
The above example shows two different approaches we might take.
In one example (DatabaseOne), we assign permissions to a role on the object itself. The user that we created for the example (uReadLimited) is assigned to that role. If the user doesn't need any more permissions, we want to avoid giving it more.
In the second example (DatabaseTwo), we assign execute permission directly to the user (uMinExecute). As for which route is better, this depends on how you audit these permissions. The most important part of setting security is consistency: if we are going to use roles for permissions and assign users to roles, we want to be consistent with this practice, and the same holds true if we assign object permissions directly to users. Inconsistency complicates auditing and also allows an attacker to use complexity against you - it becomes hard to identify an inconsistency if everything is inconsistent.
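As a sketch of how such a consistency check might look, the query below lists object permissions granted directly to users rather than to roles. In an environment standardized on role-based grants, any row it returns is worth investigating. The catalog views are standard, but the convention it enforces (all object permissions flow through roles) is an assumption about your environment.

```sql
--- Sketch: flag object permissions granted directly to users instead of roles.
--- Assumes the convention that all object permissions should flow through roles.
SELECT
	pr.[name] AS GranteeName
	, o.[name] AS ObjectName
	, pe.[permission_name] AS PermissionName
FROM sys.database_permissions pe
	INNER JOIN sys.database_principals pr ON pr.principal_id = pe.grantee_principal_id
	INNER JOIN sys.objects o ON o.[object_id] = pe.major_id
WHERE pr.[type] IN ('S','U') --- SQL users and Windows users, not roles
```

In the setup from the script above, this would return the direct grant to uMinExecute but not the role-based grant to uReadLimited.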
As a note, if you test this in a development environment, make sure to remove any permissions that were only for testing, as you do not want invalid permissions checked into source control or accidentally promoted to higher environments due to a test. With security, setting strict permissions is only one step - auditing these is another step to make sure that none of these are altered in ways they shouldn't be.
Data encryption or data re-architecture
We can use transparent data encryption in SQL Server (as well as Azure SQL) when the situation calls for protecting sensitive data or sensitive architecture in case of theft. Transparent data encryption is not a free operation in that it adds overhead, and we must be careful about backing up the certificate (see the warning in the code comments), or else we may not be able to access our database again. Also, as we see in the below example, after we create our master key and certificate and encrypt our database, this also causes tempdb to be encrypted (see second image). Even though this contrived example encrypts only 1 user database out of 4, tempdb still becomes encrypted, and this does have an effect on performance for the non-encrypted databases.
USE [master]
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'ThisIsJustAnExample,DoNoUseThisPassword'
CREATE CERTIFICATE DatabaseThreeCert WITH SUBJECT = 'Database Three Certificate'

SELECT [name], [is_master_key_encrypted_by_server], [is_encrypted] FROM sys.databases
SELECT [name], [subject], [expiry_date], [start_date] FROM sys.certificates

USE [DatabaseThree]
GO

CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE DatabaseThreeCert
/*
--- Message right after:
Warning: The certificate used for encrypting the database encryption key has not been backed up. You should immediately back up the certificate and the private key associated with the certificate. If the certificate ever becomes unavailable or if you must restore or attach the database on another server, you must have backups of both the certificate and the private key or you will not be able to open the database.
*/

ALTER DATABASE DatabaseThree SET ENCRYPTION ON

USE [master]
GO

SELECT database_id
	, encryption_state --- 3 means the database is encrypted, while 2 means encryption is in progress
	, key_algorithm
	, encryptor_type --- details the encryption - notice DatabaseThree uses a certificate and tempdb happened automatically
FROM sys.dm_database_encryption_keys
When considering transparent data encryption, I caution people to test extensively and plan for scale ahead of time, including scenarios where encrypted data may be extracted to other databases. In addition to the performance costs, where scale may help, the effect that transparent data encryption may have on other operations should also be tested. For instance, replicated sensitive data will be protected on the publisher with transparent data encryption, but if the feature is not enabled on the distributor or subscriber, these data are still exposed. In a similar manner, in availability groups you will want to make sure that replicas carry the same settings as the primary, and also monitor how transparent data encryption may affect data processing.
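The warning in the earlier script stresses backing up the certificate and private key immediately. A minimal sketch of that backup is below; the file paths and password are placeholders, and the backup files and password should be stored securely and separately from the database backups.

```sql
--- Sketch: back up the TDE certificate and its private key.
--- Paths and password are placeholders - keep these backups away
--- from the database backups themselves.
USE [master]
GO

BACKUP CERTIFICATE DatabaseThreeCert
TO FILE = 'D:\SecureBackups\DatabaseThreeCert.cer'
WITH PRIVATE KEY (
	FILE = 'D:\SecureBackups\DatabaseThreeCert.pvk'
	, ENCRYPTION BY PASSWORD = 'ExampleOnly,UseAStrongPasswordHere'
)
```

Without these two files and the password, the encrypted database cannot be restored or attached on another server.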
Data re-architecture involves changing your data approach around storing sensitive information. Due to the rising presence of attacks and breaches, a few companies have abandoned the data model of storing private information. The discussion starts with the assumption that the environment is already compromised - with this knowledge, what would we avoid storing? This type of architecture requires out-of-the-box thinking in that we have to re-consider most data models - the credit system, which stores significant identity information provides us with an example of a challenge - how would we design it without identity?
Use delay as a feature
In some business contexts, attackers and customers both want the same feature - speed. As we've seen in some attacks over the past few years, the faster a transaction can occur (such as wiring money to an account), the more likely that the speed feature will be used when a system is attacked. Delay can be a security feature in some contexts - it slows the speed of a transaction or activity. Using the same scenario of wiring funds involving bank accounts, imagine a scenario in which we had two types of bank accounts - accounts that allowed wires and accounts that did not. Some customers may prefer to have the latter account, knowing that if attacked, the attacker has a limited number of options. Delay can add security.
In the below example, we see a queue table structure from a select query that includes the date of request - when the record is added - along with the date of fulfillment, which is when execution will occur. The design prevents a transaction from happening before this fulfillment date, and this becomes a feature for customers who want the date of fulfillment to be later than the date of request. While we see the basis for the table with a minimum required column set - an ID column for linking to details, the date the record is added, and the process date - this table can also include more columns or can hold the entire data set. We could save the full transaction details along with this information, or we could save only the queue information and perform a join. The key is how this design can add security in some contexts; how we choose to use it and which other columns we add will differ across data sets.
SELECT
	RecordId
	, AddedDate
	, ProcessDate
FROM queueProcessing
/*
--- Basis:
RecordId --- referencing a record that will be processed
AddedDate DATETIME DEFAULT GETDATE() --- the automatic date the data are added
ProcessDate DATETIME --- the date the record will be processed; will always be after AddedDate
*/
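The commented column list above could be realized as a table along these lines. This is a sketch only - the surrogate key and constraint names are illustrative, not from the original tip - but it shows how the rule that fulfillment must come after the request can be enforced in the table itself.

```sql
--- Sketch of the queue table described above; key and constraint names are illustrative.
CREATE TABLE queueProcessing (
	QueueId INT IDENTITY(1,1) PRIMARY KEY
	, RecordId INT NOT NULL --- references the record that will be processed
	, AddedDate DATETIME NOT NULL DEFAULT GETDATE() --- when the request was added
	, ProcessDate DATETIME NOT NULL --- when the record may be processed
	, CONSTRAINT ck_ProcessAfterAdded CHECK (ProcessDate > AddedDate)
)
```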
This type of queue table could be designed in multiple ways, but the basis of the table will involve at least three columns: the id of the record that will be processed (or a referencing id of some data point), the date the record was added for processing, and the date the record will be processed. The two dates are important for business rule validation as well as auditing. While this example shows a table with a time added along with a time that the transaction will be processed, we can alternatively build the delay into the application: a table with only the time the record was added, where the processing application processes records only after a certain interval has passed. This puts the rule in the processing code rather than in the queue table, which has the advantage of preventing an attacker from editing records to force earlier processing. That doesn't mean an attacker couldn't find ways to manipulate the code or even the date values. Either design, at the database or application layer, adds more difficulty for an attacker but isn't immune to attack.
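When the rule lives in the queue table, the fulfillment job might look like the sketch below, which only picks up records whose process date has arrived. The query is a hedged illustration built on the example table above, not code from the original tip.

```sql
--- Sketch: the processing job selects only records that are due,
--- so a record added now cannot be fulfilled before its ProcessDate.
SELECT
	RecordId
	, AddedDate
	, ProcessDate
FROM queueProcessing
WHERE ProcessDate <= GETDATE()
```

An application-layer variant would drop ProcessDate from the table and instead filter on something like `AddedDate <= DATEADD(DAY, -1, GETDATE())` inside the processing code, where the delay interval never touches the database.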
In our below examples, we'll look at audits of users, roles and object permissions, related to our least-permission examples from earlier in this tip. We use regular audits to detect abnormalities and identify possible attacks or compromises. We may also have business rules about adding users to roles, or about when objects can be altered, so when we see something outside these rules, we can spot a possible security issue (or development mistake). Audits verify our scheduled tasks, which is itself a security technique, and audits also provide us with a comparison against previous data. We can audit on a schedule, but we should be careful, as an attacker will try to discover this. A random audit offers more robust security, provided that it's not managed in a way an attacker can detect.
SELECT
	[name]
	, create_date AS CreateDate
	, modify_date AS ModifyDate
FROM sys.database_principals
ORDER BY create_date DESC

SELECT
	t2.[name] AS ObjectName
	, t1.[permission_name] AS [SpecificPermission]
	, t3.[name] AS PermissionName
	, CASE
		WHEN t3.issqluser = 1 THEN 'User'
		WHEN t3.issqlrole = 1 THEN 'Role'
	  END AS [PermissionType]
FROM sys.database_permissions t1
	INNER JOIN sys.objects t2 ON t1.[major_id] = t2.[object_id]
	INNER JOIN sys.sysusers t3 ON t3.[uid] = t1.[grantee_principal_id]
Notice that when we use the CASE WHEN expression against sys.sysusers, we extract whether the grantee is a role or a user, matching our setup. This matters because if we use an alternative setup, our audit will need to match it. These examples show what we would expect to see based on the roles and users created in the first example of this tip; if we used another approach, we would design our audit to match that approach instead. The design you use determines how you will audit your environment, and I suggest creating your audit before designing permissions, as the audit will map out how you want to establish permissions. We may also want to audit more than permissions, such as object creation and modification times. Even if we have regular scheduled backups, or maintain objects in source control, object definition audits may still be required.
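One way to keep the "expected" set of values that the summary below calls for is a baseline snapshot we compare later runs against. The sketch below assumes a hypothetical baseline table name (auditPrincipalBaseline); the idea, not the names, is the point.

```sql
--- Sketch: snapshot the current principals into a baseline table.
--- The baseline table name is illustrative.
SELECT [name], [type_desc], create_date, modify_date
INTO auditPrincipalBaseline
FROM sys.database_principals

--- Later audit run: principals that are new, or modified since the baseline,
--- surface as possible security issues (or development mistakes).
SELECT cur.[name], cur.[type_desc], cur.create_date, cur.modify_date
FROM sys.database_principals cur
	LEFT JOIN auditPrincipalBaseline base ON base.[name] = cur.[name]
WHERE base.[name] IS NULL
	OR cur.modify_date <> base.modify_date
```

Running this comparison at random times, rather than on a fixed schedule, fits the earlier point that predictable audits are easier for an attacker to work around.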
In this tip, we reviewed four minimum security practices with some applied examples of each. In each of these practices, we still need to understand where they can be applied for our business use - such as how we will apply these rules to our environments (all or some), what the effect will be on our customers (can these be features or are these applied to everyone), along with these practices complying with other parts of business. Security in a corporation can be challenging because we have to work with multiple teams who have different agendas and often security takes a backseat in consideration. Still, even with strong security practices, attackers have numerous ways to attack and compromise our systems and only in a best case scenario will these slow an attacker.
- From the beginning, we should design accounts with the least permissions required, whether that involves using roles and assigning users to roles, or assigning permissions to the user directly. In addition, these permissions should be under constant random audits so that we can catch possible compromises.
- For protecting confidential data, we should use data encryption along with encrypting data that may be useful in deriving confidential data. While this may invite some performance complexity, security trumps performance in situations where a data breach or compromise carries significant costs.
- Delay offers a strong security feature, as infiltrators and attackers often use urgency against companies. While delay cannot stop attackers, it may give a window of review or "consciousness-raising" where an attacker is detected. In addition, delays are useful when companies consider not allowing an "always-on" timeframe, which only gives attackers a lot of time to plan an attack.
- We should regularly audit our systems by having an "expected" set of values to compare with what we "observe." Be careful about using automation here - an attacker can easily discover and compromise automation, making you believe something abnormal is normal.
Article Last Updated: 2018-12-07