Reflecting on SQL Server Security Considerations After Recent Breaches
Recently, the technical community was hit by the Cloudflare issue - described in detail here - and when compromises of any kind occur, I like to step back and review security in general. In addition to the Cloudflare issue, another very concerning attack that has unfortunately been increasing is the SIM-swap hack. For those who may be unfamiliar with it, this individual experienced it and wrote a great summary. As we develop applications and databases that hold users' private data, what are some good considerations, practices and design patterns - where our situation allows them?
There are mandatory security questions and considerations when using any third-party tool, and even a native tool. Security has always been, and will always be, a chess match against an opponent: for every security practice we implement, there is a counter-practice the opponent will use. This never changes because security isn't a single game of chess that ends - it's a constant match.
- What are the least permissions the tool needs? For instance, a tool that only needs to read metadata of a few system objects should only have the appropriate access to read those data.
- How can the environment restrict the flow of connections that the data tool uses? Is this being measured before the design is implemented? If a tool makes a connection every minute, is this reflected in monitoring?
- Are randomized audits in place that look at the tool's permissions along with other security information - for example, new users, roles, permissions, etc.?
- When transmitting data, are options in place to ensure the data are encrypted in transit? SQL Server can encrypt connections with SSL/TLS, and in some environments this is mandatory. Additionally, you may want to consider partitioning the data, encrypting each partition in transit, and only reassembling the data at a single point.
- Can the company use time to limit possible data compromise? In the case of sim-swapping, what victims have discovered is that hackers were using the social media accounts as "timing points" for targeting their victim. However, imagine if you could simply have no phone service in a time window - this would limit a hacker's ability. While this sounds inconvenient, it's important to consider that inconvenience is generally more secure.
- Does the security practice introduce a dependency on other security practices? With this question, think of the case of SIM-swapping on cell phones. Many companies use cell phones for security validation, yet by using cell phones for extra layers of security, they introduce security dependencies on those phones. In the case of the article linked above, the user thought his cell phone secured his accounts; in reality, it made him more vulnerable. This is a big concern with code as well - make sure to use libraries that are understood and wrapped carefully.
- How does our environment retain, encourage and promote privacy - which is another word for information control? From encouraging clients to be careful with their privacy and what they disclose, to discussing security practice with only appropriate audiences, the best security practice historically has been privacy. The old idiom "silence is golden" has always been and will always be true.
- How many different tools are you running for monitoring, maintenance, etc.? The more tools we introduce, the more we create a factorial situation where numerous interactions open possible security breaches - even if we don't think of them. A simple example of this using cell phones is messaging applications: many of these will let the senders know when their message has been read or seen, yet this is a security breach in the context of determining what time the receiver is using the cell phone. What some sales people describe as "features" are actually dangerous for security. In the same manner, a tool which captures the metadata of objects might be a security concern if those object details reveal information a hacker might want and are not stored or transmitted in a secure manner.
- Finally, like chess, consider that for every security strategy, there is an opposite and equal counter strategy. When I was in high school and designed my school's website, I remember people telling me that "A 6 character password will never be hacked - it's just too much." Nobody would say that today.
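The least-permissions question in the first bullet above can be made concrete in SQL Server. A minimal sketch, assuming a hypothetical monitoring tool that only needs to read object metadata - the login, user and database names here are illustrative, not from any real tool:

```sql
-- Hypothetical login for a metadata-only monitoring tool (all names are illustrative).
CREATE LOGIN MonitorToolLogin WITH PASSWORD = 'UseAStrongGeneratedPasswordHere1!';
GO

USE OurDatabase;
GO

CREATE USER MonitorToolUser FOR LOGIN MonitorToolLogin;
GO

-- Grant only what the tool needs: permission to read object definitions,
-- not to read or change the data itself.
GRANT VIEW DEFINITION TO MonitorToolUser;

-- If the tool queries server-wide dynamic management views, it also needs
-- VIEW SERVER STATE at the server level - grant it only if actually required.
-- GRANT VIEW SERVER STATE TO MonitorToolLogin;
GO
```

Any request from the tool for permissions beyond grants like these should trigger the question of whether it really needs them.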
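The randomized-audit bullet above can lean on SQL Server's built-in auditing. A sketch, assuming a hypothetical audit that records new logins, role changes and permission grants (the audit names and file path are illustrative):

```sql
-- Hypothetical server audit capturing security-relevant changes
-- (names and the file path are illustrative).
CREATE SERVER AUDIT SecurityChangeAudit
    TO FILE (FILEPATH = 'D:\Audits\');
GO

CREATE SERVER AUDIT SPECIFICATION SecurityChangeAuditSpec
    FOR SERVER AUDIT SecurityChangeAudit
    ADD (SERVER_PRINCIPAL_CHANGE_GROUP),      -- logins created, altered, dropped
    ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),    -- server role membership changes
    ADD (DATABASE_PERMISSION_CHANGE_GROUP)    -- GRANT/DENY/REVOKE inside databases
    WITH (STATE = ON);
GO

ALTER SERVER AUDIT SecurityChangeAudit WITH (STATE = ON);
GO

-- A randomized review can then read the audit file at unpredictable times:
SELECT event_time, action_id, server_principal_name, statement
FROM sys.fn_get_audit_file('D:\Audits\*.sqlaudit', DEFAULT, DEFAULT);
```

The value of the randomization is in when the review happens, not in the audit itself - an attacker who knows the review schedule can work around it.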
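The connection-flow and encryption bullets above can both be checked with one query against SQL Server's dynamic management views - a sketch of what a baseline measurement might look like:

```sql
-- List current connections: who, from what program and address, when they
-- connected, and whether the transport is encrypted.
SELECT s.login_name,
       s.program_name,
       c.client_net_address,
       c.connect_time,
       c.encrypt_option        -- 'TRUE' when the connection is encrypted
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON c.session_id = s.session_id
ORDER BY c.connect_time;
```

Captured on a schedule, results like these give the baseline against which a tool's expected once-a-minute connection (or an unexpected unencrypted one) can be compared.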
Do you actually need the tool? When I reviewed the public list of companies that were affected, a few didn't appear to need the services offered. I've seen many tech companies spend unnecessary amounts of their budget on unnecessary tools, while also failing to realize that a tool demands effort that might be better spent elsewhere. Think of a tool that performs monitoring but requires a few weeks to understand and explain to others; if native monitoring could be built in less time, is the new tool really useful? We can ask the same question about configuration - if we have to make many changes to a tool's configuration to match our security, which may limit its uses, is the tool useful? Third-party tools cost both money and learning time. They also add security overhead and must often be wrapped so they don't open the door to compromises - in Cloudflare's case, some companies were unaffected because they had their own custom security on top of the service.
This isn't an argument against using tools, but rather an argument for considering all of the costs involved: the financial cost, the time cost, and the security cost of ensuring the tool's introduction doesn't lead to a compromise of data, architecture or other sensitive information.
Is there an alternative to your design? ShapeShift provides a great example of a company that solved a user-security and user-data problem in one of its tools. A person can use ShapeShift to swap bitcoin for another crypto-token (or vice versa) without ever sharing private information - no username or password required. Because of this design, ShapeShift doesn't need to focus on securing users' data: they don't have it. Even if the "swap" address were hacked or compromised, the users' information would not be, in the same way that when you sign a bitcoin transaction, the receiver gets only the bitcoin you send, not your private key.
In a sense, this model accurately assumes that a hack is going to happen eventually, so how can we prevent data compromises from happening? If a company doesn't have user data, the hack's damage is much less than if they also lost customer information. This may be an option in some jurisdictions - for instance, ShapeShift is a company out of Switzerland - and it shows that there are other ways to consider data storage - maybe not storing some data is the best practice. While companies may be limited here based on legal jurisdiction, consumers are not - if a company needs information that you deem too much, terminate business with the company.
Can you build custom security that your customers prefer? Related to the above point, if you can build custom security in your legal jurisdiction, it will add work for your application and database developers, but you might win over many consumers because of it. Using the SIM-swapping example above, when talking with different cell phone providers, the most advanced option they offered to prevent it was a password. Yet, as the article highlights, hackers will use social engineering to get around a password (in the movie Die Hard 4, the call to OnStar is an example of social engineering bypassing security). One question I asked was, "Why can't you [the company] simply offer an option a customer could choose that prevents any SIM-swap changes over the phone - all of it must be done in person?" None could offer that. This is custom security that is inconvenient, but very secure.
Custom security, while more expensive, would give customers more confidence in your security. I would happily pay an extra $50 a month (or more) to configure some of my own security for the tools I use - such as no changes to certain settings unless I've been verified in person. On the database side, this could be built by partitioning customers who choose one form of security from the others, whether using separate databases, tables, etc. Using the cell phone example, a customer service agent would be unable to see some account information - those data would only be accessible in branches. These types of configurations make more work for data thieves and hackers, which often results in them picking other targets.
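One way to sketch this partitioning on SQL Server 2016 or later is row-level security: a column marks which customers opted into the higher tier, and a security policy hides their rows from the customer-service role. This is a hedged illustration - the schema, table, function and role names (`dbo.Customers`, `SecurityTier`, `BranchStaff`) are all hypothetical:

```sql
-- Assumes a Security schema exists and dbo.Customers has a SecurityTier column
-- (0 = standard, 1 = in-person-only). All names are illustrative.
CREATE FUNCTION Security.fn_CustomerFilter(@SecurityTier INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS AllowAccess
    WHERE @SecurityTier = 0                       -- standard customers: visible
       OR IS_ROLEMEMBER('BranchStaff') = 1;       -- high-tier rows: branch staff only
GO

-- Bind the predicate so phone-based customer service agents never see
-- the rows of customers who chose in-person-only security.
CREATE SECURITY POLICY Security.CustomerAccessPolicy
    ADD FILTER PREDICATE Security.fn_CustomerFilter(SecurityTier)
        ON dbo.Customers
    WITH (STATE = ON);
GO
```

Separate databases or tables achieve the same separation with harder boundaries; the filter-predicate approach simply keeps one schema while enforcing the split.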
When we build technical solutions, we should always consider why we build them and what effects they may have. A poorly built application that compromises users' data leads to a loss of confidence, not only in our company, but in digital security in general. Developers and architects should consider a cautionary warning from the Mongol Empire's collapse: when people lost confidence in the Mongol system of paper money, it amplified their loss of confidence in other areas of the empire, and it quickly collapsed. If people lose confidence in our ability to store their data and protect their security, that loss of confidence will cascade like dominoes through every other digital area.
- If you are in a jurisdiction that allows flexibility in design, I would strongly consider a design that is careful about storing too much user data. The less, the better. Think about how people can use your solution without storing their information.
- Take inventory of how many native and third party tools you use and ask questions about security, interactions, and other related variables. Determine if your environment is able to wrap other layers of security around these.
- Always consider limitations: from security to privacy, know where the limits should be and build them. As an example, companies encouraging people to tie their cell phone to their email, to their bank account, to their address, etc. are encouraging very stupid security practices. This is incredibly dangerous, and the article I linked shows why - from the cell phone, the thief can get the email, the bank account, etc.
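For the tool-inventory point above, a starting place on the SQL Server side is simply to enumerate every non-built-in login and what it has been granted - each row is a tool or person to question:

```sql
-- Inventory of logins and their server-level permissions, newest first.
SELECT p.name AS login_name,
       p.type_desc,
       p.create_date,
       perm.permission_name,
       perm.state_desc          -- GRANT, DENY, etc.
FROM sys.server_principals AS p
LEFT JOIN sys.server_permissions AS perm
    ON p.principal_id = perm.grantee_principal_id
WHERE p.is_fixed_role = 0
  AND p.name NOT LIKE '##%'     -- exclude internal certificate-based logins
ORDER BY p.create_date DESC;
```

Running this periodically, and comparing against the last run, surfaces logins and grants that no one remembers creating - exactly the interactions between tools that open breaches we didn't think of.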
Article Last Updated: 2017-04-03