API Security and FHIR Recommendations
Ep. 12: Alissa Knight, recovering Hacker and API Security Expert
Moesif’s Podcast Network: Providing actionable insights for API product managers and other API professionals.
Joining us is Alissa Knight, partner at Knight Ink Media, CISO, cybersecurity expert, cinematographer and accomplished content creator for telling brand stories at scale.
Larry Ebringer, Moesif’s CMO, is your host today.
Listen to the episode on SoundCloud above, watch it on our YouTube Channel or download it on Apple or Google.
Table of Contents
- 2:23 Avoid Prison Yellow - Become an Ethical Hacker
- 5:43 Authentication Does Not Equal Authorization
- 8:14 Protect Against BOLA with Scopes
- 10:06 Don’t Use WAFs to Protect Your APIs
- 13:08 Know What Traffic is Going to Your API
- 14:35 Shift Left Security - Shield Right
- 17:39 PHI is Worth 1000X Credit Card Info
- 21:41 APIs are the Weakest Link in Healthcare
- 24:27 APIs Have Multiple Attack Surfaces
- 27:20 Banning Apps From Jail-Broken Phones Doesn’t Help
- 29:40 Use MobSF to Find API Keys
- 30:44 APIs Need to Comply With FHIR
- 34:55 Implement FHIR Correctly
- 37:11 Get FHIR Certified
- 38:11 FHIR Certification Versus HIPAA Compliance
- 39:59 No One Right Solution for API Security
- 42:59 Instrument Your APIs
Larry Ebringer (Moesif): Welcome to Episode 12 from Moesif’s APIs over IPAs Podcast Network. I’m Lawrence Ebringer your host today and the Chief Marketing Officer of Moesif, the API Observability Platform.
Joining me is Alissa Knight, CISO (Chief Information Security Officer), cyber security expert, cinematographer and accomplished content creator for telling brand stories at scale. She’s a regular conference speaker, which is incidentally where I first saw her as the keynote on apidays’ Interface Conference. She was giving a fascinating talk on security and healthcare apps, and I thought our audience would love to hear her perspectives on API security.
So here we are. Welcome Alissa. Where in the world do we find you?
Alissa Knight (Knight Ink Media): So, first of all thanks Larry. That was a great intro. I always feel bad for people that they have to give my bio and introduce me, because it feels like it could just go on forever. So I’m going to start having people just introduce me as, you know, that “security chick,” that “hacker chick.” But no. Thanks for having me on your show. It’s a real honor to be here. Thank you so much.
So I’m in Las Vegas. A lot of people don’t know people live here. It’s not just for coming to conferences, but people actually do live here. My wife and I live in an area called Summerlin, which is about 20 minutes from the strip.
Larry: Well, it’s great to have you on the show and we’re really glad that you’re going to share your perspectives on security with us.
Just as a quick side note, when I mentioned to my family that I was hosting you on our podcast, one of my 12 year old daughters asked if you were white hat or black hat. The language of technology has become so mainstream that even pre-teens are familiar with it. So on that note, why don’t you share with us your journey from black hat to white hat, starting two cyber-security companies, and then on to today running Knight Ink Media.
Avoid Prison Yellow - Become an Ethical Hacker
“Fast forward to when I was 17. I made a very bad decision and hacked a government network. Got caught… and ended up working for the US Intelligence Community in Cyberwarfare.” Even if you start off as a black hat hacker, you could end up an ethical one.
Alissa: So, that’s amazing that even a 12 year old knows the idiosyncratic distinctions between black hat and white hat. Great question, and I think a great way to start the show off.
I started out hacking when I was around 13 years old. Unfortunately, I had very little guidance. But things were very different back then. You’re talking about a different time when there weren’t all of the resources that are available to you today. You know, SANS, or even Google, wasn’t a thing back then.
I got involved in hacking through IRC. There was this IRC server, Internet Relay Chat, called EFnet, and I was very involved in these EFnet IRC channels. And so, that really was my family. Because, I mean I spent more time online than interacting with my actual family, so they pretty much raised me. I grew up on IRC.
And in these channels I learned through other people, and I learned on my own. At the time, there was very little knowledge out there. We have access to so much knowledge these days with Google and with all of these resources, but back then it was really difficult to learn. You really had to learn things on your own.
Fast forward to when I was 17. I made a very bad decision and hacked a government network. Got caught, they arrested me at school, believe it or not. After the charges were dropped, ended up working for the US Intelligence Community in Cyberwarfare. So I was forced into a white hat role. I look terrible in orange. Prison wasn’t for me.
I went from black hat to white hat outside of my own control and it’s good, because I realized who I was and it was clear to me that I could make a significant amount of money doing what I love most, and that was hacking.
And thus was born penetration testing/ethical hacking — actually hacking company networks and explaining to them how we did it in order for them to defend themselves against the real attacker.
Larry: I didn’t think anyone looks that good in orange.
Alissa: Orange is not the new Alissa Knight.
Larry: In one of your great YouTube videos, which by the way I recommend our listeners tune into, you gave a splendid definition of a hacker: someone who wants to understand how something works and then send a stimulus that the developer didn’t expect or account for. So, really not as nefarious as one would imagine. Also in your YouTube and blog content, you really back up all of your claims with great empirical data, which makes it a lot more valuable.
Authentication Does Not Equal Authorization
The most systemic issue developers make when developing APIs is that they’ll remember to authenticate API requests, but they’ll fail to authorize them. Understand the distinction between authorization and authentication: you may have an API key that proves you have legitimate access to send API calls, but you’re not necessarily authorized to receive the data that you’re requesting.
Larry: So, as someone who has sat on both sides of the table so to speak, and written extensively about hacking banking & healthtech APIs, and now I just heard a government network. And also someone who presented to Gartner recently about what you thought about API security, what are the most common mistakes that developers make when it comes to API security?
Alissa: Good question. I think, for me, the most systemic issue is developers will remember to authenticate the API request, but they’ll fail to authorize it. So, it’s understanding the distinctions between authorization and authentication. Authentication being something that you have or something that you know, versus authorized which is being allowed to actually view the data.
I may have an API token or an API key to be able to prove that I have a legitimate user account, have legitimate access to the API to go to send API calls, but I’m not authorized to receive the data that I’m requesting. And this is a problem across all of the APIs that I’ve looked at where there’s just a lack of authorization. Specifically around what are called Broken Object Level Authorization (BOLA) vulnerabilities.
For our listeners who don’t really fully understand what that means, the best analogy that I like to use is the whole coat check thing. If you and I went to a cocktail party and I saw you check in your expensive Burberry coat into the coat check, and I wanted to take that home. You were given the number 18 from the coat check person, and I was given 17, and I just take a sharpie and I change that 7 to an 8 and give it back to the coat check and say I want my coat. That’s a great example of a BOLA vulnerability. I’m authenticated, I have a ticket, but I’m not authorized to take home your Burberry coat. That’s basically how authorization vulnerabilities work.
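To make the coat-check analogy concrete, here is a minimal Python sketch of the vulnerable pattern versus an object-level authorization check. The record store, handler names, and data are hypothetical, chosen only to mirror the ticket-swap scenario:

```python
# Hypothetical in-memory patient store keyed by numeric ID,
# standing in for a database behind an API endpoint.
PATIENTS = {
    17: {"owner": "alissa", "allergies": []},
    18: {"owner": "larry", "allergies": ["bee stings"]},
}

def get_patient_vulnerable(session_user, patient_id):
    # BOLA: any authenticated caller can fetch ANY record
    # just by editing the ID in the request (17 -> 18).
    return PATIENTS.get(patient_id)

def get_patient_checked(session_user, patient_id):
    # Object-level authorization: the record must belong to the caller.
    record = PATIENTS.get(patient_id)
    if record is None or record["owner"] != session_user:
        return None  # a real API would respond 403 or 404 here
    return record
```

The point is that the ownership comparison has to happen on every object lookup, not just once at login.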
Protect Against BOLA with Scopes
BOLA vulnerabilities often occur when you haven’t specified what your users are authorized to access. If you use OAuth tokens, you can limit access by tying scopes to those tokens.
Larry: You actually preempted one of my last questions about BOLA. So tell us, how can our developers protect against such BOLA attacks?
Alissa: So there’s different things you can do. One of the things that I noticed is a lot of developers will implement tokens, but they won’t implement scopes. A really big recommendation is if you’re going to implement tokens like OAuth, you want to make sure that you tie scopes to those tokens, which defines the level of access or the records that you’re allowed to see. It basically sets parameters around your token and defines what you’re permitted to be able to request. So if I’m a clinician for example, and I have a login, through those scopes I can only view these specific patients. Or if I’m a patient, my scope for my token should only allow me to see my patient records, not /patient/1,2,3,4,5, all these other patient records. I should only be able to just request my specific patient record.
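A sketch of what tying scopes to a token can look like, assuming the token has already been validated and its claims decoded. The scope strings here are illustrative, loosely modeled on SMART on FHIR-style scopes, not an exact standard syntax:

```python
def authorize_request(token_claims, requested_patient_id):
    """Allow a patient-record read only if the token's scopes cover it.

    token_claims: decoded claims from a validated OAuth 2.0 access token,
    e.g. {"scope": "patient/self.read", "patient": "123"} (hypothetical shape).
    """
    scopes = set(token_claims.get("scope", "").split())
    if "patient/*.read" in scopes:
        # Clinician-style scope: may read any patient record.
        return True
    if "patient/self.read" in scopes:
        # Patient-scoped token: may only read the record bound to the token.
        return token_claims.get("patient") == requested_patient_id
    return False
```

The key design point is that the token itself carries which records it may see, so the server never has to trust an ID supplied in the URL.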
There’s obviously commercial solutions that can implement this as well. But a lot of the vulnerabilities that I’m finding are just basic authorization issues. It’s not like they’re SQL injection, which can be handled by legacy Web Application Firewalls (WAFs). WAFs are rules-based/logic-based security controls, and so can’t really protect against BOLA attacks.
Do Not Use WAFs to Protect Your APIs
Analysts have promoted Web Application Firewalls and API Gateways as effective security controls against API attacks. WAFs are good for identifying SQL injection in the payload or cross-site scripting, but they aren’t able to know whether I’m authorized to see the data that I’ve requested.
Larry: On the subject of rule-based WAFs like ModSecurity or even API Gateways, I noted that in one of your recent YouTube postings you effectively said that they’re pretty useless for protecting your APIs.
Alissa: I’m pretty opinionated!
Larry: And funnily enough, I think it was the 2019 report from Gartner, “What you need to do to protect your APIs,” where they espoused the virtues of API Gateways and WAFs. And then you were on Gartner and you basically shot that down.
Alissa: I don’t think they like me very much.
Larry: Is there anything else that can be done, perhaps like advanced anomaly detection, that can further protect your APIs?
Alissa: This is my position on it. I don’t think that security should ever be a feature of a product. For example, when you have an API Gateway they’re adding security in as a feature to their primary responsibilities. For me, that’s a concern.
Gartner put a report out on Web Application Firewalls, where they said that WAFs were an effective security control for a business. This is something I could go on all day about — with pay for play and the analyst industry and stuff like that. But I won’t go there. The fact of the matter is, I think it’s creating this false sense of security, because CISOs are buying WAFs and they’re implementing API Gateways and turning on security and it’s creating this false sense of security.
Every single one of the APIs that I hacked in my most recent healthcare breaches were protected by WAFs. These CISOs are listening to the information coming out from the analyst industry, who have to print and talk about it because those are their paying clients. But the problem is that they’re not an effective security control against API attacks. How is a WAF going to know whether or not I should be requesting data that doesn’t belong to me? It’s going to be looking for things like SQL injection in the payload, cross-site scripting attacks, that sort of thing. But it’s not going to know whether or not I’m authorized to see something or not, that’s outside of its realm of understanding. I think that answered like three fourths of your question, what was the other fourth?
Know What Traffic is Going to Your API
The first step in protecting your APIs is to understand what’s actually going to it. There’s a big difference in legitimate human traffic versus synthetic traffic. You can’t protect what you don’t know you have.
Larry: If WAFs and API Gateways don’t swing it, how can you protect your APIs? In fact, I think you touched upon a very cogent point, which I heard on one of your YouTube channels as well, which is that a lot of CISOs and developers view security as an afterthought or a bolt-on. And you said don’t do that: security should be an important consideration from the get-go.
Alissa: I’d like to preface this answer with the fact that any organization that has APIs needs to know what’s going to their APIs. They need to know: is it human traffic — legitimate human traffic, is it synthetic traffic, is it an Account Takeover (ATO) attack using credential stuffing? It’s also a good idea to look for high frequency tools that might be consistently pounding your APIs and stealing very expensive bandwidth and resources from legitimate users. Everyone should know what they’ve got — you can’t protect what you don’t know you have. That’s incredibly important. So I want to preface this with the fact that you should know what’s inside the traffic going to your APIs.
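As a toy illustration of knowing what’s going to your API, here is a sketch that flags clients whose request volume in a time window looks synthetic rather than human. The log format and threshold are assumptions for the example; real observability products use far richer signals (IP reputation, failed-login ratios for credential stuffing, and so on):

```python
from collections import Counter

def flag_high_frequency_clients(request_log, threshold=100):
    """Flag API clients whose request count in one time window suggests
    synthetic traffic (scrapers, credential-stuffing tools) rather than humans.

    request_log: iterable of (client_id, path) tuples for a single window.
    Returns the set of client_ids exceeding the threshold.
    """
    counts = Counter(client for client, _path in request_log)
    return {client for client, n in counts.items() if n > threshold}
```

Even this crude count catches the “high frequency tools consistently pounding your APIs” case she describes; the harder part is classifying the traffic that stays under the radar.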
Shift Left Security - Shield Right
Use a two-pronged approach. Shift left: implement security while the code’s being written, send devs to secure-code training, and use tools that alert when they write insecure code. Shield right: secure your product after it’s been deployed into production.
Alissa: After understanding what types of traffic are going to your API, one of the most important things is shift left security, but also shield right. The concept of shift left security is that when you’re writing the code you should be sending your developers to secure code training, and you should be implementing a tool that will watch out for them writing insecure code, and yell at them when they are. Look for a solution where you can compile an SDK with the app, if you’ve got an app-based architecture. So, shift left security: implement security while the product’s being created, while the code is being written.
Shield right is the idea that okay, not only are you securing it while it’s being written, but after it’s been deployed into production. Because we know that cybersecurity is a very fast-moving industry. Several new zero day exploits have come out just in the period you and I have been talking. There’s new exploits and new vulnerabilities every few minutes, so you need to shield right in anticipation of what’s to come.
The unknown unknowns. That’s what’s getting everyone nervous right now, and that’s what’s getting a lot of people breached right now — the unknown unknowns. The knowns are things that, to me, are legacy security controls like network intrusion detection, like the old Snort days when you’re writing Snort signatures for these exploits, you’re basically documenting known knowns. But what about the unknown unknowns? I don’t know what I don’t know yet, and so that is where shield right can take us into the future for the things that we don’t know about.
Larry: I just finished a book by Nicole Perlroth, This Is How They Tell Me the World Ends.
Alissa: Oh Nicole, I actually just interviewed her on MoneyFest, Money2020. So if you haven’t seen it yet, I urge you and your audience to go see it. It’s a great book. She’s a brilliant, brilliant journalist from the New York Times.
Larry: Right. The book is long and thick. A weighty tome. It reads like a crime thriller, covering the whole exploit industry, starting from how it started off in the early aughts, 2000, 2002. A great perspective on shift left, shield right. I love that graphic analogy.
PHI is Worth A Thousand Times Credit Card Info
Your healthcare information is persistent: if it’s released onto the dark web it’s gone, and there’s no changing it or getting it back. Contrast that with a bank simply mailing you a replacement credit card after Target is hacked.
Larry: Moving on a little. Since Moesif is used in a bunch of healthtech apps and some of our audience works in healthtech, in your experience, is ePHI (electronic Patient Health Care Information) or personal banking information, more valuable on the dark web, and why?
Alissa: Oh, I actually did research into that. That’s a great question. So obviously, in the work that I do I’m browsing Tor sites a lot, dark websites, and one of the things that I can tell you is that electronic health records are worth 1,000 times more than a US credit card number. So when you have a PHI record, EHR record, whatever you want to call it, you have a lot of data. So if I compromise Target and in that compromise, I steal Larry’s credit card number, your bank can very quickly send you out a new card. And it costs the bank a few bucks. They send you a new card, you’ve got a brand new card and you’re fine.
If I compromise your health history and put that up for sale on the dark web, how easy is it for you to get new health history sent to you in the mail? It’s impossible. There’s no such thing. It’s gone. Once it’s out there, it’s done. If I want to figure out how to kill Larry, I find his PHI and I find out he’s allergic to bee stings. So I go after you with some bumblebees. But you can’t undo that once it’s done. That’s one of the reasons why I believe it’s worth so much. The other thing is that, having compromised so many healthcare APIs, I can tell you that there is a treasure trove of data in there.
In one instance, not only did I see the admission records for a hospital, but also the family member information of that individual. So when you go into a hospital and they ask for next of kin information and all this other stuff, all of this other data they’re going to need to know, it makes for a very content-rich environment, a very data-rich environment. There’s a lot of data on individuals’ information within PHI records, which is why we need to take this so seriously and which is really the impetus for a lot of my healthcare research.
When you’re talking about people’s health, it’s way different than defacing a corporate website. So much has changed over the last two decades. I mean, before when I was hanging out on IRC and doing this, it was all about World of Hell mass defacements. And now it’s money. It’s such a lucrative business to be in. When you talk about the dark side and the tens of millions of dollars that these ransomware groups are bringing in, it’s insane. I mean a lot of these ransomware groups are bringing in more money than some countries, more cash on hand than some large companies have. So it’s scary. It’s a huge business and the answer, the long winded answer, to your question is PHI is definitely worth way more than financial data.
And that’s not to say, don’t get me wrong Larry, that’s not to say that it’s less important, it just demands a much smaller amount of money than PHI.
APIs are the Weakest Link in Healthcare
One of the biggest problems is chain of custody with PHI, where data goes from a very secure API to a less secure API. Cerner and Epic have very strong security, but once that PHI leaves over an API you have no idea how secure the next system might be. Hackers target the less secure API: rob Paul instead of Peter.
Larry: Given that the PHI and the EHR patient records are so valuable, are there unique challenges to locking down healthtech APIs, versus maybe those from, say, fintech?
Alissa: I think APIs could probably be the weakest link in security. Different health care providers will use different EHR systems, whether it’s Epic or Cerner or whatever, fill in the blank here. Those systems prior to, and I’m sure we’re going to talk about this, but prior to FHIR, they could not talk to each other. There’s problems where, for example, if I’m targeting something like a Cerner EHR system that may be very, very well protected and very secure, but as soon as that PHI leaves that system and goes to a less secure API, where do you think I’m going to target as a hacker? I’m going to target the less secure API. I’m going to go after the path of least resistance.
I think one of the biggest problems is this chain of custody, or weakest link — PHI can go from a very secure API to a less secure API. It’s less secure because the person who wrote it is just some small business who doesn’t know what they’re doing and didn’t know anything about security. I robbed them. I robbed Paul instead of Peter.
Also, I think the second thing is the prevalence, and I would dare to even say it’s very systemic across a lot of the mobile apps that I looked at, of this new concept of mHealth, or mobile health, apps, where API keys and tokens are being stored in clear text or hard coded in clear text in the mobile app. And developers just throw their arms up and say, “hey, where am I supposed to store them if I can’t store them in the app?” They don’t really know where to put them. So it’s a real problem. I think there’s problems on the app side, where keys and credentials are getting hard coded, and then problems on the back end where the APIs are.
APIs Have Multiple Attack Surfaces
Since data is everywhere, castle-and-moat security isn’t possible. Attack surfaces range from partner-facing APIs, think supply chain threats, through web APIs, where there’s no SDK to bundle extra security into, through to person-in-the-middle attacks, where certificate hygiene impacts server/client API security.
Larry: Right. Can you talk a little bit more about the different attack surfaces that ePHI could be compromised over, and what could be done to harden them? We covered a little bit about the apps themselves, but what about key stores, the network, the API endpoints, and also data leaks?
Alissa: Sure. There’s just obviously data everywhere. The whole concept of castle-and-moat is completely erased at this point. You can’t control it. So, on the client side the hard coding of keys and tokens, or credentials, that’s the real problem. You have different types of APIs, so let’s talk about the different attack surfaces.
You have partner-facing APIs, where supply chain attacks are a real threat today. I don’t even have to go after you from the Internet, I can just find out who you’re doing business with and go after them. Since there’s connectivity between the two of you, through a partner API, a B2B API that’s just facing the two of your companies, and I’m in. Because whoever wrote it felt, well, this is a partner facing API, not facing the Internet, so we don’t have to worry as much about security.
You have web APIs, and if you don’t have a mobile app, the security controls on the client side are going to be far different. Whereas you can compile an SDK with the mobile app to add that additional layer of security, what do you do about the web APIs where you can’t necessarily compile a Chrome browser with the SDK? Get Google to distribute that for you?
There’s also the concern over women-in-the-middle/man-in-the-middle/person-in-the-middle attacks, where you have a lot of organizations that are not implementing certificate pinning. What I’m able to do as an attacker with a lot of these apps is insert myself in between the communications between the client and the back end API. Then I present SSL certificates in both directions, telling the API server that I’m the client and telling the API client that I’m the server at the API endpoint. They both think they’re talking to each other, but they’re really talking to me. That allows me to decrypt the SSL encrypted traffic and look at it. I can learn how the API works just by intercepting the traffic and decrypting it, and then copying and pasting those API requests into my own API client like Postman. I can then go after the API endpoint myself manually with an API client.
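Certificate pinning is the usual defense against this interception: the client ships with the expected fingerprint of the server’s certificate and refuses any TLS session presenting a different one, including an interception proxy’s. A minimal sketch of the comparison step, assuming the raw DER certificate bytes are available from the TLS layer (the byte strings in the test are stand-ins, not real certificates):

```python
import hashlib

def certificate_matches_pin(der_cert_bytes, pinned_sha256_hex):
    """Return True only if the presented certificate's SHA-256 fingerprint
    matches the pin shipped with the client.

    A person-in-the-middle presenting their own certificate produces a
    different fingerprint, so the client can abort the connection.
    """
    fingerprint = hashlib.sha256(der_cert_bytes).hexdigest()
    return fingerprint == pinned_sha256_hex
```

In a real app this check runs inside the TLS handshake callback (mobile platforms and HTTP libraries expose hooks for this), and teams usually pin the public-key hash rather than the whole certificate so routine renewals don’t break the app.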
Banning Apps From Jail-Broken Phones Does Not Help
Hackers don’t care if your mobile app can be run on a jail-broken phone, or not. Downloading the APK, or intercepting the L7 traffic to & from the mobile app, is often enough to figure out how to access all the data.
Larry: Got it. I followed one of your tutorials recently on how to actually do that and it was surprisingly straightforward. And I’m not a super-great coder. In fact, I’m not a coder at all. But it was a little bit shocking that you could, with Postman and some packages downloaded onto your Mac, pull all of that information out.
Alissa: Pacman makes it surprisingly simple. It’s a package manager like Red Hat RPM and really powerful. A lot of these tools are free downloads. For example, I think it’s called Advanced REST Client, that’s a free download. The thing is that when you’re hacking, and that’s what I don’t understand, a lot of developers will implement security and say “oh, don’t worry, we looked to make sure that the mobile app isn’t being run on a jail-broken or rooted phone.” I don’t care about that. I don’t need to run it on a jail-broken phone. With a lot of these API attacks I just extract the APK off of my Android device, and then load it into my tools on my workstation using APK Extractor, which you can download from the Google Play store, ironically enough. I don’t need to execute it. I don’t need to run it in a jailbroken or rooted environment. I can just do all this from my laptop or workstation.
Alternatively, I can even run it on the Android device itself and intercept the traffic with a tool, and then again have access to all the data. Believe it or not, I prefer to do it that way rather than looking at the API documentation. With FHIR there’s a lot of documentation out there, because the point of FHIR is that you develop to it. But I prefer to actually intercept the traffic and look at it. I’m a packet monkey. I tend to learn better looking at packets and looking at how the API works at the packet level, rather than just reading documentation.
Use MobSF to Find API Keys
To deconstruct an app, simply drag and drop the APK file into MobSF: it takes the app apart and reverses it to the original source code. Then find hard-coded keys/tokens with grep.
Larry: I noticed that you are a Grep proponent.
Alissa: Yeah, I’m a Grep girl. Wow, you did watch my videos didn’t you. But thanks for stalking me, I appreciate that. Love my fans.
There’s a great tool out there called MobSF, for your audience that may not be familiar with it. It’s called the Mobile Security Framework and it allows you to literally drag and drop the APK file into the tool, and then it just deconstructs it. It just takes it apart and reverses it back to the original source code. That’s how I’m able to find all these hard coded keys and tokens. The interesting thing about that, much to your point, is that I don’t like to use the GUI; I’ll actually use it just to reverse the app back to the source code. Then I go into my command shell, into terminal, and use a bunch of grep strings. That’s my preferred way of finding hard-coded API secrets in apps.
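Her grep workflow over decompiled sources can be approximated in a few lines of Python. The regexes below are illustrative examples of common secret shapes (an AWS-style access key ID, a quoted `api_key` assignment), not an exhaustive ruleset; MobSF and dedicated secret scanners ship many more patterns:

```python
import re

# Illustrative patterns only; real scanners use far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"api[_-]?key\s*[=:]\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE),
]

def find_hardcoded_secrets(source_text):
    """Return every substring of the decompiled source that matches one of
    the secret patterns, the way a grep pass over the tree would."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(source_text))
    return hits
```

In practice you would walk the whole decompiled directory and run this over each file, which is exactly what a shell loop around grep does.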
APIs Need to Comply With FHIR
The recent 21st Century Cures Act stipulated that healthtech companies need to make patient data available to those requesting it. FHIR-compliant APIs present a secure way to meet those requirements.
Larry: Fascinating stuff. Great segue into the work of HL7 and their soon to be released latest version of the FHIR (Fast Healthcare Interoperability Resources) standard. How important is that work, should developers be building to the latest FHIR standard, and what’s your perspective on how important this will be for the API security industry in healthcare?
Alissa: Well it’s not important at all - no, just kidding. This is huge. This is probably, out of all of my vulnerability research, the most important thing that I’ve ever worked on, because you know I’ve had so many people from across the healthcare sector reach out and talk about how important this research is.
A lot of people don’t know this, but they automatically assume us hackers came out of the womb knowing this stuff. I had no idea how to spell FHIR, let alone knew what the hell it was when I walked into this. I had to do my homework. I had to research. I didn’t even know who the heck HL7 was. HL7 is the name of the organization, and it’s the name of the standard. So it’s like the whole Kleenex tissue thing. There’s all this that I needed to research and understand. It took me months, and I still don’t know all of it, I still don’t fully understand a lot of it. I’m really excited about this research. I’m currently diving into R4 of FHIR, which is Release 4 of the standard.
It was created by HL7 International, Health Level Seven International, like you mentioned, and before FHIR, there were all these other different versions of HL7 that predated FHIR. And the ONC (Office of the National Coordinator for Health Information Technology) has basically set these deadlines for healthcare payers and healthcare providers and said “you need to make this patient data available to people requesting it, otherwise you’re in violation of this data-blocking rule, information-blocking rule.” It can mean stiff penalties and fines. It’s a big deal if you are found guilty of violating the information-blocking rule. There’s a deadline around this, and organizations and healthcare payers need to implement FHIR APIs and make this healthcare data available.
The only thing is that I wanted to show what could happen when these APIs aren’t secured properly. That’s the impetus to this research. That’s what I’ve been focused on over the last year. Phase one of this research was targeting mHealth APIs, mobile healthcare APIs, that store medical data. In part two, which we’re calling Playing With FHIR, we’re focused on hacking FHIR APIs. And that’s what this report coming out will detail, it’ll give our findings and what we’re seeing out there. And there’s a lot and I’m really excited to be unveiling that research. And I know you guys are doing a lot in the healthcare space as well. Healthcare is a very target rich environment. There’s so much money there and hackers know it. And there’s just so much data, and that through these vulnerable APIs it’s very easy to get access to.
Implement FHIR Correctly
To create rock-solid FHIR APIs, implement them using OAuth, authentication, authorization, and other best practices. If humans are implementing them, they’re going to have vulnerabilities. Make them hard to find.
Larry: Poking the bear here, but if Release 4 is coming out and you’ve been working on it, and lots of other clever people have been working on it, are there going to be many vulnerabilities after it’s released? Or is it going to be the nirvana of secure health, mHealth and ePHI over APIs, moving forward?
Alissa: I need to clarify for your audience that when you deploy FHIR, you’re not literally going to Best Buy and buying like a shrink-wrap FHIR API “and make sure to include that security with it.” And then it’s just deployed. It’s going to be based on implementation.
One organization may implement a FHIR API but not follow best practices, and have it be completely vulnerable. But another organization may implement FHIR APIs and have that be rock solid. It may not be immediately evident where the vulnerabilities are. Maybe it’ll take much longer to find them than at the other organization. I’ll never say that anything is not hackable. If humans are implementing it, it’s going to have vulnerabilities, but they might just be harder to find. That’s the thing about FHIR. You and I could both implement FHIR APIs to serve our PHI records, but yours may be way more secure than mine, because I don’t know what I’m doing and I implemented it insecurely.
So it depends on the implementation. It depends on who’s implementing it and whether or not they know how to secure it properly. Being SMART on FHIR brings in OAuth and all these other things like authentication and authorization. So implementing FHIR is, of course, very heavily predicated on the implementer and whether or not they implemented it properly.
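To make the authentication-versus-authorization distinction concrete, here is a minimal sketch of a server-side authorization check using SMART on FHIR scope strings (syntax per the SMART App Launch v1 spec, e.g. `patient/Observation.read`). The function names and the simplified grammar are illustrative assumptions, not a complete implementation of the standard:

```python
# Hypothetical sketch: enforcing SMART on FHIR scopes server-side.
# SMART App Launch v1 scope syntax: context/ResourceType.permission,
# e.g. "patient/Observation.read" or "user/*.write". (SMART v2 uses a
# finer-grained .cruds syntax, not modeled here.)

def scope_allows(scope: str, resource_type: str, action: str) -> bool:
    """Return True if a single SMART scope grants `action` on `resource_type`."""
    try:
        context, rest = scope.split("/", 1)
        scoped_type, permission = rest.split(".", 1)
    except ValueError:
        return False  # malformed scope grants nothing
    if context not in ("patient", "user", "system"):
        return False
    type_ok = scoped_type in ("*", resource_type)
    action_ok = permission in ("*", action)
    return type_ok and action_ok

def is_authorized(granted_scopes: list[str], resource_type: str, action: str) -> bool:
    # Authentication established who the caller is; this is the separate
    # *authorization* check: does any granted scope cover this request?
    return any(scope_allows(s, resource_type, action) for s in granted_scopes)

# A token scoped to read Observations must not permit writes or Patient reads.
scopes = ["patient/Observation.read"]
print(is_authorized(scopes, "Observation", "read"))   # True
print(is_authorized(scopes, "Observation", "write"))  # False
print(is_authorized(scopes, "Patient", "read"))       # False
```

The point of the sketch is exactly the BOLA-style failure mode discussed earlier in the episode: a valid token proves authentication, but the server still has to check, per request, that the token's scopes actually cover the resource and action being requested.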
Get FHIR Certified
As an added check it’s worth having your FHIR API certified.
Alissa: Now, I do want to throw a wrench in here — there’s actually a certification process that organizations can go through to have their FHIR API certified. So if you’re going to implement a FHIR API and I’m going to implement one, I can actually go and get my FHIR API certified as being implemented according to the standard, with the proper security controls in place. And you can continue to run your FHIR APIs, but not be certified. It’s not compulsory. You’re not required to get yours certified.
The EHR vendors that I’ve spoken to in this research are pursuing certification, of course. But you can have non-certified and certified APIs, which means that there’s going to be a very big mix of vulnerable and non-vulnerable APIs. Let me be careful when I say that, because I’m not saying that a certified API is going to be secure and unhackable; I’m just saying that you’re going to have a mix of the two.
FHIR Certification Versus HIPAA Compliance
Unlike HIPAA, with FHIR, there are third-party bodies that can certify your APIs.
Larry: That sounds a little like HIPAA compliance, where there’s no third-party certification body; there are best practices that you should follow, and there are people who will audit what you’ve built, but HIPAA is basically a bunch of guidelines that you should follow. Sounds like though, that HL7 has gone one step further where there are third-party bodies who can check that you’ve implemented FHIR according to the standard.
Alissa: One of the EHR vendors that I spoke to, I won’t name them, but they did say that they were pursuing certification by the end of the year, which would make them the first company to get FHIR certified. I don’t know who the actual organization is that’s doing the certification, maybe it’s HL7, I don’t know. But you definitely have the option to get it certified or not, and as in your example, you’re not required to.
No One Right Solution for API Security
There are lots of great solutions out there for API security. But as for what the actual best approach is, I don’t know if there will ever be an answer to that.
Larry: Got it. Well, we’ve covered a lot on our podcast today. You’ve given a lot of fantastic insights on how to protect your APIs in general, and your mHealth and your healthtech APIs, in particular. As a penultimate takeaway, what’s next for APIs and security that we haven’t seen yet?
Alissa: I think #moreplease. This is a great way to close out the show and the interview. First of all, if I were to put my fortune-teller hat on, this is definitely the direction the world is headed: we’re moving completely away from monolithic architectures to microservices, everything is moving to the cloud, everything is microservices powered by APIs. I think we’re just going to see more and more of this.
The problem is that organizations are going to go from running one or two APIs to running 1,000 to 2,000. I’m working with an organization right now that has over 1,600 APIs. That’s a lot of APIs. I think this is going to create a much bigger marketplace. One of the things that came out of my presentation to Gartner on the state of APIs is that the marketplace is going through an identity crisis right now.
The API security marketplace doesn’t really know what it is yet. Every company out there with an API security threat management solution believes that they’re doing it the right way. And that’s not necessarily false. Every single company may be taking the right approach to API security, but I think the jury is still out on where that will land. Is it in-line? Is it passive? Do we use SDKs? Do we use distributed tracing? Do we use… There are all these great, amazing approaches, a lot of them are my clients, and they all have great solutions. As for what the actual best approach is, I don’t know if there will ever be an answer to that.
But I know what the wrong approach is, and we’ve talked about that here on the show: it’s these legacy rules-based systems, like Web Application Firewalls. Or: we’ve got an API management solution in there, let’s just have that do security. What my research proves, by showing that I can hack and breach APIs protected by WAFs and API gateways and steal thousands of patient records, is that no one should be relying on these alone to secure their APIs. They should be looking at solutions like Moesif, and at other threat management solutions like Traceable and Approov, and lots of other great solutions out there.
Instrument Your APIs
The most important step in securing your APIs is knowing what’s going over them. Instrument yourself with a tool that’ll give you the ability to delve into your APIs’ traffic.
First and foremost, what everyone needs to understand is that you can’t protect what you don’t know you have. You need to know how many APIs you have, what kind of data they are serving, whether they are Internet-facing (can I reach them from the Internet) or partner-facing, and whether we’re authorizing as well as authenticating (we’re giving you authenticated access to this, but are we authorizing what data you can request)? All of these things are very important. Know what kind of data your APIs are serving. If I have 1,600 APIs, do you think I’m going to know which APIs are serving PII or PHI and which ones are serving PCI data? I should know that, and there’s no way I can memorize it, so I’m going to need a tool to do it. My recommendation to all of you out there: know what data is going to your APIs by interdicting it, looking at it, analyzing it. Know what’s going there, know what is taking the bandwidth from your APIs and what API requests are being sent. Instrument yourself with a tool that will give you the ability to look at it.
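The kind of traffic inspection described above can be sketched as a small response classifier that flags which endpoints are serving sensitive data. The field names and regexes here are illustrative assumptions, a toy stand-in for what an observability tool does at scale, not a complete PHI/PII detector:

```python
import json
import re

# Hypothetical sketch: classify API responses by the sensitive markers
# they contain, so you can answer "which of my APIs serve PII/PHI/PCI?".
# The patterns and field names below are illustrative, not exhaustive.

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN format
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # crude PAN match
}
SENSITIVE_FIELDS = {"patient_id", "mrn", "dob", "diagnosis"}

def classify_response(endpoint: str, body: str) -> dict:
    """Record which sensitive markers appear in one API response body."""
    findings = set()
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(body):
            findings.add(label)
    try:
        payload = json.loads(body)
        if isinstance(payload, dict):
            # Flag well-known sensitive field names at the top level.
            findings |= SENSITIVE_FIELDS & set(payload)
    except ValueError:
        pass  # non-JSON body: keep only the regex findings
    return {"endpoint": endpoint, "sensitive": sorted(findings)}

record = classify_response(
    "/api/v1/patients/42",
    '{"mrn": "A1B2", "ssn": "123-45-6789", "note": "follow-up"}',
)
print(record)
# {'endpoint': '/api/v1/patients/42', 'sensitive': ['mrn', 'ssn']}
```

Aggregating records like this across all traffic is what turns 1,600 opaque APIs into an inventory you can actually reason about: which endpoints serve PHI, which serve PCI data, and which should never be Internet-facing.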
Larry: And take action on it. What a superb summary and takeaway from this interview. My final question: where can our audience find out more about Alissa and where are you speaking next? I know you’re a very prolific keynote speaker. What’s coming up for Alissa? And what other resources out there should our audience be following?
Alissa: I would say that the primary vehicle for where I distribute my vulnerability research and content as a content creator and filmmaker is YouTube. So definitely subscribe to my YouTube Channel and smash that bell icon for notifications. But also follow me on Twitter and connect with me on LinkedIn. I love nerding out on API security and hacking in general. I have a new book coming out on hacking APIs through Wiley, my second book. I’m in the process of writing a screenplay for a new TV series. There’s a lot going on in my world, and I’d urge everybody to just follow me on YouTube, LinkedIn, and Twitter, because that’s primarily where I’m at. Unless you want to see pictures of my food; for that, I’m on Instagram.
I appreciate all of you, and as part of my network and followers and fans, keep an eye out, because it’s going to be a really exciting year. I’m going to be speaking next at HIMSS, arguably the world’s largest healthcare conference, where I’ll be keynoting alongside some other amazing keynote speakers like A-Rod and Michael Coats.
So definitely stop by and say hi if you’re there. I’m also going to be speaking at DEF CON; I’m speaking at over 30 conferences this year, so a lot of exciting things. I’m also keynoting at the upcoming Money20/20 conference, which is really exciting. And I’m posting my event schedule on my website, KnightInkMedia.com, so keep an eye out. Again, the best way to hit me up is on social media.
Larry: Great. Thank you very much Alissa for your time today. I’m sure our audience will really enjoy this podcast.
Alissa: Thank you. Appreciate it.