tl;dr we (Escape) scanned 3000+ public APIs on the internet with our in-house feedback-driven exploration tech and ranked them using security, performance, reliability, and design criteria. The results are public on APIrank.dev. You can also request to index your own API for free and see how it compares to others.
Why APIrank? 🤔
During a YC meetup I spoke with a fellow founder who told me how hard it is to pick the right external APIs to use in your own projects.
It’s true that as developers and hackers, we rely more on external services for building our products.
We interact with those vendors’ services through their public APIs, which often become a central part of our stacks.
Yet, no good resource existed to compare the quality of similar vendors’ public APIs to help engineering/product teams pick the right provider.
What if there was a website that would help you compare the performance, reliability, security, and design of different vendors’ public APIs?
Well, say hello to apirank.dev!
Why is ranking public APIs hard? 🥵
Automating the audit of public APIs is a very hard problem. First, we needed to find all the public APIs and their specifications – mostly OpenAPI files.
We used several different strategies to fetch OpenAPI specifications:
- Crawl API repositories like apis.guru
- Crawl GitHub for OpenAPI specification files
- A cool Google dork
Those strategies enabled us to gather roughly 20,000 OpenAPI specs.
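As a rough illustration (this is not our actual pipeline), a crawler that collects candidate files from repositories could filter them down to plausible OpenAPI documents by checking for the spec's required top-level version key:

```python
import json

def looks_like_openapi(raw: str) -> bool:
    """Heuristic check: does this JSON document declare an OpenAPI/Swagger version?"""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError:
        return False
    # OpenAPI 3.x documents have a root "openapi" key; 2.x documents use "swagger"
    return isinstance(doc, dict) and ("openapi" in doc or "swagger" in doc)

# Example candidates pulled from a crawl (illustrative data)
candidates = [
    '{"openapi": "3.0.0", "info": {"title": "Petstore"}}',
    '{"name": "not-a-spec"}',
    'not even JSON',
]
specs = [c for c in candidates if looks_like_openapi(c)]
```

A real pipeline would also handle YAML specs and deduplicate mirrors of the same API, but the filtering idea is the same.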
Then comes the hard part of the problem:
We want to dynamically evaluate the security, performance, and reliability of those APIs.
But APIs take parameters that are tightly coupled to the underlying business logic.
A naive automated approach would not work either: putting random data in parameters would likely not pass the API’s validation layer, giving us little insight into the API’s real behavior.
Manually creating tests for each API is not sustainable either: it would take our 10-person team years. We needed to do it in an automated way.
Fortunately, our main R&D efforts at Escape aimed to generate legitimate traffic against any API efficiently.
That’s how we developed Feedback-Driven API Exploration, a new technique that quickly infers the underlying business logic of an API by analyzing responses and the dependencies between requests. (We wrote a more in-depth blog post about it.)
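To give an intuition for the feedback-driven idea (this toy sketch is ours, not Escape's actual implementation): values observed in earlier responses are fed back as parameters to dependent requests, so later calls pass validation instead of being rejected like random data would be.

```python
# Toy feedback-driven exploration loop against a fake in-memory API.
# "fake_api" stands in for a real HTTP client and returns canned responses.

def fake_api(endpoint: str, params: dict) -> dict:
    if endpoint == "/users":
        return {"users": [{"id": 42}]}
    if endpoint == "/users/{id}/orders" and params.get("id") == 42:
        return {"orders": [{"id": 7, "total": 99.5}]}
    return {"error": "validation failed"}  # random/missing params get rejected

def explore(endpoints: list[str]) -> dict[str, dict]:
    observed: dict[str, object] = {}  # values harvested from earlier responses
    results: dict[str, dict] = {}
    for endpoint in endpoints:
        # Fill path parameters with previously observed values when possible
        params = {k: v for k, v in observed.items() if "{" + k + "}" in endpoint}
        response = fake_api(endpoint, params)
        results[endpoint] = response
        # Harvest ids from the response for use in later requests
        for item in response.get("users", []) + response.get("orders", []):
            observed.setdefault("id", item["id"])
    return results

results = explore(["/users", "/users/{id}/orders"])
```

Without the feedback step, the second endpoint would be called with random parameters and only ever return validation errors.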
We originally developed this technology for advanced API security testing. But from there, it was easy to also test the performance and reliability of APIs.
How we ranked APIs 1️⃣0️⃣0️⃣
Now that we have a scalable way to gather exciting data from public APIs, we need to find a way to rank them. And this ranking should be meaningful to developers when choosing their APIs.
We decided to rank APIs using the following five criteria:
An API’s security score is computed as the combination of two sub-scores: the number of OWASP Top 10 vulnerabilities and the number of personally identifiable information (PII) leaks detected by our scanner.
The performance score is derived from the median response time of the API, sometimes referred to as p50.
The reliability score is derived from the number of inconsistent server responses, either server errors or non-conforming return values.
The design score reflects the quality of the specification file. A high-quality specification file (with up-to-date types and examples) helps developers understand the API and lets tools produce relevant documentation.
The popularity score is computed from the number of references to the API found online.
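As a rough illustration of how such sub-scores could be combined (the weights and normalization below are hypothetical; APIrank.dev's actual formula is not published here), a weighted average over the five criteria might look like:

```python
from statistics import median

# Hypothetical weights -- the real APIrank.dev formula may differ.
WEIGHTS = {"security": 0.3, "performance": 0.2, "reliability": 0.2,
           "design": 0.15, "popularity": 0.15}

def performance_score(response_times_ms: list[float]) -> float:
    """Map the median (p50) response time onto a 0-100 scale: faster is better."""
    p50 = median(response_times_ms)
    return max(0.0, 100.0 - p50 / 10.0)  # e.g. a 500 ms p50 yields 50 points

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of the five sub-scores, each on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

scores = {"security": 80, "performance": performance_score([120, 300, 90]),
          "reliability": 95, "design": 70, "popularity": 60}
total = overall_score(scores)
```

The p50 here is simply the median of observed response times, which is why it is robust against a few unusually slow requests skewing the score.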
If you are curious about your API’s performance, you can ask us to index your API on APIrank.dev!
Our ask 😃
Ranking all public APIs on the internet is a massive project, and we would gladly welcome help from the community!
You can help us by: