FAQ
How do I integrate Oso into my app?

There are two main steps to adding Oso. First, you express your authorization logic as declarative rules, written in Polar and stored in Oso policy files. Second, you install the Oso library for your application language/framework and add is_allowed checks wherever they are most suitable for your use case. For example, it is common to have checks at the API layer – e.g., checking the HTTP request and the path supplied – as well as checks at the data access layer, e.g., when your application retrieves data from the database. For a more detailed discussion of where to integrate Oso in your application depending on your requirements, please see our guide, Add Oso to an App.

What data does Oso store?

When you load policy files into Oso, Oso stores the rules defined in the policy in memory. In addition, Oso stores any registered classes on the Oso instance. In the course of executing a query, Oso caches any instances of classes/objects that it sees, but it clears these when the query finishes. Oso does not, for example, store any data about users, what groups they are in, or what permissions have been assigned to them. The expectation is that this data lives in your application, and that Oso accesses it as needed when evaluating queries. Because of this, it is rare to need to change policies while the application is running. For example, if you need to revoke a user's access because they leave the company or change roles, updating the application data will immediately flow through to policy decisions and achieve the desired outcome. Changes to policy should be treated the same as source code changes, and can be rolled out through your existing deployment processes.

Can I query Oso arbitrarily?

Absolutely, you can! We use allow as a convention to make it easy to get started with Oso. However, all Oso libraries additionally expose a query_rule method, which enables you to query any rule you want. Beyond this, you can even query using inputs that are not yet set by passing in variables. However, this is currently an experimental feature, and full documentation is coming soon.

How does Oso access my application data?

When a policy contains an attribute or method lookup, e.g., actor.email, policy evaluation pauses and Oso returns control to the application. It creates an event that says "please look up the field email on the object instance with id 123". (The Oso library keeps a mapping from instance IDs to the concrete application instances.) What happens next depends on the specific language, but it will use some form of dynamic lookup – e.g., a simple getattr in Python, or reflection in Java. The application returns the result to the policy engine, and execution continues.
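To tie these answers together, here is a minimal sketch of the Python integration, with the policy embedded as a string for brevity; the User and Expense classes and their attributes are hypothetical, and the exact API may vary slightly between library versions.

```python
from dataclasses import dataclass
from oso import Oso

@dataclass
class User:
    email: str

@dataclass
class Expense:
    submitted_by: str

oso = Oso()

# Register the application classes referenced by the policy.
oso.register_class(User)
oso.register_class(Expense)

# In a real app the rules would live in a .polar file loaded via oso.load_file(...).
oso.load_str("""
    allow(actor: User, "view", expense: Expense) if
        expense.submitted_by = actor.email;
""")

alice = User(email="alice@example.com")
expense = Expense(submitted_by="alice@example.com")

# The conventional entry point: is this (actor, action, resource) triple allowed?
assert oso.is_allowed(alice, "view", expense)

# Any rule can also be queried directly; query_rule yields one result per way the rule succeeds.
assert list(oso.query_rule("allow", alice, "view", expense))
```

Evaluating expense.submitted_by and actor.email in this policy is exactly where Oso hands control back to the application and uses the dynamic lookup described above.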
What is the best practice for managing policy files in a way that's maintainable in the long run?

This is a common question from those who have used policy languages or rules engines before. Corollary questions may be: Can I have multiple policy files? How do I stop policy files from getting out of control? The answer, of course, varies by use case, but we suggest the following rules of thumb. Yes, you can and should have multiple policy files; all rules loaded into Oso live in the same namespace, so you can reference rules in other policy files without importing them (a short sketch appears at the end of this FAQ). We encourage you to think of your policy files the same way you think about source code: refactor large rules into smaller rules, where each rule captures a self-contained piece of logic, and organize source files according to the components they refer to.

What are the performance characteristics of Oso?

Oso is designed to be lightweight and to have a limited performance footprint. The core library is written in Rust and is driven directly by your application. There are no background threads, no garbage collection, and no IO to wait on. Each instruction takes about 1-2 µs, and typical queries take approximately 1-20 ms. For a more detailed discussion of the performance characteristics of Oso, please see the performance page.

When should I use Oso, and when should I use something else?

The foundation of Oso is designed to support a wide variety of use cases, though given Oso's focus on application integration there are some use cases that are currently a more natural fit than others. For a more detailed discussion of this topic, please see our use cases page.

What languages and frameworks do you support?

We currently support Python, Node.js, Go, Rust, Ruby, and Java, and are actively working on supporting more languages. We have framework integrations for Flask, Django, and SQLAlchemy. The easiest place to try Oso in your language of choice is the Quickstart. Vote for and track your favorite language and framework integrations at our GitHub repository, and sign up for our newsletter in the footer of any docs page to stay up to date on the latest product updates.

What operating systems do you support?

We currently support Linux, macOS, and Windows.

What license does Oso use?

Oso is licensed under the Apache 2.0 license.

How does pricing work?

Oso is freely available as an open source product and will always be free and open source. We are also working on a commercial product that will be built around the core open source product. If you are interested in support for Oso or the commercial product, please contact us.

Who builds and maintains Oso?

Oso is built by Oso! We are headquartered in New York City with engineers across two continents, and we are hard at work on new features and improvements. If you have feedback or ideas about how we can make the product better, we would be delighted to hear from you. Please feel free to reach out to us at engineering@osohq.com.
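As referenced above, a brief sketch of loading a policy split across several files; the file layout is hypothetical, and newer versions of the library expose a single load_files([...]) call instead of repeated load_file calls.

```python
from oso import Oso

oso = Oso()

# One file per component; all rules share a single namespace, so rules in
# api.polar can reference rules defined in expenses.polar without any imports.
for path in ["policy/base.polar", "policy/expenses.polar", "policy/api.polar"]:
    oso.load_file(path)
```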
Internals
Oso is supported in a number of languages, but the Oso core is written in Rust, with bindings for each specific language. At the core of Oso is the Polar language, which handles parsing policy files and executing queries in the form of a virtual machine. Oso was designed from the outset to be natively embedded in different languages. It exposes a foreign function interface (FFI) to allow the calling language to drive the execution of its virtual machine.

Oso can read files with the .polar suffix, which are policy files written in Polar syntax. These are parsed and loaded into a knowledge base, which can be thought of as an in-memory cache of the rules in the file. Applications using Oso can tell it relevant information, for example by registering classes to be used with policies, which are similarly stored in the knowledge base. The Oso library can then be seen as a bridge between the policy code and the application classes. It is responsible for converting between Oso primitive types (like strings, numbers, and lists) and native application types (e.g., Python's str, int, and list classes), as well as keeping track of instances of application classes.

When executing a query like oso.query("allow", [user, "view", expense]), Oso creates a new virtual machine to execute the query. The virtual machine executes as a coroutine with the native library, and therefore with your application. To make authorization decisions, your application asks Oso a question: is this (actor, action, resource) triple allowed? To answer the question, Oso may in turn ask questions of your application: What's the actor's name? What's their organization? What's the resource's id? And so on. The library provides answers by inspecting application data, and control passes back and forth until the dialog terminates with a final "yes" or "no" answer to the original authorization question. The virtual machine halts, and the library returns the answer back to your application as the authorization decision.

Data Filtering

Oso supports applying authorization logic at the ORM layer so that you can efficiently authorize entire data sets. For example, suppose you have millions of posts in a social media application created by thousands of users, and regular users are only authorized to view posts from their friends. It would be inefficient to fetch all of the posts and authorize them one by one. It would be much more efficient to distill from the policy a filter that the ORM can apply to return only the authorized posts. This idea can be used in any scenario where you need to authorize a subset of a large collection of data. The Oso policy engine can produce such filters from your policy.

How it works

Imagine the following authorization rule. A user is allowed to view any public social media post as well as their own private posts:

allow(user, "view", post) if
    post.access_level = "public" or
    post.creator = user;

For a particular user, we can ask two fundamental questions in the context of the above rule: 1. Is that user allowed to view a specific post, say, Post{id: 1}? 2. Which posts is that user allowed to view? The answer to the first question is a boolean. The answer to the second is a set of constraints that must hold in order for any Post to be authorized.

Oso can produce such constraints through partial evaluation of a policy. Instead of querying with a concrete object (e.g., Post{id: 1}), you can pass a Partial value, which signals to the engine that constraints should be collected for it. A successful query for a Partial value returns constraint expressions:

_this.access_level = "public" or _this.creator.id = 1

Partial evaluation is a generic capability of the Oso engine, but making use of it requires an adapter that translates the emitted constraint expressions into ORM filters. Our first two supported adapters are for the Django and SQLAlchemy ORMs, with more on the way. These adapters allow Oso to translate policy logic into SQL WHERE clauses:

WHERE access_level = "public" OR creator.id = 1
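To make the translation concrete, the following sketch shows the kind of filter such an adapter produces, written here as hand-coded SQLAlchemy purely for illustration; the Post model and session setup are hypothetical, and the real Django and SQLAlchemy adapters generate the equivalent condition for you.

```python
from sqlalchemy import Column, Integer, String, create_engine, or_
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Post(Base):
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    access_level = Column(String)
    creator_id = Column(Integer)  # foreign key to the users table in a real schema

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

current_user_id = 1

# The hand-written equivalent of the constraints emitted by partial evaluation:
#   _this.access_level = "public" or _this.creator.id = 1
authorized_posts = session.query(Post).filter(
    or_(Post.access_level == "public", Post.creator_id == current_user_id)
)

# Renders roughly as:
#   SELECT ... FROM posts WHERE posts.access_level = ? OR posts.creator_id = ?
print(authorized_posts)
```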
In effect, authorization is being enforced by the policy engine and the ORM cooperatively.

Alternative solutions

Partial evaluation is not the only way to efficiently apply authorization to collections of data. Manually applying WHERE clauses to reduce the search space (or using ActiveRecord-style scopes) requires additional application code and still needs to iterate over a potentially large collection. Authorizing the filter to be applied (or having Oso output the filter) doesn't require iterating over individual records, but it does force you to write policy over filters instead of over application types, which can lead to more complex policies and is a bit of a leaky abstraction.

Frameworks

To learn more about this feature and see usage examples, see our ORM-specific documentation: Filter Collections with Django and Filter Collections with SQLAlchemy. More framework integrations are coming soon — join us on Slack to discuss your use case or open an issue on GitHub.
Performance
This page explores the performance of Oso across three main axes:

1. In practice. How does Oso perform under typical workloads?
2. Internals and Micro-benchmarks. How is Oso built? What are the micro-benchmarks?
3. Scaling. What is the theoretical complexity of a query?

In Practice

There are two main areas to consider when measuring the performance of Oso queries: the time to evaluate a query against a policy, and the time needed to fetch application data. In a complex policy, the time it takes to run a single query depends on the complexity of the answer. For example, a simple rule that says anyone can "GET" the path "/" will execute in less than 1 ms. On the other hand, rules that use HTTP path mapping, resource lookups, roles, inheritance, etc. can take approximately 1-20 ms. (These numbers are based on queries executing against a local SQLite instance, to isolate Oso's performance from the time needed to perform database queries.) The time needed to fetch application data is, of course, dependent on your specific environment and independent of Oso. Aggressive caching can be used to reduce some of the effect of such latencies.

Profiling

Oso does not currently have built-in profiling tools, but this is a high-priority item on our near-term roadmap. Our benchmark suite uses Rust's statistical profiling package, but it is currently better suited to optimizing the implementation than to optimizing a specific policy. Oso has a default maximum query execution time of 30s. If you hit this maximum, it likely means that you have created an infinite loop in your policy. You can use the Polar debugger to help track down such bugs. For performance issues caused by slow database queries or too many database queries, we recommend that you address them at the data access layer, i.e., in the application. See, for example, our guidance on The "N+1 Problem".

Internals and Micro-benchmarks

The core of Oso is the Polar virtual machine, which is written in Rust. (For more on the architecture and implementation, see Internals.) A single step of the virtual machine takes approximately 1-2 µs, depending on the instruction or goal. Simple operations like comparisons and assignment typically take just a few instructions, whereas more complex operations like pattern matching against an application type or looking up application data need a few more. The debugger can show you the VM instructions remaining to be executed during a query using the goals command. The current implementation of Oso has not yet been aggressively optimized for performance, but several low-hanging opportunities for optimization (namely, caches and indices) are on our near-term roadmap. We do ensure that all memory allocated during a query is reclaimed by its end, and our use of Rust ensures that the implementation is not vulnerable to many common classes of memory errors and leaks. You can check out our current benchmark suite in the repository, along with instructions on how to run it. We would be delighted to accept any example queries that you would like to see profiled; please feel free to email us at engineering@osohq.com.

Scaling

At its core, answering queries against a declarative policy is a depth-first search problem: nodes correspond to rules, and nodes are connected if a rule references another rule in its body. As a result, the algorithmic complexity of a policy is, in theory, very large — exponential in the number of rules. However, in practice there shouldn't be that many distinct paths that need to be taken to make a policy decision. Oso filters out rules that cannot be applied to the inputs early in the execution. This means that if you are hitting a scaling issue, you can make your policies perform better either by splitting up your rules to limit the number of possibilities, or by adding more specializers to your rule heads. For example, suppose you have 20 different resources, ResourceA, ResourceB, …, and each has 10 or so allow(actor, action, resource: ResourceA) rules. The performance of evaluating a rule with an input of type ResourceA will primarily depend on those 10 specific rules, and not on the other 190 rules. In addition, you might consider refactoring this rule to allow(actor, action, resource: ResourceA) if allowResourceA(actor, action, resource). This would mean there are only 20 allow rules to sort through, and for a given resource only one of them will ever need to be evaluated.
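As a rough sketch of that refactoring, with hypothetical Actor, ResourceA, and ResourceB classes and the policy embedded as a string only for brevity:

```python
from dataclasses import dataclass
from oso import Oso

@dataclass
class Actor:
    role: str

@dataclass
class ResourceA:
    pass

@dataclass
class ResourceB:
    public: bool = False

oso = Oso()
for cls in (Actor, ResourceA, ResourceB):
    oso.register_class(cls)

# One thin `allow` rule per resource type dispatches to a dedicated rule,
# so for a given resource only the matching branch is ever explored.
oso.load_str("""
    allow(actor, action, resource: ResourceA) if
        allow_resource_a(actor, action, resource);

    allow(actor, action, resource: ResourceB) if
        allow_resource_b(actor, action, resource);

    allow_resource_a(actor, "read", _resource) if actor.role = "viewer";
    allow_resource_a(actor, _action, _resource) if actor.role = "admin";

    allow_resource_b(_actor, "read", resource) if resource.public = true;
""")

assert oso.is_allowed(Actor(role="admin"), "delete", ResourceA())
assert not oso.is_allowed(Actor(role="viewer"), "delete", ResourceA())
assert oso.is_allowed(Actor(role="viewer"), "read", ResourceB(public=True))
```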
The performance of evaluating policies is usually independent of the number of users or resources in the application when fetching data is handled by your application. However, as in any programming system, you need to be on the lookout for linear and super-linear searches. For example, if you have a method user.expenses() that returns a list of the user's expenses, the check expense in user.expenses() will require O(n) VM instructions, where n is the length of the list. It would be better to replace the linear search with a single comparison, e.g., expense.user_id = user.id (a short sketch appears at the end of this page). Be especially careful when nesting such rules.

Summary

Oso typically answers simple authorization queries in less than 1 ms, but may take (much) longer depending on the complexity of your rules, the latency of application data access, and algorithmic choices. Simple measures such as caching and refactoring can be used to improve performance where needed.
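As referenced above, here is a minimal sketch contrasting the linear membership check with a direct field comparison; the User and Expense classes, their fields, and the rule names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List
from oso import Oso

@dataclass
class Expense:
    user_id: int

@dataclass
class User:
    id: int
    expense_list: List[Expense] = field(default_factory=list)

    def expenses(self) -> List[Expense]:
        return self.expense_list

oso = Oso()
oso.register_class(User)
oso.register_class(Expense)

oso.load_str("""
    # O(n): iterates over the whole list returned by user.expenses()
    allow_slow(user, "view", expense) if expense in user.expenses();

    # O(1): a single comparison against a field on the expense itself
    allow_fast(user, "view", expense) if expense.user_id = user.id;
""")

expense = Expense(user_id=1)
user = User(id=1, expense_list=[expense])

assert list(oso.query_rule("allow_slow", user, "view", expense))
assert list(oso.query_rule("allow_fast", user, "view", expense))
```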
Security
This page is split into two sections with two distinct purposes: security best practices for using Oso, and our approach to building a secure product.

Security Best Practices

Policy Authoring

To reduce the likelihood of writing logic bugs in Oso policies, we recommend using support for specializers as type checks wherever possible. For errors that most likely indicate an incorrect policy, such as accessing attributes that don't exist, Oso returns hard errors. Problems that may indicate a logic bug, such as singletons (unused variables), are reported as warnings. We additionally recommend the use of Inline Queries (?=) as simple policy unit tests, and since Oso is accessible as a library, you should test authorization as part of your application test suite (a short sketch appears at the end of this page).

Change Management

As a reminder, Oso typically replaces authorization logic that would otherwise exist in your application. By using Oso, you are able to move much of that logic into one or more separate policy files, which are easier to audit and watch for changes. Currently, the best practice for policy change management is to treat Oso policies like regular source code: use code review practices and CI/CD to make sure you have properly vetted, and kept a history of (e.g., through git), all changes to authorization logic.

Auditing

If you are interested in capturing an audit log of policy decisions, and in being able to understand why Oso authorized a request, please contact us.

Our Approach to Building a Secure Product

Code

The core of Oso is written in Rust, which vastly reduces the risk of memory unsafety relative to many other low-level and embeddable languages (e.g., C, C++). The Oso engineering team codes defensively – we make extensive use of types, validate inputs, and handle errors safely. All source code is available at our GitHub repository. Releases are built and published using GitHub Actions. Oso has not yet undergone a code audit. We plan to engage a qualified third party to perform an audit in the near future, and we will make the results publicly available.

Vulnerability Reporting

We appreciate any efforts to find and disclose vulnerabilities to us. If you would like to report an issue, or have any other security questions or concerns, please email us at security@osohq.com.
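To illustrate the testing recommendations above, here is a hedged sketch of an inline query used as a lightweight policy unit test, alongside ordinary application-level tests; the policy, the User and Report classes, and the test names are hypothetical.

```python
from dataclasses import dataclass
from oso import Oso

@dataclass
class User:
    role: str

@dataclass
class Report:
    pass

def build_oso() -> Oso:
    oso = Oso()
    oso.register_class(User)
    oso.register_class(Report)
    # Inline queries (?=) run when the policy is loaded; loading fails if they don't
    # succeed, which makes them useful as simple policy unit tests.
    oso.load_str("""
        allow(user: User, "read", _report: Report) if user.role = "admin";

        ?= allow(new User(role: "admin"), "read", new Report());
    """)
    return oso

# Authorization is also exercised from the application's regular test suite.
def test_admin_can_read_report():
    oso = build_oso()
    assert oso.is_allowed(User(role="admin"), "read", Report())

def test_guest_cannot_read_report():
    oso = build_oso()
    assert not oso.is_allowed(User(role="guest"), "read", Report())
```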
Use Cases
Some typical authorization use cases are:

Customer-facing applications. For SaaS and on-premise applications that an organization sells to its customers, how does the application determine what permissions a user has?

Internal applications. For SaaS and on-premise applications that an organization uses for internal employees and contractors, how does the application determine what permissions a user has?

User-configurable permissions. For any application - SaaS, on-premise, open source, etc. - where users can freely customize permissions, how does the application expose these to users?

Infrastructure. For infrastructure hosted in the cloud and in a company's own data centers, how does an organization manage who is allowed to do what (e.g., provision new machines, access production)?

The foundation of Oso is designed to support all of the above use cases. Currently, the ideal use case for Oso is the first: customer-facing applications. The reasons for this are:

Oso is currently packaged as a library, with support for various languages. A developer can easily import the library and start using it. Similarly, the hooks that the library provides are designed for calling into an application to act on application objects and data.

Oso does not handle assigning users to roles, or assigning permissions to users directly. Although you can do this with Oso, our expectation is that this data is typically managed by the application in whatever database is already in place. Oso can be used to reference that data directly, express what roles can do in an application, and even extend the roles to include inheritance structures and hierarchies.

Oso can be a good fit for internal applications where access might be granted on the basis of attributes stored elsewhere, for example in Active Directory or GSuite. However, as above, Oso does not manage role or permission assignment directly, and for this reason should not be seen as a replacement for something like Active Directory (at least not yet).

We set out to build Oso to make it easier for developers to write authorization in their applications. For those who are building frameworks or tools where developers are the target end users, Oso might also be a good fit for giving those developers fine-grained control over permissions. We'd be happy to work together to discuss how to make that happen. We are additionally working on exposing the same level of fine-grained control to non-developers, which in the future would make Oso suitable as a way for teams to build and expose IAM-like functionality in their products.

Regarding infrastructure: while you might be able to express your desired infrastructure policies using Oso, in order to enforce those policies you would need to build your own access gateway, proxy, or integration points. This is currently possible but not documented. For this reason, Oso should not be seen as a replacement for things like AWS IAM or VPN tunnels.

Oso has meaningful ambitions to address the full spectrum of authorization use cases. In the meantime, if you have questions or particular areas of interest, we welcome feedback at engineering@osohq.com, or you can talk to our engineering team directly through the chat widget on this page.
