https://github.com/allmonday/pydantic-resolve
# pydantic-resolve

A hierarchical solution for data fetching and processing. MIT license.

> If you are using pydantic v2, please use pydantic2-resolve instead.

A small yet powerful tool to extend your pydantic schema and then resolve all descendants automatically.

It is also the key to the realm of the composition-oriented development pattern (WIP). What is the composable pattern? https://github.com/allmonday/composable-development-pattern

Change Logs · Discord · 4 steps from root data to view data [concept]

## Install

```shell
pip install pydantic-resolve
```

## Code snippets

### 1. Basic usage: resolve your fields
```python
import asyncio
from pydantic import BaseModel
from pydantic_resolve import Resolver

async def query_age(name):
    print(f'query {name}')
    await asyncio.sleep(1)
    _map = {
        'kikodo': 21,
        'John': 14,
        'Lao Wang ': 40,
    }
    return _map.get(name)

class Person(BaseModel):
    name: str

    age: int = 0
    async def resolve_age(self):
        return await query_age(self.name)

    is_adult: bool = False
    def post_is_adult(self):
        return self.age > 18

async def simple():
    p = Person(name='kikodo')
    p = await Resolver().resolve(p)
    print(p)
    # query kikodo
    # Person(name='kikodo', age=21, is_adult=True)

    people = [Person(name=n) for n in ['kikodo', 'John', 'Lao Wang ']]
    people = await Resolver().resolve(people)
    print(people)
    # Oops!! the N+1 query issue happens
    # query kikodo
    # query John
    # query Lao Wang
    # [Person(name='kikodo', age=21, is_adult=True), Person(name='John', age=14, is_adult=False), Person(name='Lao Wang ', age=40, is_adult=True)]

asyncio.run(simple())
```

### 2. Optimize N+1 with DataLoader

```python
import asyncio
from typing import List
from pydantic import BaseModel
from pydantic_resolve import Resolver, LoaderDepend as LD

async def batch_person_age_loader(names: List[str]):
    print(names)
    _map = {
        'kikodo': 21,
        'John': 14,
        'Lao Wang ': 40,
    }
    return [_map.get(n) for n in names]

class Person(BaseModel):
    name: str

    age: int = 0
    def resolve_age(self, loader=LD(batch_person_age_loader)):
        return loader.load(self.name)

    is_adult: bool = False
    def post_is_adult(self):
        return self.age > 18

async def simple():
    people = [Person(name=n) for n in ['kikodo', 'John', 'Lao Wang ']]
    people = await Resolver().resolve(people)
    print(people)
    # ['kikodo', 'John', 'Lao Wang ']  (N+1 query fixed)
    # [Person(name='kikodo', age=21, is_adult=True), Person(name='John', age=14, is_adult=False), Person(name='Lao Wang ', age=40, is_adult=True)]

asyncio.run(simple())
```

More examples:

```shell
cd examples

python -m readme_demo.0_basic
python -m readme_demo.1_filter
python -m readme_demo.2_post_methods
python -m readme_demo.3_context
python -m readme_demo.4_loader_instance
python -m readme_demo.5_subset
python -m readme_demo.6_mapper
python -m readme_demo.7_single
```

## API

### Resolver(loader_filters, global_loader_filter, loader_instances, context)

* loader_filters: dict

  Provide extra query filters along with the loader key.

  reference: 6_sqlalchemy_loaderdepend_global_filter.py L55, L59

* global_loader_filter: dict

  Provide a global filter config for all DataLoader instances. It will raise an exception if some fields are duplicated with a specific loader filter config in loader_filters.

  reference: test_33_global_loader_filter.py L47, L49

* loader_instances: dict

  Provide pre-created loader instances, which can prime data into the loader cache.

  reference: test_20_loader_instance.py, L62, L63

* context: dict

  Context can carry settings into each single resolver method.

  ```python
  class Earth(BaseModel):
      humans: List[Human] = []
      def resolve_humans(self, context):
          return [dict(name=f'man-{i}') for i in range(context['count'])]

  earth = await Resolver(context={'count': 10}).resolve(earth)
  ```

### LoaderDepend(loader_fn)

* loader_fn: subclass of DataLoader, or a batch_load_fn

Declares a DataLoader dependency; pydantic-resolve will take care of the lifecycle of the DataLoader.

### build_list(rows, keys, fn), build_object(rows, keys, fn)

* rows: list, the query result
* keys: list, the keys passed to batch_load_fn
* fn: lambda, defines the way to get the primary key

Helper functions to generate the return value required by batch_load_fn. Read the code for details.

reference: test_utils.py, L32

### mapper(param)

* param: a pydantic class or dataclass, or a lambda

pydantic-resolve will trigger the fn in mapper after the inner future is resolved. It exposes an interface to change the return schema, even from the same DataLoader. If param is a class, it will try to transform the result automatically.

reference: test_16_mapper.py

You may need it if there are some reusable transforming params.
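For intuition, the behavior described for `build_list` above (grouping query rows back into one list per key, in key order, as a DataLoader's batch_load_fn requires) can be sketched in plain Python. This is an illustration of the documented semantics only, not pydantic-resolve's actual implementation:

```python
from collections import defaultdict

def build_list(rows, keys, fn):
    # Sketch of the documented semantics: group rows by the key
    # extracted with fn, then emit one list per requested key, in key
    # order -- the shape a DataLoader batch_load_fn must return.
    grouped = defaultdict(list)
    for row in rows:
        grouped[fn(row)].append(row)
    return [grouped[k] for k in keys]

rows = [
    {'id': 1, 'blog_id': 1},
    {'id': 2, 'blog_id': 1},
    {'id': 3, 'blog_id': 2},
]
result = build_list(rows, [1, 2, 3], lambda r: r['blog_id'])
print(result)
```

Keys with no matching rows (blog_id 3 here) still get an entry, an empty list, so the result always lines up one-to-one with the input keys.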
### ensure_subset(target_model)

* target_model: class

It will raise an exception if the decorated class has a field that does not exist in target_model. This provides a validation to ensure your schema's fields are a subset of the target schema.

reference: test_2_ensure_subset.py

### model_config(hidden_fields: list[str], default_required: bool) (new in v1.9.1)

* hidden_fields: list

  The field names you don't want to expose.

  + It will hide your fields in both the schema and the dump functions (dict(), json()).
  + It also supports Field(exclude=True).

* default_required: if True, fields with default values will also appear in schema['required'].

In FastAPI, if you use hidden_fields only, the hidden fields are still visible with their default values, because __exclude_fields__ will be reset during the second pass of dict() inside FastAPI. To avoid this behavior, use Field(default='your value', exclude=True) instead.

reference: test_schema_config.py

## Run FastAPI example

```shell
poetry shell
cd examples
uvicorn fastapi_demo.main:app
# http://localhost:8000/docs#/default/get_tasks_tasks_get
```

## Unittest

```shell
poetry run python -m unittest
# or
poetry run pytest
# or
poetry run tox
```

## Coverage

```shell
poetry run coverage run -m pytest
poetry run coverage report -m
```
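The subset check that ensure_subset performs (raise if the decorated class declares a field the target model lacks) can be illustrated with a minimal stand-alone decorator over plain annotated classes. pydantic-resolve works on pydantic models; this sketch only mirrors the documented validation, it is not the library's code:

```python
class Base:
    id: int
    name: str
    email: str

def ensure_subset(target_model):
    # Sketch: compare declared annotations and reject any field on the
    # decorated class that target_model does not declare.
    def wrapper(cls):
        extra = set(cls.__annotations__) - set(target_model.__annotations__)
        if extra:
            raise AttributeError(
                f'fields not in {target_model.__name__}: {sorted(extra)}')
        return cls
    return wrapper

@ensure_subset(Base)
class BaseSummary:  # ok: id and name both exist on Base
    id: int
    name: str

caught = False
try:
    @ensure_subset(Base)
    class Bad:
        id: int
        nickname: str  # not on Base -> raises
except AttributeError:
    caught = True

print(caught)
```

The failure happens at class-definition time, which is the point of the decorator: a schema that drifts out of sync with its target model is caught on import, not at request time.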