ty (astral.sh)
Pyright and Pylance were a boon because they were the first good implementations outside PyCharm. But they still have rough edges and will fail from time to time, not to mention the latency.
Both are Rust-based, open source, new, and fast, so it's difficult to understand why I should choose one over the other.
https://htmlpreview.github.io/?https://github.com/python/typ...
If that table is anything to go by, Pyright is not to be underestimated.
Mypy is trash. Nice to have a table to point to as proof.
The problem is that the conformance tests were mostly written by Eric Traut, so there's a natural bias towards specifying what Pyright does well. There are a lot of things Mypy does really well that should probably be implemented in Pyright.
That said, I'm a very happy user of uv, so once ty becomes ready enough I'll be happy to migrate.
ignore = [
    "B008",  # do not perform function calls in argument defaults
    "T201",  # ignore print
    "T203",  # ignore pprint
    "C901",  # ignore function too complex
    "E721",  # ignore unsafe type comparison
    "B904",  # ignore raise without `from` inside except
    "C403",  # unnecessary list comprehension
]

And what do you use instead?
AI-generated anything is a hellscape.
(I was on the Python Typing Council and helped put together the spec, the conformance test suite, etc)
A shared spec for this is important because if you write a Python library, you don’t want to have to write a different set of types for each Python type checker
Here are some things the spec has nothing to say about:
Inference
You don’t want to annotate every expression in your program. Type checkers have a lot of leeway here and this makes a huge difference to what it feels like to use a type checker.
For instance, mypy will complain about the following, but pyright will not (because it infers the types of unannotated collections as having Any):
x = []
x.append(1)
x[0] + "oops"
The spec has nothing to say about this.

Diagnostics
The spec has very little to say about what a type checker should do with all the information it has. Should it complain about unreachable code? Should it complain if you did `if foo` instead of `if foo()` because it’s always true? The line between type checker and linter is murky. Decisions here have nothing to do with “what does this annotation mean”, so are mostly out of scope from the spec.
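To make the `if foo` case concrete, here is a sketch (names invented; whether to report this is a diagnostics decision each checker makes for itself):

def is_ready() -> bool:
    return False

# Bug: the bare function object is always truthy, so this branch always runs.
# The author almost certainly meant `if is_ready():`.
if is_ready:
    print("launching anyway")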
Configuration
This makes a huge difference when adapting huge codebases to different levels of type checking. Also the defaults really matter, which is tough when Python type checkers serve so many different audiences.
Other things the spec doesn’t say anything about: error messages quality, editor integration, speed, long tail of UX issues
And then of course there are things we would like to spec but haven’t yet!
This is incorrect. pyright will infer the type of x as list[Unknown].
Unknown is a pyright-specific term for an inferred Any; it serves as the basis for additional strict-mode diagnostics that flag (and effectively prohibit) gradual typing.
Notably, this is quite different from TypeScript’s unknown, which is type safe.
TypeScript takes the same approach in this scenario, and I assume this helps both be fast.
const x = []
x.push(1)
type t = typeof x // number[]

PR is somewhat WIP-ish but I needed some motivation to do OSS work again :)
https://htmlpreview.github.io/?https://github.com/SimonSchic...
I really need better generics support before ty becomes useful. Currently, decorators just make all return types Unknown. I need something like this to work:
from typing import Any, Callable, TypeVar

_F = TypeVar("_F", bound=Callable[..., Any])

def my_decorator(*args: str) -> Callable[[_F], _F]: ...
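For illustration, a minimal sketch of what "working" would mean here (the wrapper body and `add` are invented, not from the original comment): the checker should see through the decorator and preserve the decorated function's signature.

from typing import Any, Callable, TypeVar, reveal_type  # reveal_type needs Python 3.11+

_F = TypeVar("_F", bound=Callable[..., Any])

def my_decorator(*args: str) -> Callable[[_F], _F]:
    def wrap(func: _F) -> _F:
        return func  # identity wrapper, purely for illustration
    return wrap

@my_decorator("some", "tag")
def add(x: int, y: int) -> int:
    return x + y

reveal_type(add)  # expected: (x: int, y: int) -> int, not Unknown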
Also, I use a lot of TypedDicts and there's not much support yet.

https://htmlpreview.github.io/?https://github.com/SimonSchic...
> At time of writing, many of the remaining rules require type inference and/or multi-file analysis, and aren't ready to be implemented in Ruff.
ty is actually a big step in this direction as it provides multi-file analysis and type inference.
(I work at Astral)
Also, it's too bad we have three competing fast LSP/type-checker projects now. We had zero a year ago.
It might not be used as much, but to be honest I think that's fine. I'm not a big VC-funded company and just hope to be able to serve the users it has. There's space for multiple tools in this area and it's probably good to have multiple type checkers in the Python world to avoid the typical VC rug pull.
I agree though. Hope this is successful and they keep building awesome open-source tools.
It's definitely a narrow path for them to tread. Feels like the best case is something like HashiCorp: great until the founders don't want to do it anymore.
Wow, that's probably my go-to case of things going south, not "best case scenario". They sold to IBM, a famous graveyard for software, and on the way there changed from FOSS licensing to their own proprietary ones for software the community started to rely on.
You might be on to something with point B; it's hard to find good examples of developer-tool companies that don't eventually turn sour. However, there are countless examples of successful and still very useful developer tools out there, so maybe slapping a company on it and selling a "pro" version isn't the way to go?
As for "slapping a company on it", I agree, but also I don't think we've developed a viable alternative. Python has been limping along with one toolchain or another for my entire career (multiple decades) and it took Astral's very specific approach to create something better. It's fair to ask why they needed to be venture backed, but they clearly are and the lack of successful alternatives is telling.
https://www.hashicorp.com/en/blog/mitchell-s-new-role-at-has...
As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps. Unless I’ve missed something, if I make a change to a source tree that uv sync doesn’t notice, I’m stuck with uv pip install -e ., which is a wee bit disappointing and feels a bit gross. I suppose I could try to put something correct into cache-keys, but this is fundamentally wrong. The list of files in my source tree that need to trigger a refresh is something that my build system determines when it builds. Maybe there should be a way to either plumb that into uv’s cache or to tell uv that at least “uv sync” should run the designated command to (incrementally) rebuild my source tree?
(Not that I can blame uv for failing to magically exfiltrate metadata from the black box that is hatchling plus its plugins.)
It's really helpful to have examples for this, like the one you provide below (which I'll respond to!). I've been a maintainer and contributor to the PyPA standard tooling for years, and once uv "clicked" for me I didn't find myself having to leave the imperative layer (of uv add/sync/etc) at all.
> As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps.
Could you say more about your setup here? By "rebuild steps" I'm inferring you mean an editable install (versus a sdist/bdist build) -- in general `uv sync` should work in that scenario, including for non-trivial things where e.g. an extension build has to be re-run. In other words, if you do `uv sync` instead of `uv pip install -e .`, that should generally work.
However, to take a step back from that: IMO the nicer way to use uv is to not run `uv sync` that much. Instead, you can generally use `uv run ...` to auto-sync and run your development tooling within an environment that includes your editable installation.
By way of example, here's what I would traditionally do:
python -m venv .env
source .env/bin/activate
python -m pip install -e .[dev] # editable install with the 'dev' extra
pytest ...
# re-run install if there are things a normal editable install can't transparently sync, like extension builds
Whereas with uv:

uv run --dev pytest ...  # uses pytest from the 'dev' dependency group
That single command does everything pip and venv would normally do to prep an editable environment and run pytest. It also works across re-runs, since it'll run `uv sync` as needed under the hood.

Python builds via [tool.hatch.build.targets.wheel.hooks.custom] in pyproject.toml and a hatch_build.py that invokes waf and force-includes the .so files into useful locations.
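For reference, the wiring described above looks roughly like this in pyproject.toml (a sketch; hatchling's custom hook conventionally lives in a hatch_build.py at the project root, and the waf invocation would sit inside that file):

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel.hooks.custom]
# hatchling looks for hatch_build.py by default; path just makes it explicit
path = "hatch_build.py"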
Use case 1: Development. I change something (C/C++ source, the waf configuration, etc.) and then try to run Python code (via uv sync, uv run, or activating a venv with an editable install). Since there doesn't seem to be a way to have the build feed dependencies out to uv (this seems to be a deficiency in PEP 517/660), I either need to somehow statically generate cache-keys or resort to --reinstall-package to get uv commands to notice when something changed. I can force the issue with uv pip install -e ., although apparently I can also force the issue with uv run/sync --reinstall-package [distro name]. [0] So I guess uv pip is not actually needed here.
It would be very nice if there was an extension to PEP 660 that would allow the editable build to tell the front-end what its computed dependencies are.
Use case 2: Production
IMO uv sync and uv run have no place in production. I do not want my server to resolve dependencies or create environments at all, let alone by magic, when I am running a release of my software built for the purpose.
My code has had a script to build a production artifact since long before pyproject.toml or uv was a thing, and even before virtual environments existed (!). The resulting artifact makes its way to a server, and the code in it gets run. If I want to use dependencies as found by uv, or if I want to use entrypoints (a massive improvement over rolling my own way to actually invoke a Python program!), as far as I can tell I can either manually make and populate a venv using uv venv and uv pip, or I can use UV_PROJECT_ENVIRONMENT with uv sync and abuse uv sync to imperatively create a venv.
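Concretely, the manual path looks something like this (a sketch; the artifact name and paths are invented):

uv venv /srv/app/venv
uv pip install --python /srv/app/venv/bin/python dist/myapp-1.0.0-py3-none-any.whl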
Maybe some day uv will come up with a better way to produce production artifacts. (And maybe in the distant future, the libc world will come up with a decent way to make C/C++ virtual environments that don't rely on mount namespaces or chroot.)
[0] As far as I can tell, the accepted terminology is that the thing produced by a pyproject.toml is possibly a "project" or a "distribution" and that these are both very much distinct from a "package". I think it's a bit regrettable that uv's option here is spelled like it rebuilds a _package_ when the thing you feed it is not the name of a package and it does not rebuild a particular package. In uv's defense, PEP 517 itself seems rather confused as well.
Various tickets asking for it, but they also want to bundle in the python interpreter itself, which is out of scope for a pyproject.toml manager: https://github.com/astral-sh/uv/issues/5802
For example, uv-build is rather lacking in any sort of features (and its documentation barely exists AFAICT, which is a bit disappointing), but uv works just fine with hatchling, using configuration mechanisms that predate uv.
(I spent some time last week porting a project from an old, entirely unsupportable build system to uv + hatchling, and I came out of it every bit as unimpressed by the general state of Python packaging as ever, but I had no real complaints about uv. It would be nice if there was a build system that could go even slightly off the beaten path without writing custom hooks and mostly inferring how they’re supposed to work, though. I’m pretty sure that even the major LLMs only know how to write a Python package configuration because they’ve trained on random blog posts and some GitHub packages that mostly work — they’re certainly not figuring anything out directly from the documentation, nor could they.)
Do library authors have to test against every type checker to ensure maximum compatibility? Do application developers need to limit their use of libraries to ones that support their particular choice of type checker?
x = []
x.append(1)
x[0] = "new"
x[0] + "oops"
It's optionally typed, but I would credit both "type checks correctly" and "can't assign 'new' over a number" as valid type-checker results.

Either way, you didn't annotate the code, so it's kind of pointless to discuss.
There are several literals in that code snippet; I could annotate them with their types, and this code would still be exactly as it is. You asked why there are competing type checkers, and the fact that the language is only optionally typed means ambiguity like that example exists, and whether it should be a warning, a bug, or allowed is open; choose the type checker that most closely matches the semantics you want to impose.
Well, no, there is one literal that has an ambiguous type, and if you annotated its type, it would entirely resolve the question of what a type checker should say; the entire reason it is an open question is that this one literal is not annotated.
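Concretely, annotating that one literal settles it either way (a sketch):

from typing import Any

x: list[int] = []  # now x[0] = "new" is an error in every checker
y: list[Any] = []  # now both the assignment and the + "oops" pass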
Optimally, this will result in a democratic consensus of semantics.
Pessimistically, this will result in dialects of semantics that result in dialects of runtime languages as folks adopt type checkers.
That hasn't been a fact for quite a while. Now, Python does specify the semantics of its type annotations. It didn't when annotations were first created for Python 3.0 (PEP 3107), but it has done so progressively since Python 3.5 (PEP 484), through several subsequent PEPs, including the creation of the Python Typing Council (PEP 729).
Well, no, you didn't. Because it's not clear whether the list is a list of values of a single type or a list of values of distinct types. And there are many other ways you could quibble with this statement.
const x = []
x.push(1)
type t = typeof x
// ^? type t = number[]
x[0] = "new"
type t2 = typeof x
// ^? type t2 = (number | string)[]
const y = x[0] + "oops"
// ^? const y: string
https://www.typescriptlang.org/play/?#code/GYVwdgxgLglg9mABA...

So only the outer API surface of the library matters. That's mostly explicitly typed functions and classes, so the room for different interpretations is lower (but not gone).
This is obviously out the window for libraries like Pydantic, Django etc with type semantics that aren’t really covered by the spec.
I think everyone basically agrees that at the package boundary, you want explicit types, but inside application code things are much more murky.
(plus of course, performance, particularly around incremental processing, which Astral is specifically calling out as a design goal here)
Yes, but in practice, the ecosystem mostly tests against mypy. pyright has been making some inroads, mostly because it backs the diagnostics of the default VS Code Python extension.
> Do application developers need to limit their use of libraries to ones that support their particular choice of type checker?
You can provide your own type stubs instead of using the library's built-in types or existing stubs.
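For example, a one-file stub can shadow an untyped or differently typed dependency (names hypothetical; mypy picks stubs up via its mypy_path setting, pyright via stubPath):

# stubs/untyped_lib/__init__.pyi
def fetch(url: str, timeout: float = ...) -> bytes: ...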
Django does a bunch of magic which is challenging for the type checkers to handle well.
> We are planning to add dedicated Django support at some point, but it's not on our short-term roadmap
[1] https://github.com/astral-sh/ruff/pull/21308#issuecomment-35...
Seems like the code isn't actually open source, which to me is a bit concerning. At the very least, if y'all want to release it like this, please be clear that you're not open source. The MIT license in the repo gives the wrong impression.
While we wait... what's everyone's type-checking setup? We run both Pyright and Mypy; they catch different errors, so we've kept both, but it feels redundant.
https://htmlpreview.github.io/?https://github.com/python/typ... suggests that Pyright is a superset, which hasn't matched our experience.
Though our analysis was ~2 years ago. Anyone with a large Python codebase successfully consolidated to just Pyright?
I suspect pyright has caught up a lot but I turned it off again rather recently.
For what it’s worth I did give up on cursor mostly because basedpyright was very counterproductive for me.
I will say that I’ve seen a lot more vehement trash talking about mypy and gushing about pyright than vice versa for quite a few years. It doesn’t quite add up in my mind.
agreed! mypy's been good to us over the years.
The biggest problem we're looking to solve now is raw speed; type checking is by far the slowest part of our pre-commit stack, which is what got us interested in ty.
The spec mostly concerns itself with the semantics of annotations, not diagnostics or inference. I don't really recommend using it as the basis for choosing a type checker.
(I was on the Python Typing Council and helped put together the spec, the conformance test suite, etc)
In my case they just add noise when reading code and make it more difficult to review
No, static typing is usually used AOT (most frequently at compile time), not usually at runtime (types may or may not exist at runtime; they don't in Haskell, for instance.)
Python type checking is also AOT, but (unlike in languages where it is inextricably tied to compilation because types are not only checked but also used for code generation) it is optional to actually do that step.
Python type annotations exist and are sometimes used at runtime, but not usually at that point for type checking in the usual sense.
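For instance, a small sketch of runtime use of annotations (this is introspection, not type checking):

from typing import get_type_hints

def f(x: int) -> str:
    return str(x)

print(get_type_hints(f))  # {'x': <class 'int'>, 'return': <class 'str'>}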
> No, static typing is usually used AOT (most frequently at compile time), not usually at runtime (types may or may not exist at runtime; they don't in Haskell, for instance.)
In fact, Haskell then allows you to add back in runtime types using Typeable!
https://hackage.haskell.org/package/base-4.21.0.0/docs/Data-...
Educate yourself before making such claims.
It's fast too, as promised.
However, it doesn't work well with TypedDicts and that's a show-stopper for us. Hoping to see that support!
from anthropic.types import MessageParam

data: list[MessageParam] = [
    {"role": "user", "content": [{"type": "text", "text": ""}]}
]

This, for example, works in both mypy and pyright. (Autocompletion of TypedDict keys/literals from Pylance is missing too.)
I reported this as https://github.com/astral-sh/ty/issues/1994
Support for auto-completing TypedDict keys is tracked here: https://github.com/astral-sh/ty/issues/86
The point is you drop things such as types to enable rapid iteration, which lets you converge on the unknowable business requirements faster.
If you want slow development with types, why not Java?
It's not a prototyping language or a scripting language or whatever. It's just a language. And types are useful, especially when you can opt out of type checking when you need to. Most of the time you don't want to be reassigning variables to be different types anyway, even though occasionally an escape hatch is nice.
It's very foolish to just use types in all programming projects.
This goes all the way back to Lisp vs C in the 1980s with C programs having triple the development time as Lisp programs.
To the modern day, with Turborepo taking 3 months to write in structurally typed Go vs 14 months in statically typed Rust.
Having worked in both dynamically typed and statically typed software development shops, the statically typed programmers are considerably slower in general. Usually they have only a third of the output of programmers who use dynamic typing. Statically typed programmers also tend to be much less ambitious in their projects in general.
They still think they are "fast programmers" but it's complete fiction.