Does this have support (or is there a plan to have support) for something similar to knip plugins? For example, a webpack.config.js file might import some packages or files, and the knip webpack plugin is able to parse it to understand these dependencies, and feed that information back into all the checks (e.g. unused files, undeclared package.json dependencies, etc.).
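For reference, the kind of config a knip-style webpack plugin has to understand looks like this (a minimal sketch; the plugin package name and `env.js` helper are illustrative). Nothing under `src/` imports these, so without parsing the config, a dead-code tool would wrongly flag them as unused or undeclared:

```javascript
// webpack.config.js — illustrative; 'some-html-plugin' and './scripts/env.js'
// are real dependencies of the build, even though no source file imports them.
const path = require('path');
const SomeHtmlPlugin = require('some-html-plugin');
const env = require('./scripts/env.js');

module.exports = {
  entry: './src/index.js',
  output: { path: path.resolve(__dirname, 'dist') },
  plugins: [new SomeHtmlPlugin({ mode: env.MODE })],
};
```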
Just curious. It's great to have a performance-focused option in this space either way!
silverwind 1 day ago
I prefer to have unused code detected during linting, but sadly, eslint has decided to kill off the APIs that support rules like `no-unused-modules`. Running a separate tool like this one or knip in place of a few lint rules just seems impractical.
dmix 1 day ago
eslint is a good example of why coding in JavaScript is annoying. Your tools are constantly changing wildly from one version upgrade to the next, so you look for a better one and find there's a new Rust linting tool, but it's alpha and missing half the features.
jayu_dev 1 day ago
eslint is also a good example of why the JavaScript runtime is a bad choice for static analysis tools. The biggest problem is that it's single-threaded.
The recent release of concurrency mode in eslint promised approximately a 30% linting speed increase.
So now it uses multiple threads instead of one, and you get only a 1.3x improvement. In a compiled language like Rust or Go, you would expect a speedup that correlates with the number of CPU cores engaged.
You can use worker threads in JS, but unfortunately sharing data between threads in the context of static analysis, where there are lots of deeply nested objects (ASTs), or just a lot of data in general, is slow, because the data has to be serialised and deserialised every time it's passed between threads.
JavaScript-based tools become unusable on codebases with 1M+ lines of code :/
mattkrick 1 day ago
Biome has been out of alpha for a few years now and is fantastic :-)
skybrian 17 hours ago
Any chance of supporting Deno? (Knip doesn't work with Deno as far as I know.)
jayu_dev 17 hours ago
I didn't test Deno, but I'd guess it should work. Deno is just a runtime; imports/exports are the same, compliant with the JS spec.
But if it doesn't work, feel free to open an issue on GH :)
skybrian 16 hours ago
Imports are not the same; they usually need to be resolved using deno.json.
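For example, Deno projects commonly map bare specifiers through an import map in deno.json (a sketch; the entries and versions are illustrative):

```json
{
  "imports": {
    "@std/path": "jsr:@std/path@^1.0.0",
    "react": "npm:react@^18.3.1"
  }
}
```

A resolver that only follows Node's node_modules algorithm won't know that `import { join } from "@std/path"` points at a JSR package rather than at a local or npm module.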
jayu_dev 16 hours ago
Oh, I didn't know that! I'll take a look over the weekend.
e1g 1 day ago
+1 for the idea. Enforcing hard boundaries between modules is surprisingly helpful for AIs to reason about how to structure their changes.
We recently rolled out our own static analysis using oxc-parser and oxc-resolver, and it runs surprisingly fast (<1s for ~100K LOC). For us, it was definitely worth adding this layer of defence against The Slop.
jayu_dev 1 day ago
Nice!
I've come to similar conclusions recently: with the increase in code-change velocity, solid static analysis is more important than ever.
When it comes to performance, I've learned that reading code from the file system and parsing it takes most of the time. Resolving modules adds a little on top.
Once that is done, running the different checks is almost instant, on the order of milliseconds.
esafak 1 day ago
Looks good; I'm eager to try it. Do you have any questions?