404 points | 245 comments | by Liriel

Top Comments

mauvehaus · Apr 20
Can anyone explain why on earth VCs are making actual investment decisions based on imaginary internet points? This would be like an NFL team drafting a quarterback based on how many Instagram followers he has rather than a relevant metric like pass completion, or, god forbid, doing some work and actually scouting candidates. Maybe the Cleveland Browns would do that[0], but it's not a way to mount a serious Super Bowl campaign[1].

Are VCs just that lazy about making investment decisions? Is this yet another side effect of ZIRP[2] and too much money chasing a return? Is nobody looking too hard, in the hope of catching the next rocket to the moon?

From the outside, investing based on GitHub stars seems insane. Like, this can't be a serious way of investing money. If you told me you were going to invest my money based on GitHub stars, I'd laugh, and then we'd have an awkward silence while I realize there isn't a punchline coming.

[0] I'm from Cleveland. I get to pick on them.

[1] https://en.wikipedia.org/wiki/List_of_Cleveland_Browns_seaso... I think their record speaks for itself.

[2] https://en.wikipedia.org/wiki/Zero_interest-rate_policy

whatisthiseven · Apr 20
I don't think I have ever used stars when deciding whether to use a library, and I don't understand why anyone would.

Here are the things I look at in order:

* Last commit date. Newer is better.

* Age. Old is best if still updated; new is not great, but tolerable if commits aren't rapid.

* Issues. Not the count, mind you; just look at them. How are they handled, and what kinds of issues linger open?

* Some of the code. No one evaluates all of the code of the libraries they use, but you can certainly check some!

What do stars tell me? They are an indirect variable driven by the things above (real engagement and third-party interest), or else by fraud. The only way to tell which is to look at the things I listed anyway.

I always treated stars like a bookmark ("I'll come back to this project") and never thought of them as a quality metric. Years ago, when this problem first surfaced, I was surprised (though in retrospect I should not have been) that they had become a substitute for quality.

I hope the FTC comes down hard on this.

Edit:

* Commit history. Just browse the history to see what's there: what kinds of changes are made, and at what cadence.
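
A minimal sketch of automating the non-star signals in the checklist above (repo age, time since last push, open issue count) using GitHub's public REST `repos` endpoint. This is a sketch only: it assumes the `requests` library is available, the repo name at the bottom is a placeholder, and unauthenticated calls are rate-limited.

```python
# Pull the non-star signals above for one repository.
# Sketch only: unauthenticated (rate-limited), placeholder repo name.
from datetime import datetime, timezone

import requests


def repo_signals(owner: str, repo: str) -> dict:
    """Fetch repo age, time since last push, and open issue/PR count."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}",
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()

    now = datetime.now(timezone.utc)
    created = datetime.fromisoformat(data["created_at"].replace("Z", "+00:00"))
    pushed = datetime.fromisoformat(data["pushed_at"].replace("Z", "+00:00"))

    return {
        "age_years": round((now - created).days / 365, 1),
        "days_since_last_push": (now - pushed).days,
        # open_issues_count lumps issues and pull requests together
        "open_issues_and_prs": data["open_issues_count"],
        "stars_for_comparison": data["stargazers_count"],
    }


if __name__ == "__main__":
    print(repo_signals("some-org", "some-lib"))  # placeholder repo
```

The issue-handling and code-reading checks in the list still need a human eye; a script like this only surfaces the raw numbers.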

gobdovan · Apr 20
These kinds of articles make you feel like there are specific, actionable problems that just need an adjustment and then disappear. However, the system is much worse than you'd expect. Studies like this are extremely valuable, but they don't address the systemic problem affecting all signaling channels: most signals have themselves been manufactured into a product.

Build a SaaS and you'll have "journalists" asking if they can include you in their new "Top [your category] Apps in [current year]" roundup; you just have to pay $5k for first place, $3k for second, and so on (with a promotional discount for first place, since it's your first interaction).

You'll get "promoters" offering to grow your social media following, which is one reason companies may not even realize that some of their own top accounts and GitHub stars are mostly bots.

You'll get "talent scouts" claiming they can find you experts exactly in your niche, but in practice they just scrape and spam profiles with matching keywords on platforms like LinkedIn once you show interest, while simultaneously telling candidates that they work with companies that want them.

And in hiring, you'll see candidates sitting in interview farms quite clearly located in East Asia, connecting through Washington, D.C. IPs, presenting themselves under generic European names in front of synthetic camera backgrounds, somehow acing every question, and already listing experience with every technology you mention in the job post on their CVs (not hyperbole; I've seen exactly this happen).

If a metric or signal matters, there is already an ecosystem built to fake it, and faking it becomes operationalized, just another part of doing business.

donatj · Apr 20
I run a tiny site that basically gave a point-at-able definition to an existing ad-hoc standard. As part of the effort, I keep a list of software and libraries following the standard on the homepage. Initially I would accept just about anything, but as the list grew I started wanting to set a sort of notability baseline.

Specifically, someone submitted a library that was only several days old, clearly entirely AI-generated, and not particularly well built.

In my reply declining to list the library, I noted my concerns with it, among them that it had "zero stars". The author was very aggressive, and in his rant of a reply asked how many stars he needed. I declined to answer; that's not how this works. Stars are a consideration, not the be-all and end-all.

You need real-world users and, more importantly, real notability. Not stars. The stars are irrelevant.

This conversation happened on GitHub, and since then other developers have wandered into it to demand I set a star-count threshold for my "vague notability requirement". I'm not going to; it's intentionally vague. When a metric becomes a target, it ceases to be a good metric, as they say.

I don't want the page to get overly long, and if I just listed everything above some star count, I'd certainly end up listing some sort of malware.

I am under no obligation to list your library. Stop being rude.

NooneAtAll3 · Apr 20
I remember long ago watching a Tom Scott (iirc) video about him buying Facebook likes once.

You instantly got something like 40k likes, but there was a catch.

The algorithm saw you getting a lot of likes from Iran/Pakistan, so it went on recommending the post to those countries, got no response, and stopped recommending the post altogether.

In a sense, it became a self-regulating system, where fake impressions extinguish their very reason to be bought.

ernst_klim · Apr 20
I think people expect the star system to be a cheap proxy for "this is a reliable piece of software with good quality and a lot of eyes on it".

I think as a proxy it fails completely: astroturfing aside, stars don't guarantee popularity (and I bet the correlation is very weak; a lot of very fundamental system libraries have a small number of stars). Stars also don't guarantee quality.

And given that you can read the code, stars seem like a completely pointless proxy. I'm teaching myself to skip the stars, skim the code, and evaluate the quality of both architecture and implementation. Quite a few times I've found that I prefer a less-"starry" alternative after looking directly at the repo content.

dafi70 · Apr 20
Honest question: how can VCs consider the 'star' system reliable? Users who add stars often stop following the project, so poorly maintained projects can have many stars while being effectively outdated. A better system, though certainly not the best, would be to look at how much "life" the issues have: opening, closing (not automated), and response times. My project has 200 stars, and I struggle like crazy to update it regularly beyond simple version bumps.
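
A rough sketch of that "issue life" idea, again assuming GitHub's public REST API and the `requests` library: it estimates the median time-to-close over a recent sample of closed issues. The repo name is a placeholder, and measuring time-to-first-response would need extra per-issue requests for comments.

```python
# Median days-to-close over a recent sample of closed issues.
# Sketch only: unauthenticated, no pagination, placeholder repo name.
from datetime import datetime
from statistics import median

import requests


def median_days_to_close(owner: str, repo: str, sample: int = 50) -> float | None:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/issues",
        params={"state": "closed", "per_page": sample},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()

    days = []
    for issue in resp.json():
        if "pull_request" in issue:  # the issues endpoint also returns PRs; skip them
            continue
        created = datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))
        closed = datetime.fromisoformat(issue["closed_at"].replace("Z", "+00:00"))
        days.append((closed - created).total_seconds() / 86400)

    return median(days) if days else None


if __name__ == "__main__":
    print(median_days_to_close("some-org", "some-lib"))  # placeholder repo
```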

panabee · Apr 20
VCs are soccer stars, but founders play basketball.

It’s easy to dunk on VCs, but the herd effect is rational once you consider the typical VC’s background, the intense competition for good deals, and the job requirement: to prudently deploy capital.

Who wants to pitch their boss on investing $1-10M in a product no one uses, built by a team of anons?

This is not to defend the process, but merely to explain it. It’s not so different from customer marketing. To win a VC, first understand the VC.

Once hired, VCs cannot easily be fired, yet they exert immense strategic control.

Nonetheless, many founders interview summer interns harder than they vet their VCs.

Heuristic: after removing capital, would you hire the VC to be your boss?

Great VCs are worth the equity and will turbocharge startups. When you find one, don't haggle. Get a fair deal, and get right back to coding.

Bad VCs will destroy companies the same way soccer stars would destroy basketball teams if made the head coach.


Source: awesomeagents.ai | Author: Liriel | Posted: April 20, 2026 at 08:26 AM

