Developers: I will never ever do that, no one should ever do that, and you should be ashamed for guiding people to do it. I get that you want to make things easy for end users, but at least exercise some bare minimum common sense.
The worst part is that bun is just a single binary, so the install script is bloody pointless.
Bonus mildly infuriating is the mere existence of the .sh TLD.
Edit b/c I’m not going to answer the same goddamned questions 100 times from people who blindly copy/paste the question from StackOverflow into their code/terminal:
WhY iS ThaT woRSe thAn jUst DoWnlOADing a BinAary???
- Downloading the compiled binary from the release page (if you don’t want to build yourself) has been a way to acquire software since shortly after the dawn of time. You already know what you’re getting yourself into
- There are SHA256 checksums of each binary file available in each release on GitHub. You can confirm the binary was not tampered with by comparing a locally computed checksum to the value in the release’s checksums file (rough example after this list).
- Binaries can also be signed (not that signing keys have never leaked, but it’s still one step in the chain of trust)
- The install script they’re telling you to pipe is not hosted on Github. A misconfigured / compromised server can allow a bad actor to tamper with the install script that gets piped directly into your shell. The domain could also lapse and be re-registered by a bad actor to point to a malicious script. Really, there’s lots of things that can go wrong with that.
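To make the checksum bullet concrete, here’s roughly what that flow looks like for bun. The asset names below are illustrative; check the actual release page for the real ones.
# Fetch the binary archive and the published checksums file
# (asset names are illustrative, not guaranteed to match the release).
curl -fsSLO https://github.com/oven-sh/bun/releases/latest/download/bun-linux-x64.zip
curl -fsSLO https://github.com/oven-sh/bun/releases/latest/download/SHASUMS256.txt

# Compare the locally computed hash against the published one.
grep bun-linux-x64.zip SHASUMS256.txt | sha256sum -c -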
The point is that it is bad practice to just pipe a script to be directly executed in your shell. Developers should not normalize that bad practice.
I saw many cases of this with Windows PowerShell and those Windows debloating scripts.
What’s that? A connection problem? Ah, it’s already running the part that it did get… Oops, right on the boundary of rm -rf /thing/that/got/cut/off. I’m angry now. I expected the script maintainer to keep in mind that their script could be cut off at literally any point… (Now what is that set -e the maintainer keeps yapping about?)
Can you really expect maintainers to keep network errors in mind when writing a Bash script?? I’ll just download your script first, like I would your binary. Opening yourself up to more issues like this is just plain dumb.
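For what it’s worth, set -e alone doesn’t save you from the truncation scenario above. The mitigation some installer scripts use is to wrap everything in a function and only call it on the very last line, so a cut-off transfer parses as an incomplete definition and nothing runs. A minimal sketch, not a claim about what bun’s script actually does:
#!/bin/sh
# Wrap-it-in-a-function pattern: while the shell is still reading,
# it is only defining main(), not executing anything.
set -eu

main() {
    tmpdir=$(mktemp -d)
    # ... download, verify, and install here ...
    rm -rf "$tmpdir"
}

# This call only arrives if the whole script did. If the transfer is
# cut off anywhere above, the shell hits EOF inside an unfinished
# function definition and errors out without running anything.
main "$@"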
Doesn’t it download the entire script before piping it?
It runs the curl command which tries to fetch the entire script. Then no matter what it got (the intended script, half the script, something else because somebody tampered with it) it just runs it without any extra checks.
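If you want to convince yourself that the shell really does start executing before the transfer is complete, here’s a harmless local demo, no network or real installer involved:
# sh runs the first line immediately, a full five seconds before the
# "rest of the script" even exists on the other end of the pipe.
{ echo 'echo started already'; sleep 5; echo 'echo done'; } | sh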
Installing Rust: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs/ | sh (source)
Installing Homebrew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" (source)
I understand that you find it infuriating, but it’s not something completely uncommon, even in high-end projects :/
There is even a Windows (PowerShell) example for Winutil:
Stable Branch (Recommended)
irm "https://christitus.com/win" | iex
Better than explaining how to make a .ps1 file trusted for execution (thankfully, one of the few executable file extensions that Windows doesn’t trust by default), but why not just use some basic .exe builder at this point?
Obligatory “they better make it a script that automatically creates a medium for silent Linux Mint installation, modifies the relevant BIOS settings and restarts” to prevent obvious snarky replies
--proto '=https' --tlsv1.2
That’s how you know they care: no MITMing that stuff without hijacking the CA, at which point you have a whole other set of problems. And if you trust rustc to not delete your sources when they fail a typecheck, then you can trust their installer. -f is important to not execute half-downloaded scripts on failure, -s and -S are verbosity options, -L follows redirects.
So I was wondering what the flags do too, to check if this is any safer. My curl manual does not say that -f will not output half-downloaded files, only that it will fail on HTTP response codes of 400 or greater… Did you test that it does not emit the part that it got on a network error? At least with the $() that timing attack won’t work, because you only start executing when curl completes…
With the caveat that I’m currently blanking on the semantics of sub-shells, yes I think you’re right: -f is about not executing <html><h1>404 Not Found</h1></html>. Does curl output half-transferred documents to stdout in the first place, though? And also, bash -c is going to hit the command line length limit at some point. And no, I haven’t tried any of this. I use a distribution, I have a package installer.
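To make the distinction in that exchange concrete: the two shapes quoted upthread differ in when execution starts. A rough sketch with a placeholder URL:
# Placeholder URL, for illustration only.
URL=https://example.com/install.sh

# Streaming: sh reads from the pipe and can start executing lines
# before curl has finished transferring the script.
curl -fsSL "$URL" | sh

# Buffered: the command substitution has to finish before bash -c
# starts, so nothing executes mid-transfer. But if the connection
# drops partway, the substitution still holds whatever partial text
# curl managed to emit, and that partial text is what gets run.
bash -c "$(curl -fsSL "$URL")"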
See the proof of concept for the pipe detection mentioned elsewhere in the thread https://github.com/Stijn-K/curlbash_detect . For that to work, curl has to send to stdout without having all data yet. Most reasonable scripts won’t be large enough, and will probably be buffered in full, though, I guess.
Thanks for the laugh on the package installer, haha.
Common or not, it’s still fucking awful and the people who promote this nonsense should be ashamed of themselves.
Don’t forget Pi-hole! It’s been the default install method since basically the beginning.
I’ve seen a lot of projects doing this lately. Just run this script, I made it so easy!
Please, devs, stop this. There are defined ways to distribute your apps. If it’s local, provide a binary, a flatpak, or an exe. For Docker, provide a Docker image with well-documented environment variables, ports, and volumes. I do not want arbitrary scripts that set all this up for me; I want the defined ways to do this.
Would you prefer
$ curl xyz
$ chmod +x xyz
$ ./xyz
?
You can detect server-side whether curl is piping the script to Bash and running it vs just downloading it, and inject malicious code only in the case no one is viewing it
https://github.com/Stijn-K/curlbash_detect
So that would at least be a minor improvement
I mean, how about:
- Download the release for your arch from the releases page.
- Extract to ~/.local/bin (rough sketch after this list)
- Run
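Something like the following, where the archive and binary names are purely illustrative, not the project’s real asset names:
# Illustrative names; check the actual release page for the real ones.
mkdir -p ~/.local/bin
curl -fsSLO https://example.com/releases/tool-linux-x64.tar.gz
# (Verify the checksum here, as described upthread.)
tar -xzf tool-linux-x64.tar.gz -C ~/.local/bin tool

# Make sure ~/.local/bin is on your PATH.
export PATH="$HOME/.local/bin:$PATH"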
I think you missed the point.
Why is that safer/better? That binary can do anything a shell script can, and it’s a lot harder to inspect.
It’s bad practice to do it, but it makes it especially easy for end users who already trust both the source and the script.
On the flip side, you can also just download the script from the site without piping it directly to bash if you want to review what it’s going to do before you run it.
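Turning the one-liner into a reviewable two-step is trivial; the URL here is a placeholder:
# Same installer, but you get to read it before it runs.
curl -fsSL https://example.com/install.sh -o install.sh
less install.sh
sh install.sh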
Would have been much better if they just pasted the (probably quite short) script into the readme so that I can just paste it into my terminal. I have no issue running commands I can have a quick look at.
I would never blindly pipe a script to be executed on my machine though. That’s just next level “asking to get pwned”.
It’s bad practice to do it, but it makes it especially easy for end users who already trust both the source and the script.
You’re not wrong, but this is what led to the xz "hack" not too long ago. When it comes to data, trust is a fickle mistress.
I agree, but hey, at least you can inspect the script before running it, in contrast to every binary installer you’re asked to download.
I assume your concern is with security, so then what’s the difference between running the install script from the internet and downloading a binary from the internet and running it?
To add to OP’s concerns, the server can detect if you run curl <URL> | sh rather than just downloading the file, and deliver a malicious payload only in the piped-to-sh case, where no one is viewing it.
You’re already installing a binary from them; the trust in both the authors and the delivery method is already there.
If you don’t trust, then don’t install their binaries.