Claude installed every CLI, prompted me to login once, then went into autopilot.
Claude asked if it could SSH into my Hetzner instance to investigate. I said yes.
I’m all for AI tools, but I have security concerns about letting anything in like this. Even if it worked safely once, these tools change; each update requires its own investigation to see whether the outcome is still the same.
It’ll ask you before running every command. It’s not just running blindly, unless you let it do so.
Sounds like this person let it. But still, what if the command is long, with lots of arguments, pipes, and confusing flags?
I think most people would probably just allow it. I’m not saying I wouldn’t… but this all makes me very nervous, because it fails in small ways pretty often.
I always read the whole command, otherwise it could do whatever. I generally let it have read+write access in the project directory (it’s in git anyway) and manually review every command it wants to run.
I’d rather let a junior engineer loose in my codebase. At least I know their intent and where they live.