Pay08 3 days ago [-]
Really neat. Out of curiosity, does this need to use the GitHub API? I hoped something like this could be done with plain HTTP.
v9v 3 days ago [-]
I use https://github.com/TxGVNN/github-explorer for this, and even though it doesn't have a C-x C-f nicety (you just M-x github-explorer then type in the repo name), it works via HTTP (or at least I don't recall giving it any API key or anything).
iLemming 2 days ago [-]
From what I can tell (by just glancing over the code) - it doesn't open the tree in Dired. I just wanted to browse any GitHub repo in Dired. You can browse the tree in any branch, view and copy files out, grab GH urls for files, for regions, etc. You just can't make any edits - no file/subdir renaming.
iLemming 3 days ago [-]
Of course it needs to use the API. How else would you read private repos?
necovek 3 days ago [-]
Authenticated HTTP or even SSH should allow it, especially if you are restricting to GH and know how their web URLs translate into git repo URLs.
iLemming 3 days ago [-]
Ah, okay. I get now what the question meant. Sorry, it's past midnight here and my brain is ketchup. Git's own protocols let you talk to a remote repo without cloning it, so why not use that, right? Multiple reasons:
- Tree listing. There's no raw HTTP URL that gives you a directory listing - raw.githubblabla.com can't serve directory indexes. You'd have to shell out to git ls-tree etc. over SSH, which essentially means implementing a partial git client.
- Getting subtrees is also problematic.
- Branch listing and repo search - there's no git protocol equivalent for those; you need the API.
- Current approach fetches the entire tree in one API call. Doing the same over pack protocol means negotiating a fetch, receiving packfile data and parsing it. Much heavier, much more code.
We can only imagine a world where git's transport layer gives you a browsable filesystem interface. It doesn't - git's protocols are optimized for syncing object graphs, not random-access file browsing.
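As an illustration, the one-call tree fetch described above can be sketched in Python. The `git/trees` endpoint with `recursive=1` is GitHub's actual API; the helper names here are made up for the sketch:

```python
import json
import urllib.request

API = "https://api.github.com"

def tree_url(owner, repo, ref="HEAD"):
    # One request returns the whole tree, recursively --
    # the kind of listing no raw-file URL ever exposes.
    return f"{API}/repos/{owner}/{repo}/git/trees/{ref}?recursive=1"

def blob_paths(tree_json):
    # Each entry carries a slash-separated 'path', so the client can
    # rebuild the directory hierarchy (e.g. for a Dired buffer).
    return [e["path"] for e in tree_json["tree"] if e["type"] == "blob"]

def fetch_tree(owner, repo, ref="HEAD", token=None):
    req = urllib.request.Request(
        tree_url(owner, repo, ref),
        headers={"Accept": "application/vnd.github+json"},
    )
    if token:  # required for private repos and higher rate limits
        req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as resp:
        return blob_paths(json.load(resp))
```

Doing the equivalent over the pack protocol would mean negotiating a fetch and parsing packfile data, as noted above.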
medivhX 1 day ago [-]
Does this work with private repos? Or only public GitHub repos?
iLemming 1 day ago [-]
Yes - works with both.
captn3m0 2 days ago [-]
What I want at this point is a classic.github.com which uses the old UI from 2013. That was perfect and fast.
iLemming 2 days ago [-]
The bigger point is not about "a better UI", it's about having control over plain text. My issue with things like Jira/GitHub/Slack is not that they don't provide a nice UI/UX but that they each do it on their own terms - I can't easily edit a Jira comment without having to deal with their shitty WYSIWYG editor; I can't quickly and easily extract a code snippet from a Slack message without wanting to smash my keyboard in anger. If I can see that stuff on my screen and read it, why the heck do they make it so vexatiously difficult to extract it and deal with it in something else? Why do I have to jump through enormous hoops, every fucking time?
Using Emacs liberated me from wasting my energy on crap like that. Why would I ever complain about GitHub changing/not having/breaking their UI? If I just want to browse files, and the well-trodden path for doing exactly that has existed in my tool belt for years, why wouldn't I just use that?
the_biot 2 days ago [-]
But GitHub now handles far more traffic, without a doubt. I'm not sure their infrastructure is keeping up; if it is, the current sluggishness may simply be how fast it can go under that load.
If you want to see what it should be, check any forgejo/codeberg repo.
It would be even nicer if we could somehow mount the GitHub repo through FUSE so that we could run ripgrep on the codebase or even launch the LSP.
For changed files we could implement CoW, so we could keep modified files without uploading them to the remote.
For speedup we could cache files locally.
... oh yeah, I guess we get all this (dired, ripgrep, lsp, speed) for free when running `git clone <url>`?
iLemming 1 day ago [-]
A lazy-clone virtual filesystem? That is way out of scope for this project.
FUSE needs a userspace daemon, and Elisp is not a practical choice for implementing one - you'd need to build a CLI companion. Ripgrep and LSP both need access to most or all files, so you'd have to fetch every blob, at which point you're fighting the GitHub API's rate limits - not worth it; easier to just clone. CoW would add a local mutation layer, which means state management and conflict detection, and by the time you get there you're reimplementing git itself.
The point about ripgrep is worth considering though - maybe a search command that hits the search API would be nice. There are some constraints though: 30 requests/minute (I think); it only works with indexed branches (non-default branches may not be indexed); it only indexes files under a certain size (something like 400 KB); and there's no regex support. All that makes me think it might be better to just make it easy to clone and jump to the cloned repo instead.
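A rough sketch of how such a search command might build its request, assuming GitHub's standard `/search/code` endpoint (`code_search_url` is a made-up helper, and the limits in the docstring are the hedged ones from the comment above):

```python
import urllib.parse

def code_search_url(query, owner, repo):
    """Build a GitHub code-search API URL (hypothetical helper).

    Endpoint constraints as understood above: roughly 30 requests/minute,
    only indexed (typically default) branches are searchable, large files
    (on the order of a few hundred KB) are skipped, and the query is
    literal text -- no regex.
    """
    # Scope the query to one repo with the 'repo:' qualifier.
    q = f"{query} repo:{owner}/{repo}"
    return "https://api.github.com/search/code?" + urllib.parse.urlencode({"q": q})
```

The result would still need the usual `Accept: application/vnd.github+json` header and a token for private repos.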