We are starting a brand resources repository for @govdirectory but I am struggling to decide on a good structure for it. Do you have an example of a repo that does this well? I would be thankful for all the inspiration I could get.
Our empty repo: github.com/govdirectory/brand-…
GitHub - govdirectory/brand-resources: Brand resources repository for Govdirectory - a crowdsourced and fact-checked directory of official governmental online accounts and services.
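One possible layout for such a brand resources repository (every folder and file name below is an illustrative assumption, not Govdirectory's actual structure):

    brand-resources/
    ├── logos/           # primary logo as SVG plus PNG exports, monochrome variants
    ├── colors/          # palette definitions with hex and RGB values
    ├── typography/      # fonts in use, or pointers to where to obtain them
    ├── templates/       # slide decks, social media images, document templates
    ├── guidelines.md    # usage rules: clear space, minimum size, dos and don'ts
    └── LICENSE          # license covering the assets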
Jan Vlug 🌱 🐷 🤍❤️🤍 💙💛
In reply to Jan Ainali • • •

Jan Ainali
In reply to Jan Vlug 🌱 🐷 🤍❤️🤍 💙💛 • • •

Jan Vlug 🌱 🐷 🤍❤️🤍 💙💛
In reply to Jan Ainali • • •For hosting Git projects there are several #opensource options. The best known is #GitLab, though some of its functionality is not freely available. Other options are @Codeberg, @forgejo and @gitea.
There are probably several more. Extra information is welcome.
Read especially: docs.codeberg.org/getting-star…
#gitlab #codeberg #forgejo #FOSS #Git #gitea
What is Codeberg? | Codeberg Documentation
docs.codeberg.org

nilesh
In reply to Jan Vlug 🌱 🐷 🤍❤️🤍 💙💛 • • •Do you happen to know which one among these protects my code AND content (such as discussions on issues) from AI companies taking it (without consent) for training their models? GitHub, owned by Microsoft (which partially owns OpenAI), is clearly not it.
Tagging @Codeberg
Jan Ainali
In reply to nilesh • • •

Codeberg.org
In reply to Jan Ainali • • •Hi, this is true. Although we do opt out of AI scrapers via our robots.txt, it is still the Internet, and relying on any platform for such protection is not 100% safe, whether you rely on GitHub or on our platform. We recently had to block IP addresses that were relentlessly scraping our platform, but only _after_ they had caused performance problems. ~n
P.S. WIRED published something on AI "content aggregators" ignoring robots.txt very recently: wired.com/story/perplexity-is-…
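The thread does not quote Codeberg's actual robots.txt, so the snippet below is only a sketch of the kind of opt-out rules being described; the listed user agents (GPTBot is OpenAI's crawler, CCBot is Common Crawl's) are examples, not a claim about what Codeberg's file contains:

    # Illustrative robots.txt opt-out for known AI-training crawlers
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # All other crawlers remain allowed
    User-agent: *
    Disallow:

As the post notes, this only works for crawlers that choose to respect robots.txt.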
nilesh
In reply to Codeberg.org • • •

Codeberg.org
In reply to nilesh • • •@nilesh They are not viable in our use case because they would either get in the way of developers trying to get work done or raise privacy concerns for the users relying on our platform.
We are also, well, out there, because we are a bunch of optimists who believe that we can get some things done if we act beyond the monoculture imposed by a monopoly. Relying on a platform in a similar market position to provide our service would, well, probably strike some as deeply ironic.
Codeberg.org
In reply to Codeberg.org • • •@nilesh If you try to distinguish between robots and humans like that, you're bound to block, say, Tor users. Therefore, "retroactive" is the way to go.
Think of DRM: if you use Netflix in Google Chrome on Linux, you get considerably worse quality for the same price, because you could record your own screen and make a copy. But not in the Netflix app for Windows: it's inconvenient to some, but meant to protect something. Their shows still land on piracy sites within hours anyway. ~n
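The "retroactive" approach described here, blocking only after abusive behaviour shows up in real traffic, can be approximated from ordinary access logs. The Python sketch below is illustrative only: the log path, log format and threshold are assumptions, not Codeberg's actual setup.

    #!/usr/bin/env python3
    """Sketch: find client IPs that hammer the server, from an nginx-style access log.

    Assumptions (not Codeberg's setup): combined log format with the client IP as
    the first field, log at /var/log/nginx/access.log, and a simple total-request
    threshold instead of a proper sliding time window.
    """
    from collections import Counter

    LOG_PATH = "/var/log/nginx/access.log"  # assumed path
    THRESHOLD = 10_000                      # assumed limit; tune to real traffic

    def heavy_hitters(path: str, threshold: int) -> list[tuple[str, int]]:
        counts: Counter[str] = Counter()
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                # In the combined log format the client IP is the first field.
                ip = line.split(" ", 1)[0]
                counts[ip] += 1
        return [(ip, n) for ip, n in counts.most_common() if n >= threshold]

    if __name__ == "__main__":
        for ip, n in heavy_hitters(LOG_PATH, THRESHOLD):
            # These are candidates for a firewall block, to be reviewed by a human.
            print(f"{ip}\t{n} requests")

The point of the sketch is the design choice in the post: no up-front guessing about who is a robot, only a reaction to observed behaviour, which avoids collateral damage to Tor users and other privacy-conscious visitors.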