We are starting a brand resources repository for @govdirectory, but I am struggling to decide on a good structure for it. Do you have an example of a repo that does this well? I would be thankful for any inspiration I can get.

Our empty repo: github.com/govdirectory/brand-…

#civictech #branding

In reply to Jan Ainali

For hosting Git projects there are several #opensource options. The best known is #GitLab, though some of its functionality is not freely available. Other options are @Codeberg, @forgejo and @gitea.

There are probably several more. Additional suggestions are welcome.

Read especially: docs.codeberg.org/getting-star…

#gitlab #codeberg #forgejo #FOSS #Git #gitea

In reply to Jan Vlug 🌱 🐷 🤍❤️🤍 💙💛

Do you happen to know which of these protects my code AND content (such as discussions on issues) from AI companies taking it without consent to train their models? GitHub, owned by Microsoft (which partially owns OpenAI), is clearly not it.

Tagging @Codeberg

In reply to nilesh

No, I don't. I imagine, though I must emphasize that I am not a lawyer, that if you open-source the code, no platform will "protect" it. It will be up to you to sue if you think someone is not complying with your license.
In reply to Jan Ainali

Hi, this is true. Although we opt out of AI scrapers via our robots.txt, this is still the Internet, and relying on the platform for such protection is not 100% safe, no matter whether you rely on GitHub or on our platform. We recently had to block IP addresses that were relentlessly scraping our platform, but only _after_ they had caused performance problems. ~n

P.S. WIRED published something on AI "content aggregators" ignoring robots.txt very recently: wired.com/story/perplexity-is-…
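
(For reference, the opt-out mentioned above is just a plain-text robots.txt file at the site root. A minimal sketch follows; GPTBot, CCBot, Google-Extended and PerplexityBot are real, published crawler tokens, but the exact set Codeberg lists is an assumption here, and compliance is voluntary, which is exactly the weakness discussed in this thread.)

```
# Hypothetical robots.txt sketch: asks known AI crawlers not to fetch anything.
# The user-agent tokens below are published by the respective vendors.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Disallow: /

# All other crawlers may proceed normally (an empty Disallow allows everything).
User-agent: *
Disallow:
```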

In reply to Codeberg.org

@Codeberg Have you considered protections beyond robots.txt? For example, DDoS-protection tools like Cloudflare, which OpenAI uses for its own site?
In reply to nilesh

@nilesh They are not viable in our use case because they would either get in the way of developers trying to get work done, or raise privacy concerns for the users relying on our platform.

We are also, well, out there, because we are a bunch of optimists who believe we can get some things done if we act beyond the monoculture imposed by a monopoly. Relying on a platform in a similar market position to provide service would, well, probably strike some as deeply ironic.

In reply to Codeberg.org

@nilesh If you try to distinguish between robots and humans like that, you're bound to block, say, Tor users. Therefore, "retroactive" blocking is the way to go.

Think of DRM: if you use Netflix in Google Chrome on Linux, you get considerably worse quality for the same price, because you could record your own screen and make a copy. Not so with the Netflix app for Windows; it's inconvenient to some, but meant to protect something. Their shows still land on piracy sites within hours anyway. ~n

In reply to Codeberg.org

@nilesh (Of course, rate limiting on our side is still something that, as far as I know, can be worked on.)
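
(As an illustration of what such server-side rate limiting could look like, here is a minimal nginx sketch. The zone name, limits, port and upstream are invented for the example; nothing here reflects Codeberg's actual configuration.)

```
# Hypothetical sketch: throttle each client IP to 10 requests per second,
# absorbing short bursts of up to 20 extra requests before answering 429.
events {}

http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    upstream forge_backend {
        server 127.0.0.1:3000;   # e.g. a Gitea/Forgejo instance (assumed)
    }

    server {
        listen 80;

        location / {
            limit_req zone=perip burst=20 nodelay;
            limit_req_status 429;               # "Too Many Requests"
            proxy_pass http://forge_backend;
        }
    }
}
```

(Unlike a CAPTCHA or browser check, this slows abusive clients without trying to distinguish humans from robots, so Tor users are not singled out.)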
