• ExLisper@lemmy.curiana.net · 4 points · 5 hours ago

    I think this is the first time I’ve found a reasonable take on “how to fix the internet”. You can’t fix the corpo web. Most people just want constant updates, and they don’t care about ads, bots and AI slop. You can’t change their minds.

    Saying “fuck it, I will just build my own thing and I don’t care if anyone will see it” is the right approach. A couple of times I thought about creating some guides (like a guide to public EV chargers in Spain), and I just gave up because I realized I’m not going to win the SEO war and no one is going to see them. Why write guides if they aren’t helping anyone? I’m still not sure if it makes sense to create guides, but it may be a good idea to create a simple site, post some photos, share a story. I will probably do it.

      • interdimensionalmeme@lemmy.ml · 8 points · 18 hours ago

        Buy the cheapest laptop you can find; one with a broken screen is fine. Install Debian 12 on it and give it a memorable name, like “server”.

        Go to a DNS registrar of your choice, maybe “porkbun”, and buy your internet DNS name, for example “MyInternetWebsite.tv”. This will cost you $20/$30 for the rest of your life, or until we finally abolish the DNS system for something less extortionate.

        Install webmin and then apache on it.

        Go to your router and give the laptop a static address in the DHCP section. Some routers do not have the ability to apply a static DHCP lease to computers on your network; in that case it will be more complicated, or you will have to buy a new one, preferably one that supports openwrt. Then go to port forwarding and forward ports 80 and 443 to the address of the static DHCP lease.

        Now use puttygen to create a private key, and copy the public key to the file /root/.ssh/authorized_keys on your Linux laptop.

        Go to the webmin interface, which can be accessed at http://server.lan:10000/ from any computer on your network, and set up dynamic DNS. This will make the DNS record for MyInternetWebsite.tv change when the IP of your internet connection changes, which can happen at any time but usually rarely does. You have to do this, or else when it changes again, your website and email will stop working.

        Now go to your desktop computer, download winsshfs, put in your private key, and mount the folder /var/www/html/ to a drive letter like “T:”. Now, whatever you put in T: will be the content of your very own internet web server. Enjoy.

        • ohshit604@sh.itjust.works · 2 points · edited · 3 hours ago

          While I appreciate the detailed response here, I did make another comment letting OP know I’m in a similar situation to them. I use Docker Engine & Docker Compose for my self-hosting needs on a 13th Gen Asus NUC (i7 model) running Proxmox with a Debian 12 VM. My reverse proxy is Traefik and I am able to receive SSL certificates on ports :80/:443 (I also have Fail2Ban set up); however, I can’t for the life of me figure out how to expose my containers to the internet.

          On my iPhone over LTE/5G, trying my domain leads to an “NSURLErrorDomain” error, and my research into this error doesn’t give me much clarity. Edit: it appears to be a 503 error.

          This is a snippet of my docker-compose.yml
          services:
            homepage:
              image: ghcr.io/gethomepage/homepage
              hostname: homepage
              container_name: homepage
              networks:
                - main
              environment:
                PUID: 0 # optional, your user id
                PGID: 0 # optional, your group id
                HOMEPAGE_ALLOWED_HOSTS: my.domain,*
              ports:
                - '127.0.0.1:3000:3000'
              volumes:
                - ./config/homepage:/app/config # Make sure your local config directory exists
                - /var/run/docker.sock:/var/run/docker.sock #:ro # optional, for docker integrations
                - /home/user/Pictures:/app/public/icons
              restart: unless-stopped
              labels:
                - "traefik.enable=true"
                - "traefik.http.routers.homepage.rule=Host(`my.domain`)"
                - "traefik.http.routers.homepage.entrypoints=https"
                - "traefik.http.routers.homepage.tls=true"
                - "traefik.http.services.homepage.loadbalancer.server.port=3000"
                - "traefik.http.routers.homepage.middlewares=fail2ban@file"
                # - "traefik.http.routers.homepage.tls.certresolver=cloudflare"
                #- "traefik.http.services.homepage.loadbalancer.server.port=3000"
                #- "traefik.http.middlewares.homepage.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.0/24, 172.18.0.0/16, 208.118.140.130"
                #- "traefik.http.middlewares.homepage.ipwhitelist.ipstrategy.depth=2"
            traefik:
              image: traefik:v3.2
              container_name: traefik
              hostname: traefik
              restart: unless-stopped
              security_opt:
                - no-new-privileges:true
              networks:
                - main
              ports:
                # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
                - target: 80
                  published: 55262
                  mode: host
                # Listen on port 443, default for HTTPS
                - target: 443
                  published: 57442
                  mode: host
              environment:
                CF_DNS_API_TOKEN_FILE: /run/secrets/cf_api_token # note using _FILE for docker secrets
                # CF_DNS_API_TOKEN: ${CF_DNS_API_TOKEN} # if using .env
                TRAEFIK_DASHBOARD_CREDENTIALS: ${TRAEFIK_DASHBOARD_CREDENTIALS}
              secrets:
                - cf_api_token
              env_file: .env # use .env
              volumes:
                - /etc/localtime:/etc/localtime:ro
                - /var/run/docker.sock:/var/run/docker.sock:ro
                - ./config/traefik/traefik.yml:/traefik.yml:ro
                - ./config/traefik/acme.json:/acme.json
                #- ./config/traefik/config.yml:/config.yml:ro
                - ./config/traefik/custom-yml:/custom
                # - ./config/traefik/homebridge.yml:/homebridge.yml:ro
              labels:
                - "traefik.enable=true"
                - "traefik.http.routers.traefik.entrypoints=http"
                - "traefik.http.routers.traefik.rule=Host(`traefik.my.domain`)"
                #- "traefik.http.middlewares.traefik-ipallowlist.ipallowlist.sourcerange=127.0.0.1/32, 192.168.1.0/24, 208.118.140.130, 172.18.0.0/16"
                #- "traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_DASHBOARD_CREDENTIALS}"
                - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
                - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
                - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
                - "traefik.http.routers.traefik-secure.entrypoints=https"
                - "traefik.http.routers.traefik-secure.rule=Host(`my.domain`)"
                #- "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
                - "traefik.http.routers.traefik-secure.tls=true"
                - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
                - "traefik.http.routers.traefik-secure.tls.domains[0].main=my.domain"
                - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.my.domain"
                - "traefik.http.routers.traefik-secure.service=api@internal"
                - "traefik.http.routers.traefik.middlewares=fail2ban@file"
          
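          For context: the entrypoints in the labels are named http/https, and the middlewares reference fail2ban@file, so they have to line up with the traefik.yml static config that the compose file mounts. A minimal sketch of a matching traefik.yml would look roughly like this (placeholder values, not a copy of my actual file; the /custom directory is the ./config/traefik/custom-yml mount from above):

          api:
            dashboard: true

          entryPoints:
            http:
              address: ":80"
            https:
              address: ":443"

          providers:
            docker:
              endpoint: "unix:///var/run/docker.sock"
              exposedByDefault: false    # containers opt in with traefik.enable=true
            file:
              directory: /custom         # where file-provider middlewares like fail2ban@file live
              watch: true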

          Image of my port-forwarding rules (note: the 3000 internal/external port was me “testing”)


          Edit: I should note the Asus Documentation for Port-forwarding mentions this:

          1. Port Forwarding only works within the internal network/intranet(LAN) but cannot be accessed from Internet(WAN).

          (1) First, make sure that Port Forwarding function is set up properly. You can try not to fill in the [ Internal Port ] and [ Source IP ], please refer to the Step 3.

          (2) Please check that the device you need to port forward on the LAN has opened the port. For example, if you want to set up a HTTP server for a device (PC) on your LAN, make sure you have opened HTTP port 80 on that device.

          (3) Please note that if the router is using a private WAN IP address (such as connected behind another router/switch/modem with built-in router/Wi-Fi feature), could potentially place the router under a multi-layer NAT network. Port Forwarding will not function properly under such environment.

          Private IPv4 network ranges:

          Class A: 10.0.0.0 – 10.255.255.255

          Class B: 172.16.0.0 – 172.31.255.255

          Class C: 192.168.0.0 – 192.168.255.255

          CGNAT IP network ranges:

          The allocated address block is 100.64.0.0/10, i.e. IP addresses from 100.64.0.0 to 100.127.255.255.

          I want to highlight that I may be behind a multi-layered NAT. The folks in my household insist on keeping the ISP router, given that I have Pi-hole running DNS blocking and my Asus router routes all outbound connections through a VPN tunnel (besides DDNS, obviously, which my router also handles), so I have to run these routers in bridged mode so that they share the same WAN IP. But if I am able to receive SSL/TLS certificates from LetsEncrypt on ports :80/:443, that means port forwarding is working as intended, right?
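          (For reference, the certresolver in my labels is the cloudflare one; a resolver block in traefik.yml looks roughly like the sketch below, with placeholder values. Whether it uses the dnsChallenge, as here, or the httpChallenge is what determines if certificate issuance actually exercises ports 80/443.)

          certificatesResolvers:
            cloudflare:
              acme:
                email: you@example.com     # placeholder
                storage: acme.json
                dnsChallenge:
                  provider: cloudflare     # issues certs via the Cloudflare API (CF_DNS_API_TOKEN)
                # httpChallenge:
                #   entryPoint: http       # this variant is the one that needs port 80 reachable from outside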

        • ohshit604@sh.itjust.works · 2 points · edited · 19 hours ago

          I’m in the same boat (sorta)!

          Follow-up question: did you have trouble exposing ports :80 & :443 to the internet? Also, are you using Swarm or Kubernetes?

          I have Docker Engine set up on a machine alongside Traefik (I have tried Nginx in the past), primarily using Docker Compose, and it works beautifully on LAN. However, I can’t seem to figure out why I can’t connect over the internet; I’m forced to WireGuard/VPN into my home network to access my site.

          No need to provide troubleshooting advice, just curious about your experience.

          • otacon239@lemmy.world · 2 points · edited · 5 hours ago

            I keep everything as flat as possible. Just the regular docker (+compose) package running on vanilla Debian. On the networking side, I’m lucky in that I have a government-run fiber provider that doesn’t care that much what I host, so it’s just using the normal ports.

            I did previously use C*mcast, and I remember there was an extra step I had to do to get it to redirect port 80 to 443, but I couldn’t tell you what that step was anymore.
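            (If it helps anyone reading later: with a reverse proxy like Traefik, for example, a global port 80 to 443 redirect is just an entrypoint option in the static config, roughly like the sketch below. I’m not claiming this is the exact step I had to do on C*mcast back then.)

            entryPoints:
              web:
                address: ":80"
                http:
                  redirections:
                    entryPoint:
                      to: websecure
                      scheme: https
              websecure:
                address: ":443"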

      • otacon239@lemmy.world · 1 point · 1 day ago

        Maybe that’s a dark mode thing? I know Dark Reader breaks almost anything with an already dark theme.

        • MonkderVierte@lemmy.zip · 2 points · 24 hours ago

          Lol, no. I made a usercss for this (currently not released) but explicitly disabled it here. But that one uses a base style that switches via @media (prefers-color-scheme: light/dark):

          @media (prefers-color-scheme: dark) {
            :root {
              --text-color: #DBD9D9;
              --text-highlight: #232323;
              --bg-color: #1f1f1f;
              …
            }
          }
          @media (prefers-color-scheme: light) {
            :root {
              …
            }
          }

          Guess your site uses one of them too.

          • otacon239@lemmy.world · 3 points · 23 hours ago

            I admit I used Publii for my builder. I can’t program CSS for crap. I’m far more geared towards backend dev.

  • shiroininja@lemmy.world · 12 points · 1 day ago

    I think I wrote this. This is my philosophy for how the web should be. Social media shouldn’t be the main highway of the web. And the internet should be more of a place to visit, not an always-there presence.

  • MagicShel@lemmy.zip · 10 points · 1 day ago

    One of the things I miss about web rings and recommended links is that it’s people who are passionate about a thing saying “here are other folks worth reading on this.” Google is a piss-poor substitute for the recommendations of people you like to read.

    The only problem with the slow web is that people write about what they are working on; they aren’t trying to exhaustively create “content”. By which I mean, they aren’t going to have every answer to every question. You read what’s there; you don’t go searching for what you want to read.

    • AnarchistArtificer@lemmy.world · 4 points · 24 hours ago

      Something that I have enjoyed recently is blogs by academics, which often have a list of other blogs that they follow. Additionally, in their individual posts, there is often a sense of them being a part of a wider conversation, due to linking to other blogs that have recently discussed an idea.

      I agree that the small/slow web stuff is more useful for serendipitous discovery rather than searching for answers to particular queries (though I don’t consider that a problem with the small/slow web per se, but rather with the poor ability to search for non-slop content on the modern web).

  • crank0271@lemmy.world · 11 points · 1 day ago

    Interesting read. It captures a lot of how I feel and what I miss about the “old internet.”

  • rumimevlevi@lemmings.world · 6 points · 1 day ago

    I don’t know about that. I don’t want to manage visiting dozens of websites.

    Technically it is also possible to make interactionless feeds with no like and share buttons.

    • rottingleaf@lemmy.world · 4 up / 1 down · 1 day ago

      How’s visiting dozens of pages different from visiting dozens of websites?

      And BTW, on sites where feeds are in fashion, maybe some kind of Usenet upgraded for HTML, Markdown, and post/author hyperlinks would be a better fit.

      • rumimevlevi@lemmings.world · 2 up / 1 down · 1 day ago

        Visiting feeds is like using tools from one organized toolbox. Visiting many websites is like jumping between many separate toolboxes.

        • rottingleaf@lemmy.world · 5 up / 1 down · 1 day ago

          No. You have a toolbox; it’s called a web browser. To unite the particular websites you have a web ring, or your own bookmarks. There were also web catalogues.

          • rumimevlevi@lemmings.world · 3 up / 1 down · edited · 1 day ago

            Bookmarks are not intuitive enough for me, and RSS feeds are still feeds that have no interaction features, which is how the writer of this article likes it.

            I am always for giving the most power to users. I like compromises like user settings, so people who want a feed with interactions can have one and those who don’t can disable it.

            • shiroininja@lemmy.world · 5 points · 1 day ago

              But why do we need interactive crap for everything? Comments and such on articles are the worst. Not everybody needs to hear you; sometimes you’ve just gotta take in information and process it.

              Like, I literally maintain my own fleet of apps that give me just the article body and images, in a sorted feed. No ads. No links. Nothing. Even the links to other articles, etc. in the middle of an article are too much. I hate that shit. Modern web page design is garbage and unreadable.

              I don’t need to know Stacy from North Dakota’s thoughts on an article, because 99% of the time it’s toxic anyways. Or misinformed.

              • catloaf@lemm.ee · 3 points · 1 day ago

                Interactivity seems to be a good thing. What brings you to participate here on Lemmy?

              • rottingleaf@lemmy.world · 4 up / 1 down · 1 day ago

                Modern web page design is garbage and unreadable.

                Because it’s a “newspaper meets slot machine” design. Kills two birds with one stone, hijacking media (censorship is invisible) and making money (invisible too).

                I don’t need to know Stacy from North Dakota’s thoughts on an article because 99% of the time it’s toxic anyways. Or misinformed.

                And also because not every place is supposed to be crawling with people.