For the vast majority of Docker images, the documentation only mentions a single, long, hard-to-read “docker run” one-liner.

Why is nobody putting an example docker-compose.yml in their documentation? It’s tidy and easy to understand, and much easier to run again in the future: just set and forget.

If every image shipped a YAML file to just copy, I could get it running in a few seconds; instead I have to decode the one-liner and turn it into YAML myself.

I want to know if it’s just me being out of touch and I should use “docker run”, or if a one-liner simply looks tidier in the docs. As if to say: “Hey, just copy and paste this line to run the container. You don’t understand what it does? Who cares.”

The worst are the ones that pipe straight from curl into “sudo bash”…
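To make the comparison concrete, here is the kind of one-liner I mean and the compose file it decodes into (image name, ports, and paths are made up for illustration):

```shell
# A typical "quick start" one-liner from a README (hypothetical image):
docker run -d \
  --name myapp \
  -p 8080:80 \
  -v ./data:/app/data \
  -e TZ=Europe/Lisbon \
  --restart unless-stopped \
  example/myapp:latest
```

The same container as a set-and-forget docker-compose.yml:

```yaml
# docker-compose.yml -- same container, one flag per line
services:
  myapp:
    image: example/myapp:latest
    container_name: myapp
    ports:
      - "8080:80"
    volumes:
      - ./data:/app/data
    environment:
      - TZ=Europe/Lisbon
    restart: unless-stopped
```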

  • OmltCat@lemmy.world · 1 year ago

    Because it’s a “quick start”: the least effort needed to get a taste of it. For an actual deployment I would use compose as well.

    Many projects also have an example docker-compose.yml in the repository if you dig just a little.

    There is https://www.composerize.com to convert a run command to compose. It works ~80% of the time.

    I honestly don’t understand why anyone would make “curl and bash” the official installation method these days, with Docker around. Unless it’s the ONLY thing you install on the system, so many things can go wrong.

    • Anony Moose@lemmy.ca · 1 year ago

      Out of curiosity, is there much more overhead to using Docker than installing via curl and bash? I’m guessing there are some redundant layers Docker uses?

      • Shrek@lemmy.world · 1 year ago

        Of course, but the amount of overhead depends completely on the container. The reason I’m willing to accept the (in my experience) very small overhead I typically see is that the repeatability with Docker is amazing.

        My first server was unRAID (which is Slackware-based Linux, not FreeBSD); I set up Proxmox (Debian with a web UI) later. I took my unRAID server down for maintenance but wanted a certain service to stay up, so I copied a backup from unRAID to the other server and had the service running in minutes. If it were a package, there would be no guarantee that it had been built for both OSes, that both builds were the same version, or that they used the same libraries.

        My favorite way to extend the above is Docker Compose: I create a folder with a docker-compose.yml file and keep EVERYTHING for that service in that single folder. unRAID doesn’t support Docker Compose in its web UI, so I try to keep things in Proxmox for ease of transfer.

        • Anony Moose@lemmy.ca · 1 year ago

          Makes sense! I have a bunch of services (plex, radarr, sonarr, gluetun, etc) on my media server on Armbian running as docker containers. The ease of management is just something else! My HC2 doesn’t seem to break a sweat running about a dozen containers, so the overhead can’t be too bad.

          • Shrek@lemmy.world · 1 year ago

            Yeah, that comes down entirely to the containers you’re running and the people who designed them. If a container is built on Alpine Linux, you can pretty much trust it will have barely any overhead. But if it’s built on an Ubuntu base image, it will carry a bunch of extras that probably aren’t needed in a typical Docker container.

            • Anony Moose@lemmy.ca · 1 year ago

              Good point. Most containers I’ve used do seem to use Alpine as a base. Found this StackOverflow post that compared native vs. container performance, and containers fare really well!

              • Shrek@lemmy.world · 1 year ago

                It seems like that data is from 2014 as well. I’m sure the numbers have improved in almost ten years!

  • Max-P@lemmy.max-p.me · 1 year ago

    Plain docker is useful when running simple containers, or even one-off things. A lot of people think of containers as long-running services, but many containers exist essentially to run a single command to completion and then shut down.

    There are also alternative ways to handle containers: for example, Podman is typically used with systemd services, since unlike Docker it doesn’t run through a persistent daemon, so the configuration goes into a service unit.
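A minimal sketch of that Podman-plus-systemd approach, as a “quadlet” unit file (assumes Podman 4.4+; the image name and paths are hypothetical):

```ini
# ~/.config/containers/systemd/myapp.container
[Unit]
Description=My app, managed by systemd via Podman

[Container]
Image=docker.io/example/myapp:latest
PublishPort=8080:80
Volume=%h/myapp/data:/app/data

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the generated service starts and stops like any other systemd unit (`systemctl --user start myapp`).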

    I typically skip the docker-compose for simple containers, and turn to compose for either containers with loads of arguments or multi-container things.

    I also switch between Docker and Podman depending on the machine and its needs.

  • Toribor@corndog.uk · 1 year ago

    I’ve started replacing my Docker Compose files with pure Ansible that is the equivalent of doing docker run. My Ansible playbooks look almost exactly like my compose files, but they can also create folders, set up config files, or cycle services when their configs are updated.

    It’s been a bit of a learning process, but it has replaced a lot of what was previously documentation with code.
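As a sketch of what that looks like (the module names are real, but the service name, image, and paths are made up), an Ansible play using the community.docker collection:

```yaml
# playbook.yml -- hypothetical example service
- hosts: myserver
  tasks:
    - name: Create the data directory the container mounts
      ansible.builtin.file:
        path: /opt/myapp/data
        state: directory
        mode: "0755"

    - name: Run the container (equivalent of a docker run one-liner)
      community.docker.docker_container:
        name: myapp
        image: example/myapp:latest
        published_ports:
          - "8080:80"
        volumes:
          - /opt/myapp/data:/app/data
        restart_policy: unless-stopped
```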

    • xcjs@programming.dev · 1 year ago

      I’ve done something similar, but I’m using compose files orchestrated by Ansible instead.

    • Zephyr@feddit.nl · 1 year ago

      I did the same, but I started from my list of run scripts… I used ChatGPT to create them; it took two minutes…

      • Toribor@corndog.uk · 1 year ago

        Hahaha, I’ve been using ChatGPT in the exact same way. It requires a bit of double-checking but it really speeds things up a lot.

  • AlexKalopsia@lemmy.world · 1 year ago

    I used docker run when I first started; I think it’s a fairly easy entry point that “just works”.

    However, I would never go back to it, since compose is a lot tighter and offers a better sense of overview and control.

  • ilmagico@lemmy.world · 1 year ago

    I don’t think you’re out of touch; just use Docker Compose. It’s not hard to convert the docker run example command line into a neat docker-compose.yml if they don’t already provide one. So much better than running containers manually.

    Also, you should always understand what any command or compose file does before you run it! And don’t blindly curl | bash either; download the script and read it first.

  • Pixel@lemmy.sdf.org · 1 year ago

    Honestly I never really saw the point of it; it just seems like another dependency. The compose file and the docker run command carry almost the same info. I’d rather jump to kubectl and skip compose entirely. I’d like to see a tool that can convert between these three formats. As for piping into bash: no, I’d only do it for a very trusted package.
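For comparison, the same single-container service expressed for kubectl instead of compose is a small Deployment manifest (image and names are hypothetical; for the conversion wish, `kompose convert` is an existing tool that handles the compose-to-Kubernetes direction):

```yaml
# deployment.yaml -- apply with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: example/myapp:latest
          ports:
            - containerPort: 80
```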

  • Captain Howdy@lemm.ee · 1 year ago

    I prefer to use Ansible to define and provision my containers (Docker/Podman over containerd). For work, of course, k8s and Helm take the cake. No reason to run k8s for personal self-hosting, though.

    • cliffhanger407@programming.dev · 1 year ago

      No reason aside from building endless unnecessary complexity, which–let’s be honest–is 90% of the point of running a home lab.

      Shit’s broken at work: hate it. Shit’s broken at home: ooh a project!

  • giacomo@lemmy.world · 1 year ago

    I’m sure someone has written a script to convert docker run commands to compose files.

    I usually customize variables and tend to use compose for anything I plan to run in “production”. I’ll use run if it’s a temporary or on-demand container.

    It’s not really that much effort to write a compose file from the variables in a run command, but you do have to keep an eye on formatting.

  • Eager Eagle@lemmy.world · 1 year ago (edited)

    It turns out GPT converts plain docker commands into Docker Compose files well enough for me; it’s been my go-to when I need to create a compose YAML. Checking the YAML and making one or two small corrections is even faster than entering all the info into a form like Docker Compose Generator.

  • TitanLaGrange@lemmy.world · 1 year ago (edited)

    Previously my server was just a Debian box where I had a ‘docker’ directory with a bunch of .sh files containing ‘docker run’ commands (and a couple of docker-compose files for services that have closely related containers). That works really well; it’s easy to understand and manage. I had nginx running natively to expose things as necessary.

    Recently I decided to try TrueNAS Scale (I wanted more reliable storage for my media library, which is large enough to be annoying to replace when a drive fails), and I’m still trying to figure it out. It’s kind of a pain in the ass for running containers since the documentation is garbage. The web interface is kind of nice (other than constantly logging me out), but the learning curve for charts and exposing services has been tough, and it seems that ZFS is just a bad choice for Docker.

    I was attracted to the idea of running my services and my NAS on one appliance, but TrueNAS Scale is feeling way too complicated for home scale (and way too primitive for commercial use; I’m not entirely sure what market they’re aiming for), and I’m considering dumping it and setting up two servers: one for NAS and one for my containers and VMs.

  • Morethanevil@lmy.mymte.de · 1 year ago

    I always use docker-compose. It is very handy if you ever want a good backup or to move everything to another server: copy over the files, docker compose up -d, and you are done. Beginners should use Docker Compose from the start; it’s easier than docker run.

    If you ever want to convert those one-liners to a proper .yml, then use this converter

    • casrou@feddit.dk · 1 year ago

      That is one docker compose up -d for each file you copied over, right? Or are you doing something even smarter?

      • Morethanevil@lmy.mymte.de · 1 year ago

        I have one docker-compose.yml per service. In scripts you can use docker compose -f /path/to/docker-compose.yml up -d

        I would never use one big file for everything; you just get various problems, imo.
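That per-service layout is easy to script over. A small sketch (the base path is made up; pass `echo` as the second argument to dry-run it without touching Docker):

```shell
#!/bin/sh
# compose_up_all: run `docker compose up -d` for every
# <base>/<service>/docker-compose.yml found under a base directory.
compose_up_all() {
  base=$1
  runner=$2          # optional; "echo" prints commands instead of running them
  for f in "$base"/*/docker-compose.yml; do
    [ -e "$f" ] || continue     # glob matched nothing: skip
    $runner docker compose -f "$f" up -d
  done
}

# Typical use (path is hypothetical):
# compose_up_all /opt/services
```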

        • SheeEttin@lemmy.world · 1 year ago

          You use a separate file for each service? Why? I use one file per stack, and if anything, breaking them out would cause me issues.

  • hoodlem@hoodlem.me · 1 year ago

    Totally agree. I have to pick apart the run command to build the compose file, then get something wrong and have to go searching.

  • Eager Eagle@lemmy.world · 1 year ago

    Even by the one-liner argument, a better one-liner than any docker run is docker compose up [-d].