systems_glitch

Slackware 14.2 Released!


Slackware 14.2 was released over the weekend (2016-07-01), downloading now! Got my DVD copy on order, too. Anyone else still running Slackware?


I'm sure a few gurus still are. I learned basic Linux commands and administration on Slackware. I worked for a mom-and-pop ISP where the admin was a retired Navy administrator. He ran Slackware for co-hosting, RADIUS (to authenticate ISP end users), POP3, SMTP, DNS, and every other service vital to operations.

 

It did run pretty damn flawlessly.

 


I moved away from it for a while due to everything being locked to "stable" versions, which back then meant old. It seems like Slackware is really keeping up nowadays though, certainly ahead of what RHEL/CentOS ships with most of the time, and IIRC ahead of stock Debian too. That's tracking release branches, not -current.


I tried it once, but the installer annoyed me :/ ... it was either take the whole super-bloated DVD, or pick every single package individually, or something like that.

 

And yeah, Debian stable always seems to be super old. I'm not really sure what I'm going to switch to, since the sid-based distros seem to keep having weird fallouts -- sidux, then aptosid, then siduction -- and then there's LinuxBBQ, which seems to be more of a roll-your-own kind of thing. Even those are still pretty easy to break: you can dist-upgrade on siduction, but you have to do it within about a month or you're very likely to break something, and when the release cycle slips so hard that you might only get one release a year, if you don't grab it right away you can't do much other than manually upgrading packages and hoping nothing breaks. The regular distros may seem stable, but that's how they get there: people wade through nearly every kernel update breaking something.


I never liked CentOS. I installed it once about four years ago: there was no standard non-X install from the full DVD ISO. Then I tried a minimal install, and it didn't even have the tools needed for basic network config to complete the installation online:

#> ifconfig -a
-bash: ifconfig: command not found

 

 

EDIT: I remember just giving up, but not until after trying a full install and then removing X.

 

After the initial full install, the network card wasn't coming up by default and the interface was named something like enp5923427. I left CentOS thinking it was complete lunacy. Why not just rename every command to its SHA-2 hash? Or why even have standard root-level directories at all? /home and /etc could be designated cryptic stuff like /rst345349232 and /pyo34234234.

 

 

"Hello.... built for production and no non-gui and need to type: #>ifconfig eth0 up every time on boot by default, and give interfaces really obscure names using too many characters?????" Too many WTF???? Moments with CentOS. Was easier to just find something else... was convinced the maintainers of CentOS just wanted admins to pull all their out.


Pretty sure the interface renaming has more to do with systemd migration than anything else. It's supposed to be a unique identifier for the interface for...reasons? Also `ifconfig` is deprecated basically everywhere...I guess the distinction between a system that acknowledges legacy support and one that doesn't is whether the `ifconfig` compat shim is installed in the base system, or if the answer is basically, "fuck you, learn `ip` syntax."
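
For what it's worth, on the RHEL/CentOS side that compat shim is, as far as I know, literally just the old net-tools package -- something like:

# rough sketch on a CentOS 7 style box; brings back ifconfig, route, netstat, arp
yum install net-tools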

 

And then of course there's the transition to firewalld, which is almost but not completely capable of doing the same things as iptables, and apparently still uses iptables under the hood.
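
As a rough illustration (syntax from memory, so double-check it), opening a port looks something like this in each:

# plain iptables
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# the firewalld front end, which still programs iptables rules underneath
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --reload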


It's kind of nice to install something and be able to find your IP address and know what your interface name will be, and not need to remember 20 or more enpXXXXXXXXX strings -- or even to know the name of your interface just to pass it to commands. Very inefficient, from my perspective. :-) If one builds five multi-homed boxes, that's a lot of random enpXXXXXXXXX to remember. I don't think I was the only one with these complaints on the forums, either.

 

Anyway, I guess if guessing random interface names is your thing, more power to ya. But it seems a lot more practical to me to call them something like eth0, eth1, and eth2. How often do we really need unique interface names across a LAN? More often we just need stuff up and running efficiently, especially in production. :-)

 

And then, if we're special and really do need unique names, we can set that up ourselves. :-)

[root@tecmint ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0B:CD:1C:18:5A
          inet addr:172.16.25.126  Bcast:172.16.25.63  Mask:255.255.255.224
          inet6 addr: fe80::20b:cdff:fe1c:185a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2341604 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2217673 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:293460932 (279.8 MiB)  TX bytes:1042006549 (993.7 MiB)
          Interrupt:185 Memory:f7fe0000-f7ff0000

 

It works on every other distro for me. To me it looks like they expected someone to be using a GUI.
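
If I remember right, you can also just turn the new naming scheme off with a couple of kernel parameters -- something along these lines on a CentOS 7 box (paths may differ on UEFI):

# sketch: append net.ifnames=0 biosdevname=0 to the GRUB_CMDLINE_LINUX line
# in /etc/default/grub, then regenerate the grub config and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg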

 

 


I'm pretty sure you're spot-on with the "expected to be using a GUI" remark. I think that's also why the firewalld syntax is as obtuse as it is. Not meant for anyone to hack at anymore. I guess it also doesn't matter if you're using some devops solution like Chef, Puppet, Ansible, Salt, or whatever. I definitely still prefer `ethX` naming, even the BSDs' convention of using $driver_nameX (e.g. em0 for Intel gigabit, vr0 for VIA Rhine), which I at first liked less than `ethX`, is better than the random string of garbage you get nowadays. At least the BSD approach provides additional useful info!


Yes. I am actually doing a CentOS tutorial for a Lynda(ish) company. CentOS is great with the GUI. 

 

To put my comment in context: "I wish CentOS had an out-of-the-box CLI version like Ubuntu Server." I really need to learn Puppet. Like, really badly, now. It's so hard to keep skills up to date these days; I miss the 90's and cushy I.T. work. :(

 

 

When I installed the minimal CentOS for a CLI install, I jumped through a million hoops to get WiFi working. I was not pleased to see: adapter: wisdf9i80s8ifs08fsdfsdf0. I could not copy and paste. :-( My memory sucks, and the hardest part of the job (besides trying to find packages) was memorizing wsiakjjkwresadfsadflk;kjl90. I finally renamed them to wifi0 and eth1. :-)
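
For anyone wanting to do the same, one common way is a udev rule keyed on the MAC address -- roughly like this, with made-up MACs:

# /etc/udev/rules.d/70-persistent-net.rules (the MACs below are placeholders)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="wifi0"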

 

 

After reading up on it: yes, ifconfig is now considered obsolete. I guess I can blame that on myself for not keeping up. But it is still there on every other distro. I'll probably think about updating my skill set instead of bitching and complaining. :-/


So, one of the things with Puppet/Chef/Ansible/Salt, etc. is you get a base configuration set up, and you work off of that. A quick way to get up and going is to play with Vagrant, which includes many premade generic "boxes" (VM appliances, or whatever you're used to calling them):

 

https://www.vagrantup.com/
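
Getting a throwaway box up is only a few commands -- roughly (the box name is just an example):

vagrant init ubuntu/trusty64   # drops a Vagrantfile in the current directory
vagrant up                     # boots the VM and runs any provisioners
vagrant ssh                    # shell into it
vagrant destroy                # throw it away when you're done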

 

We usually skip Chef on our development VMs and just use the command line provisioner -- you just script it like you would a regular bring-up before all of these devops tools were available. Our VMs that use Chef in production are written with Vagrant first, though, so we not only have a same-as-production VM to develop against, but we also don't have to use a remote VM to get the Chef recipes tweaked.
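
The shell provisioner just runs an ordinary script inside the VM, so the "recipe" ends up looking something like this (package names purely illustrative):

#!/bin/sh
# bootstrap.sh -- handed to Vagrant's shell provisioner; everything here is just an example
set -e
apt-get update
apt-get install -y git nginx
useradd -m -s /bin/bash deploy 2>/dev/null || true   # idempotent-ish user creation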

 

I don't know if it's just us, but it seems to me that every time we Chef a VM, it takes an order of magnitude more time to get the Chef scripts just right as compared to manually deploying the same number of machines. I guess the benefit is repeatability and documentation. It often seems like these tools are trying to solve social/organizational problems with technology, which ultimately I think is doomed to fail.

 

Yeah, dumping legacy tools is frustrating. I like that both Slackware and Arch provide a replacement for `ifconfig` -- I don't know if it's the old code or merely a wrapper around `ip`. I find `ip` syntax to be kind of obtuse. It feels like it tries to do too many things.
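
Part of why it feels that way is that links, addresses, routes, and the neighbor/ARP table all live under the one command now -- roughly:

ip link set eth0 up                    # ifconfig eth0 up
ip addr add 192.168.1.10/24 dev eth0   # ifconfig eth0 192.168.1.10 netmask 255.255.255.0
ip route show                          # route -n
ip neigh show                          # arp -n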


Actually, I had an interview a few years ago and the lead I.T. guy really dogged me because I didn't know Chef. I asked, "If I know Linux and Python, how hard could it be? How many systems are you deploying each day?"

 

After the discussion shifted to OS X, I pretty much got that he was a gomer and didn't want someone who knew too much.

"Yes... I love OS X, there is something to be said for a Unix operating system I could recommend to my Grandma'

 

I.T. Admin Guy:

"Really? I like it because of Open Directory structure for user and groups. It simplifies user and group and management... We look at things professionally...".

 

Me:

"Microsoft has been doing that since Server 2000? Active Directory is really light-years beyond OS X based Open Directory for professional use?".

 

At that point I figured he was just wasting my time. Funny: he was an Australian guy living in the USA, ranting about how Alibaba did not have a chance in the USA because Americans were racist bigots.

 

 

Haha. Thanks, I'll check that out after some Node.js and jQuery UI/Bootstrap.

 

 

