About systems_glitch
  1. There are accessible distros out there; my only experience with them was downloading the Knoppix version for the visually impaired, which includes Speakup and talks you through the startup/login process.
  2. Huh, interesting. It's not listed in the HP reference as a supported CPU, but that doesn't necessarily mean anything. They say unbuffered ECC is the only thing supported, but registered/buffered ECC DDR3 works just fine. I just installed 32 GB of it.
  3. In August I picked up an AMD Bulldozer workstation. It was supposed to be a replacement for my AMD A8 APU desktop, but it turns out the single-core performance is *horrible*, and for most of my workload the A8 is actually faster. Well, the A8 box started to have hardware issues, and some update had started to cause Firefox to consume massive quantities of memory if left running for too long (it'd use up all 16 GB of main memory, plus 8 GB of swap!), so I switched over to the Bulldozer box as my main workstation for a few weeks. Aside from being slower than the A8 for my day-job workload, it's *loud*! These Supermicro boxes came from a production development environment; I'm not sure how they had several of these in an open-plan work area running all day. I guess everyone was deaf or wore noise-cancelling headphones. It's not 1U-server bad, but it's pretty loud.

     Anyway, I found a used HP Z420 in "barebones" config (no RAM, hard disk, or graphics) for $200 shipped. Specs:

     * Intel Xeon E5-1620 (i7 derivative, it seems)
     * 8x ECC DDR3 slots, 64 GB supported (unsure if you can use 16 GB DIMMs)
     * 10x internal SATA ports, two of which are 6 Gb/s
     * Still has PS/2 connectors for my IBM Model M keyboard
     * Shockingly, FireWire 400 on the front and back
     * Some generic SATA DVD burner
     * PCIe slots: 2x x16, 1x x8, 1x x4, 1x x1
     * Legacy PCI slot
     * USB 3 on the motherboard

     It can also boot directly from M.2 PCIe-attached SSDs, so no more having to keep a boot partition on a SATA disk, like I did with the Bulldozer box. I put the following on-hand hardware in:

     * 12 GB ECC DDR3 1066
     * Samsung SM951 128 GB SSD on a PCIe x4 adapter board
     * 1 TB WD Green storage drive
     * GeForce GTX 750 from the A8 workstation

     It's running Slackware 14.2, still with a boot partition on the 1 TB SATA disk, since I just pulled the storage out of the Bulldozer box and moved it over. I plan on doing a reinstall and eliminating the 1 TB mechanical disk. I'll probably replace it with two 250 GB WD RE3 SATA drives in a ZFS mirror -- I don't need piles of local storage; that's what the fileserver is for.

     So far it's significantly faster than both the Bulldozer box and the A8 box -- my main benchmark is how long a certain massive test suite takes to run. It was about 20 minutes on the A8 box, 30 minutes on the Bulldozer box, and 17 minutes on the Z420. I've ordered 32 GB of ECC DDR3 so that the memory currently installed can go back into the Bulldozer box -- I have a friend who's interested in it as a VM host. I'm thinking about getting a 256 GB M.2 SSD and reinstalling to that; I could use the 128 GB SSD elsewhere.

     Part of the reason I got this box is that it was cheaper than getting a new Supermicro motherboard for my Micro ATX tower (the one the A8 mobo is in currently), and the Supermicro board obviously didn't come with a CPU. It also has enough free PCIe slots that I can use an M.2 SSD and the double-slot GTX 750 and still have a free x8-or-better slot for a 10gig Ethernet card. I may end up with a Xeon E5-1660 v2 CPU in there; the single-core performance is better, and two extra cores (plus two hyperthreads) couldn't hurt with my VM load.
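     The planned two-disk mirror can be sketched with standard `zpool`/`zfs` commands; the pool name (`local`) and device paths below are placeholders, not the actual setup:

     ```shell
     # Create a mirrored pool from two whole disks -- check device
     # names with lsblk first; this destroys existing data on them.
     zpool create local mirror /dev/sda /dev/sdb

     # Verify both disks show as ONLINE under the mirror vdev.
     zpool status local

     # Carve out a dataset for general storage.
     zfs create local/data
     ```

     With a mirror, either disk can fail and the pool stays online; `zpool status` will flag the degraded vdev.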
  4. trojan

    Sub7 is ancient stuff; you can probably find it or one of its workalikes (Optix, Back Orifice, etc.) on some skiddie/malware archive. I suppose it could be useful to experiment with in your home lab if you've never seen it. Basically, it's remote command-and-control malware. In middle school, my friends and I played pranks on each other and some of our less tech-savvy friends with it. I'd imagine even the worst possible modern virus scanner would pick it up. It may not work with anything XP or newer; IIRC, when we were goofing around with it, most people were running 98, with a few on ME. We thought we were hot shit running Windows 2000.
  5. Upgrade if possible; there's a Perl script to run otherwise. Slackware has updated packages out already; Arch Linux does not at time of posting. If you want to build from source until a package comes out, you can run `./configure --prefix=/home/you/bin && make && make install` to build locally. Obviously, change the prefix path to something that exists.
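     Spelled out step by step, that local build looks roughly like this (the prefix path is an example; point it at any directory you can write to):

     ```shell
     # Configure the source tree to install under a directory you
     # own, so `make install` needs no root privileges.
     ./configure --prefix="$HOME/bin"

     # Build, then install into the prefix chosen above.
     make
     make install
     ```

     Remember to put the resulting `bin/` directory on your `PATH` if it isn't already, or the packaged version will keep shadowing your local build.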
  6. Yeah, when I go into full hack mode, chasing a bug or a weird problem or something, it becomes its own unpaid full-time job for a while. I do it because I like it, not because it makes me money. I don't like it enough to try to monetize it; that's why I'm still a programmer/hardware design guy for a day job.
  7. As long as humans write code, there will be vulnerabilities. A safe career choice, if you like doing it.
  8. I don't think there's anything wrong with it; I think I just had expectations that were too high for it. It's a machine from 2011 that's known for having lousy single-core performance. It does compile C/C++ code *really* fast with `-j18` in the MAKEOPTS. I may end up selling it to a friend -- I paid him a visit a few weekends ago, and apparently his VM server died, and he's been running his production VMs for his small business off a laptop! I was simultaneously surprised and not surprised that Gentoo was actually slower. I've maintained for years that there's no real speed advantage to Gentoo for most hardware situations, but I didn't think it would be that much slower!
  9. So I added the Samsung SSD and got to pretty much equal run times with my AMD APU box. Obviously multitasking is still way more usable on the Opteron box, with all of those cores! As an experiment, I installed Gentoo and took care to optimize USE and CPU flags for the architecture. After a day of compiling and working around bugs/oddities, I finally got around to running the test suite I've been using as a benchmark -- it's 5 or 6 minutes *slower* with Gentoo. I'm sure Gentoo experts will tell me I'm doing it wrong...
  10. I'm pretty sure you're spot-on with the "expected to be using a GUI" remark. I think that's also why the firewalld syntax is as obtuse as it is: it's not meant for anyone to hack at anymore. I guess it also doesn't matter if you're using some devops solution like Chef, Puppet, Ansible, Salt, or whatever. I definitely still prefer `ethX` naming; even the BSDs' convention of using the driver name plus a number (e.g. `em0` for Intel gigabit, `vr0` for VIA Rhine), which I at first liked less than `ethX`, is better than the random string of garbage you get nowadays. At least the BSD approach provides additional useful info!
  11. Pretty sure the interface renaming has more to do with the systemd migration than anything else. It's supposed to be a unique identifier for the interface, for... reasons? Also, `ifconfig` is deprecated basically everywhere... I guess the distinction between a system that acknowledges legacy support and one that doesn't is whether the `ifconfig` compat shim is installed in the base system, or if the answer is basically, "fuck you, learn `ip` syntax." And then of course there's the transition to firewalld, which is almost but not completely capable of doing the same things as iptables, and apparently still uses iptables under the hood.
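      For anyone forced to make that jump, the common net-tools invocations map onto `ip` roughly as follows (interface names and addresses are examples):

      ```shell
      # net-tools                          iproute2 equivalent
      # ifconfig                           ip addr show
      # ifconfig eth0 up                   ip link set eth0 up
      # ifconfig eth0 192.168.1.10/24      ip addr add 192.168.1.10/24 dev eth0
      # route -n                           ip route show
      # arp -a                             ip neigh show
      ```

      The `ip` forms are wordier, but they expose everything (routes, neighbors, links, addresses) through one consistent command family.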
  12. I picked up a 128 GB Samsung SM951 PCIe-attached SSD and am going to grab a PCIe x4 adapter for full bandwidth. That ought to rule out a disk bottleneck.
  13. So I'm up against some sort of bottleneck which I haven't had time to identify yet. The full test suite of one of the day-job applications I work on runs significantly slower, but won't even utilize a full core. On my APU box, I was running a load average of 2.5 - 3 with all fans spun up; this box sits around a 0.75 load average. I'm wondering if it's a disk bandwidth issue -- I currently have the 1 TB Hitachi/HGST SATA drive that shipped with the machine. I'm thinking I'll order a Samsung M.2 SSD and PCIe adapter bracket; I wanted to upgrade to SSD anyway, and direct PCIe attachment will rule out any disk/controller issues. I knew going into this, from our server deploys, that single-core performance wasn't awesome with these CPUs, but not even being able to load down a single core seems like something else is the bottleneck.
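      One quick way to check the disk-bandwidth theory while the test suite runs is to watch per-device utilization; this sketch assumes the `sysstat` package is installed for `iostat`:

      ```shell
      # Extended per-device stats every 2 seconds. A %util column
      # pinned near 100% while CPU sits mostly idle points at the
      # disk as the bottleneck.
      iostat -x 2

      # Cross-check with vmstat: a high "wa" (I/O wait) column
      # tells the same story from the CPU's perspective.
      vmstat 2
      ```

      If `%util` stays low too, the bottleneck is more likely lock contention or serialization in the test suite itself rather than storage.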
  14. I moved away from it for a while due to everything being locked to "stable" versions, which back then meant old. It seems like Slackware is really keeping up nowadays, though -- certainly ahead of what RHEL/CentOS ships with most of the time, and IIRC ahead of stock Debian too. That's tracking release branches, not -current.
  15. Slackware64 14.2 installed; running a lot of compiles and building up part of my dev stack. Currently compiling Node.js -- `make -j16` is nice. Got some correctable ECC errors while building Ruby, so it looks like there might be a RAM upgrade sooner rather than later...
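      Correctable ECC errors like those are reported through the kernel's EDAC subsystem, so they can be counted from the log; a rough sketch (the sysfs paths exist only where EDAC modules are loaded for your memory controller):

      ```shell
      # Count EDAC correctable-error (CE) lines in the kernel log.
      dmesg | grep -ci 'EDAC.*CE'

      # Per-DIMM running totals, where the edac sysfs tree exists.
      grep -H . /sys/devices/system/edac/mc/mc*/csrow*/ce_count 2>/dev/null
      ```

      A slow trickle of CEs on one DIMM usually means that module is going bad; a burst across several can also point at a marginal memory controller or voltage issue.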