Binrev Financier
Community Reputation

93 Knowledgeable

About tekio

  • Rank
    The Man In The Box
  • Birthday April 8

Profile Information

  • Interests
    Invading far-off lands for oil and cigaring interns.
  • Location
    The Blue Nowhere


Recent Profile Visitors

40,648 profile views
  1. Dallas vs. Green Bay game was OBVIOUSLY hacked by the Russians....

  2. "Supported" == don't call us if you're getting kernel panics with something else. ;-) That Xeon also supports non-ECC; assuming the MoBo does not complain, I'd use non-ECC on a desktop. Unless one needs uptimes in the months to years, I'm unsure the overhead of ECC is beneficial.
  3. I'll call that a shellacking. Hard to win NFL playoff games when the linemen are stepping on the QB in their own end zone.

  4. Actually, I had an interview a few years ago and the lead I.T. guy really dogged me because I didn't know Chef. I asked, "If I know Linux and Python, how hard could it be? How many systems are you deploying each day?" After the discussion shifted to OS X, I pretty much got that he was a gomer and didn't want someone who knew too much. Me: "Yes... I love OS X; there is something to be said for a Unix operating system I could recommend to my Grandma." I.T. Admin Guy: "Really? I like it because of the Open Directory structure for users and groups. It simplifies user and group management... We look at things professionally..." Me: "Hasn't Microsoft been doing that since Server 2000? Active Directory is really light-years beyond OS X's Open Directory for professional use." At that point I figured he was just wasting my time. Funny, he was an Australian guy living in the USA and ranting about how Alibaba did not have a chance in the USA because Americans were racist bigots. Haha. Thanks, I'll check that out after some NodeJS and jQuery UI/Bootstrap.
  5. Layered encryption really wears an HDD out fast!  Thought I had a good idea. :-(

  6. Yes. I am actually doing a CentOS tutorial for a Lynda(ish) company. CentOS is great with the GUI. To put my comment in context: I wish CentOS had an out-of-the-box CLI version like Ubuntu Server. I really need to learn Puppet. Like, really badly now. So hard to keep skills up-to-date these days; I miss the 90's and cushy I.T. work. When I installed the minimal CentOS for a CLI-only install, I jumped through a million hoops to get WiFi working. Was not pleased to see: adapter: wisdf9i80s8ifs08fsdfsdf0. I could not copy and paste. :-( My memory sucks, and the hardest part of the job (aside from tracking down packages) was memorizing wsiakjjkwresadfsadflk;kjl90. I finally renamed them to wifi0 and eth1. :-) After reading, yes, ifconfig is now considered obsolete. Guess I can blame that on myself for not keeping up. But it is still there in every other distro. I'll possibly think of updating my skillset instead of bitching and complaining. :-/
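For anyone hitting the same wall: ifconfig really is deprecated on CentOS 7 in favor of the iproute2 `ip` command. The everyday equivalents look like this (the interface shown is just the loopback, as an example):

```shell
# iproute2 equivalents of the obsoleted net-tools commands
ip -o link show          # list all interfaces, one per line (was: ifconfig -a)
ip addr show dev lo      # addresses on a single interface (was: ifconfig lo)
ip route show            # kernel routing table (was: route -n)
```

Persistent renames like wifi0/eth1 are usually done with a udev rule or a systemd .link file keyed on the MAC address, rather than renaming by hand after every boot.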
  7. Thought writing would be fun. :-(  Complete new CentOS Admin Tutorial in 3! 1... 2... (wait, my coffee needs a refill)...

  8. Also, it allows admins to set rules across a Windows domain. I worked for a company where executives were allowed to stream iTunes and anything else they wanted, but not so for the customer service people. Simple: apply firewall rules to Windows security groups in Group Policy. :-)
  9. It is my opinion that firewalling an internal network should be a layered approach for best security, and that network design should be thought out for complete security. Just getting a state-of-the-art whiz-bang 7-layer filtering firewall... if it ever has a complete hardware failure, the entire network is vulnerable until the RMA process completes; if a security hole is found in its filter mechanisms, the firewall is useless until patched by the vendor. Roughly:

     Border firewall filtering protocols and stateful sessions (ingress and egress)
       -> DMZ
       -> host-based firewalls on web servers, VPN, and other exposed services, with IDS/IPS (like Snort)
       -> internal firewalls filtering protocols and hosts
       -> layer 2/3 switches
       -> host-based firewalls with administrator-controlled rules and active IDS/IPS

     Honestly, I think layering might work great and save money in both equipment and training/hiring.
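As a sketch of the innermost, host-based layer described above: a minimal default-deny ruleset, written here in nftables syntax. The allowed ports are hypothetical and would match whatever the box actually serves:

```nft
# /etc/nftables.conf -- minimal host firewall sketch; ports are illustrative
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept    # replies to our own traffic
        iif "lo" accept                        # loopback
        tcp dport { 22, 443 } accept           # SSH and HTTPS only
        ip protocol icmp accept                # allow ping
    }
}
```

Because each host carries its own copy of rules like these, a border-firewall hardware failure or unpatched vendor bug does not leave the inside machines wide open.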
  10. My juno mail is getting flooded with Happy Birthday mailers? Would like to see stats on email spikes from registrations on the web. Interesting.....

  11. If you need a quick fix for O-rings, maybe try a vape shop. Those things are always leaking from the constant heating and cooling, so most shops have a shit-ton on hand. Just an idea. But I want to order one of those, thanks for the post.
  12. But 4 cores are sharing the same L2, which cannot be divided up per core. That's why AMD increased the clock speed to 4 GHz and even faster in some cases: it is not pulling data through the pipeline efficiently. If each core had its own L2, data could be flowing through the cache pipeline while each execution core processes data; instead, a core needs to wait for the cache pipeline to empty. This is what people mean when they say "Intel does more work per clock cycle." I think you may be a little confused about how L2 cache works: disk -> memory -> L2 cache -> L1 cache -> execution cores. With a non-shared L2 cache per execution core, data flows through that pipeline unfettered by the other cores' activity, not waiting to pull from memory and looking for cache hits that will not exist because another core is using those lines. Do you see how optimizing for this architecture can help? There is an L3 cache shared by all cores, but that does not stop a CPU stall.

      AMD L2 cache architecture:

        Disk -> Memory -> L3 Cache
          L2 Cache -> L1 Cache -> Core 00 (ALU) |
                      L1 Cache -> Core 01 (ALU) | FPU
                      L1 Cache -> Core 02 (ALU) |
                      L1 Cache -> Core 03 (ALU) |
          L2 Cache -> L1 Cache -> Core 04 (ALU) |
                      L1 Cache -> Core 05 (ALU) | FPU
                      L1 Cache -> Core 06 (ALU) |
                      L1 Cache -> Core 07 (ALU) |

      So... an execution core is processing a long algorithm and is at 100%, and a CPU stall hits: three of the 4 cores are stalled because one execution core is processing and piping data through the cache pipeline, and the others need the same L2 cache to get data from memory (remember, only so much can pipeline per clock cycle on a shared L2 cache). To make this worse, AMD does not present hardware threads the way Intel does through CPUID, so Windows doesn't efficiently put threads on each core, further compounding the problem.

      Intel L2 cache architecture:

        Disk -> Memory -> L3 Cache
          L2 Cache -> L1 Cache -> Core 00 (FPU and ALU) x2 threads
          L2 Cache -> L1 Cache -> Core 01 (FPU and ALU) x2 threads
          L2 Cache -> L1 Cache -> Core 02 (FPU and ALU) x2 threads
          L2 Cache -> L1 Cache -> Core 03 (FPU and ALU) x2 threads

      Intel: more cache hits, since each core is simultaneously executing two threads and there is no CPU stall; each core is not dependent on another finishing before it can pipeline more data. Also, most operating systems process threads efficiently this way. Throwing gobs of shared L2 cache and higher clock speeds at the problem looks good on paper, and it is good in some respects, but there is a reason most servers use Xeon and Intel. I do like AMD myself, for the record. :-) AMD simply tried ramping up the clock speed to make the CPU stalls less noticeable. Does this make sense, bro? EDIT: AMD is like the P4, which was kind of a fail: higher clock speeds while relying on cache hits. But those hits are seldom going to happen with 4 cores sharing L2; the probability of a CPU stall is magnitudes higher, and it shows in a lot of benchmarks. Intel was still behind AMD when the Pentium D came out; it was just a multi-core P4. Then the Core 2 Duo came out, which adopted AMD's approach with an L3 cache, and Intel finally transformed to its current pipeline, more closely resembling the PIII, or the P4 before Prescott. Prescott was really when Intel fell behind AMD for a bit. I cannot remember the specifics of the older Pentiums, but the PIII and the first P4 were nice; then Intel got killed with Prescott, and finally broke out with the Core 2 Duo architecture. :-)
  13. Your Xfinity WiFi is my '64. ;-)

  14. Correction to my post above: a developer cannot generally choose how threads spread across the CPU; that is left up to the operating system for the most part.
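That "for the most part" is doing some work, though: on Linux a process can at least be pinned to chosen cores from the shell with taskset (from util-linux), and the OS scheduler then only places its threads on those cores. The core numbers below are illustrative:

```shell
# Pin a command to specific cores; on an 8350 you might use -c 0-3
# to keep a job on one L2-sharing module. Core 0 always exists:
taskset -c 0 uname -r

# Show the affinity mask of the current shell
taskset -p $$
```

So an application (or an admin) can opt in to AMD-friendly placement, but absent that, thread-to-core assignment stays with the scheduler.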
  15. Yes. At work I'm using an 8350 running Linux natively, with one to two copies of Windows in a VM, and it actually does really great. Always running a Windows VM, while at the same time running about 4 instances of Chrome in Linux (I use it for all personal stuff at work, like browsing porn, reading the news, and ordering pizza). Then Gimp, and an Ubuntu Server VM running OpenStack for PHP stuff, are always running too. Average uptimes are in the weeks. It really does run smooth, even when I need to load vSphere in a VM for quick test scenarios. Was gonna get one at home, but decided to wait for the Zen architecture to come out first. EDIT: I also built stations for the graphics department, since we started posting corporate video on YouTube. Most of our designers are interns and use a Mac at school; they love the AMD stations with Win10. With the money I saved over iMacs/Mac Pros, I was able to get them two monitors and beefy hardware. So the 8350 is a workhorse for editing video as well... apparently Adobe makes good use of multiple cores. EDIT2: AMD still suffers on some multi-core operations unless the software is optimized for it; we did have this discussion before. Example: besides having only one floating-point unit per 2 cores, it has only one L2 cache pipeline per 4 cores. So if the application is not designed with AMD in mind, to use affinity for the correct cores in, say, dual threads, it can still suffer the same CPU stall it gets in single-threaded operations. With AMD the developer needs to "optimize" to use cores efficiently: cores 0,1,2,3 share one L2 cache, and cores 4,5,6,7 share another. So when "optimizing" for AMD, care would need to be taken to use cores 0-3 and 4-7 differently than on an Intel 8-core CPU, or the cache pipeline (or lack of one) can cause a CPU stall pulling data from memory. I guess the best way to describe it: Intel does less work, efficiently, while the 8350 does more work faster. The end outcome depends on the work to be performed.
Like a sports sedan versus a pickup: depends on the work that needs to be done daily.