dinscurge

Everything posted by dinscurge

  1. Well, the L2 cache was massive in comparison (2 MB vs 512 KB), but that was mostly to minimize the hit from cache misses, branch mispredictions, etc., and it came with really high latency because it's more complicated. The later revisions (Steamroller/Excavator) made definite improvements in scheduling and utilization of both threads per module, but those never made it to the AM3+ platform, supposedly because the 28nm process wasn't as good for that as the 32nm SOI. Not sure exactly how the FPU works, but for older code (non-AVX/XOP, basically) a thread should be able to use just a single one of the two FMACs, whereas Phenom II couldn't do AVX at all, only the 128-bit and earlier operations. But depending on the pricing, Ryzen could be pretty cool, given that even an i7-990X (Nehalem refresh/die shrink, 6 cores) can render faster than an i7-6700K.
  2. Well, Zen is pretty similar to the Intel design, where Bulldozer (and its children) were totally different, like a David core to Intel's Goliath core. A Zen core is functionally about 1.5 of the integer cores plus the entire FPU of a Bulldozer module, somewhat beefed up and changed, with a fairly cheap form of SMT on top. It's going to be much faster single-threaded, and on the back end it will still get good utilization by switching in other queued work any time there's a stall or fetch. Bulldozer was only good if you loaded every thread to the max so it would use all the units; in any kind of mixed workload that went single-threaded, Intel would pull ahead, say the 3770K vs the 8350. But in specifically highly parallel workloads like transcoding, ray tracing, encryption/decryption, or file compression/decompression, the AMD threw down. It was kind of a weird compromise, sitting somewhere between a regular Intel core, a Phenom/Athlon II, and a Xeon Phi.
  3. All I could gather from Intel was that it's at least the same socket, but with two of them it's easily faster than a 6950X in multi-threaded work, more like a 12-15 core Broadwell, except maybe with lower turbo speeds/single-threaded performance. The motherboards are usually expensive though :/.
  4. If you were looking to keep the box, you could maybe get a massive upgrade pretty cheap: http://ark.intel.com/products/75275/Intel-Xeon-Processor-E5-2670-v2-25M-Cache-2_50-GHz assuming the BIOS/mobo could support it: http://www.ebay.com/itm/Intel-XEON-E5-2670-2-6-GHz-20-MB-8-Core-CPU-Processor-SR0H8-socket-LGA-2011-/282247198778?hash=item41b73e783a:g:drQAAOSwx2dYIAE8
  5. Not surprising, considering the other stuff, say data: your limit only applies to non-roaming data, and there's a separate cap that's significantly lower. On the unlimited 4G plan, for example, you get a whopping 50 MB of domestic roaming data, lol. https://support.t-mobile.com/docs/DOC-3299
  6. g? (probably) The bartender asks what I thought of this election cycle. So I scratch my chin and think for a second... it's like Alex Jones trolled the set of The Young Turks at the RNC.
  7. Not sure exactly why it would, if you had a program manually configuring what your computer(s) were using: say, ping Google at an interval, and if it fails to return properly, try a different interface/network. You'd really only need dynamic DNS or something for automatic failover where you don't lose a session/stream and the connection never breaks. If you don't need that capability, it should be easy enough, especially since many uses aren't bothered by a break. Say you're watching a video on YouTube: you can switch from 3G/4G to wifi, or toggle airplane mode, and when the network resumes it just starts grabbing the video again. A torrent simply won't download between the network going away and the client switching to a different source. An HTTP download, on the other hand, would fail, and in most cases you'd have to start from the beginning. So for home use it's probably not that big a deal, considering you might just have to press F5, whereas without failover you'd have no network until the outage ended, however long that is. For a server or other higher-priority network role, it might be a big deal.
  8. I'm sure it's been asked, but I figure I'll post anyway just to make some traffic, lol. The question is: do you have any tips for creating your own file format(s), other than the obvious approach of writing out literally every piece of data it needs to have? The specifics: I'm working on a project in C where I'm basically making D&D as a 'shell', if you want to call it that. It's a text game where you just type in what you want to do; you can type 'balance' and it runs the code for a balance check, and you can type in the other program names, since I haven't added them all as functions at this point. Being D&D, there are already 60+ of them, so I'm not sure what all I'd need as far as pointers to move all the variables around; keeping them separate also speeds up compiling when I add new functions, since I don't have to rebuild the entire program. None of that is strictly needed to know, but: I'm making a quest/job 'engine', I guess you could call it. Instead of having the jobs/quests compiled into the program, it seems more useful to have them external and have the program gather the data from those files. So I need to make a 'format' in which this data will be stored, telling the program what type of job/quest it is so it can use the appropriate loops/sections, because different ones need different calculations and different amounts of text. A very simple example: a guy offers you work moving boxes into a store room from a cart out front. There's the string for the offer, then you move the boxes, then a string for when you tell him you've finished and he pays you. Whereas if it had been bandits ransacking a town, you might also have to talk to the bandits during the confrontation, and since that's a fight/battle, it requires a lot more calculations and variables than the box example.
So at this point there's no program to model the files after, nor files to model the program after, since it's a fairly specific job where I want to use the least amount of data I can get away with. The tips would be for an entirely new thing, not for expanding on anything that already exists. And yes, I know, terrible grammar, etc., lol.
  9. Made a couple of other things after I figured out some of the stuff, lol :p. One is a program that creates/manages HTML redirection pages. Pages are registered to emails, so I can change all the pages registered to a given email, ban all of someone's registered pages if a takedown notice from the 5-0 comes in about bad stuff, update the email for all sites or just one, change the URL for the redirections, etc. The idea is sort of the opposite approach to Freenet, in that anyone can run a 'DNS'. I'm not sure how to go about automation, though. The point is to not use ICANN or any other gatekeepers, so doing SSL/TLS would give certificate errors for most users. It would be first come, first served: a page with three forms, say, one for the desired name, one for the URL/path to the file, and one for the email to register it to. The data isn't transmitted until they press go, and the only attack I can foresee is someone sniffing the traffic and injecting their own registration before theirs arrives, to steal the page before they get it, beyond being able to see who registered a redirection and where it points. That would probably be built with lighttpd or something to make it a single package, but I'm not sure it's really worth it, lol. In its current state it's just manual configuration, a utility that asks for the name, then asks for the URL if the name isn't registered, otherwise shows the email it's registered to so you can confirm before using one of the ban/edit options, then asks for the email if it wasn't registered, and then drops back to asking for a name. I might remake it as individual tools to add/edit/ban and keep both versions. If someone wanted a change, they'd write to an email address you set up to receive that traffic.
For the web server, in manual configuration it can be basically any web server, pointed at the provided index (or make your own) with a search bar where you type the name, which redirects to the redirection page; or the server prints your custom 404 saying the name isn't registered and, if they want to register, to follow these steps, etc. So you could load, say, stankdawg.fap in the index 'search bar', or go to your-ip/stankdawg.fap to directly access the file stankdawg.fap.html, if it's registered. Basically a make-your-own TinyURL where people pick the names. Since it's just static HTML redirection, as long as the path is reachable it doesn't matter whether it's local, intranet, or internet. Figure it might be useful for something like setting up a network at a college with your own knock-off Qoogle where you can get the lowdown on all the clubs, classes, and student activities. Edit: oh, also a random multiple one-time-pad thing, and some other dumb stuff, lol.
  10. Ooh, right you are.
  11. Was that a glitch or a pun? But anyway, yeah, basically: if you're an electrician or plumber or some other generic trade in the US, assuming you don't move a lot, the code doesn't seem to change very rapidly. You kind of just go to school once, and after that it's "oh, this new wire is the trend" or doing PEX instead of hardline or whatever, with exponentially more research going into finding suppliers or good deals on supplies than into what you're supposed to use or keeping up with codes. With security stuff, things change a lot more rapidly, so there's a lot more unpaid research on your own time just to keep your chops up. If you enjoy research and learning on their own merits, regardless of pay, it can be very rewarding; otherwise you might want to find an angle or specific field that's a bit slower paced.
  12. Dunno. Since humans have to write the code that would then be writing code, how would they assure there were no bugs in the first one that would create even more bugs in the later ones? But I am waiting for them to start calling people to arms: you don't have the right to install any programs you want, you can't program your own computer, you might be up to no good. Edit: thought I replied earlier, but I guess not, lol XD. The position will probably be around for a long time, but it's also a relatively demanding one. If you want to be legit, like, say, Moxie Marlinspike, you have to know your stuff; you have to research basically forever. It would be like being an accountant where the math is constantly changing: no, you don't use a calculator, you use a Japanese abacus; you can't do multiplication, you have to do addition, etc. The core knowledge you have stays useful, but all the programs, standards, and protocols get patched, forked, and deprecated, and new ones get introduced. If you specifically target a protocol or something for research and can consistently deliver results, you might be able to get away with that, but most likely you'd end up at a job auditing their programs' code, or doing freelance penetration testing, where you'd have to be up on all the popular program versions, attacks, etc.
  13. Basically, for something like a VM just running a TeamSpeak server or some other low-performance thing, it probably doesn't matter much. But if you had, say, 4 gaming VMs at once with only 1 HDD, you'd only get 1/4 of the bandwidth if you tried to load a game on all 4 at the same time, whereas with 4 drives in JBOD, a RAID array, a ZFS pool, whatever, you'd get about 4x the bandwidth of one drive, about the same as one drive per VM. Similarly, with a dual-socket quad-channel system and a motherboard that uses all 8 channels, you'd only get almost the memory bandwidth of 4 desktop dual-channel systems, since you'd be running lower memory frequencies. So if you tried to simulate, say, 8 Pentium dual-core or i3 systems, you couldn't achieve the aggregate memory bandwidth of 8 real ones, and memory-intensive work on more than 4 of them at once would degrade significantly compared to separate machines.
  14. https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF seems like fairly new stuff, but yeah, storage and memory bandwidth will be performance issues. There's not much you can do about memory bandwidth; if you run lots of heavy tasks on all the threads, it will bog down, but that's the nature of the hardware.
  15. Um, you'd want to look into hardware passthrough so you can pass a GPU directly to a VM, assuming you want nearly full performance of, say, a quad core with a dedicated GPU for each of 3 VMs. The Linus Tech Tips channel did a similar thing twice on YouTube, something like '8 Gamers, 1 CPU'. Pretty sure you can do passthrough in some open-source stuff, meaning you don't have to use unRAID; people usually seem to do it to run one high-performance Windows VM on their Linux box for games that don't work with Wine or whatever. But that's pretty much it; otherwise it's pretty basic, and once you've got one VM going, it isn't too hard to do two or more.
  16. Dunno, lol. I always thought it was sort of on you to do the work, but then I've only ever messed with the minimal install where you build everything. I just mean GCC may not be the best option at all, especially for certain processors/versions, without even getting into highly optimized stuff like the Intel compiler, which is super expensive and only works with Intel chips, and specific ones at that.
  17. Dunno, I'd guess you'd have to know all the right options, and even which compiler you should be using, for the best optimization. But I was always under the impression it was more about overall system efficiency than any individual program running significantly faster: trimming everything possible from the kernel, only having the tools you actually use/need. Slightly faster boot times, a slightly faster drive because of the extra free space, having things compiled for a brand-new instruction set if you need that; a bunch of small things eventually adding up to some noticeable gains. If you want to see it do something fast, try HandBrake or 7-Zip, lol.
  18. Forgot the power stuff: remember that the usual bedroom circuit is only 15 amps, so you don't want to go over ~1600 W as an absolute max on a single circuit unless you have a dedicated one rated higher. If you go with multiple PSUs and want redundancy, you can make or buy Y-cables for the 24-pin/8-pin CPU connectors and such. I'd still guess that aiming for ~400-500 W PSUs is the cheapest route, like two per box at maybe $15-25 a shot: people usually want excess/optional expandability, so the lower-wattage units aren't nearly as desirable, whereas at 750-850 W people tend to keep them until they die. You could put out a Craigslist ad or something, 'looking for 400-600 W name-brand PSUs'. Network: https://www.amazon.com/HP-NC364T-Gigabit-Server-Adptr/dp/B000P0NX3G/ (maybe not that particular card, but one that works with your OS of choice would be preferred; there are probably loads of cheap used ones on eBay, since Sun and a bunch of other server manufacturers made them). It takes more work to configure the networking for max speeds, but those aren't that expensive, and you'd just need a 24-port switch or two and a bunch of patch cables, or a spool and connectors, and you'd be able to push 4-6 Gb through each box. As with everything else, how much bandwidth you'd actually use or need is highly dependent on the workload. Say you wanted to render multiple video files on 2 machines: giving each box half the files or frames would probably be faster than having each box render half of every frame, since you avoid the latency and networking overhead of waiting for one box to split the frames, send them to the other, and reassemble afterward. It would be silly to couple them that tightly for that kind of task when you could just split the workload and let both run full bore until it's done.
  19. Yeah, I'd definitely assume it would be harder, if not impossible, to have all connection types able to switch over mid-session. But if you don't mind them resetting, it should be very easy, lol.
  20. GPU: a 7950/70 would be great if you need or want double precision for whatever reason; they had 1/4-rate FP64, so it's straight ripped. A 7970 has more double-precision FLOPS than a 390X, lol: the GHz Edition is around 4 TFLOPS single, 1 TFLOPS double, where a 290X is around 750-850 GFLOPS FP64 instead of 950-1000. I'd look up benchmarks: some stuff runs great on NVIDIA, but for mining, well, the 1080 was the first NVIDIA card that could beat the AMD flagship, lol, and AMD just hasn't released a new one, so it's not even really a comparison currently. NVIDIA's arch seems very serial, so at a similar power range, the workloads you're doing will determine which is faster.
Hardware: the PCIe lanes only make a difference for workloads where the data is larger than the frame buffer but small enough to deliver at high speed. If it's more than your system memory and you have to get it from drives, even an NVMe x4 SSD couldn't saturate an x8 slot, so having x16 probably won't do much except for data that fits inside system memory, unless you go with some kind of RAID setup that can reach significant speeds. If you're making your own case or rigging things up, you could look into riser cards/cables to use more slots, though there's a slight performance degradation.
Networking: same as hardware, it depends on the workload. For extremely heavy network loads you could increase speeds a lot by going to a quad-port card or 10 Gb or something, but if it's just serving a page where you see the results and tell it what to do, you shouldn't need much at all. If you had a NAS or something delivering files to this machine, the network would limit the speed of operations, but if all the data is already on the box, you just need to figure out the actual network load :p. If you can get drivers for quad-port cards, they might be a decent cheaper option: 4x gigabit plus 1 or 2 ports on the mobo, without having to get a 10 Gb switch, at least a significantly lower-cost option than most fiber, although slower.
PSU: other than used, something like an EVGA SuperNOVA G2 1300 W is $175 on Amazon currently, which seems fairly cheap for new (they're Super Flower, which is good). You only get a single 12 V rail, so if something happens, it will be more violent about it, though with the big units you'd still be pushing 30+ amps through a single rail, so a short will jack things up either way. You could also get a tester if you don't have one and just collect a bunch of lower-power PSUs and make or buy jumpers, probably the cheapest route: dumpster finds, or if you have a Free Geek or something like that near you, a bunch of 400-600 W units could probably be had really cheap, but then you have to make a case or something.
Storage: already kind of covered, lol. Anyway, super long post, but just some thoughts, lol.
  21. Never tried it, but it sounds like it should be fairly easy in theory, even if you wanted the traffic to switch automagically: have, say, a 'service' running that pings a handful of servers like Google at an interval, and upon whatever condition, takes the corresponding action. Using static configuration(s) should be easier than DHCP, but DHCP should be doable if you configure the client to ignore the default gateway it's told and supply your own, I guess? But it's one of those things I haven't seen a really good solution for: if someone has 3-4 internet connections, deciding which to use for what, etc.
  22. Once you've used a different disk/controller, that should eliminate any of those potential causes. Otherwise, run down the usual checklist of possible random things: settings/configs for programs, build versions, reported bugs, anything like that. If you're using Linux, who knows whether the kernel you're running has the SMP module or whatever; there might even be such a thing just for the G34/NUMA stuff (joke aside, I haven't looked into it much, but once you're talking sockets vs cores, there might be a kernel option or something applicable to this type of setup) and other remotely applicable conditions.
  23. I tried it once, but the install annoyed me :/. It was either take the super-bloated entire DVD or pick every single file individually or something. And yeah, Debian stable always seems super old. Not really sure what I'm going to switch to, since the sid-branch distros seem to keep having weird fallouts: sidux, aptosid, siduction; then there's LinuxBBQ, which seems to be a roll-your-own kind of thing or something, idk XD. Even those make it pretty easy to break stuff: you can dist-upgrade on siduction, but you have to do it within about a month or it's very likely to break things, and when the release cycle slips so hard you might get one release a year, if you don't get it right away, you can't do anything other than manually upgrading and hoping it doesn't break. The regular distros may seem stable, but that's how they get there: people wading through nearly every kernel update breaking something.
  24. Hmm, sounds weird. You'd probably need to look at a bunch of stuff to try to figure it out if that doesn't work, lol: microcode/firmware, maybe recompiling the software or getting a different build with different compiler options, who knows XD. Most of the issues I've heard of are things like some versions of 7-Zip not using all the cores, or performance-optimization issues relating to NUMA. I'm not 100% sure how it operates, but besides there being 2 sockets, G34 is an MCM: pretty sure each of the 2 dies per 'chip' has its own memory controller, and the L3 might be split between the dies, so accesses to channels on the other die might run at lower speeds. But that's really specific optimization stuff; it should have nothing to do with cores not being used at all.
  25. Ooh yeah, I was wrong XD: Abu Dhabi is the Piledriver one. I just meant it might be one of those easily forgotten things if people don't have dual-socket experience: you want to populate slots for each memory controller, so the minimum would be 8 sticks for full speed, and if you only populate one socket, the other memory controller isn't there, so you probably can't use its 8 slots at all. Just like it might be easy to miss the requirement for dual 8-pin CPU power (usually, anyway), lol :p. Edit: with the newer SoC stuff AMD is going toward, if the servers follow suit, then depending on the motherboard, populating only one socket instead of 2 would lose you PCIe lanes as well. With current gaming setups and multiple graphics cards, a 990 chipset could give you 2 x16 slots while a 970 only has 1; on the lower-end consumer boards you can plainly see from the solder points which slot is (or can be) x16, but with something like 2011-v3, where different chips have different numbers of lanes, different slots operate at different speeds with different chips, which I'm sure leaves some people scratching their heads about their low-performing RAID or whatever, lol.