
what's the big deal with parallelism?



#1 jason87x

    SCRiPT KiDDie

  • Members
  • 21 posts

Posted 23 December 2010 - 08:57 PM

I was in a computer architecture class last semester, and the last chapter was about all this multiprocessor stuff. It's quite confusing and I didn't really learn it that well.

What's the big deal about it? Do multiple cores really provide the speedup they promise? Is task-level parallelism a good idea, or do the separate processes need to communicate with each other a lot?

And how does vector processing play into things? How many applications really rely on it? I know a lot of this is geared at graphics and sometimes sound, but does it have much benefit for normal processing? Will it change how programming is done in a serious way?

I guess this is kind of like the change from learning calculus in a scalar way (Calc 1 and early Calc 2) to a vector way of thinking (vector calculus). The vector side is still more complicated for me: I understand calculus from a scalar perspective, but I've forgotten a lot of the vector calc formulas and how they relate back to the scalar world. I wish math were taught in a more generalized way that accounts for vectors and matrices (arbitrary R^n) from the start.

#2 n3xg3n

n3xg3n

    "I Hack, therefore, I am"

  • Members
  • 960 posts
  • Gender:Male
  • Location:(703)

Posted 25 December 2010 - 07:40 PM

Basically, we are reaching an impasse when it comes to processing power. We can still make processors smaller, but the rate of speed improvement is tailing off. As a result, the only way to keep getting a huge amount of processing power is to add more processing cores, not to make each core faster.

Dual-core CPUs are already standard for low-end computers, with quad-core (and hyperthreaded) CPUs the standard at the midrange. High-end machines already have between 6 and 8 cores available for computation, and supercomputers have thousands of CPUs. We've stopped (or slowed down considerably in) making individual CPUs faster, but we are adding ever more cores to them.

To fully utilize the multicore CPUs that are fast becoming the norm, a task has to be able to break its computation into parallel work units. I predict that in the near future we are going to see a huge amount of industry pressure to develop multicore-ready applications and algorithms that take advantage of parallelism.
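To make that concrete, here's a minimal sketch in C of splitting one computation into independent work units with POSIX threads. The array size, thread count, and names are all made up for the example; the point is just that each thread works on its own chunk and the results get combined at the end.

/* Sum a big array across 4 threads. Compile with: gcc -pthread sum.c */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static double data[N];

struct chunk { int start, end; double sum; };

static void *partial_sum(void *arg) {
    struct chunk *c = arg;
    c->sum = 0.0;
    for (int i = c->start; i < c->end; i++)  /* this thread's slice only */
        c->sum += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1.0;  /* dummy data */

    pthread_t tid[NTHREADS];
    struct chunk ch[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        ch[t].start = t * (N / NTHREADS);
        ch[t].end   = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, partial_sum, &ch[t]);
    }
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += ch[t].sum;  /* combine the independent work units */
    }
    printf("total = %f\n", total);
    return 0;
}

Notice the chunks never touch each other's data, which is exactly the property (little communication between work units) that makes a task parallelize well.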

Vector processing basically means performing the same calculation on multiple values at once with a single instruction (often called SIMD). If we are looking into parallelism, this is a good thing: it's parallelism at the level of individual instructions.
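For example, here's a tiny sketch using the x86 SSE intrinsics (assuming an SSE-capable CPU; real code would check for support first) that does four float additions in one instruction:

#include <xmmintrin.h>
#include <stdio.h>

int main(void) {
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
    __m128 c = _mm_add_ps(a, b);   /* four additions at once */
    float out[4];
    _mm_storeu_ps(out, c);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}

This prints 11, 22, 33, 44: one _mm_add_ps did the work of four scalar adds. Graphics and audio code is full of loops like this, which is why those fields benefited first.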

Parallel programming has already affected programming in a big way, but currently the place you are most likely to run into it is a research institution. Scientific computing is really big on parallelism because the simulations and calculations being performed are simply too complex and long-running to perform on even the fastest single core available today. That is why you occasionally hear about researchers being awarded upwards of 65 million hours of compute time on a supercomputer. Run on one core, that would take thousands of years, but spread across many thousands of processing units it becomes a feasible amount of wall-clock time.
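To put rough numbers on that (the core count here is just an illustration): 65,000,000 hours / 8,760 hours per year is about 7,400 years on a single core, but spread evenly across 10,000 cores it's 6,500 hours of wall-clock time, roughly nine months.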



