813_Error

2.6.9!!!


2.6.9 is out, woot!!! So going to try this tomorrow :D Everyone get it now.



It seems they are moving too fast on kernel development these days. Whatever happened to wanting stability and reliability in the kernel? If they keep it up, I'm switching to BSD.


I guess you don't know how kernel.org development works.

The core kernel itself (memory management, process handling, file systems...) is developed really slowly.

Linux is now going to 64-bit, and it could be said that most of the development effort is on driver modules, which shouldn't make your computer any less stable than it is now.

Profetas

Linux is now going to 64-bit

Now? Linux has had 64-bit compatibility for many archs for a while.


Switching to BSD? The kernel doesn't change as much, but (with FreeBSD at least) you have to update to the latest RC quite often.


I wasn't talking about compatibility.

I was talking about effort.

And compatibility and structure are different things.

A program may run under 64-bit but only use 32-bit instructions and registers.

Making Linux fully 64-bit will be a lot of work. And as I mentioned in another post, Intel's 64-bit PC architecture, namely Itanium, is doing everything possible to not be compatible with AMD64.


Adding support for 64-bit and making it run 64-bit are two entirely different things. Remember, they need to keep 32-bit compatibility; most of the 64-bit changes are done via #define preprocessor code to maintain this.
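
Something like the following (a minimal, hypothetical sketch, not actual kernel code; the BUILD_64BIT macro is made up for illustration) shows the general idea of picking type widths and limits with the preprocessor so the same source builds for both word sizes:

#include <stdio.h>

/* Hypothetical illustration only: pick a word-sized type and its limit with
 * the preprocessor, so one source tree builds on both 32- and 64-bit targets. */
#if defined(BUILD_64BIT)
typedef unsigned long long word_t;          /* 64-bit word on this build */
#define WORD_MAX 18446744073709551615ULL    /* (2^64) - 1 */
#else
typedef unsigned int word_t;                /* 32-bit word on this build */
#define WORD_MAX 4294967295U                /* (2^32) - 1 */
#endif

int main(void)
{
    word_t counter = WORD_MAX;              /* largest value for this build */
    printf("word size: %zu bytes, max: %llu\n",
           sizeof(word_t), (unsigned long long)counter);
    return 0;
}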

You're also forgetting that compiler support must match the given arch. I run into this problem all the time with 64-bit UltraSPARC development.


Yeah, if your compiler is 32-bit your userland is 32-bit; you can chroot into a 64-bit environment, though.


Adding support for 64-bit and making it run 64-bit are two entirely different things.

Care to explain that? I have no idea how you're differentiating 'support' from 'run'. If it supports it then it can run it.


I've been using the various iterations of 2.6.9 ever since RC-1. I don't see that much changed from RC-4; weren't they only like a week apart anyway? I do like it, though: it works well, runs fast, and remains stable.


Adding support for 64-bit and making it run 64-bit are two entirely different things.

Care to explain that? I have no idea how you're differentiating 'support' from 'run'. If it supports it then it can run it.

A perfect example of this would be pearpc or any other type of hardware or software emulation. Just because you get it to 'run' doesn't mean it's running at full speed, making use of everything possible, or even running fast enough to do its job.

Then again, if it's not running fast enough to be used for anything, is it really running at all?

Perhaps I should have gone into more detail on the int size issue as well.

Please let me explain.

It's true that most of the code in question is portable C.

It's true that with the right compiler support, full AMD64 support can be had.

BUT:

Depending on the overall code flow and design, it could still not be fully compatible. Boundary checks for integer rollovers, buffer sizes for functions like 'printf' that need fixed-size buffers big enough to hold the string form of the maximum value of the types in question, and maximum values for variable types are sometimes hard-coded with #defines.

As you know, this can waste memory or even create security concerns, for example if an int's maximum value is hard-coded as ((2^32)-1) to protect against integer rollovers and that is no longer the case on the target.


#include <stdio.h>

/* Unsigned wraparound: decrementing zero rolls over to the type's maximum value. */
int main(void)
{
    unsigned int a_int = 0;
    a_int--;                     /* wraps around to UINT_MAX, e.g. 4294967295 with a 32-bit int */
    printf("%u\n", a_int);
    return 0;
}

2^32 = 4294967296 // 4 gigs is a lot of RAM, but this is just to show.

So we expect 32-bit to be used.

volatile unsigned int sys_foo_itr;               /* iterator, expected to be 32-bit */

unsigned int system_foo_table[MAX_INT_VALUE];    /* table sized by a hard-coded #define */

So what if the int can handle 64 bits? And you thought 4 gigs was a lot of RAM.

Also:

/* Programming error or invalid #define */

sys_foo_itr = 18446744073709551616;  /* 2^64 -- out of range for the type */

system_foo_table[sys_foo_itr];       /* value now invalid; a segfault is the best thing that can happen */

If you're limiting how many of something you can have by the non-negative integral values available for that type, and you're not using the full range available, then you're wasting memory.

Not to mention the bloat caused by a 64-bit userland: it's only good for kernels.

A good guide for optimizing SPARC-based stuff with gcc is here: http://www.osnews.com/story.php?news_id=6136

It talks about some of the hardware differences as well, in case you're interested.


I would just say:

When you compile a program for a 32-bit machine, the compiler will use the limits of that machine.

One simple example is that it won't use anything over 32 bits.

If the 64-bit machine is compatible with 32-bit, you won't have to change that much to be able to compile the program to run on it. However, your compiler will have compiled most of the code for a 32-bit machine, using the restrictions of 32 bits.

For you to get fully 64-bit object code, your compiler has to know all the features and limitations of a 64-bit machine.
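
As a small illustration (assuming a typical LP64 target such as AMD64 or 64-bit SPARC versus an ordinary 32-bit ILP32 target), the same C source comes out with different type sizes depending on which machine the compiler is targeting:

#include <stdio.h>

/* Print the sizes the compiler actually chose for this target.
 * On a 32-bit (ILP32) build: int = 4, long = 4, pointer = 4 bytes.
 * On a typical 64-bit (LP64) build: int = 4, long = 8, pointer = 8 bytes. */
int main(void)
{
    printf("int:     %zu bytes\n", sizeof(int));
    printf("long:    %zu bytes\n", sizeof(long));
    printf("pointer: %zu bytes\n", sizeof(void *));
    return 0;
}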


if an int's maximum value is hard-coded as ((2^32)-1) to protect against integer rollovers and that is no longer the case

Yeah, it's not, since there's a definition INT_MAX in the headers provided by the compiler. By that line of reasoning I could argue that if a pointer is set to NULL and something dereferences it and the MMU causes the program to segfault, it's the MMU's fault and we have to adjust to that, not the fact that people shouldn't have been touching NULL in the first place.

If you're limiting how many of something you can have by the non-negative integral values available for that type, and you're not using the full range available, then you're wasting memory.

Then use a variable of fixed size. There's a reason why we have int8_t, int16_t, int32_t, etc. Plus, if you concern yourself only with memory size, you're going to end up taking performance hits from alignment issues (although I'm not sure how much of an issue they are today..).
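
For instance (just a minimal sketch of the fixed-width types from <stdint.h> and the matching <inttypes.h> format macros):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Fixed-width types keep the same size on every architecture,
 * unlike plain int/long, whose widths vary between 32- and 64-bit targets. */
int main(void)
{
    int32_t  small = INT32_MAX;   /* always exactly 32 bits */
    uint64_t big   = UINT64_MAX;  /* always exactly 64 bits */

    printf("small = %" PRId32 ", big = %" PRIu64 "\n", small, big);
    return 0;
}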

Not to mention the bloat caused by a 64-bit userland: it's only good for kernels.

Let's take x86-64, because that's the 64-bit arch I know best (even though it's just a kludge built on another series of kludges and it's probably a very poor example). It's not as bloated as you may think. Only data pointers are fully 64-bit; ints are only 32 bits. Addresses within the program are 31 bits (using a medium memory model; gcc hasn't implemented the large model). Plus, the CISC instruction set keeps the size of the generated code to a manageable amount. *AND* there are more registers available, which means there can be less stack activity and therefore faster code. Plus there's another layer of translation added to the kernel if there's 32-bit support.

I'm not saying a 32-bit userland is inherently bad on a 64-bit platform, just that it's not necessarily better than a 64-bit userland.

Off of the top of my head, I can name a few applications that can benefit from a 64-bit userland: video editing, CAD, and databases.

With that being said, if your code can't easily be made 64-bit clean, your code is braindead. I just finished fixing a large project of mine (about 30k lines) so that it would be 64-bit clean. The only thing I had to fix was code that cast pointers to 32-bit integers; those became intptr_t-sized integers. It was a trivial fix. Really.
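
To give a rough idea of the kind of change involved (a hypothetical snippet for illustration, not code from that project):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int value = 42;
    void *p = &value;

    /* Not 64-bit clean: on an LP64 platform a pointer is 64 bits, so stuffing
     * it into a 32-bit int truncates the address:
     *     unsigned int handle = (unsigned int)p;
     *
     * 64-bit clean: intptr_t/uintptr_t are defined to be wide enough to hold
     * a pointer on any platform. */
    uintptr_t handle = (uintptr_t)p;

    printf("pointer stored in a %zu-byte integer\n", sizeof(handle));
    return 0;
}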

I'm using a 64-bit platform right now with a 64-bit userspace. It works fine for me.


64-bit userspace on SPARCs works OK in Linux, but it's slow, and would only really help you with number crunching.


Jedibebop: Exactly my point.

Chz: I understand what you're saying, but SPARC (UltraSPARC to be exact) is the only 64-bit arch I have available/recent experience with, so that's what I know best. If I had a PPC Apple OS and a hard drive I might be able to get a PPC computer running, but right now I'm waiting to get an AMD64 system.


/me awaits the deb pkg

I should learn how to compile a kernel from source!!


In the meantime, here is an unofficial build, and the Morphix one, here.

