Honestly though, how many of your kernels or UBEs need to address more than 2GB of RAM (or 3GB with PAE, blah blah)? Not many, I hope! If yours do, there are probably other issues you need to deal with first.
To me it seems pretty simple too: we activate 64 bit tools and then build a full package using 64 bit compile directives. We end up with 64 bit pathcode-specific DLLs or SOs and away we go.
The thing is, don't forget that you need to massage your code to ensure that it is 64 bit ready. What does this mean? I again draw an analogy between char and wchar - remember the Unicode debacle? Just think about that once again. If you use all of the standard JDE mallocs and reallocs, all good; but if you've ventured into the nether-regions of memory management (as I regularly do), then there might be a little more polish you need to provide.
This is a good guide with some great samples of problems and rectifications of problems, quite specifically for JDE:
https://www.oracle.com/webfolder/technetwork/tutorials/jdedwards/White%20Papers/jde64bsfn.pdf
In its simplest form, I'll demonstrate 64 bit vs 32 bit with the following code and output.
#include <stdio.h>

int main(void)
{
    int i = 0;
    int *d;

    printf("hello world\n");
    /* sizeof yields a size_t, so use %zu rather than %d */
    printf("number %d %zu\n", i, sizeof(i));
    d = &i;
    printf("number %d %zu\n", *d, sizeof(d));
    return 0;
}
giving me the output
[holuser@docker ~]$ cc hello.c -m32 -o hello
[holuser@docker ~]$ ./hello
hello world
number 0 4
number 0 4
[holuser@docker ~]$ cc hello.c -m64 -o hello
[holuser@docker ~]$ ./hello
hello world
number 0 4
number 0 8
Wow - what a difference, hey? If you can't get the 32 bit version to compile, you are going to need to run this as root:
yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 ncurses-devel.i686 --setopt=protected_multilib=false
The size of the basic pointer is 8 bytes, so you can address way more memory. This is the core of the change to 64 bit, and everything flows from the size of the base pointers.
Basically, addresses are 8 bytes, not 4 - which changes pointer arithmetic, structure sizes and a whole heap of downstream things. So when doing pointer arithmetic and other cool things, your code is going to behave differently.
The sales glossy from Oracle is good too - I say get to 64 bit if you can. It makes four main points:
1. Moving to 64-bit enables you to adopt future technology and future-proof your environments. If you do not move to 64-bit, you incur the risk of facing hardware and software obsolescence. The move itself to 64-bit is the cost benefit.
2. Many vendors of third-party components, such as database drivers and Java, which JD Edwards EnterpriseOne requires, are delivering only 64-bit components. They also have plans in the future to end or only provide limited support of 32-bit components.
3. It enables JD Edwards to deliver future product innovation and support newer versions of the required technology stack.
4. There is no impact to your business processes or business data. Transitioning to 64-bit processing is a technical uplift that is managed with the JD Edwards Tools Foundation.
This was stolen directly from https://www.oracle.com/webfolder/technetwork/tutorials/jdedwards/64bit/64_bit_Brief.pdf
Okay, so now we know the basics of 64 vs 32 - we need to start coding around it and fixing our code. You'll know pretty quickly if there are problems; the troubleshooting guide and Google are going to be your friends.
Note that there are currently 294 ESUs and 2219 objects that are related to BSFN compile and function problems - the reach is far.
These are divided into a number of categories, so there might be quite a bit of impact here.
Multi foundation is painful at the best of times, and this is going to be tough if clients want to do it over a weekend. I recommend standing up new 64 bit servers and getting rid of the old ones in one go. Oracle have done some great work to enable this to be done gradually, but I think just bash it into prod on new servers once you have done the correct amount of testing.
This is great too https://docs.oracle.com/cd/E84502_01/learnjde/64bit.html