Archive for the ‘RDBMS’ Category
Note to self and others!
When importing large Oracle dumps (sizes in the GBs):
- start early
- always disable ARCHIVELOG… always
- set UNDO retention to a small value, let’s say 5 seconds
- get coffee…
- remember to re-enable ARCHIVELOG… before going back to production
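In SQL terms, the checklist above looks roughly like this (a sketch, run as SYSDBA; adapt to your own environment before trusting it):

```sql
-- Disable ARCHIVELOG mode before the big import (requires a bounce to MOUNT):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

-- Keep UNDO retention short (in seconds) for the duration of the import:
ALTER SYSTEM SET undo_retention = 5;

-- ...run the import, drink the coffee, then put ARCHIVELOG back:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```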
Just realised that it’s a new day. 2 AM, and 16 hours since I left home. Another one of those days.
As a techie this happens really often; sometimes I like it, other times I just hate it. I'm going to have to explain this to my girlfriend again.
I'm importing tablespaces, and it has to be done by 6 AM: it's a production application, and if it's not online and fully functional I'm going to get a lot of messages from a couple of thousand users.
Lots to think about while waiting, and a lot of coffee to drink.
I wish to thank the guys who designed the Oracle imp utility, which keeps on fighting on the remote datacenter, although the power in the office went down twice in the last two hours, taking my shell with it.
top - 04:09:14 up 50 days, 14:11,  3 users,  load average: 2.52, 2.44, 2.37
Tasks: 156 total,   2 running, 154 sleeping,   0 stopped
Cpu0  :  0.0%us,  0.0%sy,  0.0%ni, 100.0%id,  0.0%wa
Cpu1  : 76.2%us, 23.8%sy,  0.0%ni,   0.0%id,  0.0%wa
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni, 100.0%id,  0.0%wa
Cpu3  :  0.0%us,  1.0%sy,  0.0%ni,   1.0%id, 98.0%wa
Mem:   6056016k total,  6037232k used,    18784k free,    86944k buffers
Swap:  6144852k total,   117564k used,  6027288k free,  5157960k cached
Only 4 gigabytes left; with a little luck I'll be done by 4 AM.
Well, sometimes you need to do it. My case is simple, a lot of tables created without specifying a tablespace, by a DBA-granted database user.
The procedure is quite straightforward, although it took me some time to figure it out. Since we were migrating to a new server, I considered it a good time to clean up the tablespaces, add some new indexes, and partition some tables, so the first step was to exp/imp the data. So it's tablespace time…
I created the new tablespaces, added the fat datafiles, and created the users. As the data had previously been dumped by the DBA-granted users mentioned above, and because a full dump takes about 4 hours, I was stuck using those dump files.
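The setup step looks roughly like this (all names, paths, and sizes here are illustrative, not the ones from my migration):

```sql
-- New tablespace with one fat datafile:
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/PROD/app_data01.dbf' SIZE 8G;

-- New user, defaulting to (and with quota only on) his own tablespace:
CREATE USER app_user IDENTIFIED BY "changeme"
  DEFAULT TABLESPACE app_data
  QUOTA UNLIMITED ON app_data;

GRANT CREATE SESSION, CREATE TABLE TO app_user;
```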
Because only a DBA user can import what another DBA user has previously exported, I had to grant my users DBA privileges. I imported the dump for the first user, giving him quota on his own tablespace, and everything went OK. Moving on to importing the second user's data, I noticed something strange: the user's tablespace was still empty, while the first user's tablespace was filling up.
This was strange, as the second user didn't have quota on the first user's tablespace. I stopped the import, dropped the user and recreated it. It took me 4 import attempts to figure out what was happening: because the second user was a DBA, although he had no quota on the first user's tablespace, he had the UNLIMITED TABLESPACE system privilege, which bypasses per-tablespace quotas. I revoked it and it magically worked.
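The fix, with illustrative names (granting the DBA role also grants UNLIMITED TABLESPACE as a separate direct privilege, which is why revoking it doesn't touch the role itself):

```sql
-- Revoke the privilege that was bypassing the quotas:
REVOKE UNLIMITED TABLESPACE FROM app_user2;

-- Now the explicit quotas actually apply:
ALTER USER app_user2 QUOTA 0 ON app_data;           -- none on the other user's tablespace
ALTER USER app_user2 QUOTA UNLIMITED ON app_data2;  -- unlimited on his own
```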
So, the lesson I learned today is:
If you want to move tables from one tablespace/schema to another, using Oracle:
* export the data using exp
* create the new tablespace
* create the new user, assigning him the new tablespace as his default, and correctly set the quota: none on all other tablespaces, unlimited on the target tablespace
* make sure he doesn't have the UNLIMITED TABLESPACE system privilege
* import the data as the new user, using imp
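The export/import bookends of the list above look roughly like this on the command line (file names, passwords, and schema names are placeholders; FROMUSER/TOUSER is what remaps the schema):

```shell
# Export the old schema:
exp system/password owner=OLD_USER file=old_user.dmp log=exp.log

# ...create the tablespace and user as described above, then import:
imp system/password fromuser=OLD_USER touser=NEW_USER file=old_user.dmp log=imp.log
```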
Anyway, if you have a lot of data (in the millions of records), cleaning up your tablespaces, separating your indexes in another tablespace and partitioning your tables and indexes can improve performance by an order of magnitude.