The day before yesterday, my client told me that the resources of the EC2 instance had been increased, so I decided to raise the memory settings of the Oracle instance.
The total memory was now 4G, so I changed memory_target to 2800M and set the system parameter filesystemio_options to 'setall' (sketched after the error output below), then restarted the instance and got this error at first:
- SQL> startup
- ORA-00845: MEMORY_TARGET not supported on this system
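For reference, the parameter changes were along these lines (a sketch from memory, not a paste of my command history; with SCOPE=SPFILE they only take effect at the next startup):
- SQL> rem raise the AMM memory target for the instance
- SQL> alter system set memory_target=2800M scope=spfile;
- SQL> rem enable asynchronous and direct I/O against the datafiles
- SQL> alter system set filesystemio_options='setall' scope=spfile;
- SQL> shutdown immediate
- SQL> startup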
The error message was a little misleading; the real issue was that /dev/shm was not big enough. I modified /etc/fstab to enlarge /dev/shm (the entry is sketched after the output), then remounted it and got the following result:
- [root@ip-172-XXX-1-224 ~]# df -h
- Filesystem Size Used Avail Use% Mounted on
- /dev/xvda2 500G 70G 431G 14% /
- devtmpfs 1.9G 0 1.9G 0% /dev
- tmpfs 3.0G 0 3.0G 0% /dev/shm
- tmpfs 1.8G 17M 1.8G 1% /run
- tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
- tmpfs 354M 0 354M 0% /run/user/1000
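The /etc/fstab change was essentially a tmpfs entry for /dev/shm with an explicit size (the exact line below is an illustration, not copied from my fstab):
- # /etc/fstab: give /dev/shm an explicit, larger size
- tmpfs   /dev/shm   tmpfs   defaults,size=3g   0 0
- [root@ip-172-XXX-1-224 ~]# mount -o remount /dev/shm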
OK, I started the instance again, and this time got an ORA-00443 error:
- SQL> startup
- ORA-00443: background process "PMON" did not start
I checked the alert log but did not find any useful information, and got no help from the Oracle support website (Metalink) either. Even though I did not think memory_target was large enough to cause this issue, I still decreased it and ran some tests, but they all failed again and again.
I had installed the instance only the day before, so I was sure it had worked well before the change, and I knew that rolling the change back would make it work again. But the memory had been increased from 2G to 4G, and I really wanted to make use of it. Something was different now, and I had to find out what.
Then I recalled that on RHEL 7, /etc/rc.local is not executed after a fresh installation (see the note after the output below), and I had added a line to it to enable the swap file. So I checked the memory usage again:
- [oracle@ip-172-XXX-1-224 shm]$ free
- total used free shared buff/cache available
- Mem: 3618512 95920 1696860 16648 1825732 3236460
- Swap: 0 0 0
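On RHEL 7, rc-local.service only runs /etc/rc.d/rc.local if the script has the execute bit set, which it does not have by default, so any line added to it is silently ignored. A minimal sketch of the fix (the swapon line matches the swap file created below):
- [root@ip-172-XXX-1-224 ~]# chmod +x /etc/rc.d/rc.local    # make rc-local.service pick it up at boot
- [root@ip-172-XXX-1-224 ~]# echo 'swapon /swapfile' >> /etc/rc.d/rc.local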
I noticed there was no swap space. Was this the reason? Since the memory size had changed, I made a new swap file and started the instance again:
- [root@ip-172-XXX-1-224 shm]# dd if=/dev/zero of=/swapfile bs=1048576 count=4096
- 4096+0 records in
- 4096+0 records out
- 4294967296 bytes (4.3 GB) copied, 53.2875 s, 80.6 MB/s
- [root@ip-172-XXX-1-224 shm]# mkswap /swapfile
- Setting up swapspace version 1, size = 4194300 KiB
- no label, UUID=ccd1af8b-a182-4a86-8ee0-4bd29dd672ab
- [root@ip-172-XXX-1-224 shm]# chmod 600 /swapfile
- [root@ip-172-XXX-1-224 shm]# swapon /swapfile
- [root@ip-172-XXX-1-224 shm]# su - oracle
- Last login: Fri Mar 24 02:05:59 EDT 2017 on pts/0
- [oracle@ip-172-31-1-224 ~]$ sqlplus "/as sysdba"
- SQL*Plus: Release 12.2.0.1.0 Production on Fri Mar 24 02:32:12 2017
- Copyright (c) 1982, 2016, Oracle. All rights reserved.
- Connected to an idle instance.
- SQL> startup
- ORACLE instance started.
- Total System Global Area 2936012800 bytes
- Fixed Size 8625032 bytes
- Variable Size 2432697464 bytes
- Database Buffers 486539264 bytes
- Redo Buffers 8151040 bytes
- Database mounted.
- Database opened.
- SQL> exit
And I checked the memory usage again:
- [oracle@ip-172-31-1-224 ~]$ free
- total used free shared buff/cache available
- Mem: 3618512 400420 159116 885016 3058976 2048104
- Swap: 4194300 0 4194300
- [oracle@ip-172-31-1-224 ~]$ df -h
- Filesystem Size Used Avail Use% Mounted on
- /dev/xvda2 500G 71G 430G 15% /
- devtmpfs 1.9G 0 1.9G 0% /dev
- tmpfs 3.0G 848M 2.2G 28% /dev/shm
- tmpfs 1.8G 17M 1.8G 1% /run
- tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
- tmpfs 354M 0 354M 0% /run/user/1000
So even though the swap space was never actually used, the instance would not start when the configured memory_target was too large to fit in physical memory alone!
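As a side note, this is also why /dev/shm has to be large enough in the first place: with memory_target set, the memory is backed by files under /dev/shm, which is where the 848M of usage in the df output above comes from. You can see the backing files once the instance is up (a quick check; the file names contain the instance SID):
- [oracle@ip-172-31-1-224 ~]$ ls /dev/shm    # lists the ora_<SID>_* granule files backing AMM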
But why did I make a swap file? Why not use a swap partition directly?
- Some kinds of AWS EC2 instances come without any swap partition.
- The default file system on RHEL 7 is XFS, and XFS cannot be shrunk.
So when you create an XFS file system, you have to plan its size carefully: unlike EXT4/EXT3, which can at least be shrunk offline, an XFS file system cannot be reduced at all.
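One last note on the swap file: set up this way with dd/mkswap/swapon, it is only active until the next reboot. To make it persistent you can either keep the swapon line in an executable /etc/rc.local as described above, or add an entry to /etc/fstab (a sketch, not taken from my actual configuration):
- # /etc/fstab: activate the swap file at boot
- /swapfile   swap   swap   defaults   0 0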