Dec 13 01:28:59.017766 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:28:59.017805 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:28:59.017820 kernel: BIOS-provided physical RAM map: Dec 13 01:28:59.017831 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:28:59.017841 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:28:59.017852 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:28:59.017869 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Dec 13 01:28:59.017881 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Dec 13 01:28:59.017892 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Dec 13 01:28:59.017904 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:28:59.017917 kernel: NX (Execute Disable) protection: active Dec 13 01:28:59.017929 kernel: APIC: Static calls initialized Dec 13 01:28:59.017942 kernel: SMBIOS 2.7 present. Dec 13 01:28:59.017956 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 13 01:28:59.017976 kernel: Hypervisor detected: KVM Dec 13 01:28:59.017990 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:28:59.018005 kernel: kvm-clock: using sched offset of 6329457136 cycles Dec 13 01:28:59.018020 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:28:59.018035 kernel: tsc: Detected 2499.996 MHz processor Dec 13 01:28:59.018049 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:28:59.018064 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:28:59.018083 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Dec 13 01:28:59.018097 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:28:59.018112 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:28:59.018127 kernel: Using GB pages for direct mapping Dec 13 01:28:59.018141 kernel: ACPI: Early table checksum verification disabled Dec 13 01:28:59.018156 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Dec 13 01:28:59.018171 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Dec 13 01:28:59.018186 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 01:28:59.018200 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 13 01:28:59.018218 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Dec 13 01:28:59.018232 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 01:28:59.018247 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 01:28:59.018262 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 13 01:28:59.018276 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 01:28:59.018289 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 13 01:28:59.018302 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 13 01:28:59.018317 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 01:28:59.018329 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Dec 13 01:28:59.018346 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Dec 13 01:28:59.018364 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Dec 13 01:28:59.018377 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Dec 13 01:28:59.018391 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Dec 13 01:28:59.018404 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Dec 13 01:28:59.018421 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Dec 13 01:28:59.018435 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Dec 13 01:28:59.018450 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Dec 13 01:28:59.018464 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Dec 13 01:28:59.018479 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:28:59.018492 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:28:59.018507 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 13 01:28:59.018521 kernel: NUMA: Initialized distance table, cnt=1 Dec 13 01:28:59.018537 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Dec 13 01:28:59.018619 kernel: Zone ranges: Dec 13 01:28:59.018632 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:28:59.018645 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Dec 13 01:28:59.018658 kernel: Normal empty Dec 13 01:28:59.018671 kernel: Movable zone start for each node Dec 13 01:28:59.018684 kernel: Early memory node ranges Dec 13 01:28:59.018697 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:28:59.018712 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Dec 13 01:28:59.018727 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Dec 13 01:28:59.018745 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:28:59.018760 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:28:59.018773 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Dec 13 01:28:59.018786 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:28:59.018801 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:28:59.018815 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 13 01:28:59.018828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:28:59.018843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:28:59.018858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:28:59.018872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:28:59.018890 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:28:59.018904 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:28:59.018918 kernel: TSC deadline timer available Dec 13 01:28:59.018932 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:28:59.018946 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:28:59.018960 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Dec 13 01:28:59.018974 kernel: Booting paravirtualized kernel on KVM Dec 13 01:28:59.018988 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:28:59.019002 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:28:59.019022 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:28:59.019038 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:28:59.019053 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:28:59.019068 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:28:59.019084 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:28:59.019103 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:28:59.019120 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:28:59.019136 kernel: random: crng init done Dec 13 01:28:59.019155 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:28:59.019171 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:28:59.019187 kernel: Fallback order for Node 0: 0 Dec 13 01:28:59.019203 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Dec 13 01:28:59.019219 kernel: Policy zone: DMA32 Dec 13 01:28:59.019234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:28:59.019250 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Dec 13 01:28:59.019266 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:28:59.019285 kernel: Kernel/User page tables isolation: enabled Dec 13 01:28:59.019301 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:28:59.019317 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:28:59.019333 kernel: Dynamic Preempt: voluntary Dec 13 01:28:59.019349 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:28:59.019367 kernel: rcu: RCU event tracing is enabled. Dec 13 01:28:59.019384 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:28:59.019400 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:28:59.019417 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:28:59.019433 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:28:59.019454 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:28:59.019470 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:28:59.019486 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:28:59.019501 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 01:28:59.019589 kernel: Console: colour VGA+ 80x25 Dec 13 01:28:59.019606 kernel: printk: console [ttyS0] enabled Dec 13 01:28:59.019621 kernel: ACPI: Core revision 20230628 Dec 13 01:28:59.019638 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 13 01:28:59.019654 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:28:59.019675 kernel: x2apic enabled Dec 13 01:28:59.019692 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:28:59.019721 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Dec 13 01:28:59.019742 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Dec 13 01:28:59.019759 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:28:59.019777 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:28:59.019793 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:28:59.019810 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:28:59.019827 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:28:59.019855 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:28:59.019872 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 01:28:59.019888 kernel: RETBleed: Vulnerable Dec 13 01:28:59.019908 kernel: Speculative Store Bypass: Vulnerable Dec 13 01:28:59.019925 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:28:59.019941 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:28:59.019958 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:28:59.019974 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:28:59.019991 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:28:59.020008 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:28:59.020028 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 13 01:28:59.020045 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 13 01:28:59.020061 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 01:28:59.020077 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 01:28:59.020094 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 01:28:59.020111 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 13 01:28:59.020128 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:28:59.020144 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 13 01:28:59.020161 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 13 01:28:59.020177 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 13 01:28:59.020197 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 13 01:28:59.020213 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 13 01:28:59.020230 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 13 01:28:59.020246 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Dec 13 01:28:59.020262 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:28:59.020279 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:28:59.020294 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:28:59.020311 kernel: landlock: Up and running. Dec 13 01:28:59.020327 kernel: SELinux: Initializing. Dec 13 01:28:59.020343 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:28:59.020358 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:28:59.020375 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 01:28:59.020395 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:28:59.020409 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:28:59.020425 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:28:59.020440 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 01:28:59.020546 kernel: signal: max sigframe size: 3632 Dec 13 01:28:59.020574 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:28:59.020592 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:28:59.020608 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:28:59.020625 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:28:59.020645 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:28:59.020659 kernel: .... node #0, CPUs: #1 Dec 13 01:28:59.020673 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:28:59.020689 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:28:59.020704 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:28:59.020719 kernel: smpboot: Max logical packages: 1 Dec 13 01:28:59.020733 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Dec 13 01:28:59.020747 kernel: devtmpfs: initialized Dec 13 01:28:59.020764 kernel: x86/mm: Memory block size: 128MB Dec 13 01:28:59.020779 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:28:59.020794 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:28:59.020808 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:28:59.020823 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:28:59.020837 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:28:59.020852 kernel: audit: type=2000 audit(1734053339.145:1): state=initialized audit_enabled=0 res=1 Dec 13 01:28:59.020867 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:28:59.020881 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:28:59.020899 kernel: cpuidle: using governor menu Dec 13 01:28:59.020914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:28:59.020928 kernel: dca service started, version 1.12.1 Dec 13 01:28:59.020943 kernel: PCI: Using configuration type 1 for base access Dec 13 01:28:59.020959 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:28:59.020973 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:28:59.020988 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:28:59.021003 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:28:59.021018 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:28:59.021036 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:28:59.021051 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:28:59.021066 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:28:59.021081 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:28:59.021096 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:28:59.021111 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:28:59.021126 kernel: ACPI: Interpreter enabled Dec 13 01:28:59.021140 kernel: ACPI: PM: (supports S0 S5) Dec 13 01:28:59.021155 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:28:59.021170 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:28:59.021188 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:28:59.021203 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:28:59.021218 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:28:59.021443 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:28:59.021603 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:28:59.021737 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:28:59.021758 kernel: acpiphp: Slot [3] registered Dec 13 01:28:59.021778 kernel: acpiphp: Slot [4] registered Dec 13 01:28:59.021793 kernel: acpiphp: Slot [5] registered Dec 13 01:28:59.021807 kernel: acpiphp: Slot [6] registered Dec 13 01:28:59.021823 kernel: acpiphp: Slot [7] registered Dec 13 01:28:59.021838 kernel: acpiphp: Slot [8] registered Dec 13 01:28:59.021853 kernel: acpiphp: Slot [9] registered Dec 13 01:28:59.021869 kernel: acpiphp: Slot [10] registered Dec 13 01:28:59.021883 kernel: acpiphp: Slot [11] registered Dec 13 01:28:59.021897 kernel: acpiphp: Slot [12] registered Dec 13 01:28:59.021915 kernel: acpiphp: Slot [13] registered Dec 13 01:28:59.021929 kernel: acpiphp: Slot [14] registered Dec 13 01:28:59.021944 kernel: acpiphp: Slot [15] registered Dec 13 01:28:59.021959 kernel: acpiphp: Slot [16] registered Dec 13 01:28:59.021973 kernel: acpiphp: Slot [17] registered Dec 13 01:28:59.021987 kernel: acpiphp: Slot [18] registered Dec 13 01:28:59.022001 kernel: acpiphp: Slot [19] registered Dec 13 01:28:59.022015 kernel: acpiphp: Slot [20] registered Dec 13 01:28:59.022030 kernel: acpiphp: Slot [21] registered Dec 13 01:28:59.022044 kernel: acpiphp: Slot [22] registered Dec 13 01:28:59.022063 kernel: acpiphp: Slot [23] registered Dec 13 01:28:59.022077 kernel: acpiphp: Slot [24] registered Dec 13 01:28:59.022093 kernel: acpiphp: Slot [25] registered Dec 13 01:28:59.022109 kernel: acpiphp: Slot [26] registered Dec 13 01:28:59.022124 kernel: acpiphp: Slot [27] registered Dec 13 01:28:59.022139 kernel: acpiphp: Slot [28] registered Dec 13 01:28:59.022153 kernel: acpiphp: Slot [29] registered Dec 13 01:28:59.022168 kernel: acpiphp: Slot [30] registered Dec 13 01:28:59.022183 kernel: acpiphp: Slot [31] registered Dec 13 01:28:59.022201 kernel: PCI host bridge to bus 0000:00 
Dec 13 01:28:59.022349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:28:59.022481 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:28:59.022654 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:28:59.022792 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 01:28:59.022921 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:28:59.023080 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:28:59.023233 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 01:28:59.023378 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Dec 13 01:28:59.023517 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:28:59.023680 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Dec 13 01:28:59.023817 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 13 01:28:59.023960 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 13 01:28:59.024092 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 13 01:28:59.024230 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 13 01:28:59.024451 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 13 01:28:59.024601 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 13 01:28:59.024739 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Dec 13 01:28:59.025853 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Dec 13 01:28:59.026038 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 01:28:59.026197 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:28:59.026461 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 01:28:59.026676 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Dec 13 01:28:59.026832 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 01:28:59.026964 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Dec 13 01:28:59.026984 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:28:59.027000 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:28:59.027021 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:28:59.027036 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:28:59.027051 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:28:59.027067 kernel: iommu: Default domain type: Translated Dec 13 01:28:59.027082 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:28:59.027097 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:28:59.027112 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:28:59.027128 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:28:59.027143 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Dec 13 01:28:59.027277 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 13 01:28:59.027408 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 13 01:28:59.027725 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:28:59.027751 kernel: vgaarb: loaded Dec 13 01:28:59.027767 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 13 01:28:59.027784 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Dec 13 01:28:59.027800 kernel: clocksource: Switched 
to clocksource kvm-clock Dec 13 01:28:59.027817 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:28:59.027833 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:28:59.027866 kernel: pnp: PnP ACPI init Dec 13 01:28:59.027882 kernel: pnp: PnP ACPI: found 5 devices Dec 13 01:28:59.027898 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:28:59.027914 kernel: NET: Registered PF_INET protocol family Dec 13 01:28:59.027930 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:28:59.027947 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 01:28:59.027963 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:28:59.027980 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:28:59.027999 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 01:28:59.028013 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 01:28:59.028029 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:28:59.028046 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:28:59.028062 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:28:59.028078 kernel: NET: Registered PF_XDP protocol family Dec 13 01:28:59.028212 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:28:59.028333 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:28:59.028454 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:28:59.028607 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 01:28:59.028748 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:28:59.028770 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:28:59.028787 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:28:59.028803 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Dec 13 01:28:59.028820 kernel: clocksource: Switched to clocksource tsc Dec 13 01:28:59.028837 kernel: Initialise system trusted keyrings Dec 13 01:28:59.028853 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:28:59.028873 kernel: Key type asymmetric registered Dec 13 01:28:59.028899 kernel: Asymmetric key parser 'x509' registered Dec 13 01:28:59.028915 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:28:59.028932 kernel: io scheduler mq-deadline registered Dec 13 01:28:59.028948 kernel: io scheduler kyber registered Dec 13 01:28:59.028964 kernel: io scheduler bfq registered Dec 13 01:28:59.028979 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:28:59.028996 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:28:59.029012 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:28:59.029032 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:28:59.029048 kernel: i8042: Warning: Keylock active Dec 13 01:28:59.029064 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:28:59.029080 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:28:59.029228 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:28:59.029354 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 
01:28:59.029479 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:28:58 UTC (1734053338) Dec 13 01:28:59.029635 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:28:59.029659 kernel: intel_pstate: CPU model not supported Dec 13 01:28:59.029676 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:28:59.029693 kernel: Segment Routing with IPv6 Dec 13 01:28:59.029709 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:28:59.029725 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:28:59.029741 kernel: Key type dns_resolver registered Dec 13 01:28:59.029757 kernel: IPI shorthand broadcast: enabled Dec 13 01:28:59.029774 kernel: sched_clock: Marking stable (522003082, 234583059)->(843112802, -86526661) Dec 13 01:28:59.029790 kernel: registered taskstats version 1 Dec 13 01:28:59.029809 kernel: Loading compiled-in X.509 certificates Dec 13 01:28:59.029826 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:28:59.029842 kernel: Key type .fscrypt registered Dec 13 01:28:59.029858 kernel: Key type fscrypt-provisioning registered Dec 13 01:28:59.029874 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:28:59.029890 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:28:59.029907 kernel: ima: No architecture policies found Dec 13 01:28:59.029923 kernel: clk: Disabling unused clocks Dec 13 01:28:59.029943 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:28:59.029959 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:28:59.029975 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:28:59.029991 kernel: Run /init as init process Dec 13 01:28:59.030006 kernel: with arguments: Dec 13 01:28:59.030023 kernel: /init Dec 13 01:28:59.030038 kernel: with environment: Dec 13 01:28:59.030054 kernel: HOME=/ Dec 13 01:28:59.030069 kernel: TERM=linux Dec 13 01:28:59.030085 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:28:59.030111 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:28:59.030144 systemd[1]: Detected virtualization amazon. Dec 13 01:28:59.030174 systemd[1]: Detected architecture x86-64. Dec 13 01:28:59.030191 systemd[1]: Running in initrd. Dec 13 01:28:59.030209 systemd[1]: No hostname configured, using default hostname. Dec 13 01:28:59.030229 systemd[1]: Hostname set to . Dec 13 01:28:59.030248 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:28:59.030265 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:28:59.030281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:28:59.030298 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:28:59.030318 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:28:59.030335 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:28:59.030356 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Dec 13 01:28:59.030373 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:28:59.030392 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:28:59.030410 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:28:59.030427 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:28:59.030443 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:28:59.030461 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:28:59.030482 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:28:59.030499 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:28:59.030517 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:28:59.030535 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:28:59.030609 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:28:59.030628 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:28:59.030643 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:28:59.030661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:28:59.030679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:28:59.030701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:28:59.030719 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:28:59.030738 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:28:59.030756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:28:59.030775 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:28:59.030793 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:28:59.030812 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:28:59.030833 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:28:59.030852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:59.030870 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:28:59.030889 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:28:59.030944 systemd-journald[178]: Collecting audit messages is disabled. Dec 13 01:28:59.030992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:28:59.031011 systemd-journald[178]: Journal started Dec 13 01:28:59.031048 systemd-journald[178]: Runtime Journal (/run/log/journal/ec23ec93ee503e1b45dcfd8bf6b19bf7) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:28:59.034139 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:28:59.034743 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:28:59.045779 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:28:59.047974 systemd-modules-load[179]: Inserted module 'overlay' Dec 13 01:28:59.062777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 13 01:28:59.063275 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:28:59.090823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:28:59.181845 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:28:59.181880 kernel: Bridge firewalling registered Dec 13 01:28:59.110506 systemd-modules-load[179]: Inserted module 'br_netfilter' Dec 13 01:28:59.188182 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:28:59.191050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:59.192541 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:28:59.204931 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:28:59.214718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:59.218968 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:28:59.240362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:59.253239 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:28:59.256126 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:59.263813 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:28:59.288452 dracut-cmdline[215]: dracut-dracut-053 Dec 13 01:28:59.290051 systemd-resolved[208]: Positive Trust Anchors: Dec 13 01:28:59.290072 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:28:59.290124 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:28:59.300025 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:28:59.309829 systemd-resolved[208]: Defaulting to hostname 'linux'. Dec 13 01:28:59.312113 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:28:59.314465 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:28:59.376583 kernel: SCSI subsystem initialized Dec 13 01:28:59.389593 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:28:59.406626 kernel: iscsi: registered transport (tcp) Dec 13 01:28:59.450605 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:28:59.450690 kernel: QLogic iSCSI HBA Driver Dec 13 01:28:59.490271 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:28:59.497771 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:28:59.540391 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:28:59.540479 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:28:59.542669 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:28:59.592615 kernel: raid6: avx512x4 gen() 16250 MB/s Dec 13 01:28:59.609616 kernel: raid6: avx512x2 gen() 14308 MB/s Dec 13 01:28:59.626610 kernel: raid6: avx512x1 gen() 13876 MB/s Dec 13 01:28:59.643602 kernel: raid6: avx2x4 gen() 13007 MB/s Dec 13 01:28:59.660598 kernel: raid6: avx2x2 gen() 11126 MB/s Dec 13 01:28:59.677922 kernel: raid6: avx2x1 gen() 11668 MB/s Dec 13 01:28:59.677996 kernel: raid6: using algorithm avx512x4 gen() 16250 MB/s Dec 13 01:28:59.696061 kernel: raid6: .... xor() 5865 MB/s, rmw enabled Dec 13 01:28:59.696158 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:28:59.721588 kernel: xor: automatically using best checksumming function avx Dec 13 01:28:59.930610 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:28:59.946098 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:28:59.952294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:28:59.978003 systemd-udevd[397]: Using default interface naming scheme 'v255'. Dec 13 01:28:59.984468 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:28:59.995832 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:29:00.020377 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Dec 13 01:29:00.057591 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:00.068781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:00.182833 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:00.192763 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:29:00.233798 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:00.238527 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:00.242124 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:00.245839 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:00.253799 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:29:00.311896 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:00.332585 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:29:00.384829 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:29:00.385032 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Dec 13 01:29:00.385203 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:f6:2e:33:f1:b5 Dec 13 01:29:00.388813 (udev-worker)[445]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:29:00.402660 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:29:00.426784 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:29:00.435327 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 01:29:00.446339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:00.452166 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:29:00.452198 kernel: AES CTR mode by8 optimization enabled Dec 13 01:29:00.446518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:00.454191 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:00.457949 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:29:00.455678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:00.456153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:00.461639 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:00.474227 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:29:00.474262 kernel: GPT:9289727 != 16777215 Dec 13 01:29:00.474280 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:29:00.474301 kernel: GPT:9289727 != 16777215 Dec 13 01:29:00.474322 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:29:00.474341 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:29:00.481249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:00.610611 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (444) Dec 13 01:29:00.651586 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (450) Dec 13 01:29:00.732581 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:00.750050 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:00.792126 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:29:00.829149 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:29:00.829572 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:00.843286 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 01:29:00.843500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:29:00.861479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:29:00.872813 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:29:00.883339 disk-uuid[629]: Primary Header is updated. Dec 13 01:29:00.883339 disk-uuid[629]: Secondary Entries is updated. Dec 13 01:29:00.883339 disk-uuid[629]: Secondary Header is updated. 
Dec 13 01:29:00.895011 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:29:00.915597 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:29:00.924589 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:29:01.926602 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:29:01.927042 disk-uuid[630]: The operation has completed successfully. Dec 13 01:29:02.125258 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:29:02.125378 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:29:02.139773 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:29:02.145814 sh[971]: Success Dec 13 01:29:02.173659 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:29:02.295952 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:29:02.306360 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:29:02.315258 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:29:02.347697 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:29:02.347828 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:02.347848 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:29:02.348831 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:29:02.350022 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:29:02.492607 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:29:02.496947 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:29:02.500160 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:29:02.514727 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:29:02.523026 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:29:02.554667 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:02.554729 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:02.554755 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:29:02.565130 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:29:02.585245 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:29:02.586919 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:02.609327 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:29:02.616061 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:29:02.664297 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:02.683823 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:02.705143 systemd-networkd[1163]: lo: Link UP Dec 13 01:29:02.705155 systemd-networkd[1163]: lo: Gained carrier Dec 13 01:29:02.706829 systemd-networkd[1163]: Enumeration completed Dec 13 01:29:02.707230 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:29:02.707235 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:02.708251 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:02.711746 systemd[1]: Reached target network.target - Network. Dec 13 01:29:02.717019 systemd-networkd[1163]: eth0: Link UP Dec 13 01:29:02.717025 systemd-networkd[1163]: eth0: Gained carrier Dec 13 01:29:02.717039 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:02.731669 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.30.29/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:29:03.221310 ignition[1095]: Ignition 2.19.0 Dec 13 01:29:03.221328 ignition[1095]: Stage: fetch-offline Dec 13 01:29:03.221601 ignition[1095]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:03.221614 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:03.224064 ignition[1095]: Ignition finished successfully Dec 13 01:29:03.228629 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:03.234762 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:29:03.255416 ignition[1172]: Ignition 2.19.0 Dec 13 01:29:03.255429 ignition[1172]: Stage: fetch Dec 13 01:29:03.255994 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:03.256009 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:03.256120 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:03.275758 ignition[1172]: PUT result: OK Dec 13 01:29:03.278432 ignition[1172]: parsed url from cmdline: "" Dec 13 01:29:03.278442 ignition[1172]: no config URL provided Dec 13 01:29:03.278455 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:03.278473 ignition[1172]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:03.278495 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:03.280054 ignition[1172]: PUT result: OK Dec 13 01:29:03.280106 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:29:03.282643 ignition[1172]: GET result: OK Dec 13 01:29:03.284145 ignition[1172]: parsing config with SHA512: 43cb4d4cb19992c92e3343bd5ea403fc1da31619d6c2ae23c492b12f1f18a77ee3ef3d6f314e8f78ba58e08fc5507eba18a61ce5a16be7d66e6e834bac65517a Dec 13 01:29:03.291717 unknown[1172]: fetched base config from "system" Dec 13 01:29:03.291733 unknown[1172]: fetched base config from "system" Dec 13 01:29:03.291741 unknown[1172]: fetched user config from "aws" Dec 13 01:29:03.293711 ignition[1172]: fetch: fetch complete Dec 13 01:29:03.293719 ignition[1172]: fetch: fetch passed Dec 13 01:29:03.293834 ignition[1172]: Ignition finished successfully Dec 13 01:29:03.299781 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:29:03.305844 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:29:03.340430 ignition[1178]: Ignition 2.19.0 Dec 13 01:29:03.340444 ignition[1178]: Stage: kargs Dec 13 01:29:03.341007 ignition[1178]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:03.341021 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:03.341128 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:03.344046 ignition[1178]: PUT result: OK Dec 13 01:29:03.349809 ignition[1178]: kargs: kargs passed Dec 13 01:29:03.349981 ignition[1178]: Ignition finished successfully Dec 13 01:29:03.353114 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:29:03.362854 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:29:03.389062 ignition[1185]: Ignition 2.19.0 Dec 13 01:29:03.389076 ignition[1185]: Stage: disks Dec 13 01:29:03.389626 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:03.389638 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:03.389749 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:03.391137 ignition[1185]: PUT result: OK Dec 13 01:29:03.400188 ignition[1185]: disks: disks passed Dec 13 01:29:03.400443 ignition[1185]: Ignition finished successfully Dec 13 01:29:03.403506 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:29:03.406780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:03.408082 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:29:03.410478 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:03.411758 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:03.414206 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:03.421824 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:29:03.473451 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:29:03.477713 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:29:03.486776 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:29:03.600576 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:29:03.601378 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:29:03.602308 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:03.624681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:03.627061 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:29:03.629164 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:29:03.629278 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:29:03.629307 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 13 01:29:03.642647 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213) Dec 13 01:29:03.642697 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:03.645580 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:03.645642 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:29:03.654041 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:29:03.658804 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:29:03.660283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:29:03.674369 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:29:04.196686 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:29:04.216961 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:29:04.223337 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:29:04.228689 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:29:04.397682 systemd-networkd[1163]: eth0: Gained IPv6LL Dec 13 01:29:04.588799 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:04.599717 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:29:04.604779 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:29:04.611589 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:04.612316 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:29:04.642256 ignition[1325]: INFO : Ignition 2.19.0 Dec 13 01:29:04.644700 ignition[1325]: INFO : Stage: mount Dec 13 01:29:04.644700 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:04.644700 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:04.644700 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:04.652236 ignition[1325]: INFO : PUT result: OK Dec 13 01:29:04.655132 ignition[1325]: INFO : mount: mount passed Dec 13 01:29:04.656340 ignition[1325]: INFO : Ignition finished successfully Dec 13 01:29:04.659860 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:29:04.665737 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:29:04.679861 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:29:04.694913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:04.713583 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1338) Dec 13 01:29:04.713689 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:04.721110 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:04.721188 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:29:04.731585 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:29:04.733653 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:29:04.768131 ignition[1355]: INFO : Ignition 2.19.0 Dec 13 01:29:04.768131 ignition[1355]: INFO : Stage: files Dec 13 01:29:04.770372 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:04.770372 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:04.770372 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:04.770372 ignition[1355]: INFO : PUT result: OK Dec 13 01:29:04.777683 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:29:04.796468 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:29:04.796468 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:29:04.828941 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:29:04.831386 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:29:04.833707 unknown[1355]: wrote ssh authorized keys file for user: core Dec 13 01:29:04.835305 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:29:04.839118 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:04.841594 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:29:04.899099 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:29:05.258619 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:05.258619 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:29:05.280937 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:29:05.746031 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:29:06.077972 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:29:06.077972 ignition[1355]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:29:06.084176 ignition[1355]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:06.084176 ignition[1355]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:06.084176 ignition[1355]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:29:06.084176 ignition[1355]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:06.084176 ignition[1355]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:06.084176 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:06.084176 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:06.084176 ignition[1355]: INFO : files: files passed Dec 13 01:29:06.084176 ignition[1355]: INFO : Ignition finished successfully Dec 13 01:29:06.103387 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:29:06.112767 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:29:06.123704 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:29:06.128337 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:29:06.130050 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:29:06.153431 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:06.156004 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:06.158290 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:06.159176 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:06.164034 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:29:06.172734 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Dec 13 01:29:06.209216 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:29:06.209366 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:29:06.212965 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:29:06.219663 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:29:06.221883 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:29:06.227014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:29:06.261275 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:06.273351 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:29:06.305775 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:06.306006 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:06.311539 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:29:06.315463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:29:06.315941 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:06.318114 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:29:06.321708 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:29:06.327790 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:29:06.331843 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:06.332050 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:06.338884 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:29:06.341898 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:06.345755 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:29:06.348859 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:29:06.351799 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:29:06.354587 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:29:06.354779 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:06.359150 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:06.361351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:06.364429 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:29:06.365824 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:06.371771 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:29:06.371989 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:06.381732 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:29:06.383728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:06.388913 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:29:06.390304 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:29:06.398880 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:29:06.405313 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Dec 13 01:29:06.406757 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:29:06.407022 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:06.409219 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:29:06.409375 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:06.416494 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:29:06.417610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:29:06.436943 ignition[1408]: INFO : Ignition 2.19.0 Dec 13 01:29:06.438113 ignition[1408]: INFO : Stage: umount Dec 13 01:29:06.438113 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:06.438113 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:06.438113 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:06.443666 ignition[1408]: INFO : PUT result: OK Dec 13 01:29:06.448369 ignition[1408]: INFO : umount: umount passed Dec 13 01:29:06.448369 ignition[1408]: INFO : Ignition finished successfully Dec 13 01:29:06.457298 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:29:06.457451 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:29:06.461685 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:29:06.461804 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:29:06.463299 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:29:06.463361 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:29:06.476009 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:29:06.476073 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:29:06.478655 systemd[1]: Stopped target network.target - Network. Dec 13 01:29:06.481845 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:29:06.481962 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:06.485372 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:29:06.487713 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:29:06.491639 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:06.495207 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:29:06.495294 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:29:06.499107 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:29:06.499157 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:06.503614 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:29:06.503661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:06.506010 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:29:06.506062 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:29:06.509664 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:29:06.509716 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:06.511632 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:29:06.513838 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Dec 13 01:29:06.518048 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:29:06.518818 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:29:06.518926 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:29:06.520183 systemd-networkd[1163]: eth0: DHCPv6 lease lost Dec 13 01:29:06.521426 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:29:06.521527 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:06.531253 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:29:06.531383 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:29:06.536419 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:29:06.536534 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:29:06.539022 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:29:06.539077 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:06.549733 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:29:06.551853 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:29:06.551937 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:06.554171 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:29:06.554406 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:06.556968 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:29:06.557031 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:06.560193 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:29:06.560250 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:06.563569 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:06.592012 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:29:06.593523 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:06.597212 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:29:06.598682 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:29:06.610148 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:29:06.610255 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:06.616860 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:29:06.619312 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:06.624452 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:29:06.624537 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:06.628513 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:29:06.628586 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:06.632362 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:06.632421 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:06.642812 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:29:06.644179 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 13 01:29:06.644239 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:06.645873 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:29:06.645918 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:06.648537 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:29:06.648593 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:06.655051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:06.655125 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:06.661711 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:29:06.662232 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:29:06.665682 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:29:06.693817 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:29:06.704758 systemd[1]: Switching root. Dec 13 01:29:06.744119 systemd-journald[178]: Journal stopped Dec 13 01:29:09.688387 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Dec 13 01:29:09.688474 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:29:09.688507 kernel: SELinux: policy capability open_perms=1 Dec 13 01:29:09.688525 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:29:09.688547 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:29:09.688629 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:29:09.688647 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:29:09.688664 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:29:09.688680 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:29:09.688704 kernel: audit: type=1403 audit(1734053348.018:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:29:09.688722 systemd[1]: Successfully loaded SELinux policy in 53.056ms. Dec 13 01:29:09.688753 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.647ms. Dec 13 01:29:09.688775 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:09.688795 systemd[1]: Detected virtualization amazon. Dec 13 01:29:09.688816 systemd[1]: Detected architecture x86-64. Dec 13 01:29:09.688835 systemd[1]: Detected first boot. Dec 13 01:29:09.688854 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:09.688881 zram_generator::config[1450]: No configuration found. Dec 13 01:29:09.688903 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:29:09.688931 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:29:09.688952 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:29:09.688980 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:09.689002 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:29:09.689028 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Dec 13 01:29:09.689050 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:29:09.689074 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:29:09.689096 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:29:09.689122 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:29:09.689147 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:29:09.689168 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:29:09.689190 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:09.689210 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:09.689229 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:29:09.689246 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:29:09.689266 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:29:09.689285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:09.689304 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:29:09.689327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:09.689347 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:29:09.689406 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:29:09.689426 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:09.689446 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:29:09.689492 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:09.689513 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:09.689537 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:09.689598 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:09.689618 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:29:09.689637 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:29:09.689761 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:09.689785 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:09.689805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:09.689825 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:29:09.689843 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:29:09.689861 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:29:09.689885 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:29:09.689904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:09.689923 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:29:09.689942 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Dec 13 01:29:09.689959 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:29:09.689979 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:29:09.689999 systemd[1]: Reached target machines.target - Containers. Dec 13 01:29:09.690018 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:29:09.690042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:09.690066 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:09.690087 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:29:09.690207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:09.690232 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:09.690255 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:09.690274 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:29:09.690293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:09.690313 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:09.690337 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:29:09.690356 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:29:09.690378 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:29:09.690398 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:29:09.690417 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:09.690436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:09.690454 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:29:09.690473 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:29:09.690496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:09.690521 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:29:09.690540 systemd[1]: Stopped verity-setup.service. Dec 13 01:29:09.690590 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:09.690611 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:29:09.690634 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:29:09.690711 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:29:09.690736 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:29:09.690757 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:29:09.690782 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:29:09.690804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:09.690825 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 13 01:29:09.690847 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:29:09.690873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:09.690897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:09.690918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:09.690939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:09.690998 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:29:09.691021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:29:09.691050 kernel: ACPI: bus type drm_connector registered Dec 13 01:29:09.691073 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:09.691095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:09.691118 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:29:09.691138 kernel: loop: module loaded Dec 13 01:29:09.691159 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:09.691180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:09.691204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:09.691225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:09.691250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:09.691271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:29:09.691327 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:29:09.691349 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:29:09.691370 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:09.691393 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:09.691455 systemd-journald[1532]: Collecting audit messages is disabled. Dec 13 01:29:09.691502 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:29:09.691528 systemd-journald[1532]: Journal started Dec 13 01:29:09.691621 systemd-journald[1532]: Runtime Journal (/run/log/journal/ec23ec93ee503e1b45dcfd8bf6b19bf7) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:29:09.169475 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:29:09.202486 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:29:09.202886 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:29:09.710286 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:29:09.710466 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:29:09.710495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:09.717883 kernel: fuse: init (API version 7.39) Dec 13 01:29:09.728114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 13 01:29:09.728183 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:09.734691 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:29:09.749655 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:29:09.757532 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:09.757232 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:29:09.759063 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:29:09.759958 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:29:09.763005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:29:09.812703 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:29:09.819029 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:29:09.823392 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Dec 13 01:29:09.823419 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Dec 13 01:29:09.823750 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:29:09.825910 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:29:09.828811 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:29:09.841205 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:29:09.843596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:09.848742 kernel: loop0: detected capacity change from 0 to 61336 Dec 13 01:29:09.859380 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:09.869762 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:29:09.885333 systemd-journald[1532]: Time spent on flushing to /var/log/journal/ec23ec93ee503e1b45dcfd8bf6b19bf7 is 67.322ms for 970 entries. Dec 13 01:29:09.885333 systemd-journald[1532]: System Journal (/var/log/journal/ec23ec93ee503e1b45dcfd8bf6b19bf7) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:29:09.958794 systemd-journald[1532]: Received client request to flush runtime journal. Dec 13 01:29:09.886545 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:29:09.893914 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:29:09.908038 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:09.922996 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:29:09.958274 udevadm[1593]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:29:09.961855 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:29:10.002498 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:29:10.028040 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:29:10.036997 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 01:29:10.048587 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 01:29:10.063699 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Dec 13 01:29:10.064167 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Dec 13 01:29:10.070308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:10.179649 kernel: loop2: detected capacity change from 0 to 210664 Dec 13 01:29:10.241593 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:29:10.401590 kernel: loop4: detected capacity change from 0 to 61336 Dec 13 01:29:10.412606 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 01:29:10.448575 kernel: loop6: detected capacity change from 0 to 210664 Dec 13 01:29:10.472705 kernel: loop7: detected capacity change from 0 to 142488 Dec 13 01:29:10.494220 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:29:10.495300 (sd-merge)[1607]: Merged extensions into '/usr'. Dec 13 01:29:10.511625 systemd[1]: Reloading requested from client PID 1560 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:29:10.511771 systemd[1]: Reloading... Dec 13 01:29:10.604272 zram_generator::config[1629]: No configuration found. Dec 13 01:29:10.922087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:10.998279 systemd[1]: Reloading finished in 485 ms. Dec 13 01:29:11.027750 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:29:11.042781 systemd[1]: Starting ensure-sysext.service... Dec 13 01:29:11.046221 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:11.059698 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:29:11.059719 systemd[1]: Reloading... Dec 13 01:29:11.130628 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:29:11.131684 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:29:11.133028 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:29:11.133486 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Dec 13 01:29:11.133704 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Dec 13 01:29:11.143897 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:11.143913 systemd-tmpfiles[1682]: Skipping /boot Dec 13 01:29:11.176759 zram_generator::config[1706]: No configuration found. Dec 13 01:29:11.176121 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:11.177042 systemd-tmpfiles[1682]: Skipping /boot Dec 13 01:29:11.345500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:11.406441 systemd[1]: Reloading finished in 346 ms. Dec 13 01:29:11.422215 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Dec 13 01:29:11.427237 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:11.442889 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:11.450812 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:29:11.469774 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:29:11.483076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:11.490197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:11.500809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:29:11.509319 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:29:11.516886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:11.517163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:11.531191 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:11.541954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:11.554711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:11.557187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:11.557553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:11.565756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:11.566250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:11.568761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:11.568929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:11.577964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:11.578190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:11.582038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:11.582230 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:11.597186 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:11.598169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:11.604999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:11.609891 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:11.612498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:11.613912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 01:29:11.614203 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:29:11.615801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:11.617130 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:11.617339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:11.621234 systemd[1]: Finished ensure-sysext.service. Dec 13 01:29:11.644007 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:29:11.645879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:11.646069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:11.653641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:11.659420 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:29:11.678060 augenrules[1795]: No rules Dec 13 01:29:11.683394 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:11.685376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:11.686292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:11.688411 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:11.689657 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:11.690810 systemd-udevd[1771]: Using default interface naming scheme 'v255'. Dec 13 01:29:11.695915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:11.701352 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:29:11.807856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:11.823725 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:11.849681 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:29:11.853913 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:29:11.948575 systemd-resolved[1768]: Positive Trust Anchors: Dec 13 01:29:11.949060 systemd-resolved[1768]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:11.949209 systemd-resolved[1768]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:11.968419 systemd-resolved[1768]: Defaulting to hostname 'linux'. Dec 13 01:29:11.972526 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 13 01:29:11.975050 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:11.985585 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1809) Dec 13 01:29:11.988941 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1809) Dec 13 01:29:12.005099 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:29:12.032656 (udev-worker)[1820]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:29:12.054595 systemd-networkd[1814]: lo: Link UP Dec 13 01:29:12.054609 systemd-networkd[1814]: lo: Gained carrier Dec 13 01:29:12.060102 systemd-networkd[1814]: Enumeration completed Dec 13 01:29:12.060245 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:12.061525 systemd[1]: Reached target network.target - Network. Dec 13 01:29:12.062452 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:12.062461 systemd-networkd[1814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:12.066586 ldconfig[1554]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:29:12.067911 systemd-networkd[1814]: eth0: Link UP Dec 13 01:29:12.068177 systemd-networkd[1814]: eth0: Gained carrier Dec 13 01:29:12.068208 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:12.070706 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:29:12.079649 systemd-networkd[1814]: eth0: DHCPv4 address 172.31.30.29/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:29:12.092033 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:29:12.103210 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:29:12.105287 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:12.126590 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:29:12.128777 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:29:12.139974 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:29:12.141612 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 13 01:29:12.154601 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 01:29:12.163625 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 01:29:12.173641 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 01:29:12.189663 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:29:12.201630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:12.206588 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1812) Dec 13 01:29:12.329466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:29:12.341029 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Dec 13 01:29:12.347917 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:29:12.350756 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:29:12.385604 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:12.433428 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:29:12.537293 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:29:12.538872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:12.544798 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:29:12.547743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:12.550445 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:12.552231 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:29:12.553960 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:29:12.556711 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:29:12.558722 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:29:12.560115 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:29:12.561476 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:29:12.561526 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:12.562674 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:12.565364 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:29:12.568344 lvm[1934]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:12.568917 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:29:12.577928 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:29:12.580769 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:29:12.582712 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:29:12.584182 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:12.585747 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:12.585776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:12.591730 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:29:12.602987 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:29:12.606763 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:29:12.610689 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:29:12.613150 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:29:12.614271 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Dec 13 01:29:12.622799 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:29:12.643283 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:29:12.678432 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:29:12.686718 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:29:12.689800 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:29:12.697773 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:29:12.736811 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:29:12.760265 jq[1941]: false Dec 13 01:29:12.739686 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:29:12.742680 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:29:12.745660 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:29:12.760783 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:29:12.771919 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:29:12.787170 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:29:12.787641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:29:12.855064 extend-filesystems[1942]: Found loop4 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found loop5 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found loop6 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found loop7 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found nvme0n1 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found nvme0n1p1 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found nvme0n1p2 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found nvme0n1p3 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found usr Dec 13 01:29:12.860376 extend-filesystems[1942]: Found nvme0n1p4 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found nvme0n1p6 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found nvme0n1p7 Dec 13 01:29:12.860376 extend-filesystems[1942]: Found nvme0n1p9 Dec 13 01:29:12.860376 extend-filesystems[1942]: Checking size of /dev/nvme0n1p9 Dec 13 01:29:12.876217 dbus-daemon[1940]: [system] SELinux support is enabled Dec 13 01:29:12.873051 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:29:12.898390 jq[1954]: true Dec 13 01:29:12.878416 dbus-daemon[1940]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1814 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:29:12.878750 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:29:12.905492 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:29:12.911846 update_engine[1953]: I20241213 01:29:12.910188 1953 main.cc:92] Flatcar Update Engine starting Dec 13 01:29:12.926339 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 13 01:29:12.929591 update_engine[1953]: I20241213 01:29:12.928791 1953 update_check_scheduler.cc:74] Next update check in 8m18s Dec 13 01:29:12.930637 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:29:12.937809 (ntainerd)[1966]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: ---------------------------------------------------- Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: corporation. Support and training for ntp-4 are Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: available at https://www.nwtime.org/support Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: ---------------------------------------------------- Dec 13 01:29:12.940954 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: proto: precision = 0.063 usec (-24) Dec 13 01:29:12.938471 ntpd[1944]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:29:12.938503 ntpd[1944]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:29:12.969146 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: basedate set to 2024-11-30 Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: gps base set to 2024-12-01 (week 2343) Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: Listen normally on 3 eth0 172.31.30.29:123 Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: Listen normally on 4 lo [::1]:123 Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: bind(21) AF_INET6 fe80::4f6:2eff:fe33:f1b5%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: unable to create socket on eth0 (5) for fe80::4f6:2eff:fe33:f1b5%2#123 Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: failed to init interface for address fe80::4f6:2eff:fe33:f1b5%2 Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: Listening on routing socket on fd #21 for interface updates Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:29:12.970983 ntpd[1944]: 13 Dec 01:29:12 ntpd[1944]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:29:12.938514 ntpd[1944]: ---------------------------------------------------- Dec 13 01:29:12.938524 ntpd[1944]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:29:12.938551 ntpd[1944]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:29:12.971818 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:29:12.938581 ntpd[1944]: corporation. Support and training for ntp-4 are Dec 13 01:29:12.971857 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:29:12.938591 ntpd[1944]: available at https://www.nwtime.org/support Dec 13 01:29:12.973587 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:29:12.938600 ntpd[1944]: ---------------------------------------------------- Dec 13 01:29:12.973623 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:29:12.940931 ntpd[1944]: proto: precision = 0.063 usec (-24) Dec 13 01:29:12.943326 ntpd[1944]: basedate set to 2024-11-30 Dec 13 01:29:12.943349 ntpd[1944]: gps base set to 2024-12-01 (week 2343) Dec 13 01:29:12.947709 ntpd[1944]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:29:12.989354 tar[1959]: linux-amd64/helm Dec 13 01:29:12.986150 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.986 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.988 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.988 INFO Fetch successful Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.988 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.989 INFO Fetch successful Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.989 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.989 INFO Fetch successful Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.989 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.990 INFO Fetch successful Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.990 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.991 INFO Fetch failed with 404: resource not found Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.991 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.991 INFO Fetch successful Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.991 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.992 INFO Fetch successful Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.993 INFO Fetch successful Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.993 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.995 INFO Fetch successful Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.995 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:29:13.004908 coreos-metadata[1939]: Dec 13 01:29:12.996 INFO Fetch successful Dec 13 01:29:12.947778 ntpd[1944]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:29:13.003949 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:29:12.948001 ntpd[1944]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:29:12.948042 ntpd[1944]: Listen normally on 3 eth0 172.31.30.29:123 Dec 13 01:29:12.948085 ntpd[1944]: Listen normally on 4 lo [::1]:123 Dec 13 01:29:12.948132 ntpd[1944]: bind(21) AF_INET6 fe80::4f6:2eff:fe33:f1b5%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:29:12.948154 ntpd[1944]: unable to create socket on eth0 (5) for fe80::4f6:2eff:fe33:f1b5%2#123 Dec 13 01:29:12.948172 ntpd[1944]: failed to init interface for address fe80::4f6:2eff:fe33:f1b5%2 Dec 13 01:29:12.948214 ntpd[1944]: Listening on routing socket on fd #21 for interface updates Dec 13 01:29:12.954614 ntpd[1944]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:29:12.956866 ntpd[1944]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:29:12.992025 dbus-daemon[1940]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:29:13.018664 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:29:13.029655 jq[1977]: true Dec 13 01:29:13.032668 extend-filesystems[1942]: Resized partition /dev/nvme0n1p9 Dec 13 01:29:13.074323 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:29:13.074468 extend-filesystems[1995]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:29:13.035243 systemd-logind[1950]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:29:13.035267 systemd-logind[1950]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 01:29:13.035292 systemd-logind[1950]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:29:13.074295 systemd-logind[1950]: New seat seat0. Dec 13 01:29:13.092828 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:29:13.105332 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:29:13.148590 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:29:13.180294 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:29:13.182430 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:29:13.229817 systemd-networkd[1814]: eth0: Gained IPv6LL Dec 13 01:29:13.253395 extend-filesystems[1995]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:29:13.253395 extend-filesystems[1995]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:29:13.253395 extend-filesystems[1995]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:29:13.252996 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:29:13.268335 extend-filesystems[1942]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:29:13.253289 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
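The coreos-metadata fetches logged above follow the EC2 IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token, then GETs against the 2021-01-03 meta-data tree with the returned token attached (a 404, as for meta-data/ipv6 above, simply means the instance has no such attribute). A minimal Python sketch of that flow, as an illustration only and not the agent's actual code; the token TTL value is an assumption:

    # IMDSv2 sketch mirroring the coreos-metadata requests above (illustrative only).
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        # PUT /latest/api/token with a TTL header returns a session token.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        # GET a meta-data path, e.g. /2021-01-03/meta-data/instance-id.
        req = urllib.request.Request(
            f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        for path in ("/2021-01-03/meta-data/instance-id",
                     "/2021-01-03/meta-data/local-ipv4",
                     "/2021-01-03/meta-data/placement/availability-zone"):
            print(path, "=>", imds_get(path, token))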
Dec 13 01:29:13.279758 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:29:13.292708 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:29:13.302939 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:29:13.315841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:13.328923 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:29:13.391721 bash[2023]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:13.395962 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:29:13.410951 systemd[1]: Starting sshkeys.service... Dec 13 01:29:13.413857 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1820) Dec 13 01:29:13.506143 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:29:13.522094 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:29:13.532429 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:29:13.594405 amazon-ssm-agent[2026]: Initializing new seelog logger Dec 13 01:29:13.594803 amazon-ssm-agent[2026]: New Seelog Logger Creation Complete Dec 13 01:29:13.594803 amazon-ssm-agent[2026]: 2024/12/13 01:29:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:13.594803 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:13.595900 amazon-ssm-agent[2026]: 2024/12/13 01:29:13 processing appconfig overrides Dec 13 01:29:13.596759 dbus-daemon[1940]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:29:13.596968 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:29:13.598532 dbus-daemon[1940]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1989 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:29:13.612121 amazon-ssm-agent[2026]: 2024/12/13 01:29:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:13.612121 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:13.612121 amazon-ssm-agent[2026]: 2024/12/13 01:29:13 processing appconfig overrides Dec 13 01:29:13.612121 amazon-ssm-agent[2026]: 2024/12/13 01:29:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:13.612121 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:13.612121 amazon-ssm-agent[2026]: 2024/12/13 01:29:13 processing appconfig overrides Dec 13 01:29:13.612121 amazon-ssm-agent[2026]: 2024-12-13 01:29:13 INFO Proxy environment variables: Dec 13 01:29:13.612680 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:29:13.638924 amazon-ssm-agent[2026]: 2024/12/13 01:29:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:13.638924 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:29:13.640956 amazon-ssm-agent[2026]: 2024/12/13 01:29:13 processing appconfig overrides Dec 13 01:29:13.669665 coreos-metadata[2083]: Dec 13 01:29:13.669 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:29:13.672794 coreos-metadata[2083]: Dec 13 01:29:13.672 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:29:13.675251 coreos-metadata[2083]: Dec 13 01:29:13.674 INFO Fetch successful Dec 13 01:29:13.675251 coreos-metadata[2083]: Dec 13 01:29:13.675 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:29:13.687612 polkitd[2105]: Started polkitd version 121 Dec 13 01:29:13.694352 polkitd[2105]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:29:13.697884 coreos-metadata[2083]: Dec 13 01:29:13.696 INFO Fetch successful Dec 13 01:29:13.700226 polkitd[2105]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:29:13.711700 unknown[2083]: wrote ssh authorized keys file for user: core Dec 13 01:29:13.718236 polkitd[2105]: Finished loading, compiling and executing 2 rules Dec 13 01:29:13.755379 amazon-ssm-agent[2026]: 2024-12-13 01:29:13 INFO https_proxy: Dec 13 01:29:13.751868 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:29:13.751664 dbus-daemon[1940]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:29:13.753626 polkitd[2105]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:29:13.813744 update-ssh-keys[2136]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:13.813594 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:29:13.829938 systemd[1]: Finished sshkeys.service. Dec 13 01:29:13.852945 amazon-ssm-agent[2026]: 2024-12-13 01:29:13 INFO http_proxy: Dec 13 01:29:13.876153 systemd-hostnamed[1989]: Hostname set to (transient) Dec 13 01:29:13.878507 systemd-resolved[1768]: System hostname changed to 'ip-172-31-30-29'. Dec 13 01:29:13.950230 amazon-ssm-agent[2026]: 2024-12-13 01:29:13 INFO no_proxy: Dec 13 01:29:14.000350 locksmithd[1987]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:29:14.056498 amazon-ssm-agent[2026]: 2024-12-13 01:29:13 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:29:14.157300 amazon-ssm-agent[2026]: 2024-12-13 01:29:13 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:29:14.256629 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO Agent will take identity from EC2 Dec 13 01:29:14.266403 sshd_keygen[1978]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:29:14.315955 containerd[1966]: time="2024-12-13T01:29:14.315228892Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:29:14.326625 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:29:14.335044 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:29:14.344976 systemd[1]: Started sshd@0-172.31.30.29:22-139.178.68.195:38162.service - OpenSSH per-connection server daemon (139.178.68.195:38162). Dec 13 01:29:14.355757 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:29:14.384243 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:29:14.384492 systemd[1]: Finished issuegen.service - Generate /run/issue. 
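The coreos-metadata-sshkeys unit above fetches public-keys/0/openssh-key and installs it for the core user (the "wrote ssh authorized keys file for user: core" and update-ssh-keys entries). A hedged sketch of that end result, reusing the imds_token()/imds_get() helpers from the earlier sketch; this is not the actual update-ssh-keys implementation:

    # Install the EC2-provided public key for the "core" user (illustrative only).
    import os
    from pathlib import Path

    def install_ec2_key(user_home: str = "/home/core") -> None:
        token = imds_token()
        key = imds_get("/2021-01-03/meta-data/public-keys/0/openssh-key", token).strip()
        ssh_dir = Path(user_home) / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        auth_keys = ssh_dir / "authorized_keys"
        existing = auth_keys.read_text() if auth_keys.exists() else ""
        if key not in existing:
            with auth_keys.open("a") as f:
                f.write(key + "\n")          # append rather than clobber
        os.chmod(auth_keys, 0o600)           # sshd expects restrictive permissions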
Dec 13 01:29:14.396084 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:29:14.399523 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:29:14.404595 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:29:14.405730 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:29:14.406872 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Dec 13 01:29:14.406872 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:29:14.406872 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:29:14.406872 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [Registrar] Starting registrar module Dec 13 01:29:14.409279 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:29:14.409279 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:29:14.409279 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:29:14.409279 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:29:14.409279 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:29:14.450708 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:29:14.455161 amazon-ssm-agent[2026]: 2024-12-13 01:29:14 INFO [CredentialRefresher] Next credential rotation will be in 30.691463797966666 minutes Dec 13 01:29:14.464699 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:29:14.473184 containerd[1966]: time="2024-12-13T01:29:14.473115452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:14.475221 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.475888551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.475928683Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.475951956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.476127497Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.476145579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.476209605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.476223805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.477800456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.477828674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.477848507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:14.480804 containerd[1966]: time="2024-12-13T01:29:14.477919851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:14.476795 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:29:14.481267 containerd[1966]: time="2024-12-13T01:29:14.478117690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:14.481267 containerd[1966]: time="2024-12-13T01:29:14.478440960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:14.481267 containerd[1966]: time="2024-12-13T01:29:14.478837551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:14.481267 containerd[1966]: time="2024-12-13T01:29:14.478870244Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:29:14.481267 containerd[1966]: time="2024-12-13T01:29:14.478985878Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:29:14.481267 containerd[1966]: time="2024-12-13T01:29:14.479390312Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:29:14.489781 containerd[1966]: time="2024-12-13T01:29:14.489734756Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.490734688Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.490779576Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.490804509Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.490826124Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491016212Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491349902Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491482273Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491504050Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491522930Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491542977Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491576527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491603723Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491631107Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:29:14.492613 containerd[1966]: time="2024-12-13T01:29:14.491652227Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491676683Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491699722Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491717128Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491751277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491776084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491795072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491816497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491833496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491852701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491869369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491891587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491941186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491968674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493339 containerd[1966]: time="2024-12-13T01:29:14.491985383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492002914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492020265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492048154Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492086705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492103822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492122530Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492170191Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492193207Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492208533Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492226922Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492241027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492256826Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492276422Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:29:14.493810 containerd[1966]: time="2024-12-13T01:29:14.492295505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:14.494410 containerd[1966]: time="2024-12-13T01:29:14.493652807Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:29:14.494410 containerd[1966]: time="2024-12-13T01:29:14.493742784Z" level=info msg="Connect containerd service" Dec 13 01:29:14.494410 containerd[1966]: time="2024-12-13T01:29:14.493807036Z" level=info msg="using legacy CRI server" Dec 13 01:29:14.494410 containerd[1966]: time="2024-12-13T01:29:14.493817402Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:29:14.496395 containerd[1966]: time="2024-12-13T01:29:14.495140416Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:29:14.499411 containerd[1966]: time="2024-12-13T01:29:14.498397956Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:14.500008 
containerd[1966]: time="2024-12-13T01:29:14.499465753Z" level=info msg="Start subscribing containerd event" Dec 13 01:29:14.501497 containerd[1966]: time="2024-12-13T01:29:14.500021935Z" level=info msg="Start recovering state" Dec 13 01:29:14.501497 containerd[1966]: time="2024-12-13T01:29:14.499976278Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:29:14.501497 containerd[1966]: time="2024-12-13T01:29:14.500134007Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:29:14.501497 containerd[1966]: time="2024-12-13T01:29:14.501071078Z" level=info msg="Start event monitor" Dec 13 01:29:14.501497 containerd[1966]: time="2024-12-13T01:29:14.501098966Z" level=info msg="Start snapshots syncer" Dec 13 01:29:14.501497 containerd[1966]: time="2024-12-13T01:29:14.501113125Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:29:14.501497 containerd[1966]: time="2024-12-13T01:29:14.501124321Z" level=info msg="Start streaming server" Dec 13 01:29:14.501497 containerd[1966]: time="2024-12-13T01:29:14.501233320Z" level=info msg="containerd successfully booted in 0.189597s" Dec 13 01:29:14.501327 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:29:14.603522 sshd[2172]: Accepted publickey for core from 139.178.68.195 port 38162 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:14.607315 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:14.632836 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:29:14.642974 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:29:14.654504 systemd-logind[1950]: New session 1 of user core. Dec 13 01:29:14.677934 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:29:14.692027 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:29:14.710027 (systemd)[2185]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:29:14.896920 tar[1959]: linux-amd64/LICENSE Dec 13 01:29:14.897383 tar[1959]: linux-amd64/README.md Dec 13 01:29:14.914729 systemd[2185]: Queued start job for default target default.target. Dec 13 01:29:14.923077 systemd[2185]: Created slice app.slice - User Application Slice. Dec 13 01:29:14.923115 systemd[2185]: Reached target paths.target - Paths. Dec 13 01:29:14.923136 systemd[2185]: Reached target timers.target - Timers. Dec 13 01:29:14.926543 systemd[2185]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:29:14.928814 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:29:14.949040 systemd[2185]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:29:14.949201 systemd[2185]: Reached target sockets.target - Sockets. Dec 13 01:29:14.949224 systemd[2185]: Reached target basic.target - Basic System. Dec 13 01:29:14.949284 systemd[2185]: Reached target default.target - Main User Target. Dec 13 01:29:14.949325 systemd[2185]: Startup finished in 229ms. Dec 13 01:29:14.949467 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:29:14.954769 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:29:15.110860 systemd[1]: Started sshd@1-172.31.30.29:22-139.178.68.195:37566.service - OpenSSH per-connection server daemon (139.178.68.195:37566). 
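The CRI plugin configuration dumped above shows the runc runtime carrying Options:map[SystemdCgroup:true], i.e. containerd drives cgroups through systemd. A small sketch that checks this setting, assuming the conventional /etc/containerd/config.toml location and the io.containerd.grpc.v1.cri plugin table layout (the path and table names are assumptions, not taken from this log):

    # Check whether containerd's runc runtime uses the systemd cgroup driver.
    # Assumes the stock config.toml location and CRI plugin table layout.
    import tomllib

    with open("/etc/containerd/config.toml", "rb") as f:
        cfg = tomllib.load(f)

    runc_options = (
        cfg.get("plugins", {})
           .get("io.containerd.grpc.v1.cri", {})
           .get("containerd", {})
           .get("runtimes", {})
           .get("runc", {})
           .get("options", {})
    )
    print("SystemdCgroup:", runc_options.get("SystemdCgroup", False))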
Dec 13 01:29:15.268802 sshd[2199]: Accepted publickey for core from 139.178.68.195 port 37566 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:15.270309 sshd[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:15.276740 systemd-logind[1950]: New session 2 of user core. Dec 13 01:29:15.282782 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:29:15.348109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:15.350150 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:29:15.421962 amazon-ssm-agent[2026]: 2024-12-13 01:29:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:29:15.482840 systemd[1]: Startup finished in 726ms (kernel) + 9.242s (initrd) + 7.514s (userspace) = 17.484s. Dec 13 01:29:15.490046 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:15.508831 sshd[2199]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:15.517459 systemd[1]: sshd@1-172.31.30.29:22-139.178.68.195:37566.service: Deactivated successfully. Dec 13 01:29:15.518769 systemd-logind[1950]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:29:15.520508 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:29:15.522724 amazon-ssm-agent[2026]: 2024-12-13 01:29:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2210) started Dec 13 01:29:15.525367 systemd-logind[1950]: Removed session 2. Dec 13 01:29:15.547336 systemd[1]: Started sshd@2-172.31.30.29:22-139.178.68.195:37568.service - OpenSSH per-connection server daemon (139.178.68.195:37568). Dec 13 01:29:15.624859 amazon-ssm-agent[2026]: 2024-12-13 01:29:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:29:15.727272 sshd[2224]: Accepted publickey for core from 139.178.68.195 port 37568 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:15.730266 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:15.737237 systemd-logind[1950]: New session 3 of user core. Dec 13 01:29:15.743782 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:29:15.864707 sshd[2224]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:15.871352 systemd[1]: sshd@2-172.31.30.29:22-139.178.68.195:37568.service: Deactivated successfully. Dec 13 01:29:15.874440 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:29:15.876757 systemd-logind[1950]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:29:15.878945 systemd-logind[1950]: Removed session 3. Dec 13 01:29:15.902089 systemd[1]: Started sshd@3-172.31.30.29:22-139.178.68.195:37584.service - OpenSSH per-connection server daemon (139.178.68.195:37584). 
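The "Startup finished" entry above splits the 17.484s boot into kernel, initrd, and userspace phases; the printed components sum to the total up to rounding of the per-phase values. A quick check:

    # Boot-time breakdown from the "Startup finished" line above (seconds, as printed).
    kernel, initrd, userspace = 0.726, 9.242, 7.514
    print(f"{kernel + initrd + userspace:.3f}s")  # 17.482s; the reported 17.484s
                                                  # differs only by rounding of the
                                                  # per-phase values systemd prints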
Dec 13 01:29:15.939052 ntpd[1944]: Listen normally on 6 eth0 [fe80::4f6:2eff:fe33:f1b5%2]:123 Dec 13 01:29:15.939394 ntpd[1944]: 13 Dec 01:29:15 ntpd[1944]: Listen normally on 6 eth0 [fe80::4f6:2eff:fe33:f1b5%2]:123 Dec 13 01:29:16.073829 sshd[2239]: Accepted publickey for core from 139.178.68.195 port 37584 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:16.077305 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:16.099821 systemd-logind[1950]: New session 4 of user core. Dec 13 01:29:16.105777 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:29:16.236833 sshd[2239]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:16.242455 systemd-logind[1950]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:29:16.243279 systemd[1]: sshd@3-172.31.30.29:22-139.178.68.195:37584.service: Deactivated successfully. Dec 13 01:29:16.247376 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:29:16.250717 systemd-logind[1950]: Removed session 4. Dec 13 01:29:16.275030 systemd[1]: Started sshd@4-172.31.30.29:22-139.178.68.195:37586.service - OpenSSH per-connection server daemon (139.178.68.195:37586). Dec 13 01:29:16.320915 kubelet[2207]: E1213 01:29:16.320878 2207 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:16.324067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:16.324332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:16.324937 systemd[1]: kubelet.service: Consumed 1.007s CPU time. Dec 13 01:29:16.440064 sshd[2246]: Accepted publickey for core from 139.178.68.195 port 37586 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:16.443081 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:16.449928 systemd-logind[1950]: New session 5 of user core. Dec 13 01:29:16.461819 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:29:16.617601 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:29:16.618092 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:17.341897 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:29:17.342061 (dockerd)[2267]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:29:18.112843 dockerd[2267]: time="2024-12-13T01:29:18.112725187Z" level=info msg="Starting up" Dec 13 01:29:19.428074 dockerd[2267]: time="2024-12-13T01:29:19.427792891Z" level=info msg="Loading containers: start." Dec 13 01:29:19.676589 kernel: Initializing XFRM netlink socket Dec 13 01:29:19.734499 (udev-worker)[2289]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:29:19.808341 systemd-networkd[1814]: docker0: Link UP Dec 13 01:29:19.833516 dockerd[2267]: time="2024-12-13T01:29:19.833469199Z" level=info msg="Loading containers: done." 
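The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected at this stage of boot: kubelet.service starts before anything has written the node's kubelet configuration (normally produced by kubeadm init or kubeadm join), so the process exits with status 1 and systemd schedules the restarts seen further down. A minimal sketch of the failing precondition, as a hypothetical wrapper rather than kubelet code:

    # Hypothetical precondition check mirroring the kubelet failure above.
    import sys
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def main() -> int:
        if not KUBELET_CONFIG.is_file():
            print(f"kubelet config missing: {KUBELET_CONFIG} "
                  "(normally written by 'kubeadm init' or 'kubeadm join')",
                  file=sys.stderr)
            return 1   # non-zero exit -> systemd logs status=1/FAILURE and restarts
        return 0

    if __name__ == "__main__":
        sys.exit(main())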
Dec 13 01:29:19.873033 dockerd[2267]: time="2024-12-13T01:29:19.872956873Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:29:19.873440 dockerd[2267]: time="2024-12-13T01:29:19.873139972Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:29:19.873495 dockerd[2267]: time="2024-12-13T01:29:19.873457075Z" level=info msg="Daemon has completed initialization" Dec 13 01:29:19.932595 dockerd[2267]: time="2024-12-13T01:29:19.932025393Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:29:19.932386 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:29:21.354521 systemd-resolved[1768]: Clock change detected. Flushing caches. Dec 13 01:29:22.831822 containerd[1966]: time="2024-12-13T01:29:22.831451175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:29:23.589389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275644201.mount: Deactivated successfully. Dec 13 01:29:26.429027 containerd[1966]: time="2024-12-13T01:29:26.428929884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:26.431320 containerd[1966]: time="2024-12-13T01:29:26.431207557Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 01:29:26.439322 containerd[1966]: time="2024-12-13T01:29:26.438937949Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:26.451454 containerd[1966]: time="2024-12-13T01:29:26.451321136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:26.458209 containerd[1966]: time="2024-12-13T01:29:26.458150642Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.62665187s" Dec 13 01:29:26.458209 containerd[1966]: time="2024-12-13T01:29:26.458214206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:29:26.502713 containerd[1966]: time="2024-12-13T01:29:26.502481702Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:29:27.989264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:27.994658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:28.811531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
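The kube-apiserver pull above reports 32672442 bytes in 3.62665187s, roughly 9 MB/s, which is a handy baseline when reading the slower pulls that follow:

    # Effective pull rate for the kube-apiserver image pull logged above.
    size_bytes = 32_672_442        # size reported by the "Pulled image" entry
    seconds = 3.62665187
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")   # ~9.0 MB/s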
Dec 13 01:29:28.818563 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:28.938207 kubelet[2482]: E1213 01:29:28.938133 2482 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:28.945022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:28.945198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:29.926912 containerd[1966]: time="2024-12-13T01:29:29.926800052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:29.947150 containerd[1966]: time="2024-12-13T01:29:29.947079805Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 01:29:29.979645 containerd[1966]: time="2024-12-13T01:29:29.979556285Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:30.004118 containerd[1966]: time="2024-12-13T01:29:30.002455807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:30.007021 containerd[1966]: time="2024-12-13T01:29:30.006963769Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 3.504379449s" Dec 13 01:29:30.007021 containerd[1966]: time="2024-12-13T01:29:30.007022943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:29:30.044746 containerd[1966]: time="2024-12-13T01:29:30.044706544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:29:32.761072 containerd[1966]: time="2024-12-13T01:29:32.760937755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:32.802848 containerd[1966]: time="2024-12-13T01:29:32.802751838Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 01:29:32.829501 containerd[1966]: time="2024-12-13T01:29:32.829422230Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:32.873046 containerd[1966]: time="2024-12-13T01:29:32.872982788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Dec 13 01:29:32.878471 containerd[1966]: time="2024-12-13T01:29:32.878273599Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 2.83352201s" Dec 13 01:29:32.878471 containerd[1966]: time="2024-12-13T01:29:32.878324000Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:29:32.959042 containerd[1966]: time="2024-12-13T01:29:32.958999140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:29:34.661467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380106615.mount: Deactivated successfully. Dec 13 01:29:35.490919 containerd[1966]: time="2024-12-13T01:29:35.490739425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:35.512021 containerd[1966]: time="2024-12-13T01:29:35.511942873Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:29:35.538100 containerd[1966]: time="2024-12-13T01:29:35.537841228Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:35.565854 containerd[1966]: time="2024-12-13T01:29:35.565778828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:35.567128 containerd[1966]: time="2024-12-13T01:29:35.566658482Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.607608454s" Dec 13 01:29:35.567128 containerd[1966]: time="2024-12-13T01:29:35.566703267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:29:35.599883 containerd[1966]: time="2024-12-13T01:29:35.599845364Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:29:36.286850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177295587.mount: Deactivated successfully. 
Dec 13 01:29:37.716027 containerd[1966]: time="2024-12-13T01:29:37.715972716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:37.718324 containerd[1966]: time="2024-12-13T01:29:37.718241129Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:29:37.720454 containerd[1966]: time="2024-12-13T01:29:37.720388707Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:37.726078 containerd[1966]: time="2024-12-13T01:29:37.725993155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:37.728411 containerd[1966]: time="2024-12-13T01:29:37.727306951Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.127420959s" Dec 13 01:29:37.728411 containerd[1966]: time="2024-12-13T01:29:37.727353748Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:29:37.754390 containerd[1966]: time="2024-12-13T01:29:37.754345507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:29:38.370154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754574863.mount: Deactivated successfully. 
Dec 13 01:29:38.379022 containerd[1966]: time="2024-12-13T01:29:38.378975519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:38.388358 containerd[1966]: time="2024-12-13T01:29:38.388281039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:29:38.391375 containerd[1966]: time="2024-12-13T01:29:38.391302533Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:38.401718 containerd[1966]: time="2024-12-13T01:29:38.401643595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:38.403017 containerd[1966]: time="2024-12-13T01:29:38.402564502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 648.180636ms" Dec 13 01:29:38.403017 containerd[1966]: time="2024-12-13T01:29:38.402606273Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:29:38.434603 containerd[1966]: time="2024-12-13T01:29:38.434563147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:29:38.995084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:29:39.002341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:39.143635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247801170.mount: Deactivated successfully. Dec 13 01:29:39.496325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:39.503548 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:39.570361 kubelet[2589]: E1213 01:29:39.569843 2589 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:39.574284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:39.574515 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:29:43.227631 containerd[1966]: time="2024-12-13T01:29:43.227566812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:43.251617 containerd[1966]: time="2024-12-13T01:29:43.251536629Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 01:29:43.275845 containerd[1966]: time="2024-12-13T01:29:43.275755885Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:43.314956 containerd[1966]: time="2024-12-13T01:29:43.314873966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:43.317485 containerd[1966]: time="2024-12-13T01:29:43.317331937Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.882728048s" Dec 13 01:29:43.317485 containerd[1966]: time="2024-12-13T01:29:43.317481720Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:29:45.324346 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:29:46.889480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:46.895459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:46.920453 systemd[1]: Reloading requested from client PID 2700 ('systemctl') (unit session-5.scope)... Dec 13 01:29:46.920604 systemd[1]: Reloading... Dec 13 01:29:47.032075 zram_generator::config[2740]: No configuration found. Dec 13 01:29:47.165071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:47.254253 systemd[1]: Reloading finished in 332 ms. Dec 13 01:29:47.321755 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:29:47.321875 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:29:47.322209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:47.330694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:47.911402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:47.923552 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:47.978114 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:47.978114 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:29:47.978114 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:47.978575 kubelet[2796]: I1213 01:29:47.978223 2796 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:48.479790 kubelet[2796]: I1213 01:29:48.479736 2796 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:29:48.479790 kubelet[2796]: I1213 01:29:48.479780 2796 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:48.480336 kubelet[2796]: I1213 01:29:48.480309 2796 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:29:48.515752 kubelet[2796]: I1213 01:29:48.515714 2796 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:48.519269 kubelet[2796]: E1213 01:29:48.518630 2796 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.542233 kubelet[2796]: I1213 01:29:48.542196 2796 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:29:48.545880 kubelet[2796]: I1213 01:29:48.545816 2796 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:48.546150 kubelet[2796]: I1213 01:29:48.545877 2796 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-29","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:29:48.546272 kubelet[2796]: I1213 01:29:48.546162 2796 topology_manager.go:138] "Creating topology 
manager with none policy" Dec 13 01:29:48.546272 kubelet[2796]: I1213 01:29:48.546178 2796 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:29:48.548706 kubelet[2796]: I1213 01:29:48.548665 2796 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:48.553195 kubelet[2796]: W1213 01:29:48.552471 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-29&limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.553195 kubelet[2796]: E1213 01:29:48.552798 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-29&limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.553836 kubelet[2796]: I1213 01:29:48.553808 2796 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:29:48.553903 kubelet[2796]: I1213 01:29:48.553840 2796 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:48.553903 kubelet[2796]: I1213 01:29:48.553880 2796 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:29:48.553903 kubelet[2796]: I1213 01:29:48.553900 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:48.569806 kubelet[2796]: W1213 01:29:48.569604 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.569806 kubelet[2796]: E1213 01:29:48.569704 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.577336 kubelet[2796]: I1213 01:29:48.577299 2796 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:48.590686 kubelet[2796]: I1213 01:29:48.590648 2796 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:48.591196 kubelet[2796]: W1213 01:29:48.591002 2796 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:29:48.593118 kubelet[2796]: I1213 01:29:48.592304 2796 server.go:1264] "Started kubelet" Dec 13 01:29:48.595676 kubelet[2796]: I1213 01:29:48.594316 2796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:48.604751 kubelet[2796]: I1213 01:29:48.604454 2796 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:29:48.609218 kubelet[2796]: I1213 01:29:48.609064 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:29:48.610286 kubelet[2796]: I1213 01:29:48.610264 2796 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:29:48.611074 kubelet[2796]: E1213 01:29:48.610838 2796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.29:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-29.1810985e989adc51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-29,UID:ip-172-31-30-29,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-29,},FirstTimestamp:2024-12-13 01:29:48.592274513 +0000 UTC m=+0.664061176,LastTimestamp:2024-12-13 01:29:48.592274513 +0000 UTC m=+0.664061176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-29,}" Dec 13 01:29:48.617348 kubelet[2796]: I1213 01:29:48.617137 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:48.622357 kubelet[2796]: E1213 01:29:48.622099 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-29\" not found" Dec 13 01:29:48.622357 kubelet[2796]: I1213 01:29:48.622171 2796 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:29:48.628657 kubelet[2796]: I1213 01:29:48.628299 2796 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:29:48.628657 kubelet[2796]: I1213 01:29:48.628409 2796 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:29:48.632252 kubelet[2796]: W1213 01:29:48.632094 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.635090 kubelet[2796]: E1213 01:29:48.634994 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.635090 kubelet[2796]: I1213 01:29:48.633081 2796 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:48.635256 kubelet[2796]: I1213 01:29:48.635176 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:48.635524 kubelet[2796]: E1213 01:29:48.632457 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.30.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-29?timeout=10s\": dial tcp 172.31.30.29:6443: connect: connection refused" interval="200ms" Dec 13 01:29:48.636445 kubelet[2796]: E1213 01:29:48.636226 2796 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:29:48.637294 kubelet[2796]: I1213 01:29:48.637274 2796 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:48.660883 kubelet[2796]: I1213 01:29:48.660823 2796 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:48.660883 kubelet[2796]: I1213 01:29:48.660841 2796 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:48.661159 kubelet[2796]: I1213 01:29:48.660894 2796 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:48.664578 kubelet[2796]: I1213 01:29:48.664542 2796 policy_none.go:49] "None policy: Start" Dec 13 01:29:48.665908 kubelet[2796]: I1213 01:29:48.665712 2796 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:48.665908 kubelet[2796]: I1213 01:29:48.665842 2796 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:48.677213 kubelet[2796]: I1213 01:29:48.677003 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:48.680174 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:29:48.680961 kubelet[2796]: I1213 01:29:48.680404 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:29:48.680961 kubelet[2796]: I1213 01:29:48.680427 2796 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:48.680961 kubelet[2796]: I1213 01:29:48.680448 2796 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:29:48.680961 kubelet[2796]: E1213 01:29:48.680498 2796 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:48.682144 kubelet[2796]: W1213 01:29:48.682112 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.682860 kubelet[2796]: E1213 01:29:48.682772 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:48.694136 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:29:48.698768 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:29:48.710475 kubelet[2796]: I1213 01:29:48.710443 2796 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:48.710909 kubelet[2796]: I1213 01:29:48.710678 2796 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:29:48.710984 kubelet[2796]: I1213 01:29:48.710955 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:48.713521 kubelet[2796]: E1213 01:29:48.713343 2796 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-29\" not found" Dec 13 01:29:48.724913 kubelet[2796]: I1213 01:29:48.724830 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-29" Dec 13 01:29:48.725361 kubelet[2796]: E1213 01:29:48.725329 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.29:6443/api/v1/nodes\": dial tcp 172.31.30.29:6443: connect: connection refused" node="ip-172-31-30-29" Dec 13 01:29:48.780933 kubelet[2796]: I1213 01:29:48.780839 2796 topology_manager.go:215] "Topology Admit Handler" podUID="13c313a84007b143e9b2d4c435aac01a" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:48.783807 kubelet[2796]: I1213 01:29:48.783567 2796 topology_manager.go:215] "Topology Admit Handler" podUID="00ba691aae577bbc1813777d4bcf69b3" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-29" Dec 13 01:29:48.785978 kubelet[2796]: I1213 01:29:48.785639 2796 topology_manager.go:215] "Topology Admit Handler" podUID="9b9d239738c5d4f489e07edc3d6f6635" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-29" Dec 13 01:29:48.795551 systemd[1]: Created slice kubepods-burstable-pod13c313a84007b143e9b2d4c435aac01a.slice - libcontainer container kubepods-burstable-pod13c313a84007b143e9b2d4c435aac01a.slice. Dec 13 01:29:48.824304 systemd[1]: Created slice kubepods-burstable-pod00ba691aae577bbc1813777d4bcf69b3.slice - libcontainer container kubepods-burstable-pod00ba691aae577bbc1813777d4bcf69b3.slice. Dec 13 01:29:48.832238 systemd[1]: Created slice kubepods-burstable-pod9b9d239738c5d4f489e07edc3d6f6635.slice - libcontainer container kubepods-burstable-pod9b9d239738c5d4f489e07edc3d6f6635.slice. 
Dec 13 01:29:48.837834 kubelet[2796]: E1213 01:29:48.837692 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-29?timeout=10s\": dial tcp 172.31.30.29:6443: connect: connection refused" interval="400ms" Dec 13 01:29:48.929034 kubelet[2796]: I1213 01:29:48.928362 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-29" Dec 13 01:29:48.929034 kubelet[2796]: E1213 01:29:48.928662 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.29:6443/api/v1/nodes\": dial tcp 172.31.30.29:6443: connect: connection refused" node="ip-172-31-30-29" Dec 13 01:29:48.929034 kubelet[2796]: I1213 01:29:48.928765 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:48.929034 kubelet[2796]: I1213 01:29:48.928788 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:48.929034 kubelet[2796]: I1213 01:29:48.928807 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:48.929034 kubelet[2796]: I1213 01:29:48.928829 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00ba691aae577bbc1813777d4bcf69b3-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-29\" (UID: \"00ba691aae577bbc1813777d4bcf69b3\") " pod="kube-system/kube-scheduler-ip-172-31-30-29" Dec 13 01:29:48.929334 kubelet[2796]: I1213 01:29:48.928906 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b9d239738c5d4f489e07edc3d6f6635-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-29\" (UID: \"9b9d239738c5d4f489e07edc3d6f6635\") " pod="kube-system/kube-apiserver-ip-172-31-30-29" Dec 13 01:29:48.929334 kubelet[2796]: I1213 01:29:48.928935 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:48.929334 kubelet[2796]: I1213 01:29:48.928966 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: 
\"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:48.929334 kubelet[2796]: I1213 01:29:48.928991 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b9d239738c5d4f489e07edc3d6f6635-ca-certs\") pod \"kube-apiserver-ip-172-31-30-29\" (UID: \"9b9d239738c5d4f489e07edc3d6f6635\") " pod="kube-system/kube-apiserver-ip-172-31-30-29" Dec 13 01:29:48.929334 kubelet[2796]: I1213 01:29:48.929033 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b9d239738c5d4f489e07edc3d6f6635-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-29\" (UID: \"9b9d239738c5d4f489e07edc3d6f6635\") " pod="kube-system/kube-apiserver-ip-172-31-30-29" Dec 13 01:29:49.119561 containerd[1966]: time="2024-12-13T01:29:49.119439043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-29,Uid:13c313a84007b143e9b2d4c435aac01a,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:49.129662 containerd[1966]: time="2024-12-13T01:29:49.129611504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-29,Uid:00ba691aae577bbc1813777d4bcf69b3,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:49.144830 containerd[1966]: time="2024-12-13T01:29:49.144779309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-29,Uid:9b9d239738c5d4f489e07edc3d6f6635,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:49.238821 kubelet[2796]: E1213 01:29:49.238771 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-29?timeout=10s\": dial tcp 172.31.30.29:6443: connect: connection refused" interval="800ms" Dec 13 01:29:49.331141 kubelet[2796]: I1213 01:29:49.331106 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-29" Dec 13 01:29:49.331592 kubelet[2796]: E1213 01:29:49.331552 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.29:6443/api/v1/nodes\": dial tcp 172.31.30.29:6443: connect: connection refused" node="ip-172-31-30-29" Dec 13 01:29:49.394564 kubelet[2796]: W1213 01:29:49.394224 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:49.394564 kubelet[2796]: E1213 01:29:49.394324 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:49.684985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427982107.mount: Deactivated successfully. 
Dec 13 01:29:49.712685 containerd[1966]: time="2024-12-13T01:29:49.712628185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.714424 containerd[1966]: time="2024-12-13T01:29:49.714369971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:29:49.716726 containerd[1966]: time="2024-12-13T01:29:49.716685709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.718205 containerd[1966]: time="2024-12-13T01:29:49.718118795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.719812 containerd[1966]: time="2024-12-13T01:29:49.719755623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:29:49.723940 containerd[1966]: time="2024-12-13T01:29:49.723871078Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.725018 containerd[1966]: time="2024-12-13T01:29:49.724860367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:29:49.730569 containerd[1966]: time="2024-12-13T01:29:49.730418733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.732137 containerd[1966]: time="2024-12-13T01:29:49.732097726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.560591ms" Dec 13 01:29:49.733561 containerd[1966]: time="2024-12-13T01:29:49.733528705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.819209ms" Dec 13 01:29:49.738751 containerd[1966]: time="2024-12-13T01:29:49.738707778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 593.808964ms" Dec 13 01:29:49.745509 kubelet[2796]: W1213 01:29:49.745439 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-29&limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 
01:29:49.745509 kubelet[2796]: E1213 01:29:49.745505 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-29&limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:49.946468 kubelet[2796]: W1213 01:29:49.946334 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:49.951683 kubelet[2796]: E1213 01:29:49.950182 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:50.039395 kubelet[2796]: E1213 01:29:50.039336 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-29?timeout=10s\": dial tcp 172.31.30.29:6443: connect: connection refused" interval="1.6s" Dec 13 01:29:50.121541 containerd[1966]: time="2024-12-13T01:29:50.120296970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:50.121541 containerd[1966]: time="2024-12-13T01:29:50.120552059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:50.121541 containerd[1966]: time="2024-12-13T01:29:50.120577343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:50.121541 containerd[1966]: time="2024-12-13T01:29:50.120700336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:50.134186 containerd[1966]: time="2024-12-13T01:29:50.133550681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:50.134186 containerd[1966]: time="2024-12-13T01:29:50.133719177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:50.134186 containerd[1966]: time="2024-12-13T01:29:50.133768911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:50.135337 containerd[1966]: time="2024-12-13T01:29:50.134581246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:50.135902 kubelet[2796]: I1213 01:29:50.135790 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-29" Dec 13 01:29:50.137620 kubelet[2796]: E1213 01:29:50.137537 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.29:6443/api/v1/nodes\": dial tcp 172.31.30.29:6443: connect: connection refused" node="ip-172-31-30-29" Dec 13 01:29:50.139703 containerd[1966]: time="2024-12-13T01:29:50.139597155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:50.139852 containerd[1966]: time="2024-12-13T01:29:50.139809566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:50.139932 containerd[1966]: time="2024-12-13T01:29:50.139871330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:50.143259 containerd[1966]: time="2024-12-13T01:29:50.142873757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:50.187330 systemd[1]: Started cri-containerd-f9437c8ed97ae827e87e37e41b7b2d34f940932dd13b173add6cbd8eec9eda2d.scope - libcontainer container f9437c8ed97ae827e87e37e41b7b2d34f940932dd13b173add6cbd8eec9eda2d. Dec 13 01:29:50.202325 systemd[1]: Started cri-containerd-b7dce99966b0044f9270368f7c0d7f6ce87f006a3454c8271414dff1c09ef4be.scope - libcontainer container b7dce99966b0044f9270368f7c0d7f6ce87f006a3454c8271414dff1c09ef4be. Dec 13 01:29:50.203300 kubelet[2796]: W1213 01:29:50.203199 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:50.203300 kubelet[2796]: E1213 01:29:50.203268 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:50.212204 systemd[1]: Started cri-containerd-7a676d8bd7e64944d4cf6e98fbfad9b030c2064fe68bdcf4fa60eb987bf41768.scope - libcontainer container 7a676d8bd7e64944d4cf6e98fbfad9b030c2064fe68bdcf4fa60eb987bf41768. 
Dec 13 01:29:50.298973 containerd[1966]: time="2024-12-13T01:29:50.297891685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-29,Uid:00ba691aae577bbc1813777d4bcf69b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7dce99966b0044f9270368f7c0d7f6ce87f006a3454c8271414dff1c09ef4be\"" Dec 13 01:29:50.311609 containerd[1966]: time="2024-12-13T01:29:50.311430822Z" level=info msg="CreateContainer within sandbox \"b7dce99966b0044f9270368f7c0d7f6ce87f006a3454c8271414dff1c09ef4be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:29:50.322821 containerd[1966]: time="2024-12-13T01:29:50.322380479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-29,Uid:13c313a84007b143e9b2d4c435aac01a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9437c8ed97ae827e87e37e41b7b2d34f940932dd13b173add6cbd8eec9eda2d\"" Dec 13 01:29:50.325679 containerd[1966]: time="2024-12-13T01:29:50.325557010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-29,Uid:9b9d239738c5d4f489e07edc3d6f6635,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a676d8bd7e64944d4cf6e98fbfad9b030c2064fe68bdcf4fa60eb987bf41768\"" Dec 13 01:29:50.330787 containerd[1966]: time="2024-12-13T01:29:50.330507985Z" level=info msg="CreateContainer within sandbox \"7a676d8bd7e64944d4cf6e98fbfad9b030c2064fe68bdcf4fa60eb987bf41768\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:29:50.331161 containerd[1966]: time="2024-12-13T01:29:50.331062232Z" level=info msg="CreateContainer within sandbox \"f9437c8ed97ae827e87e37e41b7b2d34f940932dd13b173add6cbd8eec9eda2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:29:50.366865 containerd[1966]: time="2024-12-13T01:29:50.366650672Z" level=info msg="CreateContainer within sandbox \"b7dce99966b0044f9270368f7c0d7f6ce87f006a3454c8271414dff1c09ef4be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8\"" Dec 13 01:29:50.367952 containerd[1966]: time="2024-12-13T01:29:50.367794123Z" level=info msg="StartContainer for \"6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8\"" Dec 13 01:29:50.391722 containerd[1966]: time="2024-12-13T01:29:50.390439488Z" level=info msg="CreateContainer within sandbox \"f9437c8ed97ae827e87e37e41b7b2d34f940932dd13b173add6cbd8eec9eda2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c\"" Dec 13 01:29:50.391722 containerd[1966]: time="2024-12-13T01:29:50.391307504Z" level=info msg="StartContainer for \"69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c\"" Dec 13 01:29:50.393908 containerd[1966]: time="2024-12-13T01:29:50.393582672Z" level=info msg="CreateContainer within sandbox \"7a676d8bd7e64944d4cf6e98fbfad9b030c2064fe68bdcf4fa60eb987bf41768\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5527fa2777d653fb4b2c0be5b3cd094e7cbaebfd38e8d3b9b6886dfbe7f2a7bf\"" Dec 13 01:29:50.394483 containerd[1966]: time="2024-12-13T01:29:50.394443989Z" level=info msg="StartContainer for \"5527fa2777d653fb4b2c0be5b3cd094e7cbaebfd38e8d3b9b6886dfbe7f2a7bf\"" Dec 13 01:29:50.420276 systemd[1]: Started cri-containerd-6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8.scope - libcontainer container 
6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8. Dec 13 01:29:50.457291 systemd[1]: Started cri-containerd-69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c.scope - libcontainer container 69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c. Dec 13 01:29:50.467366 systemd[1]: Started cri-containerd-5527fa2777d653fb4b2c0be5b3cd094e7cbaebfd38e8d3b9b6886dfbe7f2a7bf.scope - libcontainer container 5527fa2777d653fb4b2c0be5b3cd094e7cbaebfd38e8d3b9b6886dfbe7f2a7bf. Dec 13 01:29:50.569191 containerd[1966]: time="2024-12-13T01:29:50.568639660Z" level=info msg="StartContainer for \"69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c\" returns successfully" Dec 13 01:29:50.569191 containerd[1966]: time="2024-12-13T01:29:50.568728142Z" level=info msg="StartContainer for \"6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8\" returns successfully" Dec 13 01:29:50.587283 containerd[1966]: time="2024-12-13T01:29:50.587242210Z" level=info msg="StartContainer for \"5527fa2777d653fb4b2c0be5b3cd094e7cbaebfd38e8d3b9b6886dfbe7f2a7bf\" returns successfully" Dec 13 01:29:50.680072 kubelet[2796]: E1213 01:29:50.677320 2796 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.29:6443: connect: connection refused Dec 13 01:29:51.741610 kubelet[2796]: I1213 01:29:51.741450 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-29" Dec 13 01:29:53.307819 kubelet[2796]: E1213 01:29:53.307073 2796 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-29\" not found" node="ip-172-31-30-29" Dec 13 01:29:53.391619 kubelet[2796]: I1213 01:29:53.391562 2796 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-29" Dec 13 01:29:53.564634 kubelet[2796]: I1213 01:29:53.564397 2796 apiserver.go:52] "Watching apiserver" Dec 13 01:29:53.630533 kubelet[2796]: I1213 01:29:53.630449 2796 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:29:55.882021 systemd[1]: Reloading requested from client PID 3075 ('systemctl') (unit session-5.scope)... Dec 13 01:29:55.882394 systemd[1]: Reloading... Dec 13 01:29:56.001105 zram_generator::config[3112]: No configuration found. Dec 13 01:29:56.196911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:56.394405 systemd[1]: Reloading finished in 511 ms. Dec 13 01:29:56.459213 kubelet[2796]: I1213 01:29:56.458813 2796 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:56.459011 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:56.470251 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:29:56.470442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:56.470506 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 113.6M memory peak, 0B memory swap peak. Dec 13 01:29:56.477867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:29:56.949653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:56.965686 (kubelet)[3172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:57.063418 kubelet[3172]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:57.063418 kubelet[3172]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:29:57.063418 kubelet[3172]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:57.064037 kubelet[3172]: I1213 01:29:57.063490 3172 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:57.073504 kubelet[3172]: I1213 01:29:57.073427 3172 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:29:57.074097 kubelet[3172]: I1213 01:29:57.073641 3172 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:57.075817 kubelet[3172]: I1213 01:29:57.074965 3172 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:29:57.081724 kubelet[3172]: I1213 01:29:57.081670 3172 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:29:57.084132 kubelet[3172]: I1213 01:29:57.084099 3172 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:57.098947 kubelet[3172]: I1213 01:29:57.098911 3172 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:29:57.099807 kubelet[3172]: I1213 01:29:57.099191 3172 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:57.099807 kubelet[3172]: I1213 01:29:57.099224 3172 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-29","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:29:57.099807 kubelet[3172]: I1213 01:29:57.099467 3172 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:29:57.099807 kubelet[3172]: I1213 01:29:57.099482 3172 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:29:57.101010 kubelet[3172]: I1213 01:29:57.100981 3172 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:57.102771 kubelet[3172]: I1213 01:29:57.102143 3172 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:29:57.102771 kubelet[3172]: I1213 01:29:57.102167 3172 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:57.102771 kubelet[3172]: I1213 01:29:57.102196 3172 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:29:57.102771 kubelet[3172]: I1213 01:29:57.102221 3172 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:57.118977 kubelet[3172]: I1213 01:29:57.118662 3172 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:57.119471 kubelet[3172]: I1213 01:29:57.119366 3172 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:57.138080 kubelet[3172]: I1213 01:29:57.131635 3172 server.go:1264] "Started kubelet" Dec 13 01:29:57.138080 kubelet[3172]: I1213 01:29:57.136139 3172 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:29:57.138080 kubelet[3172]: I1213 01:29:57.136588 3172 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 
01:29:57.138080 kubelet[3172]: I1213 01:29:57.136635 3172 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:57.138080 kubelet[3172]: I1213 01:29:57.137954 3172 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:29:57.141002 kubelet[3172]: I1213 01:29:57.140965 3172 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:57.150869 kubelet[3172]: I1213 01:29:57.149835 3172 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:29:57.150869 kubelet[3172]: I1213 01:29:57.150302 3172 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:29:57.150869 kubelet[3172]: I1213 01:29:57.150481 3172 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:29:57.154941 kubelet[3172]: I1213 01:29:57.154914 3172 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:57.155911 kubelet[3172]: I1213 01:29:57.155492 3172 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:57.157346 kubelet[3172]: I1213 01:29:57.157326 3172 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:57.172446 kubelet[3172]: I1213 01:29:57.172361 3172 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:57.174444 kubelet[3172]: I1213 01:29:57.174409 3172 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:29:57.174566 kubelet[3172]: I1213 01:29:57.174455 3172 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:57.174566 kubelet[3172]: I1213 01:29:57.174476 3172 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:29:57.174566 kubelet[3172]: E1213 01:29:57.174522 3172 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:57.184123 kubelet[3172]: E1213 01:29:57.181742 3172 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:29:57.256952 kubelet[3172]: I1213 01:29:57.256920 3172 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-29" Dec 13 01:29:57.274711 kubelet[3172]: E1213 01:29:57.274662 3172 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:29:57.283079 kubelet[3172]: I1213 01:29:57.282209 3172 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-29" Dec 13 01:29:57.283079 kubelet[3172]: I1213 01:29:57.282327 3172 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-29" Dec 13 01:29:57.309401 kubelet[3172]: I1213 01:29:57.309359 3172 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:57.309888 kubelet[3172]: I1213 01:29:57.309870 3172 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:57.310016 kubelet[3172]: I1213 01:29:57.310004 3172 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:57.310783 kubelet[3172]: I1213 01:29:57.310765 3172 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:29:57.310877 kubelet[3172]: I1213 01:29:57.310858 3172 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:29:57.311125 kubelet[3172]: I1213 01:29:57.311105 3172 policy_none.go:49] "None policy: Start" Dec 13 01:29:57.313743 kubelet[3172]: I1213 01:29:57.313720 3172 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:57.313836 kubelet[3172]: I1213 01:29:57.313753 3172 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:57.314077 kubelet[3172]: I1213 01:29:57.313997 3172 state_mem.go:75] "Updated machine memory state" Dec 13 01:29:57.324646 kubelet[3172]: I1213 01:29:57.323043 3172 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:57.325348 kubelet[3172]: I1213 01:29:57.324936 3172 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:29:57.325348 kubelet[3172]: I1213 01:29:57.325092 3172 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:57.475241 kubelet[3172]: I1213 01:29:57.474807 3172 topology_manager.go:215] "Topology Admit Handler" podUID="9b9d239738c5d4f489e07edc3d6f6635" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-29" Dec 13 01:29:57.475241 kubelet[3172]: I1213 01:29:57.474974 3172 topology_manager.go:215] "Topology Admit Handler" podUID="13c313a84007b143e9b2d4c435aac01a" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:57.475241 kubelet[3172]: I1213 01:29:57.475075 3172 topology_manager.go:215] "Topology Admit Handler" podUID="00ba691aae577bbc1813777d4bcf69b3" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-29" Dec 13 01:29:57.554261 kubelet[3172]: I1213 01:29:57.553665 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b9d239738c5d4f489e07edc3d6f6635-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-29\" (UID: \"9b9d239738c5d4f489e07edc3d6f6635\") " pod="kube-system/kube-apiserver-ip-172-31-30-29" Dec 13 01:29:57.554261 kubelet[3172]: I1213 01:29:57.553737 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:57.554261 kubelet[3172]: I1213 01:29:57.553767 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:57.554261 kubelet[3172]: I1213 01:29:57.553812 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b9d239738c5d4f489e07edc3d6f6635-ca-certs\") pod \"kube-apiserver-ip-172-31-30-29\" (UID: \"9b9d239738c5d4f489e07edc3d6f6635\") " pod="kube-system/kube-apiserver-ip-172-31-30-29" Dec 13 01:29:57.554261 kubelet[3172]: I1213 01:29:57.553834 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b9d239738c5d4f489e07edc3d6f6635-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-29\" (UID: \"9b9d239738c5d4f489e07edc3d6f6635\") " pod="kube-system/kube-apiserver-ip-172-31-30-29" Dec 13 01:29:57.554829 kubelet[3172]: I1213 01:29:57.553856 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:57.554829 kubelet[3172]: I1213 01:29:57.553909 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:57.554829 kubelet[3172]: I1213 01:29:57.553935 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13c313a84007b143e9b2d4c435aac01a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-29\" (UID: \"13c313a84007b143e9b2d4c435aac01a\") " pod="kube-system/kube-controller-manager-ip-172-31-30-29" Dec 13 01:29:57.554829 kubelet[3172]: I1213 01:29:57.554305 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00ba691aae577bbc1813777d4bcf69b3-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-29\" (UID: \"00ba691aae577bbc1813777d4bcf69b3\") " pod="kube-system/kube-scheduler-ip-172-31-30-29" Dec 13 01:29:58.114194 kubelet[3172]: I1213 01:29:58.114152 3172 apiserver.go:52] "Watching apiserver" Dec 13 01:29:58.154073 kubelet[3172]: I1213 01:29:58.151203 3172 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:29:58.281453 kubelet[3172]: I1213 01:29:58.281251 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ip-172-31-30-29" podStartSLOduration=1.281228766 podStartE2EDuration="1.281228766s" podCreationTimestamp="2024-12-13 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:58.279852929 +0000 UTC m=+1.303506815" watchObservedRunningTime="2024-12-13 01:29:58.281228766 +0000 UTC m=+1.304882663" Dec 13 01:29:58.318533 kubelet[3172]: I1213 01:29:58.318319 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-29" podStartSLOduration=1.318296862 podStartE2EDuration="1.318296862s" podCreationTimestamp="2024-12-13 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:58.303512589 +0000 UTC m=+1.327166477" watchObservedRunningTime="2024-12-13 01:29:58.318296862 +0000 UTC m=+1.341950749" Dec 13 01:29:58.336689 kubelet[3172]: I1213 01:29:58.336132 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-29" podStartSLOduration=1.336110866 podStartE2EDuration="1.336110866s" podCreationTimestamp="2024-12-13 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:58.319205108 +0000 UTC m=+1.342858994" watchObservedRunningTime="2024-12-13 01:29:58.336110866 +0000 UTC m=+1.359764744" Dec 13 01:29:58.559001 sudo[2251]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:58.583970 sshd[2246]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:58.588432 systemd[1]: sshd@4-172.31.30.29:22-139.178.68.195:37586.service: Deactivated successfully. Dec 13 01:29:58.593555 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:29:58.593813 systemd[1]: session-5.scope: Consumed 4.732s CPU time, 189.3M memory peak, 0B memory swap peak. Dec 13 01:29:58.596877 systemd-logind[1950]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:29:58.598689 systemd-logind[1950]: Removed session 5. Dec 13 01:29:59.414287 update_engine[1953]: I20241213 01:29:59.414113 1953 update_attempter.cc:509] Updating boot flags... Dec 13 01:29:59.485155 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3241) Dec 13 01:30:10.606948 kubelet[3172]: I1213 01:30:10.606906 3172 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:30:10.607546 containerd[1966]: time="2024-12-13T01:30:10.607491347Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:30:10.608293 kubelet[3172]: I1213 01:30:10.607892 3172 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:30:11.645647 kubelet[3172]: I1213 01:30:11.645602 3172 topology_manager.go:215] "Topology Admit Handler" podUID="ee91e215-b036-401f-be12-0f70270cc433" podNamespace="kube-system" podName="kube-proxy-6m7qk" Dec 13 01:30:11.672007 systemd[1]: Created slice kubepods-besteffort-podee91e215_b036_401f_be12_0f70270cc433.slice - libcontainer container kubepods-besteffort-podee91e215_b036_401f_be12_0f70270cc433.slice. 
Dec 13 01:30:11.680205 kubelet[3172]: W1213 01:30:11.679137 3172 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-29" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-29' and this object Dec 13 01:30:11.681778 kubelet[3172]: E1213 01:30:11.681384 3172 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-29" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-29' and this object Dec 13 01:30:11.693080 kubelet[3172]: I1213 01:30:11.692462 3172 topology_manager.go:215] "Topology Admit Handler" podUID="08b6dc65-f5e1-4378-92f3-304ae4f17185" podNamespace="kube-flannel" podName="kube-flannel-ds-wj29q" Dec 13 01:30:11.702721 kubelet[3172]: W1213 01:30:11.702689 3172 reflector.go:547] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-30-29" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-30-29' and this object Dec 13 01:30:11.703014 kubelet[3172]: E1213 01:30:11.702997 3172 reflector.go:150] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-30-29" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-30-29' and this object Dec 13 01:30:11.703865 kubelet[3172]: W1213 01:30:11.703834 3172 reflector.go:547] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-29" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-30-29' and this object Dec 13 01:30:11.704015 kubelet[3172]: E1213 01:30:11.703982 3172 reflector.go:150] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-29" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-30-29' and this object Dec 13 01:30:11.709815 systemd[1]: Created slice kubepods-burstable-pod08b6dc65_f5e1_4378_92f3_304ae4f17185.slice - libcontainer container kubepods-burstable-pod08b6dc65_f5e1_4378_92f3_304ae4f17185.slice. 
Dec 13 01:30:11.757941 kubelet[3172]: I1213 01:30:11.757900 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee91e215-b036-401f-be12-0f70270cc433-xtables-lock\") pod \"kube-proxy-6m7qk\" (UID: \"ee91e215-b036-401f-be12-0f70270cc433\") " pod="kube-system/kube-proxy-6m7qk" Dec 13 01:30:11.758131 kubelet[3172]: I1213 01:30:11.757968 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjr5\" (UniqueName: \"kubernetes.io/projected/ee91e215-b036-401f-be12-0f70270cc433-kube-api-access-hkjr5\") pod \"kube-proxy-6m7qk\" (UID: \"ee91e215-b036-401f-be12-0f70270cc433\") " pod="kube-system/kube-proxy-6m7qk" Dec 13 01:30:11.758131 kubelet[3172]: I1213 01:30:11.758018 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee91e215-b036-401f-be12-0f70270cc433-kube-proxy\") pod \"kube-proxy-6m7qk\" (UID: \"ee91e215-b036-401f-be12-0f70270cc433\") " pod="kube-system/kube-proxy-6m7qk" Dec 13 01:30:11.758131 kubelet[3172]: I1213 01:30:11.758041 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee91e215-b036-401f-be12-0f70270cc433-lib-modules\") pod \"kube-proxy-6m7qk\" (UID: \"ee91e215-b036-401f-be12-0f70270cc433\") " pod="kube-system/kube-proxy-6m7qk" Dec 13 01:30:11.858941 kubelet[3172]: I1213 01:30:11.858889 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxpd5\" (UniqueName: \"kubernetes.io/projected/08b6dc65-f5e1-4378-92f3-304ae4f17185-kube-api-access-rxpd5\") pod \"kube-flannel-ds-wj29q\" (UID: \"08b6dc65-f5e1-4378-92f3-304ae4f17185\") " pod="kube-flannel/kube-flannel-ds-wj29q" Dec 13 01:30:11.859161 kubelet[3172]: I1213 01:30:11.858964 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/08b6dc65-f5e1-4378-92f3-304ae4f17185-flannel-cfg\") pod \"kube-flannel-ds-wj29q\" (UID: \"08b6dc65-f5e1-4378-92f3-304ae4f17185\") " pod="kube-flannel/kube-flannel-ds-wj29q" Dec 13 01:30:11.859161 kubelet[3172]: I1213 01:30:11.859002 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/08b6dc65-f5e1-4378-92f3-304ae4f17185-cni\") pod \"kube-flannel-ds-wj29q\" (UID: \"08b6dc65-f5e1-4378-92f3-304ae4f17185\") " pod="kube-flannel/kube-flannel-ds-wj29q" Dec 13 01:30:11.859161 kubelet[3172]: I1213 01:30:11.859124 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/08b6dc65-f5e1-4378-92f3-304ae4f17185-cni-plugin\") pod \"kube-flannel-ds-wj29q\" (UID: \"08b6dc65-f5e1-4378-92f3-304ae4f17185\") " pod="kube-flannel/kube-flannel-ds-wj29q" Dec 13 01:30:11.859321 kubelet[3172]: I1213 01:30:11.859169 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/08b6dc65-f5e1-4378-92f3-304ae4f17185-run\") pod \"kube-flannel-ds-wj29q\" (UID: \"08b6dc65-f5e1-4378-92f3-304ae4f17185\") " pod="kube-flannel/kube-flannel-ds-wj29q" Dec 13 01:30:11.859321 kubelet[3172]: I1213 01:30:11.859193 3172 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08b6dc65-f5e1-4378-92f3-304ae4f17185-xtables-lock\") pod \"kube-flannel-ds-wj29q\" (UID: \"08b6dc65-f5e1-4378-92f3-304ae4f17185\") " pod="kube-flannel/kube-flannel-ds-wj29q" Dec 13 01:30:12.890318 containerd[1966]: time="2024-12-13T01:30:12.890236442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6m7qk,Uid:ee91e215-b036-401f-be12-0f70270cc433,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:12.951437 containerd[1966]: time="2024-12-13T01:30:12.950737558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:12.951437 containerd[1966]: time="2024-12-13T01:30:12.951383266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:12.951974 containerd[1966]: time="2024-12-13T01:30:12.951636543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:12.951974 containerd[1966]: time="2024-12-13T01:30:12.951890568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:12.980711 kubelet[3172]: E1213 01:30:12.980629 3172 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:12.982481 kubelet[3172]: E1213 01:30:12.981947 3172 projected.go:200] Error preparing data for projected volume kube-api-access-rxpd5 for pod kube-flannel/kube-flannel-ds-wj29q: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:12.982481 kubelet[3172]: E1213 01:30:12.982090 3172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08b6dc65-f5e1-4378-92f3-304ae4f17185-kube-api-access-rxpd5 podName:08b6dc65-f5e1-4378-92f3-304ae4f17185 nodeName:}" failed. No retries permitted until 2024-12-13 01:30:13.482044321 +0000 UTC m=+16.505698203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rxpd5" (UniqueName: "kubernetes.io/projected/08b6dc65-f5e1-4378-92f3-304ae4f17185-kube-api-access-rxpd5") pod "kube-flannel-ds-wj29q" (UID: "08b6dc65-f5e1-4378-92f3-304ae4f17185") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:12.997797 systemd[1]: run-containerd-runc-k8s.io-fef3d1426fd0d05da2c6af07e31d17618c123cac60f3070f6d5bf2da19fdf8bb-runc.OsOQpn.mount: Deactivated successfully. Dec 13 01:30:13.015992 systemd[1]: Started cri-containerd-fef3d1426fd0d05da2c6af07e31d17618c123cac60f3070f6d5bf2da19fdf8bb.scope - libcontainer container fef3d1426fd0d05da2c6af07e31d17618c123cac60f3070f6d5bf2da19fdf8bb. 
Dec 13 01:30:13.060499 containerd[1966]: time="2024-12-13T01:30:13.060273092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6m7qk,Uid:ee91e215-b036-401f-be12-0f70270cc433,Namespace:kube-system,Attempt:0,} returns sandbox id \"fef3d1426fd0d05da2c6af07e31d17618c123cac60f3070f6d5bf2da19fdf8bb\"" Dec 13 01:30:13.067538 containerd[1966]: time="2024-12-13T01:30:13.067452371Z" level=info msg="CreateContainer within sandbox \"fef3d1426fd0d05da2c6af07e31d17618c123cac60f3070f6d5bf2da19fdf8bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:30:13.129123 containerd[1966]: time="2024-12-13T01:30:13.129020110Z" level=info msg="CreateContainer within sandbox \"fef3d1426fd0d05da2c6af07e31d17618c123cac60f3070f6d5bf2da19fdf8bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2080acce5be6ba59017c925a055770b7acbe9fc728927701a4692ef703a3b263\"" Dec 13 01:30:13.131277 containerd[1966]: time="2024-12-13T01:30:13.130242416Z" level=info msg="StartContainer for \"2080acce5be6ba59017c925a055770b7acbe9fc728927701a4692ef703a3b263\"" Dec 13 01:30:13.189257 systemd[1]: Started cri-containerd-2080acce5be6ba59017c925a055770b7acbe9fc728927701a4692ef703a3b263.scope - libcontainer container 2080acce5be6ba59017c925a055770b7acbe9fc728927701a4692ef703a3b263. Dec 13 01:30:13.274256 containerd[1966]: time="2024-12-13T01:30:13.274102828Z" level=info msg="StartContainer for \"2080acce5be6ba59017c925a055770b7acbe9fc728927701a4692ef703a3b263\" returns successfully" Dec 13 01:30:13.303082 kubelet[3172]: I1213 01:30:13.301390 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6m7qk" podStartSLOduration=2.301370461 podStartE2EDuration="2.301370461s" podCreationTimestamp="2024-12-13 01:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:13.300948483 +0000 UTC m=+16.324602374" watchObservedRunningTime="2024-12-13 01:30:13.301370461 +0000 UTC m=+16.325024349" Dec 13 01:30:13.816771 containerd[1966]: time="2024-12-13T01:30:13.816673961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wj29q,Uid:08b6dc65-f5e1-4378-92f3-304ae4f17185,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:30:13.871138 containerd[1966]: time="2024-12-13T01:30:13.870374692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:13.871138 containerd[1966]: time="2024-12-13T01:30:13.870506358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:13.871138 containerd[1966]: time="2024-12-13T01:30:13.870527251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:13.871952 containerd[1966]: time="2024-12-13T01:30:13.871290736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:13.913346 systemd[1]: Started cri-containerd-049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5.scope - libcontainer container 049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5. 
Dec 13 01:30:13.967035 containerd[1966]: time="2024-12-13T01:30:13.966986904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wj29q,Uid:08b6dc65-f5e1-4378-92f3-304ae4f17185,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5\"" Dec 13 01:30:13.970492 containerd[1966]: time="2024-12-13T01:30:13.970449473Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:30:15.893275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939950161.mount: Deactivated successfully. Dec 13 01:30:15.988789 containerd[1966]: time="2024-12-13T01:30:15.988705915Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:15.990507 containerd[1966]: time="2024-12-13T01:30:15.990192551Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852934" Dec 13 01:30:15.994088 containerd[1966]: time="2024-12-13T01:30:15.992536212Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:15.996417 containerd[1966]: time="2024-12-13T01:30:15.996372371Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:15.998309 containerd[1966]: time="2024-12-13T01:30:15.998264818Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.027756622s" Dec 13 01:30:15.998551 containerd[1966]: time="2024-12-13T01:30:15.998527953Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 01:30:16.002782 containerd[1966]: time="2024-12-13T01:30:16.002737567Z" level=info msg="CreateContainer within sandbox \"049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:30:16.029813 containerd[1966]: time="2024-12-13T01:30:16.029762759Z" level=info msg="CreateContainer within sandbox \"049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26\"" Dec 13 01:30:16.030752 containerd[1966]: time="2024-12-13T01:30:16.030444129Z" level=info msg="StartContainer for \"6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26\"" Dec 13 01:30:16.078294 systemd[1]: Started cri-containerd-6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26.scope - libcontainer container 6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26. Dec 13 01:30:16.150879 systemd[1]: cri-containerd-6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26.scope: Deactivated successfully. 
Dec 13 01:30:16.164099 containerd[1966]: time="2024-12-13T01:30:16.160134485Z" level=info msg="StartContainer for \"6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26\" returns successfully" Dec 13 01:30:16.227173 containerd[1966]: time="2024-12-13T01:30:16.227090132Z" level=info msg="shim disconnected" id=6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26 namespace=k8s.io Dec 13 01:30:16.227173 containerd[1966]: time="2024-12-13T01:30:16.227145855Z" level=warning msg="cleaning up after shim disconnected" id=6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26 namespace=k8s.io Dec 13 01:30:16.227173 containerd[1966]: time="2024-12-13T01:30:16.227158135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:16.299560 containerd[1966]: time="2024-12-13T01:30:16.299521790Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:30:16.755700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d0ae41ed780181e8af67a699c3dd654d47ac05cfee6f5db1b0d4c7b27e3ad26-rootfs.mount: Deactivated successfully. Dec 13 01:30:18.386914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987778465.mount: Deactivated successfully. Dec 13 01:30:20.819905 containerd[1966]: time="2024-12-13T01:30:20.819549583Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:20.822527 containerd[1966]: time="2024-12-13T01:30:20.822424578Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 01:30:20.824456 containerd[1966]: time="2024-12-13T01:30:20.824367701Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:20.832140 containerd[1966]: time="2024-12-13T01:30:20.829082923Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:20.832140 containerd[1966]: time="2024-12-13T01:30:20.831770136Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.532202314s" Dec 13 01:30:20.832376 containerd[1966]: time="2024-12-13T01:30:20.831906501Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 01:30:20.862316 containerd[1966]: time="2024-12-13T01:30:20.862253792Z" level=info msg="CreateContainer within sandbox \"049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:30:20.903094 containerd[1966]: time="2024-12-13T01:30:20.902973021Z" level=info msg="CreateContainer within sandbox \"049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20\"" Dec 13 01:30:20.905445 containerd[1966]: time="2024-12-13T01:30:20.905333005Z" level=info msg="StartContainer for 
\"3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20\"" Dec 13 01:30:20.987905 systemd[1]: run-containerd-runc-k8s.io-3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20-runc.7oMGoI.mount: Deactivated successfully. Dec 13 01:30:20.996388 systemd[1]: Started cri-containerd-3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20.scope - libcontainer container 3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20. Dec 13 01:30:21.036230 systemd[1]: cri-containerd-3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20.scope: Deactivated successfully. Dec 13 01:30:21.063454 containerd[1966]: time="2024-12-13T01:30:21.063406376Z" level=info msg="StartContainer for \"3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20\" returns successfully" Dec 13 01:30:21.070999 kubelet[3172]: I1213 01:30:21.066216 3172 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:30:21.130271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20-rootfs.mount: Deactivated successfully. Dec 13 01:30:21.174334 kubelet[3172]: I1213 01:30:21.174288 3172 topology_manager.go:215] "Topology Admit Handler" podUID="8b8183b1-cbcf-4e98-b8f0-ea0c230faad6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6c5cd" Dec 13 01:30:21.174570 kubelet[3172]: I1213 01:30:21.174547 3172 topology_manager.go:215] "Topology Admit Handler" podUID="dbdb28b4-93c1-4098-a860-70aeac08da44" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wcql8" Dec 13 01:30:21.194535 systemd[1]: Created slice kubepods-burstable-pod8b8183b1_cbcf_4e98_b8f0_ea0c230faad6.slice - libcontainer container kubepods-burstable-pod8b8183b1_cbcf_4e98_b8f0_ea0c230faad6.slice. Dec 13 01:30:21.213197 systemd[1]: Created slice kubepods-burstable-poddbdb28b4_93c1_4098_a860_70aeac08da44.slice - libcontainer container kubepods-burstable-poddbdb28b4_93c1_4098_a860_70aeac08da44.slice. 
Dec 13 01:30:21.370320 kubelet[3172]: I1213 01:30:21.369105 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxb75\" (UniqueName: \"kubernetes.io/projected/8b8183b1-cbcf-4e98-b8f0-ea0c230faad6-kube-api-access-kxb75\") pod \"coredns-7db6d8ff4d-6c5cd\" (UID: \"8b8183b1-cbcf-4e98-b8f0-ea0c230faad6\") " pod="kube-system/coredns-7db6d8ff4d-6c5cd" Dec 13 01:30:21.370320 kubelet[3172]: I1213 01:30:21.369306 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkzp\" (UniqueName: \"kubernetes.io/projected/dbdb28b4-93c1-4098-a860-70aeac08da44-kube-api-access-9pkzp\") pod \"coredns-7db6d8ff4d-wcql8\" (UID: \"dbdb28b4-93c1-4098-a860-70aeac08da44\") " pod="kube-system/coredns-7db6d8ff4d-wcql8" Dec 13 01:30:21.370320 kubelet[3172]: I1213 01:30:21.369423 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b8183b1-cbcf-4e98-b8f0-ea0c230faad6-config-volume\") pod \"coredns-7db6d8ff4d-6c5cd\" (UID: \"8b8183b1-cbcf-4e98-b8f0-ea0c230faad6\") " pod="kube-system/coredns-7db6d8ff4d-6c5cd" Dec 13 01:30:21.370320 kubelet[3172]: I1213 01:30:21.369455 3172 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbdb28b4-93c1-4098-a860-70aeac08da44-config-volume\") pod \"coredns-7db6d8ff4d-wcql8\" (UID: \"dbdb28b4-93c1-4098-a860-70aeac08da44\") " pod="kube-system/coredns-7db6d8ff4d-wcql8" Dec 13 01:30:21.507304 containerd[1966]: time="2024-12-13T01:30:21.507221076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6c5cd,Uid:8b8183b1-cbcf-4e98-b8f0-ea0c230faad6,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:21.517885 containerd[1966]: time="2024-12-13T01:30:21.517042411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wcql8,Uid:dbdb28b4-93c1-4098-a860-70aeac08da44,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:22.011698 systemd[1]: run-netns-cni\x2d7275c7ab\x2dadde\x2d34f0\x2dc453\x2d75f8782acfd5.mount: Deactivated successfully. Dec 13 01:30:22.015595 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25bd8fb67f263116a604ffb68dd003565d70db6dd13ba68689058cda26d5cb9d-shm.mount: Deactivated successfully. Dec 13 01:30:22.015947 systemd[1]: run-netns-cni\x2d061f7bd5\x2dbabe\x2de29b\x2dd8b8\x2d07a512799b40.mount: Deactivated successfully. Dec 13 01:30:22.016114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39c4f322cb6fd5e3accf10a38c7ab99d3453b332d946142eb53208999183e232-shm.mount: Deactivated successfully. 
Dec 13 01:30:22.073187 containerd[1966]: time="2024-12-13T01:30:22.073131547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6c5cd,Uid:8b8183b1-cbcf-4e98-b8f0-ea0c230faad6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39c4f322cb6fd5e3accf10a38c7ab99d3453b332d946142eb53208999183e232\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:30:22.074333 kubelet[3172]: E1213 01:30:22.073961 3172 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c4f322cb6fd5e3accf10a38c7ab99d3453b332d946142eb53208999183e232\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:30:22.074333 kubelet[3172]: E1213 01:30:22.074088 3172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c4f322cb6fd5e3accf10a38c7ab99d3453b332d946142eb53208999183e232\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6c5cd" Dec 13 01:30:22.074333 kubelet[3172]: E1213 01:30:22.074115 3172 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c4f322cb6fd5e3accf10a38c7ab99d3453b332d946142eb53208999183e232\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6c5cd" Dec 13 01:30:22.074333 kubelet[3172]: E1213 01:30:22.074171 3172 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6c5cd_kube-system(8b8183b1-cbcf-4e98-b8f0-ea0c230faad6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6c5cd_kube-system(8b8183b1-cbcf-4e98-b8f0-ea0c230faad6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39c4f322cb6fd5e3accf10a38c7ab99d3453b332d946142eb53208999183e232\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-6c5cd" podUID="8b8183b1-cbcf-4e98-b8f0-ea0c230faad6" Dec 13 01:30:22.079133 containerd[1966]: time="2024-12-13T01:30:22.078960925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wcql8,Uid:dbdb28b4-93c1-4098-a860-70aeac08da44,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25bd8fb67f263116a604ffb68dd003565d70db6dd13ba68689058cda26d5cb9d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:30:22.079750 kubelet[3172]: E1213 01:30:22.079690 3172 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25bd8fb67f263116a604ffb68dd003565d70db6dd13ba68689058cda26d5cb9d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:30:22.083417 kubelet[3172]: E1213 01:30:22.083191 3172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"25bd8fb67f263116a604ffb68dd003565d70db6dd13ba68689058cda26d5cb9d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wcql8" Dec 13 01:30:22.083417 kubelet[3172]: E1213 01:30:22.083229 3172 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25bd8fb67f263116a604ffb68dd003565d70db6dd13ba68689058cda26d5cb9d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wcql8" Dec 13 01:30:22.083417 kubelet[3172]: E1213 01:30:22.083283 3172 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wcql8_kube-system(dbdb28b4-93c1-4098-a860-70aeac08da44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wcql8_kube-system(dbdb28b4-93c1-4098-a860-70aeac08da44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25bd8fb67f263116a604ffb68dd003565d70db6dd13ba68689058cda26d5cb9d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-wcql8" podUID="dbdb28b4-93c1-4098-a860-70aeac08da44" Dec 13 01:30:22.094499 containerd[1966]: time="2024-12-13T01:30:22.094270361Z" level=info msg="shim disconnected" id=3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20 namespace=k8s.io Dec 13 01:30:22.094499 containerd[1966]: time="2024-12-13T01:30:22.094327887Z" level=warning msg="cleaning up after shim disconnected" id=3c5b32220286115e3b2869ffe270f4ae20d2f7a9fce16ba2e3549355cb6def20 namespace=k8s.io Dec 13 01:30:22.094499 containerd[1966]: time="2024-12-13T01:30:22.094339542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:22.345781 containerd[1966]: time="2024-12-13T01:30:22.345298828Z" level=info msg="CreateContainer within sandbox \"049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:30:22.384684 containerd[1966]: time="2024-12-13T01:30:22.384637770Z" level=info msg="CreateContainer within sandbox \"049d100cdead6284e4c61b4bb7dcd90cfd755e72249c043b2135214ca3610ee5\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"58b17165cb3d26b205e9ef81328b1c46efa21412c6f6b23d69d59272c6b8cf01\"" Dec 13 01:30:22.387083 containerd[1966]: time="2024-12-13T01:30:22.386463529Z" level=info msg="StartContainer for \"58b17165cb3d26b205e9ef81328b1c46efa21412c6f6b23d69d59272c6b8cf01\"" Dec 13 01:30:22.435317 systemd[1]: Started cri-containerd-58b17165cb3d26b205e9ef81328b1c46efa21412c6f6b23d69d59272c6b8cf01.scope - libcontainer container 58b17165cb3d26b205e9ef81328b1c46efa21412c6f6b23d69d59272c6b8cf01. Dec 13 01:30:22.505392 containerd[1966]: time="2024-12-13T01:30:22.505345228Z" level=info msg="StartContainer for \"58b17165cb3d26b205e9ef81328b1c46efa21412c6f6b23d69d59272c6b8cf01\" returns successfully" Dec 13 01:30:22.904402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061258901.mount: Deactivated successfully. 
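The RunPodSandbox failures above reflect ordering rather than a fault: the flannel CNI plugin reads /run/flannel/subnet.env, and that file only appears once the kube-flannel daemon container (created at the end of this span) is running and has written it. A sketch of what the file typically holds on this node, with the network, subnet and MTU inferred from the bridge delegate config logged further down and the masquerade flag assumed rather than read from the host:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=8951
    FLANNEL_IPMASQ=true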
Dec 13 01:30:23.373093 kubelet[3172]: I1213 01:30:23.368564 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-wj29q" podStartSLOduration=5.503673366 podStartE2EDuration="12.368542238s" podCreationTimestamp="2024-12-13 01:30:11 +0000 UTC" firstStartedPulling="2024-12-13 01:30:13.968442136 +0000 UTC m=+16.992096003" lastFinishedPulling="2024-12-13 01:30:20.833311006 +0000 UTC m=+23.856964875" observedRunningTime="2024-12-13 01:30:23.367840939 +0000 UTC m=+26.391494827" watchObservedRunningTime="2024-12-13 01:30:23.368542238 +0000 UTC m=+26.392196124" Dec 13 01:30:23.737766 (udev-worker)[3804]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:30:23.766426 systemd-networkd[1814]: flannel.1: Link UP Dec 13 01:30:23.766439 systemd-networkd[1814]: flannel.1: Gained carrier Dec 13 01:30:25.620288 systemd-networkd[1814]: flannel.1: Gained IPv6LL Dec 13 01:30:28.353570 ntpd[1944]: Listen normally on 7 flannel.1 192.168.0.0:123 Dec 13 01:30:28.353660 ntpd[1944]: Listen normally on 8 flannel.1 [fe80::383f:dbff:fe31:c13d%4]:123 Dec 13 01:30:28.355313 ntpd[1944]: 13 Dec 01:30:28 ntpd[1944]: Listen normally on 7 flannel.1 192.168.0.0:123 Dec 13 01:30:28.355313 ntpd[1944]: 13 Dec 01:30:28 ntpd[1944]: Listen normally on 8 flannel.1 [fe80::383f:dbff:fe31:c13d%4]:123 Dec 13 01:30:35.176231 containerd[1966]: time="2024-12-13T01:30:35.175736354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6c5cd,Uid:8b8183b1-cbcf-4e98-b8f0-ea0c230faad6,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:35.239248 systemd-networkd[1814]: cni0: Link UP Dec 13 01:30:35.239259 systemd-networkd[1814]: cni0: Gained carrier Dec 13 01:30:35.243382 systemd-networkd[1814]: cni0: Lost carrier Dec 13 01:30:35.243418 (udev-worker)[3940]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:30:35.594161 systemd-networkd[1814]: vethf0494751: Link UP Dec 13 01:30:35.595791 kernel: cni0: port 1(vethf0494751) entered blocking state Dec 13 01:30:35.595883 kernel: cni0: port 1(vethf0494751) entered disabled state Dec 13 01:30:35.595910 kernel: vethf0494751: entered allmulticast mode Dec 13 01:30:35.597623 kernel: vethf0494751: entered promiscuous mode Dec 13 01:30:35.597701 kernel: cni0: port 1(vethf0494751) entered blocking state Dec 13 01:30:35.599169 kernel: cni0: port 1(vethf0494751) entered forwarding state Dec 13 01:30:35.599243 kernel: cni0: port 1(vethf0494751) entered disabled state Dec 13 01:30:35.626143 kernel: cni0: port 1(vethf0494751) entered blocking state Dec 13 01:30:35.626225 kernel: cni0: port 1(vethf0494751) entered forwarding state Dec 13 01:30:35.626338 systemd-networkd[1814]: vethf0494751: Gained carrier Dec 13 01:30:35.627387 systemd-networkd[1814]: cni0: Gained carrier Dec 13 01:30:35.634802 containerd[1966]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Dec 13 01:30:35.634802 containerd[1966]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:30:35.672565 containerd[1966]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T01:30:35.672122837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:35.672565 containerd[1966]: time="2024-12-13T01:30:35.672194382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:35.672565 containerd[1966]: time="2024-12-13T01:30:35.672309266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:35.673313 containerd[1966]: time="2024-12-13T01:30:35.672677087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:35.704633 systemd[1]: run-containerd-runc-k8s.io-4886af91a146347dc30349334233f0246395d19aaaac15de1bd016b538cccf28-runc.nxJYcG.mount: Deactivated successfully. Dec 13 01:30:35.713305 systemd[1]: Started cri-containerd-4886af91a146347dc30349334233f0246395d19aaaac15de1bd016b538cccf28.scope - libcontainer container 4886af91a146347dc30349334233f0246395d19aaaac15de1bd016b538cccf28. 
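The single-line netconf containerd prints for delegateAdd is easier to read spread out; this is the same bridge configuration from the log entry above, reformatted only for legibility:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "hairpinMode": true,
      "ipMasq": false,
      "mtu": 8951,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "192.168.0.0/24"}]],
        "routes": [{"dst": "192.168.0.0/17"}]
      }
    }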
Dec 13 01:30:35.799567 containerd[1966]: time="2024-12-13T01:30:35.799461372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6c5cd,Uid:8b8183b1-cbcf-4e98-b8f0-ea0c230faad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4886af91a146347dc30349334233f0246395d19aaaac15de1bd016b538cccf28\"" Dec 13 01:30:35.822830 containerd[1966]: time="2024-12-13T01:30:35.822790395Z" level=info msg="CreateContainer within sandbox \"4886af91a146347dc30349334233f0246395d19aaaac15de1bd016b538cccf28\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:35.843909 containerd[1966]: time="2024-12-13T01:30:35.843771814Z" level=info msg="CreateContainer within sandbox \"4886af91a146347dc30349334233f0246395d19aaaac15de1bd016b538cccf28\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20ff1fe23d05d61c91301557a0ab7d8a6ef027b18d8f5ff23ac6179fcf76b746\"" Dec 13 01:30:35.846170 containerd[1966]: time="2024-12-13T01:30:35.846066457Z" level=info msg="StartContainer for \"20ff1fe23d05d61c91301557a0ab7d8a6ef027b18d8f5ff23ac6179fcf76b746\"" Dec 13 01:30:35.921314 systemd[1]: Started cri-containerd-20ff1fe23d05d61c91301557a0ab7d8a6ef027b18d8f5ff23ac6179fcf76b746.scope - libcontainer container 20ff1fe23d05d61c91301557a0ab7d8a6ef027b18d8f5ff23ac6179fcf76b746. Dec 13 01:30:35.994757 containerd[1966]: time="2024-12-13T01:30:35.994700346Z" level=info msg="StartContainer for \"20ff1fe23d05d61c91301557a0ab7d8a6ef027b18d8f5ff23ac6179fcf76b746\" returns successfully" Dec 13 01:30:36.313287 systemd-networkd[1814]: cni0: Gained IPv6LL Dec 13 01:30:36.692270 systemd-networkd[1814]: vethf0494751: Gained IPv6LL Dec 13 01:30:37.175753 containerd[1966]: time="2024-12-13T01:30:37.175687857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wcql8,Uid:dbdb28b4-93c1-4098-a860-70aeac08da44,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:37.233574 systemd-networkd[1814]: vetha7171463: Link UP Dec 13 01:30:37.235334 kernel: cni0: port 2(vetha7171463) entered blocking state Dec 13 01:30:37.235568 kernel: cni0: port 2(vetha7171463) entered disabled state Dec 13 01:30:37.235719 kernel: vetha7171463: entered allmulticast mode Dec 13 01:30:37.235749 kernel: vetha7171463: entered promiscuous mode Dec 13 01:30:37.237302 (udev-worker)[3951]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:30:37.271777 kernel: cni0: port 2(vetha7171463) entered blocking state Dec 13 01:30:37.271868 kernel: cni0: port 2(vetha7171463) entered forwarding state Dec 13 01:30:37.272198 systemd-networkd[1814]: vetha7171463: Gained carrier Dec 13 01:30:37.276208 containerd[1966]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Dec 13 01:30:37.276208 containerd[1966]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:30:37.325517 containerd[1966]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T01:30:37.325368473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:37.325691 containerd[1966]: time="2024-12-13T01:30:37.325498213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:37.326030 containerd[1966]: time="2024-12-13T01:30:37.325753492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:37.326030 containerd[1966]: time="2024-12-13T01:30:37.325881607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:37.365478 systemd[1]: Started cri-containerd-97450a0f33533d329b7cd933de9b47a08af96cdf3041cdf71b3116833897b095.scope - libcontainer container 97450a0f33533d329b7cd933de9b47a08af96cdf3041cdf71b3116833897b095. 
Dec 13 01:30:37.430332 kubelet[3172]: I1213 01:30:37.429047 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6c5cd" podStartSLOduration=26.42894879 podStartE2EDuration="26.42894879s" podCreationTimestamp="2024-12-13 01:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:36.45536092 +0000 UTC m=+39.479014806" watchObservedRunningTime="2024-12-13 01:30:37.42894879 +0000 UTC m=+40.452602677" Dec 13 01:30:37.485031 containerd[1966]: time="2024-12-13T01:30:37.484913337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wcql8,Uid:dbdb28b4-93c1-4098-a860-70aeac08da44,Namespace:kube-system,Attempt:0,} returns sandbox id \"97450a0f33533d329b7cd933de9b47a08af96cdf3041cdf71b3116833897b095\"" Dec 13 01:30:37.492815 containerd[1966]: time="2024-12-13T01:30:37.492562120Z" level=info msg="CreateContainer within sandbox \"97450a0f33533d329b7cd933de9b47a08af96cdf3041cdf71b3116833897b095\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:37.527537 containerd[1966]: time="2024-12-13T01:30:37.527487676Z" level=info msg="CreateContainer within sandbox \"97450a0f33533d329b7cd933de9b47a08af96cdf3041cdf71b3116833897b095\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e25d08195e2ced8c0a1c1f54990b91e786ed4b129fb424838a436e887587d2ff\"" Dec 13 01:30:37.530099 containerd[1966]: time="2024-12-13T01:30:37.528857596Z" level=info msg="StartContainer for \"e25d08195e2ced8c0a1c1f54990b91e786ed4b129fb424838a436e887587d2ff\"" Dec 13 01:30:37.570942 systemd[1]: Started cri-containerd-e25d08195e2ced8c0a1c1f54990b91e786ed4b129fb424838a436e887587d2ff.scope - libcontainer container e25d08195e2ced8c0a1c1f54990b91e786ed4b129fb424838a436e887587d2ff. Dec 13 01:30:37.642540 containerd[1966]: time="2024-12-13T01:30:37.642485864Z" level=info msg="StartContainer for \"e25d08195e2ced8c0a1c1f54990b91e786ed4b129fb424838a436e887587d2ff\" returns successfully" Dec 13 01:30:38.215964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180720317.mount: Deactivated successfully. Dec 13 01:30:38.422558 kubelet[3172]: I1213 01:30:38.422358 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wcql8" podStartSLOduration=27.422336793 podStartE2EDuration="27.422336793s" podCreationTimestamp="2024-12-13 01:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:38.42168544 +0000 UTC m=+41.445339327" watchObservedRunningTime="2024-12-13 01:30:38.422336793 +0000 UTC m=+41.445990680" Dec 13 01:30:38.932385 systemd-networkd[1814]: vetha7171463: Gained IPv6LL Dec 13 01:30:40.244591 systemd[1]: Started sshd@5-172.31.30.29:22-139.178.68.195:59482.service - OpenSSH per-connection server daemon (139.178.68.195:59482). Dec 13 01:30:40.433541 sshd[4169]: Accepted publickey for core from 139.178.68.195 port 59482 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:30:40.435299 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:40.442285 systemd-logind[1950]: New session 6 of user core. Dec 13 01:30:40.448288 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 13 01:30:40.651312 sshd[4169]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:40.656500 systemd-logind[1950]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:30:40.658158 systemd[1]: sshd@5-172.31.30.29:22-139.178.68.195:59482.service: Deactivated successfully. Dec 13 01:30:40.660342 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:30:40.662530 systemd-logind[1950]: Removed session 6. Dec 13 01:30:41.353673 ntpd[1944]: Listen normally on 9 cni0 192.168.0.1:123 Dec 13 01:30:41.354171 ntpd[1944]: 13 Dec 01:30:41 ntpd[1944]: Listen normally on 9 cni0 192.168.0.1:123 Dec 13 01:30:41.354171 ntpd[1944]: 13 Dec 01:30:41 ntpd[1944]: Listen normally on 10 cni0 [fe80::c0e8:c9ff:fe90:7975%5]:123 Dec 13 01:30:41.354171 ntpd[1944]: 13 Dec 01:30:41 ntpd[1944]: Listen normally on 11 vethf0494751 [fe80::e8ae:8bff:fe55:e16b%6]:123 Dec 13 01:30:41.354171 ntpd[1944]: 13 Dec 01:30:41 ntpd[1944]: Listen normally on 12 vetha7171463 [fe80::5cf5:fcff:feb7:703f%7]:123 Dec 13 01:30:41.353777 ntpd[1944]: Listen normally on 10 cni0 [fe80::c0e8:c9ff:fe90:7975%5]:123 Dec 13 01:30:41.353835 ntpd[1944]: Listen normally on 11 vethf0494751 [fe80::e8ae:8bff:fe55:e16b%6]:123 Dec 13 01:30:41.353886 ntpd[1944]: Listen normally on 12 vetha7171463 [fe80::5cf5:fcff:feb7:703f%7]:123 Dec 13 01:30:45.694456 systemd[1]: Started sshd@6-172.31.30.29:22-139.178.68.195:59492.service - OpenSSH per-connection server daemon (139.178.68.195:59492). Dec 13 01:30:45.861554 sshd[4206]: Accepted publickey for core from 139.178.68.195 port 59492 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:30:45.863124 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:45.868254 systemd-logind[1950]: New session 7 of user core. Dec 13 01:30:45.872438 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:30:46.065690 sshd[4206]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:46.069900 systemd[1]: sshd@6-172.31.30.29:22-139.178.68.195:59492.service: Deactivated successfully. Dec 13 01:30:46.072738 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:30:46.074474 systemd-logind[1950]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:30:46.075659 systemd-logind[1950]: Removed session 7. Dec 13 01:30:51.106756 systemd[1]: Started sshd@7-172.31.30.29:22-139.178.68.195:58006.service - OpenSSH per-connection server daemon (139.178.68.195:58006). Dec 13 01:30:51.276651 sshd[4242]: Accepted publickey for core from 139.178.68.195 port 58006 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:30:51.278491 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:51.291849 systemd-logind[1950]: New session 8 of user core. Dec 13 01:30:51.300654 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:30:51.568170 sshd[4242]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:51.575760 systemd-logind[1950]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:30:51.576359 systemd[1]: sshd@7-172.31.30.29:22-139.178.68.195:58006.service: Deactivated successfully. Dec 13 01:30:51.589793 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:30:51.598788 systemd[1]: Started sshd@8-172.31.30.29:22-139.178.68.195:58022.service - OpenSSH per-connection server daemon (139.178.68.195:58022). Dec 13 01:30:51.600268 systemd-logind[1950]: Removed session 8. 
Dec 13 01:30:51.765026 sshd[4255]: Accepted publickey for core from 139.178.68.195 port 58022 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:30:51.766558 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:51.771931 systemd-logind[1950]: New session 9 of user core. Dec 13 01:30:51.777247 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:30:52.011377 sshd[4255]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:52.019158 systemd-logind[1950]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:30:52.019986 systemd[1]: sshd@8-172.31.30.29:22-139.178.68.195:58022.service: Deactivated successfully. Dec 13 01:30:52.025436 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:30:52.029970 systemd-logind[1950]: Removed session 9. Dec 13 01:30:52.047454 systemd[1]: Started sshd@9-172.31.30.29:22-139.178.68.195:58030.service - OpenSSH per-connection server daemon (139.178.68.195:58030). Dec 13 01:30:52.238136 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 58030 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:30:52.239735 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:52.244496 systemd-logind[1950]: New session 10 of user core. Dec 13 01:30:52.250249 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:30:52.471087 sshd[4266]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:52.485572 systemd[1]: sshd@9-172.31.30.29:22-139.178.68.195:58030.service: Deactivated successfully. Dec 13 01:30:52.498076 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:30:52.507409 systemd-logind[1950]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:30:52.512189 systemd-logind[1950]: Removed session 10. Dec 13 01:30:57.503489 systemd[1]: Started sshd@10-172.31.30.29:22-139.178.68.195:36030.service - OpenSSH per-connection server daemon (139.178.68.195:36030). Dec 13 01:30:57.679092 sshd[4302]: Accepted publickey for core from 139.178.68.195 port 36030 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:30:57.679913 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:57.686129 systemd-logind[1950]: New session 11 of user core. Dec 13 01:30:57.694392 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:30:57.883992 sshd[4302]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:57.889554 systemd-logind[1950]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:30:57.890546 systemd[1]: sshd@10-172.31.30.29:22-139.178.68.195:36030.service: Deactivated successfully. Dec 13 01:30:57.892901 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:30:57.894096 systemd-logind[1950]: Removed session 11. Dec 13 01:31:02.924475 systemd[1]: Started sshd@11-172.31.30.29:22-139.178.68.195:36044.service - OpenSSH per-connection server daemon (139.178.68.195:36044). Dec 13 01:31:03.082282 sshd[4336]: Accepted publickey for core from 139.178.68.195 port 36044 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:03.084125 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:03.106541 systemd-logind[1950]: New session 12 of user core. Dec 13 01:31:03.116420 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 01:31:03.315199 sshd[4336]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:03.320802 systemd[1]: sshd@11-172.31.30.29:22-139.178.68.195:36044.service: Deactivated successfully. Dec 13 01:31:03.323580 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:31:03.326264 systemd-logind[1950]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:31:03.329341 systemd-logind[1950]: Removed session 12. Dec 13 01:31:08.351438 systemd[1]: Started sshd@12-172.31.30.29:22-139.178.68.195:37380.service - OpenSSH per-connection server daemon (139.178.68.195:37380). Dec 13 01:31:08.518087 sshd[4369]: Accepted publickey for core from 139.178.68.195 port 37380 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:08.519408 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:08.524274 systemd-logind[1950]: New session 13 of user core. Dec 13 01:31:08.527487 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:31:08.727256 sshd[4369]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:08.731965 systemd[1]: sshd@12-172.31.30.29:22-139.178.68.195:37380.service: Deactivated successfully. Dec 13 01:31:08.734758 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:31:08.735662 systemd-logind[1950]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:31:08.736879 systemd-logind[1950]: Removed session 13. Dec 13 01:31:13.759548 systemd[1]: Started sshd@13-172.31.30.29:22-139.178.68.195:37384.service - OpenSSH per-connection server daemon (139.178.68.195:37384). Dec 13 01:31:13.935531 sshd[4405]: Accepted publickey for core from 139.178.68.195 port 37384 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:13.937159 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:13.957002 systemd-logind[1950]: New session 14 of user core. Dec 13 01:31:13.971030 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:31:14.225443 sshd[4405]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:14.232262 systemd[1]: sshd@13-172.31.30.29:22-139.178.68.195:37384.service: Deactivated successfully. Dec 13 01:31:14.237450 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:31:14.239326 systemd-logind[1950]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:31:14.240690 systemd-logind[1950]: Removed session 14. Dec 13 01:31:14.256398 systemd[1]: Started sshd@14-172.31.30.29:22-139.178.68.195:37398.service - OpenSSH per-connection server daemon (139.178.68.195:37398). Dec 13 01:31:14.440330 sshd[4423]: Accepted publickey for core from 139.178.68.195 port 37398 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:14.442236 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:14.447588 systemd-logind[1950]: New session 15 of user core. Dec 13 01:31:14.454263 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:31:14.946528 sshd[4423]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:14.950348 systemd[1]: sshd@14-172.31.30.29:22-139.178.68.195:37398.service: Deactivated successfully. Dec 13 01:31:14.952967 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:31:14.955286 systemd-logind[1950]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:31:14.956421 systemd-logind[1950]: Removed session 15. 
Dec 13 01:31:14.981429 systemd[1]: Started sshd@15-172.31.30.29:22-139.178.68.195:37412.service - OpenSSH per-connection server daemon (139.178.68.195:37412). Dec 13 01:31:15.144618 sshd[4448]: Accepted publickey for core from 139.178.68.195 port 37412 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:15.145665 sshd[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:15.153529 systemd-logind[1950]: New session 16 of user core. Dec 13 01:31:15.165488 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:31:16.924240 sshd[4448]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:16.934751 systemd[1]: sshd@15-172.31.30.29:22-139.178.68.195:37412.service: Deactivated successfully. Dec 13 01:31:16.942306 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:31:16.944517 systemd-logind[1950]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:31:16.963847 systemd[1]: Started sshd@16-172.31.30.29:22-139.178.68.195:40030.service - OpenSSH per-connection server daemon (139.178.68.195:40030). Dec 13 01:31:16.965657 systemd-logind[1950]: Removed session 16. Dec 13 01:31:17.128730 sshd[4474]: Accepted publickey for core from 139.178.68.195 port 40030 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:17.129642 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:17.134748 systemd-logind[1950]: New session 17 of user core. Dec 13 01:31:17.144265 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:31:17.552916 sshd[4474]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:17.557842 systemd[1]: sshd@16-172.31.30.29:22-139.178.68.195:40030.service: Deactivated successfully. Dec 13 01:31:17.561680 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:31:17.563675 systemd-logind[1950]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:31:17.567167 systemd-logind[1950]: Removed session 17. Dec 13 01:31:17.588986 systemd[1]: Started sshd@17-172.31.30.29:22-139.178.68.195:40042.service - OpenSSH per-connection server daemon (139.178.68.195:40042). Dec 13 01:31:17.757415 sshd[4485]: Accepted publickey for core from 139.178.68.195 port 40042 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:17.759280 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:17.767131 systemd-logind[1950]: New session 18 of user core. Dec 13 01:31:17.771337 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:31:17.990549 sshd[4485]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:17.996546 systemd[1]: sshd@17-172.31.30.29:22-139.178.68.195:40042.service: Deactivated successfully. Dec 13 01:31:18.002800 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:31:18.005902 systemd-logind[1950]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:31:18.007703 systemd-logind[1950]: Removed session 18. Dec 13 01:31:23.026899 systemd[1]: Started sshd@18-172.31.30.29:22-139.178.68.195:40052.service - OpenSSH per-connection server daemon (139.178.68.195:40052). 
Dec 13 01:31:23.205379 sshd[4521]: Accepted publickey for core from 139.178.68.195 port 40052 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:23.206099 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:23.212683 systemd-logind[1950]: New session 19 of user core. Dec 13 01:31:23.216590 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:31:23.451520 sshd[4521]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:23.458869 systemd[1]: sshd@18-172.31.30.29:22-139.178.68.195:40052.service: Deactivated successfully. Dec 13 01:31:23.462460 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:31:23.465536 systemd-logind[1950]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:31:23.469027 systemd-logind[1950]: Removed session 19. Dec 13 01:31:28.494843 systemd[1]: Started sshd@19-172.31.30.29:22-139.178.68.195:42716.service - OpenSSH per-connection server daemon (139.178.68.195:42716). Dec 13 01:31:28.663385 sshd[4557]: Accepted publickey for core from 139.178.68.195 port 42716 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:28.664259 sshd[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:28.670128 systemd-logind[1950]: New session 20 of user core. Dec 13 01:31:28.675264 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:31:28.880312 sshd[4557]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:28.886514 systemd-logind[1950]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:31:28.887794 systemd[1]: sshd@19-172.31.30.29:22-139.178.68.195:42716.service: Deactivated successfully. Dec 13 01:31:28.890585 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:31:28.892252 systemd-logind[1950]: Removed session 20. Dec 13 01:31:33.917388 systemd[1]: Started sshd@20-172.31.30.29:22-139.178.68.195:42720.service - OpenSSH per-connection server daemon (139.178.68.195:42720). Dec 13 01:31:34.077090 sshd[4591]: Accepted publickey for core from 139.178.68.195 port 42720 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:34.078444 sshd[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:34.085094 systemd-logind[1950]: New session 21 of user core. Dec 13 01:31:34.089267 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:31:34.323313 sshd[4591]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:34.336807 systemd[1]: sshd@20-172.31.30.29:22-139.178.68.195:42720.service: Deactivated successfully. Dec 13 01:31:34.340831 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:31:34.343126 systemd-logind[1950]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:31:34.345682 systemd-logind[1950]: Removed session 21. Dec 13 01:31:39.360569 systemd[1]: Started sshd@21-172.31.30.29:22-139.178.68.195:44896.service - OpenSSH per-connection server daemon (139.178.68.195:44896). Dec 13 01:31:39.554125 sshd[4631]: Accepted publickey for core from 139.178.68.195 port 44896 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:39.558642 sshd[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:39.571120 systemd-logind[1950]: New session 22 of user core. Dec 13 01:31:39.577252 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 01:31:39.821152 sshd[4631]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:39.825294 systemd[1]: sshd@21-172.31.30.29:22-139.178.68.195:44896.service: Deactivated successfully.
Dec 13 01:31:39.827434 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:31:39.828859 systemd-logind[1950]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:31:39.830566 systemd-logind[1950]: Removed session 22.
Dec 13 01:31:55.627176 systemd[1]: cri-containerd-69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c.scope: Deactivated successfully.
Dec 13 01:31:55.627807 systemd[1]: cri-containerd-69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c.scope: Consumed 2.754s CPU time, 24.0M memory peak, 0B memory swap peak.
Dec 13 01:31:55.689835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c-rootfs.mount: Deactivated successfully.
Dec 13 01:31:55.716952 containerd[1966]: time="2024-12-13T01:31:55.716640613Z" level=info msg="shim disconnected" id=69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c namespace=k8s.io
Dec 13 01:31:55.716952 containerd[1966]: time="2024-12-13T01:31:55.716761598Z" level=warning msg="cleaning up after shim disconnected" id=69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c namespace=k8s.io
Dec 13 01:31:55.716952 containerd[1966]: time="2024-12-13T01:31:55.716774641Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:56.620790 kubelet[3172]: I1213 01:31:56.620751 3172 scope.go:117] "RemoveContainer" containerID="69f3acaa150dc6d90d5d08a852ec230eb3673904d2dfb71ae48cbfe55d7efa8c"
Dec 13 01:31:56.627623 containerd[1966]: time="2024-12-13T01:31:56.627502578Z" level=info msg="CreateContainer within sandbox \"f9437c8ed97ae827e87e37e41b7b2d34f940932dd13b173add6cbd8eec9eda2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:31:56.670082 containerd[1966]: time="2024-12-13T01:31:56.670016503Z" level=info msg="CreateContainer within sandbox \"f9437c8ed97ae827e87e37e41b7b2d34f940932dd13b173add6cbd8eec9eda2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cd2260bfecc44f88da847b9624e279a773dd21b864c397d159fe2971b7091776\""
Dec 13 01:31:56.670834 containerd[1966]: time="2024-12-13T01:31:56.670696866Z" level=info msg="StartContainer for \"cd2260bfecc44f88da847b9624e279a773dd21b864c397d159fe2971b7091776\""
Dec 13 01:31:56.729250 systemd[1]: Started cri-containerd-cd2260bfecc44f88da847b9624e279a773dd21b864c397d159fe2971b7091776.scope - libcontainer container cd2260bfecc44f88da847b9624e279a773dd21b864c397d159fe2971b7091776.
Dec 13 01:31:56.781514 containerd[1966]: time="2024-12-13T01:31:56.781298230Z" level=info msg="StartContainer for \"cd2260bfecc44f88da847b9624e279a773dd21b864c397d159fe2971b7091776\" returns successfully"
Dec 13 01:31:59.287928 systemd[1]: cri-containerd-6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8.scope: Deactivated successfully.
Dec 13 01:31:59.288238 systemd[1]: cri-containerd-6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8.scope: Consumed 1.646s CPU time, 18.3M memory peak, 0B memory swap peak.
Dec 13 01:31:59.318264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8-rootfs.mount: Deactivated successfully.
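The cri-containerd-69f3acaa….scope that stops here is the kube-controller-manager container: containerd's shim exits and cleans up, kubelet removes the dead container ("RemoveContainer") and asks containerd for a new attempt (Attempt:1) inside the still-running pod sandbox f9437c8e…, then starts it. In CRI terms that is a CreateContainer followed by a StartContainer against the same PodSandboxId. Below is a minimal sketch of those two calls, not kubelet's actual code path; the socket path, image, sandbox ID and configs are placeholders.

    // restart_sketch.go - sketch of the CRI CreateContainer/StartContainer sequence
    // that the containerd messages above correspond to. All concrete values are
    // placeholders; a real caller (kubelet) supplies the full container and pod
    // sandbox configuration.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx := context.Background()
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        sandboxID := "<pod-sandbox-id>" // placeholder for the existing sandbox (f9437c8e... in the log)
        containerCfg := &runtimeapi.ContainerConfig{
            Metadata: &runtimeapi.ContainerMetadata{Name: "kube-controller-manager", Attempt: 1},
            Image:    &runtimeapi.ImageSpec{Image: "<controller-manager-image>"}, // placeholder
        }
        sandboxCfg := &runtimeapi.PodSandboxConfig{} // kubelet passes the full pod config here

        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sandboxID,
            Config:        containerCfg,
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
        log.Printf("started container %s", created.ContainerId)
    }

On the systemd side, a successful StartContainer is what produces the new cri-containerd-cd2260bf….scope unit seen a moment later.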
Dec 13 01:31:59.386576 containerd[1966]: time="2024-12-13T01:31:59.386512205Z" level=info msg="shim disconnected" id=6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8 namespace=k8s.io
Dec 13 01:31:59.386576 containerd[1966]: time="2024-12-13T01:31:59.386573169Z" level=warning msg="cleaning up after shim disconnected" id=6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8 namespace=k8s.io
Dec 13 01:31:59.386576 containerd[1966]: time="2024-12-13T01:31:59.386584217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:59.629142 kubelet[3172]: I1213 01:31:59.629015 3172 scope.go:117] "RemoveContainer" containerID="6b7a2d5640dbe22d2815e87cf9de9bbb2424baca9bd75eb584124e86f39441f8"
Dec 13 01:31:59.631527 containerd[1966]: time="2024-12-13T01:31:59.631491312Z" level=info msg="CreateContainer within sandbox \"b7dce99966b0044f9270368f7c0d7f6ce87f006a3454c8271414dff1c09ef4be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:31:59.652529 kubelet[3172]: E1213 01:31:59.652274 3172 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-29?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:31:59.671550 containerd[1966]: time="2024-12-13T01:31:59.671501525Z" level=info msg="CreateContainer within sandbox \"b7dce99966b0044f9270368f7c0d7f6ce87f006a3454c8271414dff1c09ef4be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7ae9958bc225d54577bf7d5d04dd8b832d5378abb489924a7439c2576f981d81\""
Dec 13 01:31:59.672140 containerd[1966]: time="2024-12-13T01:31:59.672100865Z" level=info msg="StartContainer for \"7ae9958bc225d54577bf7d5d04dd8b832d5378abb489924a7439c2576f981d81\""
Dec 13 01:31:59.702240 systemd[1]: Started cri-containerd-7ae9958bc225d54577bf7d5d04dd8b832d5378abb489924a7439c2576f981d81.scope - libcontainer container 7ae9958bc225d54577bf7d5d04dd8b832d5378abb489924a7439c2576f981d81.
Dec 13 01:31:59.749030 containerd[1966]: time="2024-12-13T01:31:59.748984469Z" level=info msg="StartContainer for \"7ae9958bc225d54577bf7d5d04dd8b832d5378abb489924a7439c2576f981d81\" returns successfully"
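The same replace cycle then runs for kube-scheduler inside sandbox b7dce999…, and while it is in flight the kubelet briefly fails to renew its node Lease against the API server at 172.31.30.29:6443 (the Put times out after the 10s client deadline). That Lease is an ordinary object in the kube-node-lease namespace and can be inspected directly; the following client-go sketch assumes a kubeconfig path (placeholder) while the node name ip-172-31-30-29 is taken from the error line above.

    // lease_check.go - sketch: read the node Lease that the "Failed to update lease"
    // message refers to and print its holder and last renew time.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        lease, err := cs.CoordinationV1().Leases("kube-node-lease").
            Get(context.Background(), "ip-172-31-30-29", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("holder:", *lease.Spec.HolderIdentity)
        }
        if lease.Spec.RenewTime != nil {
            fmt.Println("last renew:", lease.Spec.RenewTime.Time)
        }
    }

A single missed renewal like this one is harmless as long as later attempts succeed, which is consistent with the scheduler and controller-manager containers coming back up successfully right afterwards.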