Jan 24 00:37:02.042285 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:37:02.042331 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:37:02.042350 kernel: BIOS-provided physical RAM map: Jan 24 00:37:02.042366 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:37:02.042376 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 24 00:37:02.042386 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Jan 24 00:37:02.042400 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Jan 24 00:37:02.042411 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 24 00:37:02.042422 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 24 00:37:02.042436 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 24 00:37:02.042448 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 24 00:37:02.042459 kernel: NX (Execute Disable) protection: active Jan 24 00:37:02.042475 kernel: APIC: Static calls initialized Jan 24 00:37:02.042490 kernel: efi: EFI v2.7 by EDK II Jan 24 00:37:02.042502 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 24 00:37:02.042519 kernel: SMBIOS 2.7 present. 
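[Note: the three "usable" ranges in the BIOS-e820 map above are what the kernel later reports as "Memory: 1874628K/2037804K available". A minimal Python check of that arithmetic, with the ranges copied from the log (end addresses are inclusive):]

usable = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x00000000786cdfff),
    (0x00000000789de000, 0x000000007c97bfff),
]
total = sum(end - start + 1 for start, end in usable)
# 2086715392 bytes = 2037808 KiB; the kernel then reserves the first
# 4 KiB page ("e820: update [mem 0x00000000-0x00000fff] usable ==> reserved"),
# which gives the 2037804K total reported further down in the boot.
print(total // 1024, "KiB")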
Jan 24 00:37:02.042531 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 24 00:37:02.042543 kernel: Hypervisor detected: KVM Jan 24 00:37:02.042557 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:37:02.042570 kernel: kvm-clock: using sched offset of 3851966223 cycles Jan 24 00:37:02.042585 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:37:02.042599 kernel: tsc: Detected 2499.996 MHz processor Jan 24 00:37:02.042613 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:37:02.042628 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:37:02.042642 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 24 00:37:02.042660 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:37:02.042675 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:37:02.042689 kernel: Using GB pages for direct mapping Jan 24 00:37:02.042703 kernel: Secure boot disabled Jan 24 00:37:02.042718 kernel: ACPI: Early table checksum verification disabled Jan 24 00:37:02.042732 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 24 00:37:02.042747 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 24 00:37:02.042761 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 24 00:37:02.042773 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 24 00:37:02.042788 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 24 00:37:02.042801 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 24 00:37:02.042813 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 24 00:37:02.042826 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 24 00:37:02.042841 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 24 00:37:02.043888 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 24 00:37:02.043932 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 24 00:37:02.043951 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 24 00:37:02.043963 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 24 00:37:02.043977 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 24 00:37:02.043992 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 24 00:37:02.044008 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 24 00:37:02.044023 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 24 00:37:02.044042 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jan 24 00:37:02.044057 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 24 00:37:02.044072 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 24 00:37:02.044086 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 24 00:37:02.044101 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 24 00:37:02.044115 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jan 24 00:37:02.044130 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 24 00:37:02.044145 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:37:02.044159 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:37:02.044175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 24 00:37:02.044195 kernel: NUMA: Initialized distance table, cnt=1 Jan 24 00:37:02.044210 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff] Jan 24 00:37:02.044227 kernel: Zone ranges: Jan 24 00:37:02.044242 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:37:02.044258 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 24 00:37:02.044273 kernel: Normal empty Jan 24 00:37:02.044289 kernel: Movable zone start for each node Jan 24 00:37:02.044314 kernel: Early memory node ranges Jan 24 00:37:02.044326 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:37:02.044343 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 24 00:37:02.044356 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 24 00:37:02.044368 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 24 00:37:02.044380 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:37:02.044393 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:37:02.044408 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 24 00:37:02.044424 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 24 00:37:02.044440 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 24 00:37:02.044455 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:37:02.044475 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 24 00:37:02.044491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:37:02.044507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:37:02.044523 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:37:02.044539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:37:02.044555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:37:02.044571 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:37:02.044587 kernel: TSC deadline timer available Jan 24 00:37:02.044603 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:37:02.044619 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:37:02.044638 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 24 00:37:02.044654 kernel: Booting paravirtualized kernel on KVM Jan 24 00:37:02.044671 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:37:02.044687 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:37:02.044703 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:37:02.044719 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:37:02.044734 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:37:02.044749 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:37:02.044766 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:37:02.044788 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected
flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:37:02.044805 kernel: random: crng init done Jan 24 00:37:02.044821 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:37:02.044837 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:37:02.044852 kernel: Fallback order for Node 0: 0 Jan 24 00:37:02.044939 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 Jan 24 00:37:02.044954 kernel: Policy zone: DMA32 Jan 24 00:37:02.044969 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:37:02.044988 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved) Jan 24 00:37:02.045003 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:37:02.045018 kernel: Kernel/User page tables isolation: enabled Jan 24 00:37:02.045033 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:37:02.045048 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:37:02.045061 kernel: Dynamic Preempt: voluntary Jan 24 00:37:02.045075 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:37:02.045091 kernel: rcu: RCU event tracing is enabled. Jan 24 00:37:02.045106 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:37:02.045125 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:37:02.045141 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:37:02.045154 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:37:02.045170 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:37:02.045185 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:37:02.045200 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:37:02.045216 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:37:02.045245 kernel: Console: colour dummy device 80x25 Jan 24 00:37:02.045261 kernel: printk: console [tty0] enabled Jan 24 00:37:02.045278 kernel: printk: console [ttyS0] enabled Jan 24 00:37:02.045294 kernel: ACPI: Core revision 20230628 Jan 24 00:37:02.045311 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 24 00:37:02.045330 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:37:02.045346 kernel: x2apic enabled Jan 24 00:37:02.045361 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:37:02.045378 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 24 00:37:02.045395 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 24 00:37:02.045414 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 24 00:37:02.045431 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 24 00:37:02.045447 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:37:02.045463 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:37:02.045479 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:37:02.045495 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 24 00:37:02.045512 kernel: RETBleed: Vulnerable Jan 24 00:37:02.045528 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:37:02.045544 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:37:02.045560 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:37:02.045579 kernel: GDS: Unknown: Dependent on hypervisor status Jan 24 00:37:02.045595 kernel: active return thunk: its_return_thunk Jan 24 00:37:02.045611 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:37:02.045627 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:37:02.045643 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:37:02.045659 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:37:02.045675 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 24 00:37:02.045691 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 24 00:37:02.045707 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:37:02.045723 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:37:02.045739 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:37:02.045758 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:37:02.045775 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:37:02.045791 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 24 00:37:02.045807 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 24 00:37:02.045823 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 24 00:37:02.045839 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 24 00:37:02.045873 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 24 00:37:02.045889 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 24 00:37:02.045904 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 24 00:37:02.045920 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:37:02.045936 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:37:02.045957 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:37:02.045973 kernel: landlock: Up and running. Jan 24 00:37:02.045988 kernel: SELinux: Initializing. Jan 24 00:37:02.046004 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 00:37:02.046020 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 00:37:02.046037 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 24 00:37:02.046053 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:37:02.046070 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:37:02.046088 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:37:02.046105 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 24 00:37:02.046126 kernel: signal: max sigframe size: 3632 Jan 24 00:37:02.046143 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:37:02.046160 kernel: rcu: Max phase no-delay instances is 400. 
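[Note: the x86/fpu offsets above are internally consistent with the "compacted" XSAVE format: each enabled feature is packed directly after the previous one, following the 512-byte legacy x87/SSE area and the 64-byte XSAVE header. A quick Python verification using only the values from the log:]

# (feature bit, offset, size) from the x86/fpu lines above.
features = [
    (2, 576, 256),    # AVX registers
    (3, 832, 64),     # MPX bounds registers
    (4, 896, 64),     # MPX CSR
    (5, 960, 64),     # AVX-512 opmask
    (6, 1024, 512),   # AVX-512 Hi256
    (7, 1536, 1024),  # AVX-512 ZMM_Hi256
    (9, 2560, 8),     # Protection Keys User registers
]
offset = 512 + 64  # legacy area + XSAVE header
for bit, logged, size in features:
    assert logged == offset, (bit, logged, offset)
    offset += size
assert offset == 2568  # matches "context size is 2568 bytes" above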
Jan 24 00:37:02.046178 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:37:02.046195 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:37:02.046212 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:37:02.046228 kernel: .... node #0, CPUs: #1 Jan 24 00:37:02.046246 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 24 00:37:02.046264 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 24 00:37:02.046285 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:37:02.046302 kernel: smpboot: Max logical packages: 1 Jan 24 00:37:02.046319 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 24 00:37:02.046336 kernel: devtmpfs: initialized Jan 24 00:37:02.046353 kernel: x86/mm: Memory block size: 128MB Jan 24 00:37:02.046370 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 24 00:37:02.046388 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:37:02.046405 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:37:02.046422 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:37:02.046444 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:37:02.046461 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:37:02.046479 kernel: audit: type=2000 audit(1769215021.042:1): state=initialized audit_enabled=0 res=1 Jan 24 00:37:02.046494 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:37:02.046511 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:37:02.046526 kernel: cpuidle: using governor menu Jan 24 00:37:02.046542 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:37:02.046559 kernel: dca service started, version 1.12.1 Jan 24 00:37:02.046574 kernel: PCI: Using configuration type 1 for base access Jan 24 00:37:02.046595 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:37:02.046610 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:37:02.046624 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:37:02.046638 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:37:02.046652 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:37:02.046667 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:37:02.046681 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:37:02.046696 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:37:02.046712 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 24 00:37:02.046731 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:37:02.046745 kernel: ACPI: Interpreter enabled Jan 24 00:37:02.046760 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:37:02.046776 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:37:02.046792 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:37:02.046808 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:37:02.046823 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 24 00:37:02.046839 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:37:02.047117 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:37:02.047292 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 24 00:37:02.047431 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 24 00:37:02.047451 kernel: acpiphp: Slot [3] registered Jan 24 00:37:02.047468 kernel: acpiphp: Slot [4] registered Jan 24 00:37:02.047484 kernel: acpiphp: Slot [5] registered Jan 24 00:37:02.047500 kernel: acpiphp: Slot [6] registered Jan 24 00:37:02.047516 kernel: acpiphp: Slot [7] registered Jan 24 00:37:02.047536 kernel: acpiphp: Slot [8] registered Jan 24 00:37:02.047552 kernel: acpiphp: Slot [9] registered Jan 24 00:37:02.047569 kernel: acpiphp: Slot [10] registered Jan 24 00:37:02.047585 kernel: acpiphp: Slot [11] registered Jan 24 00:37:02.047601 kernel: acpiphp: Slot [12] registered Jan 24 00:37:02.047617 kernel: acpiphp: Slot [13] registered Jan 24 00:37:02.047634 kernel: acpiphp: Slot [14] registered Jan 24 00:37:02.047650 kernel: acpiphp: Slot [15] registered Jan 24 00:37:02.047666 kernel: acpiphp: Slot [16] registered Jan 24 00:37:02.047683 kernel: acpiphp: Slot [17] registered Jan 24 00:37:02.047702 kernel: acpiphp: Slot [18] registered Jan 24 00:37:02.047718 kernel: acpiphp: Slot [19] registered Jan 24 00:37:02.047735 kernel: acpiphp: Slot [20] registered Jan 24 00:37:02.047751 kernel: acpiphp: Slot [21] registered Jan 24 00:37:02.047767 kernel: acpiphp: Slot [22] registered Jan 24 00:37:02.047783 kernel: acpiphp: Slot [23] registered Jan 24 00:37:02.047799 kernel: acpiphp: Slot [24] registered Jan 24 00:37:02.047815 kernel: acpiphp: Slot [25] registered Jan 24 00:37:02.047832 kernel: acpiphp: Slot [26] registered Jan 24 00:37:02.047851 kernel: acpiphp: Slot [27] registered Jan 24 00:37:02.051725 kernel: acpiphp: Slot [28] registered Jan 24 00:37:02.051743 kernel: acpiphp: Slot [29] registered Jan 24 00:37:02.051759 kernel: acpiphp: Slot [30] registered Jan 24 00:37:02.051774 kernel: acpiphp: Slot [31] registered Jan 24 00:37:02.051789 kernel: PCI host bridge to bus 0000:00 Jan 24 00:37:02.052029 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:37:02.052170 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:37:02.052342 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:37:02.052490 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 24 00:37:02.052623 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 24 00:37:02.052752 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:37:02.052998 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 24 00:37:02.053157 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 24 00:37:02.053317 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 24 00:37:02.053466 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 24 00:37:02.053609 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 24 00:37:02.053750 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 24 00:37:02.053904 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 24 00:37:02.054034 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 24 00:37:02.054164 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 24 00:37:02.054293 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 24 00:37:02.054439 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 24 00:37:02.054575 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Jan 24 00:37:02.054709 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 24 00:37:02.054842 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Jan 24 00:37:02.055012 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:37:02.055161 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 24 00:37:02.055310 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Jan 24 00:37:02.055459 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 24 00:37:02.055602 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Jan 24 00:37:02.055624 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:37:02.055641 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:37:02.055658 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:37:02.055675 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:37:02.055692 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 24 00:37:02.055714 kernel: iommu: Default domain type: Translated Jan 24 00:37:02.055731 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:37:02.055747 kernel: efivars: Registered efivars operations Jan 24 00:37:02.055764 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:37:02.055781 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:37:02.055798 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 24 00:37:02.055813 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 24 00:37:02.056004 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 24 00:37:02.056140 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 24 00:37:02.056278 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:37:02.056306 kernel: vgaarb: loaded Jan 24 00:37:02.056322 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 24 00:37:02.056338 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 24 00:37:02.056353 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:37:02.056368 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:37:02.056383 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:37:02.056398 kernel: pnp: PnP ACPI init Jan 24 00:37:02.056414 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:37:02.056433 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:37:02.056449 kernel: NET: Registered PF_INET protocol family Jan 24 00:37:02.056465 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:37:02.056480 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 24 00:37:02.056496 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:37:02.056512 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:37:02.056527 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 24 00:37:02.056542 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 24 00:37:02.056562 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 00:37:02.056578 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 00:37:02.056594 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:37:02.056610 kernel: NET: Registered PF_XDP protocol family Jan 24 00:37:02.056745 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:37:02.057642 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:37:02.057800 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:37:02.058067 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 24 00:37:02.058198 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 24 00:37:02.058353 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 24 00:37:02.058374 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:37:02.058391 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:37:02.058407 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 24 00:37:02.058422 kernel: clocksource: Switched to clocksource tsc Jan 24 00:37:02.058437 kernel: Initialise system trusted keyrings Jan 24 00:37:02.058453 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 24 00:37:02.058470 kernel: Key type asymmetric registered Jan 24 00:37:02.058490 kernel: Asymmetric key parser 'x509' registered Jan 24 00:37:02.058505 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:37:02.058520 kernel: io scheduler mq-deadline registered Jan 24 00:37:02.058536 kernel: io scheduler kyber registered Jan 24 00:37:02.058551 kernel: io scheduler bfq registered Jan 24 00:37:02.058567 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:37:02.058583 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:37:02.058598 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:37:02.058613 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:37:02.058633 kernel: i8042: Warning: Keylock active Jan 24 00:37:02.058648 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:37:02.058661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:37:02.058819 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 24 00:37:02.058973 kernel: rtc_cmos 00:00: registered as rtc0 Jan 24 00:37:02.059107 kernel: rtc_cmos 00:00: setting system clock to 2026-01-24T00:37:01 UTC (1769215021) Jan 24 00:37:02.059236 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 24 00:37:02.059257 kernel: intel_pstate: CPU model not supported Jan 24 00:37:02.059279 kernel: efifb: probing for efifb Jan 24 00:37:02.059296 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Jan 24 00:37:02.059313 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 24 00:37:02.059330 kernel: efifb: scrolling: redraw Jan 24 00:37:02.059347 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:37:02.059364 kernel: Console: switching to colour frame buffer device 100x37 Jan 24 00:37:02.059381 kernel: fb0: EFI VGA frame buffer device Jan 24 00:37:02.059398 kernel: pstore: Using crash dump compression: deflate Jan 24 00:37:02.059415 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:37:02.059435 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:37:02.059452 kernel: Segment Routing with IPv6 Jan 24 00:37:02.059468 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:37:02.059486 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:37:02.059502 kernel: Key type dns_resolver registered Jan 24 00:37:02.059519 kernel: IPI shorthand broadcast: enabled Jan 24 00:37:02.059563 kernel: sched_clock: Marking stable (1201045400, 127527014)->(1408051447, -79479033) Jan 24 00:37:02.059584 kernel: registered taskstats version 1 Jan 24 00:37:02.059602 kernel: Loading compiled-in X.509 certificates Jan 24 00:37:02.059622 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:37:02.059640 kernel: Key type .fscrypt registered Jan 24 00:37:02.059657 kernel: Key type fscrypt-provisioning registered Jan 24 00:37:02.059675 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:37:02.059692 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:37:02.059709 kernel: ima: No architecture policies found Jan 24 00:37:02.059727 kernel: clk: Disabling unused clocks Jan 24 00:37:02.059744 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:37:02.059763 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:37:02.059784 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:37:02.059800 kernel: Run /init as init process Jan 24 00:37:02.059818 kernel: with arguments: Jan 24 00:37:02.059836 kernel: /init Jan 24 00:37:02.059854 kernel: with environment: Jan 24 00:37:02.060007 kernel: HOME=/ Jan 24 00:37:02.060023 kernel: TERM=linux Jan 24 00:37:02.060041 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:37:02.060065 systemd[1]: Detected virtualization amazon. Jan 24 00:37:02.060081 systemd[1]: Detected architecture x86-64. Jan 24 00:37:02.060100 systemd[1]: Running in initrd. Jan 24 00:37:02.060116 systemd[1]: No hostname configured, using default hostname. Jan 24 00:37:02.060132 systemd[1]: Hostname set to .
Jan 24 00:37:02.060150 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:37:02.060167 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:37:02.060184 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:37:02.060204 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:37:02.060221 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:37:02.060238 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:37:02.060253 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:37:02.060273 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:37:02.060309 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:37:02.060329 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:37:02.060348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:37:02.060368 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:37:02.060388 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:37:02.060408 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:37:02.060427 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:37:02.060450 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:37:02.060469 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:37:02.060488 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:37:02.060508 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:37:02.060527 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:37:02.060545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:37:02.060563 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:37:02.060579 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:37:02.060597 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:37:02.060617 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:37:02.060635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:37:02.060652 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:37:02.060670 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:37:02.060686 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:37:02.060702 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:37:02.060719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:37:02.060735 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:37:02.060756 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:37:02.060773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:37:02.060827 systemd-journald[180]: Collecting audit messages is disabled. 
Jan 24 00:37:02.060881 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:37:02.060904 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:37:02.060923 systemd-journald[180]: Journal started Jan 24 00:37:02.060959 systemd-journald[180]: Runtime Journal (/run/log/journal/ec25b44890b18cab7b7223d61bdba322) is 4.7M, max 38.2M, 33.4M free. Jan 24 00:37:02.035439 systemd-modules-load[181]: Inserted module 'overlay' Jan 24 00:37:02.065718 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:37:02.077980 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:37:02.087150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:37:02.092875 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:37:02.093152 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:37:02.098924 kernel: Bridge firewalling registered Jan 24 00:37:02.098282 systemd-modules-load[181]: Inserted module 'br_netfilter' Jan 24 00:37:02.103188 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:37:02.104639 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:37:02.117125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:37:02.123679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:37:02.129373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:37:02.134964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:37:02.140288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:37:02.148043 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:37:02.152710 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:37:02.156413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:37:02.166642 dracut-cmdline[216]: dracut-dracut-053 Jan 24 00:37:02.172451 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:37:02.211177 systemd-resolved[217]: Positive Trust Anchors: Jan 24 00:37:02.211197 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:37:02.211261 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:37:02.219594 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 24 00:37:02.222794 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:37:02.223537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:37:02.262904 kernel: SCSI subsystem initialized Jan 24 00:37:02.272884 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:37:02.284890 kernel: iscsi: registered transport (tcp) Jan 24 00:37:02.306121 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:37:02.306207 kernel: QLogic iSCSI HBA Driver Jan 24 00:37:02.346601 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:37:02.355147 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:37:02.380928 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:37:02.381008 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:37:02.382121 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:37:02.424890 kernel: raid6: avx512x4 gen() 17866 MB/s Jan 24 00:37:02.442926 kernel: raid6: avx512x2 gen() 15029 MB/s Jan 24 00:37:02.460914 kernel: raid6: avx512x1 gen() 13327 MB/s Jan 24 00:37:02.478893 kernel: raid6: avx2x4 gen() 17721 MB/s Jan 24 00:37:02.496896 kernel: raid6: avx2x2 gen() 17828 MB/s Jan 24 00:37:02.515130 kernel: raid6: avx2x1 gen() 13790 MB/s Jan 24 00:37:02.515198 kernel: raid6: using algorithm avx512x4 gen() 17866 MB/s Jan 24 00:37:02.534521 kernel: raid6: .... xor() 7483 MB/s, rmw enabled Jan 24 00:37:02.534610 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:37:02.557892 kernel: xor: automatically using best checksumming function avx Jan 24 00:37:02.720889 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:37:02.731542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:37:02.737108 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:37:02.753113 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 24 00:37:02.758349 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:37:02.767137 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:37:02.795001 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 24 00:37:02.827228 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:37:02.832077 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:37:02.887215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:37:02.895704 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:37:02.933619 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:37:02.935828 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:37:02.938462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:37:02.939732 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:37:02.947153 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:37:02.981756 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:37:02.993937 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:37:03.008530 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:37:03.008707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:37:03.013687 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:37:03.014917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:37:03.015141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:37:03.023020 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:37:03.027931 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:37:03.027982 kernel: AES CTR mode by8 optimization enabled Jan 24 00:37:03.033630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:37:03.043741 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 24 00:37:03.044063 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 24 00:37:03.047716 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 24 00:37:03.047994 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 24 00:37:03.052738 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 24 00:37:03.053257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:37:03.054153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:37:03.062107 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:37:71:4a:94:7f Jan 24 00:37:03.065289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:37:03.071146 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 24 00:37:03.079260 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:37:03.079323 kernel: GPT:9289727 != 33554431 Jan 24 00:37:03.080171 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:37:03.082090 kernel: GPT:9289727 != 33554431 Jan 24 00:37:03.082135 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:37:03.084148 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:37:03.088123 (udev-worker)[461]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:37:03.104690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:37:03.113061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:37:03.133327 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
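[Note: the "GPT:9289727 != 33554431" warnings above are the usual sign of a disk image written for a smaller device: the image's backup GPT header sits at the image's last sector rather than the volume's. A worked check of the two LBAs from the log, assuming the 512-byte logical sectors typical of this NVMe/EBS setup:]

SECTOR = 512  # bytes per logical sector (assumed)
image_last_lba = 9289727   # where the image's GPT expects its backup header
disk_last_lba = 33554431   # actual last LBA of the device
print((image_last_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB image
print((disk_last_lba + 1) * SECTOR / 2**30)   # 16.0 GiB volume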
Jan 24 00:37:03.172258 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (461) Jan 24 00:37:03.174879 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (459) Jan 24 00:37:03.212969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 24 00:37:03.221348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:37:03.230997 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 24 00:37:03.241191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 24 00:37:03.241767 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 24 00:37:03.247047 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:37:03.256548 disk-uuid[635]: Primary Header is updated. Jan 24 00:37:03.256548 disk-uuid[635]: Secondary Entries is updated. Jan 24 00:37:03.256548 disk-uuid[635]: Secondary Header is updated. Jan 24 00:37:03.261892 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:37:03.267926 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:37:03.271955 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:37:04.279998 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:37:04.286873 disk-uuid[636]: The operation has completed successfully. Jan 24 00:37:04.423960 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:37:04.424106 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:37:04.446052 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:37:04.450885 sh[979]: Success Jan 24 00:37:04.466247 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 24 00:37:04.557360 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:37:04.565383 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:37:04.572991 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:37:04.599900 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:37:04.599963 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:37:04.599977 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:37:04.603612 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:37:04.603689 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:37:04.670962 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:37:04.686204 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:37:04.687482 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:37:04.692096 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:37:04.694071 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 24 00:37:04.722949 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:37:04.723022 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:37:04.725793 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:37:04.733920 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:37:04.750015 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:37:04.750044 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:37:04.760641 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:37:04.767159 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:37:04.803601 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:37:04.810121 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:37:04.849503 systemd-networkd[1171]: lo: Link UP Jan 24 00:37:04.849517 systemd-networkd[1171]: lo: Gained carrier Jan 24 00:37:04.851252 systemd-networkd[1171]: Enumeration completed Jan 24 00:37:04.851729 systemd-networkd[1171]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:37:04.851735 systemd-networkd[1171]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:37:04.852980 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:37:04.855050 systemd[1]: Reached target network.target - Network. Jan 24 00:37:04.855819 systemd-networkd[1171]: eth0: Link UP Jan 24 00:37:04.855825 systemd-networkd[1171]: eth0: Gained carrier Jan 24 00:37:04.855838 systemd-networkd[1171]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:37:04.867953 systemd-networkd[1171]: eth0: DHCPv4 address 172.31.16.201/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:37:05.039407 ignition[1122]: Ignition 2.19.0 Jan 24 00:37:05.039418 ignition[1122]: Stage: fetch-offline Jan 24 00:37:05.039623 ignition[1122]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:37:05.039632 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:37:05.040208 ignition[1122]: Ignition finished successfully Jan 24 00:37:05.041396 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:37:05.049066 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:37:05.062380 ignition[1180]: Ignition 2.19.0 Jan 24 00:37:05.062394 ignition[1180]: Stage: fetch Jan 24 00:37:05.062787 ignition[1180]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:37:05.062802 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:37:05.062923 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:37:05.077397 ignition[1180]: PUT result: OK Jan 24 00:37:05.079053 ignition[1180]: parsed url from cmdline: "" Jan 24 00:37:05.079157 ignition[1180]: no config URL provided Jan 24 00:37:05.079167 ignition[1180]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:37:05.079180 ignition[1180]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:37:05.079198 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:37:05.080149 ignition[1180]: PUT result: OK Jan 24 00:37:05.080195 ignition[1180]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 24 00:37:05.081007 ignition[1180]: GET result: OK Jan 24 00:37:05.081061 ignition[1180]: parsing config with SHA512: 973e683382aa79a63b9c15d7130cb5d110db6ce3f342807b3552ab99b8816bf0b244b5b1043e13ad1e61371f28415a6d3f4e3c0966be556c91cb61d8de7fdd9b Jan 24 00:37:05.086263 unknown[1180]: fetched base config from "system" Jan 24 00:37:05.086616 unknown[1180]: fetched base config from "system" Jan 24 00:37:05.086629 unknown[1180]: fetched user config from "aws" Jan 24 00:37:05.087041 ignition[1180]: fetch: fetch complete Jan 24 00:37:05.087046 ignition[1180]: fetch: fetch passed Jan 24 00:37:05.087087 ignition[1180]: Ignition finished successfully Jan 24 00:37:05.088746 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:37:05.094064 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:37:05.108979 ignition[1186]: Ignition 2.19.0 Jan 24 00:37:05.108991 ignition[1186]: Stage: kargs Jan 24 00:37:05.109355 ignition[1186]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:37:05.109364 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:37:05.109457 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:37:05.110599 ignition[1186]: PUT result: OK Jan 24 00:37:05.113388 ignition[1186]: kargs: kargs passed Jan 24 00:37:05.113461 ignition[1186]: Ignition finished successfully Jan 24 00:37:05.114960 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:37:05.121080 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:37:05.135425 ignition[1192]: Ignition 2.19.0 Jan 24 00:37:05.135442 ignition[1192]: Stage: disks Jan 24 00:37:05.135804 ignition[1192]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:37:05.135814 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:37:05.137626 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:37:05.138685 ignition[1192]: PUT result: OK Jan 24 00:37:05.142401 ignition[1192]: disks: disks passed Jan 24 00:37:05.142461 ignition[1192]: Ignition finished successfully Jan 24 00:37:05.144148 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:37:05.145226 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:37:05.145891 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:37:05.146313 systemd[1]: Reached target local-fs.target - Local File Systems. 
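[Note: the PUT-then-GET pairs in the Ignition stages above are the IMDSv2 flow: a session token is obtained with a PUT to /latest/api/token, then presented on each metadata GET. A minimal Python sketch of the same two requests against the endpoints shown in the log; the 60-second TTL is an arbitrary value chosen for illustration:]

import urllib.request

IMDS = "http://169.254.169.254"

# PUT /latest/api/token: request an IMDSv2 session token.
req = urllib.request.Request(
    f"{IMDS}/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
)
token = urllib.request.urlopen(req).read().decode()

# GET the user data, presenting the token (same URL Ignition fetches above).
req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req).read().decode())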
Jan 24 00:37:05.147035 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:37:05.147612 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:37:05.153093 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:37:05.187409 systemd-fsck[1200]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:37:05.190755 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:37:05.196009 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:37:05.295885 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:37:05.297060 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:37:05.298203 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:37:05.311076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:37:05.314128 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:37:05.316059 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:37:05.316970 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:37:05.317008 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:37:05.328453 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:37:05.331882 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1219) Jan 24 00:37:05.334928 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:37:05.334982 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:37:05.336085 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:37:05.338435 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:37:05.347890 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:37:05.349989 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:37:05.640910 initrd-setup-root[1243]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:37:05.670494 initrd-setup-root[1250]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:37:05.675573 initrd-setup-root[1257]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:37:05.680455 initrd-setup-root[1264]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:37:05.910252 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:37:05.922063 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:37:05.926204 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:37:05.936474 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 24 00:37:05.937239 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:37:05.960883 ignition[1331]: INFO : Ignition 2.19.0 Jan 24 00:37:05.960883 ignition[1331]: INFO : Stage: mount Jan 24 00:37:05.964610 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:37:05.965751 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:37:05.965751 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:37:05.967734 ignition[1331]: INFO : PUT result: OK Jan 24 00:37:05.969875 ignition[1331]: INFO : mount: mount passed Jan 24 00:37:05.971713 ignition[1331]: INFO : Ignition finished successfully Jan 24 00:37:05.972742 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:37:05.986042 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:37:05.990353 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:37:05.999096 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:37:06.019884 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1343) Jan 24 00:37:06.019946 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:37:06.022060 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:37:06.024551 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:37:06.030885 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:37:06.032903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:37:06.053079 ignition[1359]: INFO : Ignition 2.19.0 Jan 24 00:37:06.054126 ignition[1359]: INFO : Stage: files Jan 24 00:37:06.055050 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:37:06.055050 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:37:06.055050 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:37:06.058426 ignition[1359]: INFO : PUT result: OK Jan 24 00:37:06.059281 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:37:06.060533 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:37:06.060533 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:37:06.076760 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:37:06.077835 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:37:06.077835 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:37:06.077332 unknown[1359]: wrote ssh authorized keys file for user: core Jan 24 00:37:06.081519 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:37:06.082358 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:37:06.165662 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:37:06.344398 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 
00:37:06.344398 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 24 00:37:06.346629 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 24 00:37:06.539074 systemd-networkd[1171]: eth0: Gained IPv6LL Jan 24 00:37:06.563172 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 00:37:06.748665 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:37:06.750515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:37:07.154775 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 24 00:37:07.756790 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:37:07.756790 ignition[1359]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 24 00:37:07.771527 ignition[1359]: INFO : files: op(c): op(d): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:37:07.772683 ignition[1359]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:37:07.772683 ignition[1359]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 24 00:37:07.772683 ignition[1359]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:37:07.772683 ignition[1359]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:37:07.772683 ignition[1359]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:37:07.772683 ignition[1359]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:37:07.772683 ignition[1359]: INFO : files: files passed Jan 24 00:37:07.772683 ignition[1359]: INFO : Ignition finished successfully Jan 24 00:37:07.773666 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:37:07.782080 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:37:07.784189 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:37:07.787416 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:37:07.787512 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:37:07.799922 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:37:07.799922 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:37:07.802572 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:37:07.803958 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:37:07.805037 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:37:07.816109 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:37:07.842927 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:37:07.843034 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:37:07.844880 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:37:07.845579 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:37:07.846428 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:37:07.847593 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:37:07.875092 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:37:07.881081 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:37:07.894548 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:37:07.895265 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:37:07.896436 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:37:07.897320 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 24 00:37:07.897496 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:37:07.898793 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:37:07.899712 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:37:07.900686 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:37:07.901523 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:37:07.902354 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:37:07.903181 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:37:07.903981 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:37:07.904877 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:37:07.906049 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:37:07.906813 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:37:07.907551 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:37:07.907725 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:37:07.908987 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:37:07.909786 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:37:07.910499 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:37:07.910632 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:37:07.911357 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:37:07.911563 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:37:07.913033 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:37:07.913206 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:37:07.913954 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:37:07.914100 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:37:07.921183 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:37:07.922638 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:37:07.922838 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:37:07.928161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:37:07.929466 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:37:07.930286 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:37:07.931707 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:37:07.931922 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:37:07.941774 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 24 00:37:07.942956 ignition[1413]: INFO : Ignition 2.19.0 Jan 24 00:37:07.942956 ignition[1413]: INFO : Stage: umount Jan 24 00:37:07.942956 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:37:07.942956 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:37:07.942956 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:37:07.949558 ignition[1413]: INFO : PUT result: OK Jan 24 00:37:07.949558 ignition[1413]: INFO : umount: umount passed Jan 24 00:37:07.949558 ignition[1413]: INFO : Ignition finished successfully Jan 24 00:37:07.943846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:37:07.953336 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:37:07.953475 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:37:07.954720 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:37:07.954838 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:37:07.956158 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:37:07.956222 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:37:07.956979 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:37:07.957043 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:37:07.957596 systemd[1]: Stopped target network.target - Network. Jan 24 00:37:07.959946 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:37:07.960022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:37:07.961553 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:37:07.962005 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:37:07.966390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:37:07.966834 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:37:07.967820 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:37:07.969360 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:37:07.969425 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:37:07.969925 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:37:07.969972 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:37:07.970430 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:37:07.970494 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:37:07.971014 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:37:07.971075 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:37:07.971943 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:37:07.972670 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:37:07.977940 systemd-networkd[1171]: eth0: DHCPv6 lease lost Jan 24 00:37:07.980457 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:37:07.980642 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:37:07.984779 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:37:07.986074 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:37:07.986226 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 24 00:37:07.989929 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:37:07.990017 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:37:07.995001 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:37:07.996382 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:37:07.997067 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:37:07.998422 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:37:07.998488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:37:07.999012 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:37:07.999072 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:37:07.999612 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:37:07.999668 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:37:08.000363 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:37:08.003020 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:37:08.003150 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:37:08.014379 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:37:08.014477 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:37:08.017583 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:37:08.017698 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:37:08.018638 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:37:08.018817 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:37:08.020420 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:37:08.020490 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:37:08.021026 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:37:08.021076 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:37:08.022206 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:37:08.022266 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:37:08.023325 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:37:08.023387 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:37:08.024631 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:37:08.024695 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:37:08.032072 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:37:08.032799 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:37:08.032906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:37:08.033629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:37:08.033698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:37:08.040631 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:37:08.041235 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jan 24 00:37:08.042048 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:37:08.046034 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:37:08.056659 systemd[1]: Switching root. Jan 24 00:37:08.096519 systemd-journald[180]: Journal stopped Jan 24 00:37:09.725196 systemd-journald[180]: Received SIGTERM from PID 1 (systemd). Jan 24 00:37:09.725314 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:37:09.725338 kernel: SELinux: policy capability open_perms=1 Jan 24 00:37:09.725366 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:37:09.725393 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:37:09.725416 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:37:09.725438 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:37:09.725463 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:37:09.725483 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:37:09.725509 kernel: audit: type=1403 audit(1769215028.499:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:37:09.725536 systemd[1]: Successfully loaded SELinux policy in 41.104ms. Jan 24 00:37:09.725560 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.472ms. Jan 24 00:37:09.725588 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:37:09.725610 systemd[1]: Detected virtualization amazon. Jan 24 00:37:09.725631 systemd[1]: Detected architecture x86-64. Jan 24 00:37:09.725652 systemd[1]: Detected first boot. Jan 24 00:37:09.725674 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:37:09.725696 zram_generator::config[1456]: No configuration found. Jan 24 00:37:09.725723 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:37:09.725745 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:37:09.725770 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:37:09.725794 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:37:09.725817 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:37:09.725839 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:37:09.725872 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:37:09.725892 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:37:09.725913 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:37:09.725935 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:37:09.725957 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:37:09.725983 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:37:09.726005 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:37:09.726027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
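
The "SELinux: policy capability ..." lines above come straight from the kernel as the policy loads; after boot the same flags can be read back from selinuxfs. A sketch, assuming selinuxfs is mounted at its usual /sys/fs/selinux:

    import os

    capdir = "/sys/fs/selinux/policy_capabilities"
    if os.path.isdir(capdir):
        for cap in sorted(os.listdir(capdir)):
            with open(os.path.join(capdir, cap)) as f:
                print(f"policy capability {cap}={f.read().strip()}")
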
Jan 24 00:37:09.726049 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:37:09.726071 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:37:09.726093 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:37:09.726116 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:37:09.726138 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:37:09.726159 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:37:09.726184 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:37:09.726205 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:37:09.726227 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:37:09.726248 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:37:09.726270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:37:09.726291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:37:09.726313 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:37:09.726335 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:37:09.726359 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:37:09.726382 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:37:09.726403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:37:09.726424 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:37:09.726448 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:37:09.726469 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:37:09.726490 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:37:09.726511 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:37:09.726533 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:37:09.726558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:37:09.726580 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:37:09.726601 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:37:09.726622 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:37:09.726645 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:37:09.726667 systemd[1]: Reached target machines.target - Containers. Jan 24 00:37:09.726687 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:37:09.726709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:37:09.726732 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:37:09.726754 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jan 24 00:37:09.726775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:37:09.726795 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:37:09.726814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:37:09.726835 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:37:09.742062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:37:09.742133 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:37:09.742167 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:37:09.742190 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:37:09.742210 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:37:09.742233 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:37:09.742255 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:37:09.742277 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:37:09.742303 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:37:09.742323 kernel: loop: module loaded Jan 24 00:37:09.742344 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:37:09.742368 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:37:09.742389 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:37:09.742416 systemd[1]: Stopped verity-setup.service. Jan 24 00:37:09.742436 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:37:09.742458 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:37:09.742476 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:37:09.742496 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:37:09.742515 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:37:09.742535 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:37:09.742561 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:37:09.742582 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:37:09.742603 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:37:09.742623 kernel: fuse: init (API version 7.39) Jan 24 00:37:09.742644 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:37:09.742669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:37:09.742691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:37:09.742713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:37:09.742734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:37:09.742756 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:37:09.742782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:37:09.742804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
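
Each modprobe@ unit above loads one module (configfs, dm_mod, drm, efi_pstore, fuse, loop); the kernel's own "loop: module loaded" and "fuse: init" lines confirm two of them. A quick cross-check against /proc/modules; note that anything built into the kernel rather than loaded as a module will not appear there:

    wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}
    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}
    print("loaded as modules:", sorted(wanted & loaded))
    print("built-in or absent:", sorted(wanted - loaded))
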
Jan 24 00:37:09.744903 systemd-journald[1538]: Collecting audit messages is disabled. Jan 24 00:37:09.744976 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:37:09.745003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:37:09.745030 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:37:09.745058 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:37:09.745079 kernel: ACPI: bus type drm_connector registered Jan 24 00:37:09.745106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:37:09.745126 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:37:09.745148 systemd-journald[1538]: Journal started Jan 24 00:37:09.745186 systemd-journald[1538]: Runtime Journal (/run/log/journal/ec25b44890b18cab7b7223d61bdba322) is 4.7M, max 38.2M, 33.4M free. Jan 24 00:37:09.295071 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:37:09.340393 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 24 00:37:09.340958 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:37:09.749049 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:37:09.761930 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:37:09.775877 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:37:09.775963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:37:09.785900 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:37:09.792518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:37:09.800885 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:37:09.805886 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:37:09.812930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:37:09.819891 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:37:09.826904 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:37:09.832553 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:37:09.834239 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:37:09.834445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:37:09.836651 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:37:09.837197 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:37:09.839179 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:37:09.840517 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:37:09.842111 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:37:09.843631 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 24 00:37:09.869586 kernel: loop0: detected capacity change from 0 to 140768 Jan 24 00:37:09.877555 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:37:09.890965 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:37:09.897029 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:37:09.902083 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:37:09.906069 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:37:09.916396 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:37:09.920774 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:37:09.922208 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:37:09.954642 systemd-journald[1538]: Time spent on flushing to /var/log/journal/ec25b44890b18cab7b7223d61bdba322 is 63.211ms for 993 entries. Jan 24 00:37:09.954642 systemd-journald[1538]: System Journal (/var/log/journal/ec25b44890b18cab7b7223d61bdba322) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:37:10.035109 systemd-journald[1538]: Received client request to flush runtime journal. Jan 24 00:37:10.035188 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:37:10.035225 kernel: loop1: detected capacity change from 0 to 224512 Jan 24 00:37:09.965091 udevadm[1594]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:37:10.029234 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:37:10.031050 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:37:10.040644 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:37:10.043498 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:37:10.054704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:37:10.098952 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 24 00:37:10.099415 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 24 00:37:10.107436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:37:10.111241 kernel: loop2: detected capacity change from 0 to 61336 Jan 24 00:37:10.185119 kernel: loop3: detected capacity change from 0 to 142488 Jan 24 00:37:10.275900 kernel: loop4: detected capacity change from 0 to 140768 Jan 24 00:37:10.316267 kernel: loop5: detected capacity change from 0 to 224512 Jan 24 00:37:10.349891 kernel: loop6: detected capacity change from 0 to 61336 Jan 24 00:37:10.385519 kernel: loop7: detected capacity change from 0 to 142488 Jan 24 00:37:10.406990 (sd-merge)[1610]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 24 00:37:10.409407 (sd-merge)[1610]: Merged extensions into '/usr'. Jan 24 00:37:10.415323 systemd[1]: Reloading requested from client PID 1566 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:37:10.415500 systemd[1]: Reloading... Jan 24 00:37:10.535887 zram_generator::config[1636]: No configuration found. 
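
The (sd-merge) lines show systemd-sysext overlaying the four extension images onto /usr; the matching capacities of loop0..loop3 and loop4..loop7 suggest each image is scanned twice, and the merge is what triggers the reload that follows. After boot, the merged state can be inspected with systemd-sysext's status verb, e.g. from Python:

    import subprocess

    # "status" is systemd-sysext's default verb; it shows each hierarchy's state.
    out = subprocess.run(["systemd-sysext", "status"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)
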
Jan 24 00:37:10.786573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:37:10.895994 systemd[1]: Reloading finished in 479 ms. Jan 24 00:37:10.933077 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:37:10.935450 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:37:10.946096 systemd[1]: Starting ensure-sysext.service... Jan 24 00:37:10.948492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:37:10.962103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:37:10.968091 systemd[1]: Reloading requested from client PID 1688 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:37:10.968107 systemd[1]: Reloading... Jan 24 00:37:10.997368 systemd-udevd[1690]: Using default interface naming scheme 'v255'. Jan 24 00:37:11.017957 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:37:11.018471 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:37:11.019720 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:37:11.020188 systemd-tmpfiles[1689]: ACLs are not supported, ignoring. Jan 24 00:37:11.020290 systemd-tmpfiles[1689]: ACLs are not supported, ignoring. Jan 24 00:37:11.032944 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:37:11.032965 systemd-tmpfiles[1689]: Skipping /boot Jan 24 00:37:11.065338 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:37:11.066089 systemd-tmpfiles[1689]: Skipping /boot Jan 24 00:37:11.085793 zram_generator::config[1717]: No configuration found. Jan 24 00:37:11.205029 (udev-worker)[1730]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:37:11.347298 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 24 00:37:11.353096 ldconfig[1562]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:37:11.368913 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:37:11.372357 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jan 24 00:37:11.376901 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:37:11.376983 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 24 00:37:11.382900 kernel: ACPI: button: Sleep Button [SLPF] Jan 24 00:37:11.442926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:37:11.472885 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:37:11.498895 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1726) Jan 24 00:37:11.589407 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:37:11.591310 systemd[1]: Reloading finished in 622 ms. 
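
The docker.socket warning above is benign: systemd rewrites the legacy path at runtime, and the permanent fix is changing line 6 of the unit from ListenStream=/var/run/docker.sock to ListenStream=/run/docker.sock.
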
Jan 24 00:37:11.610789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:37:11.612481 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:37:11.614592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:37:11.697834 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:37:11.709114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:37:11.710750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:37:11.720203 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:37:11.724201 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:37:11.725123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:37:11.734222 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:37:11.737231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:37:11.742922 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:37:11.760600 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:37:11.771705 lvm[1884]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:37:11.773202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:37:11.774048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:37:11.778016 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:37:11.783194 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:37:11.794228 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:37:11.802225 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:37:11.804050 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:37:11.817664 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:37:11.822238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:37:11.824372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:37:11.829082 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:37:11.834808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:37:11.835197 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:37:11.837044 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:37:11.837632 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:37:11.839004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:37:11.839330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:37:11.841248 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 24 00:37:11.841479 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:37:11.848493 systemd[1]: Finished ensure-sysext.service. Jan 24 00:37:11.854314 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:37:11.861054 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:37:11.862732 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:37:11.863294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:37:11.869065 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:37:11.884119 lvm[1912]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:37:11.884482 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:37:11.909296 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:37:11.913312 augenrules[1919]: No rules Jan 24 00:37:11.917389 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:37:11.935110 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:37:11.956257 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:37:11.962160 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:37:11.968221 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:37:11.972248 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:37:11.973103 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:37:12.002386 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:37:12.071287 systemd-networkd[1899]: lo: Link UP Jan 24 00:37:12.071302 systemd-networkd[1899]: lo: Gained carrier Jan 24 00:37:12.074185 systemd-networkd[1899]: Enumeration completed Jan 24 00:37:12.074333 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:37:12.077356 systemd-networkd[1899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:37:12.077362 systemd-networkd[1899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:37:12.080323 systemd-networkd[1899]: eth0: Link UP Jan 24 00:37:12.080675 systemd-networkd[1899]: eth0: Gained carrier Jan 24 00:37:12.080775 systemd-networkd[1899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:37:12.086605 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:37:12.089460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:37:12.090382 systemd-resolved[1900]: Positive Trust Anchors: Jan 24 00:37:12.090392 systemd-resolved[1900]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:37:12.090445 systemd-resolved[1900]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:37:12.097011 systemd-networkd[1899]: eth0: DHCPv4 address 172.31.16.201/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:37:12.105922 systemd-resolved[1900]: Defaulting to hostname 'linux'. Jan 24 00:37:12.107810 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:37:12.108549 systemd[1]: Reached target network.target - Network. Jan 24 00:37:12.109042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:37:12.109444 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:37:12.109951 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:37:12.110401 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:37:12.111012 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:37:12.111424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:37:12.111749 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:37:12.112340 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:37:12.112432 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:37:12.112788 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:37:12.114386 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:37:12.116401 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:37:12.122213 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:37:12.123344 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:37:12.123849 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:37:12.124419 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:37:12.124840 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:37:12.124900 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:37:12.126049 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:37:12.130058 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:37:12.136076 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:37:12.139074 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:37:12.148112 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
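
Here eth0 came up with DHCPv4 address 172.31.16.201/20 via the shipped zz-default.network, and resolved fell back to hostname 'linux' pending the metadata-provided one. A sketch for inspecting the resulting link state with networkctl (a standard systemd tool; "status" prints the link's addresses, gateway, and DNS):

    import subprocess

    print(subprocess.run(["networkctl", "status", "eth0"],
                         capture_output=True, text=True).stdout)
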
Jan 24 00:37:12.149990 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:37:12.153312 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:37:12.164166 systemd[1]: Started ntpd.service - Network Time Service. Jan 24 00:37:12.168962 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:37:12.177099 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 24 00:37:12.183319 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:37:12.197388 jq[1948]: false Jan 24 00:37:12.197533 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:37:12.206680 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:37:12.208875 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:37:12.210655 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:37:12.216071 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:37:12.257995 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:37:12.270425 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:37:12.270678 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:37:12.272788 jq[1960]: true Jan 24 00:37:12.314874 extend-filesystems[1949]: Found loop4 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found loop5 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found loop6 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found loop7 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found nvme0n1 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found nvme0n1p1 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found nvme0n1p2 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found nvme0n1p3 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found usr Jan 24 00:37:12.322022 extend-filesystems[1949]: Found nvme0n1p4 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found nvme0n1p6 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found nvme0n1p7 Jan 24 00:37:12.322022 extend-filesystems[1949]: Found nvme0n1p9 Jan 24 00:37:12.322022 extend-filesystems[1949]: Checking size of /dev/nvme0n1p9 Jan 24 00:37:12.316296 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:37:12.369487 coreos-metadata[1946]: Jan 24 00:37:12.368 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:37:12.338635 dbus-daemon[1947]: [system] SELinux support is enabled Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: ---------------------------------------------------- Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: corporation. Support and training for ntp-4 are Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: available at https://www.nwtime.org/support Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: ---------------------------------------------------- Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: proto: precision = 0.078 usec (-24) Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: basedate set to 2026-01-11 Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: gps base set to 2026-01-11 (week 2401) Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:37:12.373275 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:37:12.373919 tar[1967]: linux-amd64/LICENSE Jan 24 00:37:12.373919 tar[1967]: linux-amd64/helm Jan 24 00:37:12.374178 jq[1972]: true Jan 24 00:37:12.316575 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:37:12.374458 coreos-metadata[1946]: Jan 24 00:37:12.372 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 24 00:37:12.344103 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:37:12.344401 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:37:12.344129 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:37:12.377333 coreos-metadata[1946]: Jan 24 00:37:12.377 INFO Fetch successful Jan 24 00:37:12.377333 coreos-metadata[1946]: Jan 24 00:37:12.377 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 24 00:37:12.352628 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:37:12.377852 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:37:12.377852 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: Listen normally on 3 eth0 172.31.16.201:123 Jan 24 00:37:12.377852 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: Listen normally on 4 lo [::1]:123 Jan 24 00:37:12.377852 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: bind(21) AF_INET6 fe80::437:71ff:fe4a:947f%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:37:12.377852 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: unable to create socket on eth0 (5) for fe80::437:71ff:fe4a:947f%2#123 Jan 24 00:37:12.377852 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: failed to init interface for address fe80::437:71ff:fe4a:947f%2 Jan 24 00:37:12.377852 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: Listening on routing socket on fd #21 for interface updates Jan 24 00:37:12.344140 ntpd[1951]: ---------------------------------------------------- Jan 24 00:37:12.352673 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
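
Note the doubled ntpd banner: journald records both ntpd's own timestamped stdout (the "ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: ..." lines) and the same messages as individually timestamped journal entries, so the startup text appears twice. The bind(21) failure on fe80::437:71ff:fe4a:947f%2 is transient: the IPv6 link-local address is likely not yet usable on eth0 at that instant, and ntpd is listening on the routing socket precisely so it can bind once the address appears.
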
Jan 24 00:37:12.378603 coreos-metadata[1946]: Jan 24 00:37:12.377 INFO Fetch successful Jan 24 00:37:12.378603 coreos-metadata[1946]: Jan 24 00:37:12.377 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 24 00:37:12.378603 coreos-metadata[1946]: Jan 24 00:37:12.378 INFO Fetch successful Jan 24 00:37:12.378603 coreos-metadata[1946]: Jan 24 00:37:12.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 24 00:37:12.344150 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:37:12.358243 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:37:12.380502 coreos-metadata[1946]: Jan 24 00:37:12.378 INFO Fetch successful Jan 24 00:37:12.380502 coreos-metadata[1946]: Jan 24 00:37:12.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 24 00:37:12.380502 coreos-metadata[1946]: Jan 24 00:37:12.380 INFO Fetch failed with 404: resource not found Jan 24 00:37:12.380502 coreos-metadata[1946]: Jan 24 00:37:12.380 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 24 00:37:12.344160 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:37:12.358272 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:37:12.387264 coreos-metadata[1946]: Jan 24 00:37:12.380 INFO Fetch successful Jan 24 00:37:12.387264 coreos-metadata[1946]: Jan 24 00:37:12.380 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 24 00:37:12.387264 coreos-metadata[1946]: Jan 24 00:37:12.381 INFO Fetch successful Jan 24 00:37:12.387264 coreos-metadata[1946]: Jan 24 00:37:12.381 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 24 00:37:12.387264 coreos-metadata[1946]: Jan 24 00:37:12.384 INFO Fetch successful Jan 24 00:37:12.387264 coreos-metadata[1946]: Jan 24 00:37:12.384 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 24 00:37:12.387264 coreos-metadata[1946]: Jan 24 00:37:12.386 INFO Fetch successful Jan 24 00:37:12.387264 coreos-metadata[1946]: Jan 24 00:37:12.387 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 24 00:37:12.344170 ntpd[1951]: corporation. 
Support and training for ntp-4 are Jan 24 00:37:12.379450 (ntainerd)[1986]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:37:12.344181 ntpd[1951]: available at https://www.nwtime.org/support Jan 24 00:37:12.344191 ntpd[1951]: ---------------------------------------------------- Jan 24 00:37:12.349088 ntpd[1951]: proto: precision = 0.078 usec (-24) Jan 24 00:37:12.352955 ntpd[1951]: basedate set to 2026-01-11 Jan 24 00:37:12.352975 ntpd[1951]: gps base set to 2026-01-11 (week 2401) Jan 24 00:37:12.363937 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1899 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:37:12.371686 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:37:12.371740 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:37:12.374773 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:37:12.374822 ntpd[1951]: Listen normally on 3 eth0 172.31.16.201:123 Jan 24 00:37:12.374887 ntpd[1951]: Listen normally on 4 lo [::1]:123 Jan 24 00:37:12.375994 ntpd[1951]: bind(21) AF_INET6 fe80::437:71ff:fe4a:947f%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:37:12.376021 ntpd[1951]: unable to create socket on eth0 (5) for fe80::437:71ff:fe4a:947f%2#123 Jan 24 00:37:12.376038 ntpd[1951]: failed to init interface for address fe80::437:71ff:fe4a:947f%2 Jan 24 00:37:12.376075 ntpd[1951]: Listening on routing socket on fd #21 for interface updates Jan 24 00:37:12.388649 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:37:12.392070 coreos-metadata[1946]: Jan 24 00:37:12.389 INFO Fetch successful Jan 24 00:37:12.392149 update_engine[1959]: I20260124 00:37:12.377790 1959 main.cc:92] Flatcar Update Engine starting Jan 24 00:37:12.389402 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:37:12.405112 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 00:37:12.414236 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:37:12.417022 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:37:12.418136 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:37:12.418136 ntpd[1951]: 24 Jan 00:37:12 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:37:12.414276 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:37:12.423435 update_engine[1959]: I20260124 00:37:12.423372 1959 update_check_scheduler.cc:74] Next update check in 5m21s Jan 24 00:37:12.428059 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:37:12.445883 extend-filesystems[1949]: Resized partition /dev/nvme0n1p9 Jan 24 00:37:12.451834 extend-filesystems[2001]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:37:12.474820 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 24 00:37:12.486968 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 24 00:37:12.575904 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1750) Jan 24 00:37:12.591322 locksmithd[1999]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:37:12.617029 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
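The coreos-metadata fetches above follow the IMDSv2 flow: one PUT to mint a session token, then GETs carrying that token. A stdlib-only sketch of the same sequence, using the endpoint and API version shown in the log (the token-TTL header name and the 21600-second value are the standard IMDSv2 ones, not taken from this log):

import urllib.error
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        IMDS + "/2021-01-03/" + path,
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
for path in ("meta-data/instance-id", "meta-data/local-ipv4", "meta-data/ipv6"):
    try:
        print(path, "=", imds_get(path, token))
    except urllib.error.HTTPError as err:
        # meta-data/ipv6 returns 404 on an IPv4-only instance,
        # matching the "Fetch failed with 404" line above
        print(path, "-> HTTP", err.code)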
Jan 24 00:37:12.618247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:37:12.676936 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 24 00:37:12.698526 bash[2031]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:37:12.698551 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:37:12.702963 extend-filesystems[2001]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 24 00:37:12.702963 extend-filesystems[2001]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 24 00:37:12.702963 extend-filesystems[2001]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 24 00:37:12.706937 extend-filesystems[1949]: Resized filesystem in /dev/nvme0n1p9 Jan 24 00:37:12.717116 systemd[1]: Starting sshkeys.service... Jan 24 00:37:12.719698 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:37:12.720815 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:37:12.754235 systemd-logind[1958]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:37:12.754269 systemd-logind[1958]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 24 00:37:12.754296 systemd-logind[1958]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:37:12.754674 systemd-logind[1958]: New seat seat0. Jan 24 00:37:12.760252 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:37:12.788816 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:37:12.801321 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:37:12.943179 coreos-metadata[2074]: Jan 24 00:37:12.943 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:37:12.959942 coreos-metadata[2074]: Jan 24 00:37:12.955 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 24 00:37:12.959942 coreos-metadata[2074]: Jan 24 00:37:12.955 INFO Fetch successful Jan 24 00:37:12.959942 coreos-metadata[2074]: Jan 24 00:37:12.955 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 24 00:37:12.963250 coreos-metadata[2074]: Jan 24 00:37:12.963 INFO Fetch successful Jan 24 00:37:12.964915 unknown[2074]: wrote ssh authorized keys file for user: core Jan 24 00:37:13.040691 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:37:13.040893 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 00:37:13.047553 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1994 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:37:13.061311 systemd[1]: Starting polkit.service - Authorization Manager... Jan 24 00:37:13.075893 update-ssh-keys[2130]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:37:13.077698 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:37:13.080435 systemd[1]: Finished sshkeys.service. 
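extend-filesystems grew the root ext4 filesystem on /dev/nvme0n1p9 online from 553472 to 3587067 4 KiB blocks (about 13.7 GiB). A quick userspace check of the result, assuming the filesystem is mounted at / as above:

import os

st = os.statvfs("/")  # the filesystem on /dev/nvme0n1p9, mounted at /
print("fragment size:", st.f_frsize)    # 4096 for this ext4 filesystem
print("total blocks :", st.f_blocks)    # ~3587067 after the online resize above
print("capacity     : %.2f GiB" % (st.f_blocks * st.f_frsize / 2**30))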
Jan 24 00:37:13.119316 polkitd[2136]: Started polkitd version 121 Jan 24 00:37:13.131004 systemd-networkd[1899]: eth0: Gained IPv6LL Jan 24 00:37:13.149772 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:37:13.151005 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:37:13.161287 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 24 00:37:13.171184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:37:13.182737 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:37:13.187608 polkitd[2136]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:37:13.187698 polkitd[2136]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:37:13.196235 polkitd[2136]: Finished loading, compiling and executing 2 rules Jan 24 00:37:13.202054 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:37:13.202347 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 00:37:13.204232 polkitd[2136]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:37:13.287760 systemd-hostnamed[1994]: Hostname set to (transient) Jan 24 00:37:13.288035 systemd-resolved[1900]: System hostname changed to 'ip-172-31-16-201'. Jan 24 00:37:13.291140 containerd[1986]: time="2026-01-24T00:37:13.290608961Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:37:13.310949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:37:13.363111 amazon-ssm-agent[2143]: Initializing new seelog logger Jan 24 00:37:13.363111 amazon-ssm-agent[2143]: New Seelog Logger Creation Complete Jan 24 00:37:13.363111 amazon-ssm-agent[2143]: 2026/01/24 00:37:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:13.363111 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:13.363111 amazon-ssm-agent[2143]: 2026/01/24 00:37:13 processing appconfig overrides Jan 24 00:37:13.366395 amazon-ssm-agent[2143]: 2026/01/24 00:37:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:13.366395 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:13.366395 amazon-ssm-agent[2143]: 2026/01/24 00:37:13 processing appconfig overrides Jan 24 00:37:13.366395 amazon-ssm-agent[2143]: 2026/01/24 00:37:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:13.366395 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:13.366395 amazon-ssm-agent[2143]: 2026/01/24 00:37:13 processing appconfig overrides Jan 24 00:37:13.367834 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO Proxy environment variables: Jan 24 00:37:13.375075 amazon-ssm-agent[2143]: 2026/01/24 00:37:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:13.375075 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:13.375075 amazon-ssm-agent[2143]: 2026/01/24 00:37:13 processing appconfig overrides Jan 24 00:37:13.468659 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO https_proxy: Jan 24 00:37:13.470026 containerd[1986]: time="2026-01-24T00:37:13.468923124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:37:13.478053 containerd[1986]: time="2026-01-24T00:37:13.477993709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:13.478053 containerd[1986]: time="2026-01-24T00:37:13.478051457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:37:13.478193 containerd[1986]: time="2026-01-24T00:37:13.478077829Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:37:13.478348 containerd[1986]: time="2026-01-24T00:37:13.478322879Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:37:13.478405 containerd[1986]: time="2026-01-24T00:37:13.478356464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:13.478884 containerd[1986]: time="2026-01-24T00:37:13.478436393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:13.478884 containerd[1986]: time="2026-01-24T00:37:13.478457613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:13.479112 containerd[1986]: time="2026-01-24T00:37:13.479079082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:13.479177 containerd[1986]: time="2026-01-24T00:37:13.479115119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:13.479177 containerd[1986]: time="2026-01-24T00:37:13.479138290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:13.479177 containerd[1986]: time="2026-01-24T00:37:13.479154347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:13.479281 containerd[1986]: time="2026-01-24T00:37:13.479268114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:13.481621 containerd[1986]: time="2026-01-24T00:37:13.479544884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:13.481621 containerd[1986]: time="2026-01-24T00:37:13.479787784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:13.481621 containerd[1986]: time="2026-01-24T00:37:13.479814022Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 24 00:37:13.483111 containerd[1986]: time="2026-01-24T00:37:13.483082915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:37:13.483179 containerd[1986]: time="2026-01-24T00:37:13.483165933Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:37:13.493849 containerd[1986]: time="2026-01-24T00:37:13.493797992Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:37:13.493978 containerd[1986]: time="2026-01-24T00:37:13.493902644Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:37:13.493978 containerd[1986]: time="2026-01-24T00:37:13.493925467Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:37:13.493978 containerd[1986]: time="2026-01-24T00:37:13.493947447Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:37:13.493978 containerd[1986]: time="2026-01-24T00:37:13.493968571Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:37:13.494176 containerd[1986]: time="2026-01-24T00:37:13.494153629Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:37:13.494513 containerd[1986]: time="2026-01-24T00:37:13.494493203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:37:13.494659 containerd[1986]: time="2026-01-24T00:37:13.494638635Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:37:13.494704 containerd[1986]: time="2026-01-24T00:37:13.494667539Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:37:13.494704 containerd[1986]: time="2026-01-24T00:37:13.494687351Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:37:13.494785 containerd[1986]: time="2026-01-24T00:37:13.494708705Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:37:13.494785 containerd[1986]: time="2026-01-24T00:37:13.494729325Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:37:13.494785 containerd[1986]: time="2026-01-24T00:37:13.494751484Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:37:13.494785 containerd[1986]: time="2026-01-24T00:37:13.494772643Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:37:13.494946 containerd[1986]: time="2026-01-24T00:37:13.494796485Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:37:13.494946 containerd[1986]: time="2026-01-24T00:37:13.494823575Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:37:13.494946 containerd[1986]: time="2026-01-24T00:37:13.494843794Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.498869718Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.498927298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.498951306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.498971648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.498993705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499012639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499069178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499087678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499110784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499130969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499154610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499172793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499191582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499210595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.500876 containerd[1986]: time="2026-01-24T00:37:13.499234065Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:37:13.501504 containerd[1986]: time="2026-01-24T00:37:13.499271754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.501504 containerd[1986]: time="2026-01-24T00:37:13.499289954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.501504 containerd[1986]: time="2026-01-24T00:37:13.499307099Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:37:13.503752 containerd[1986]: time="2026-01-24T00:37:13.503353005Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 24 00:37:13.503834 containerd[1986]: time="2026-01-24T00:37:13.503767644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:37:13.503834 containerd[1986]: time="2026-01-24T00:37:13.503786829Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:37:13.503834 containerd[1986]: time="2026-01-24T00:37:13.503807461Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:37:13.503834 containerd[1986]: time="2026-01-24T00:37:13.503821738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.504004 containerd[1986]: time="2026-01-24T00:37:13.503842306Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:37:13.504004 containerd[1986]: time="2026-01-24T00:37:13.503870873Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:37:13.504004 containerd[1986]: time="2026-01-24T00:37:13.503887924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:37:13.504408 containerd[1986]: time="2026-01-24T00:37:13.504314965Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:37:13.504619 containerd[1986]: time="2026-01-24T00:37:13.504421183Z" level=info msg="Connect containerd service" Jan 24 00:37:13.504619 containerd[1986]: time="2026-01-24T00:37:13.504473617Z" level=info msg="using legacy CRI server" Jan 24 00:37:13.504619 containerd[1986]: time="2026-01-24T00:37:13.504486150Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:37:13.504720 containerd[1986]: time="2026-01-24T00:37:13.504670341Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:37:13.511659 containerd[1986]: time="2026-01-24T00:37:13.511564611Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:37:13.514759 containerd[1986]: time="2026-01-24T00:37:13.514687096Z" level=info msg="Start subscribing containerd event" Jan 24 00:37:13.514875 containerd[1986]: time="2026-01-24T00:37:13.514784014Z" level=info msg="Start recovering state" Jan 24 00:37:13.514931 containerd[1986]: time="2026-01-24T00:37:13.514912733Z" level=info msg="Start event monitor" Jan 24 00:37:13.514972 containerd[1986]: time="2026-01-24T00:37:13.514948422Z" level=info msg="Start snapshots syncer" Jan 24 00:37:13.514972 containerd[1986]: time="2026-01-24T00:37:13.514963341Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:37:13.515037 containerd[1986]: time="2026-01-24T00:37:13.514974666Z" level=info msg="Start streaming server" Jan 24 00:37:13.518896 containerd[1986]: time="2026-01-24T00:37:13.518052958Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:37:13.518896 containerd[1986]: time="2026-01-24T00:37:13.518135629Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:37:13.518896 containerd[1986]: time="2026-01-24T00:37:13.518211679Z" level=info msg="containerd successfully booted in 0.231859s" Jan 24 00:37:13.518331 systemd[1]: Started containerd.service - containerd container runtime. 
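The CRI plugin's "no network config found in /etc/cni/net.d" error above is expected on first boot: the directory is empty until a CNI add-on installs a config, and the conf syncer the plugin just started rechecks it. For illustration only, a sketch of the shape of file it looks for; the bridge/host-local plugin choice and the 10.244.0.0/16 range are assumptions, not taken from this host:

import json
import pathlib

# Illustrative only: the general shape of a CNI .conflist; values are assumptions.
conflist = {
    "cniVersion": "0.4.0",
    "name": "demo-net",
    "plugins": [{
        "type": "bridge",              # expects the standard bridge plugin in /opt/cni/bin
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.244.0.0/16"}]],
            "routes": [{"dst": "0.0.0.0/0"}],
        },
    }],
}

path = pathlib.Path("/etc/cni/net.d/10-demo.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2) + "\n")
print("wrote", path)

Note that the config dump above sets NetworkPluginConfDir:/etc/cni/net.d and NetworkPluginMaxConfNum:1, so only the lexically first conflist in that directory would be loaded.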
Jan 24 00:37:13.571931 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO http_proxy: Jan 24 00:37:13.668024 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO no_proxy: Jan 24 00:37:13.768875 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO Checking if agent identity type OnPrem can be assumed Jan 24 00:37:13.865520 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO Checking if agent identity type EC2 can be assumed Jan 24 00:37:13.963874 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO Agent will take identity from EC2 Jan 24 00:37:14.017533 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:37:14.017533 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:37:14.017533 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [amazon-ssm-agent] Starting Core Agent Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [Registrar] Starting registrar module Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [EC2Identity] EC2 registration was successful. Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [CredentialRefresher] credentialRefresher has started Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:13 INFO [CredentialRefresher] Starting credentials refresher loop Jan 24 00:37:14.017712 amazon-ssm-agent[2143]: 2026-01-24 00:37:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 24 00:37:14.063219 amazon-ssm-agent[2143]: 2026-01-24 00:37:14 INFO [CredentialRefresher] Next credential rotation will be in 32.39999392226667 minutes Jan 24 00:37:14.156846 tar[1967]: linux-amd64/README.md Jan 24 00:37:14.176768 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:37:14.260326 sshd_keygen[1985]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:37:14.291188 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:37:14.304996 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:37:14.311790 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:37:14.312058 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:37:14.319487 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:37:14.334136 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:37:14.344102 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:37:14.347056 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:37:14.348139 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 24 00:37:15.031367 amazon-ssm-agent[2143]: 2026-01-24 00:37:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 24 00:37:15.091130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:37:15.094318 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:37:15.094917 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:37:15.095581 systemd[1]: Startup finished in 1.398s (kernel) + 6.726s (initrd) + 6.634s (userspace) = 14.759s. Jan 24 00:37:15.131967 amazon-ssm-agent[2143]: 2026-01-24 00:37:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2190) started Jan 24 00:37:15.232776 amazon-ssm-agent[2143]: 2026-01-24 00:37:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 24 00:37:15.344578 ntpd[1951]: Listen normally on 6 eth0 [fe80::437:71ff:fe4a:947f%2]:123 Jan 24 00:37:15.344973 ntpd[1951]: 24 Jan 00:37:15 ntpd[1951]: Listen normally on 6 eth0 [fe80::437:71ff:fe4a:947f%2]:123 Jan 24 00:37:15.874565 kubelet[2200]: E0124 00:37:15.874507 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:37:15.877136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:37:15.877291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:37:15.877561 systemd[1]: kubelet.service: Consumed 1.098s CPU time. Jan 24 00:37:16.404312 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:37:16.410178 systemd[1]: Started sshd@0-172.31.16.201:22-4.153.228.146:48010.service - OpenSSH per-connection server daemon (4.153.228.146:48010). Jan 24 00:37:16.906175 sshd[2217]: Accepted publickey for core from 4.153.228.146 port 48010 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:16.907401 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:16.917048 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:37:16.922245 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:37:16.925154 systemd-logind[1958]: New session 1 of user core. Jan 24 00:37:16.943411 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:37:16.954215 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:37:16.957809 (systemd)[2221]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:37:17.071945 systemd[2221]: Queued start job for default target default.target. Jan 24 00:37:17.078995 systemd[2221]: Created slice app.slice - User Application Slice. Jan 24 00:37:17.079028 systemd[2221]: Reached target paths.target - Paths. Jan 24 00:37:17.079043 systemd[2221]: Reached target timers.target - Timers. Jan 24 00:37:17.080426 systemd[2221]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:37:17.092926 systemd[2221]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
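The "SHA256:AB5yEUNjI0c4..." string in the Accepted publickey line above is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A small sketch that reproduces it from an authorized_keys entry such as the one update-ssh-keys wrote earlier:

import base64
import hashlib
import sys

def ssh_fingerprint(authorized_keys_line):
    # Format: "<key-type> <base64-blob> [comment]", e.g. "ssh-rsa AAAAB3... core@host"
    key_type, blob_b64 = authorized_keys_line.split()[:2]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    # OpenSSH prints the base64 digest without '=' padding
    return key_type, "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/home/core/.ssh/authorized_keys"
    for line in open(path):
        line = line.strip()
        if line and not line.startswith("#"):
            print(*ssh_fingerprint(line))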
Jan 24 00:37:17.093049 systemd[2221]: Reached target sockets.target - Sockets.
Jan 24 00:37:17.093065 systemd[2221]: Reached target basic.target - Basic System.
Jan 24 00:37:17.093106 systemd[2221]: Reached target default.target - Main User Target.
Jan 24 00:37:17.093136 systemd[2221]: Startup finished in 128ms.
Jan 24 00:37:17.093304 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 00:37:17.105147 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 00:37:17.473024 systemd[1]: Started sshd@1-172.31.16.201:22-4.153.228.146:48024.service - OpenSSH per-connection server daemon (4.153.228.146:48024).
Jan 24 00:37:17.954130 sshd[2232]: Accepted publickey for core from 4.153.228.146 port 48024 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:37:17.955630 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:37:17.961121 systemd-logind[1958]: New session 2 of user core.
Jan 24 00:37:17.967117 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 00:37:18.308963 sshd[2232]: pam_unix(sshd:session): session closed for user core
Jan 24 00:37:18.313170 systemd[1]: sshd@1-172.31.16.201:22-4.153.228.146:48024.service: Deactivated successfully.
Jan 24 00:37:18.315374 systemd[1]: session-2.scope: Deactivated successfully.
Jan 24 00:37:18.317145 systemd-logind[1958]: Session 2 logged out. Waiting for processes to exit.
Jan 24 00:37:18.318800 systemd-logind[1958]: Removed session 2.
Jan 24 00:37:18.402368 systemd[1]: Started sshd@2-172.31.16.201:22-4.153.228.146:48040.service - OpenSSH per-connection server daemon (4.153.228.146:48040).
Jan 24 00:37:18.890028 sshd[2239]: Accepted publickey for core from 4.153.228.146 port 48040 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:37:18.891448 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:37:18.896350 systemd-logind[1958]: New session 3 of user core.
Jan 24 00:37:18.902081 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 24 00:37:19.234995 sshd[2239]: pam_unix(sshd:session): session closed for user core
Jan 24 00:37:19.238104 systemd[1]: sshd@2-172.31.16.201:22-4.153.228.146:48040.service: Deactivated successfully.
Jan 24 00:37:19.239875 systemd[1]: session-3.scope: Deactivated successfully.
Jan 24 00:37:19.241302 systemd-logind[1958]: Session 3 logged out. Waiting for processes to exit.
Jan 24 00:37:19.242209 systemd-logind[1958]: Removed session 3.
Jan 24 00:37:19.321961 systemd[1]: Started sshd@3-172.31.16.201:22-4.153.228.146:48054.service - OpenSSH per-connection server daemon (4.153.228.146:48054).
Jan 24 00:37:20.206276 systemd-resolved[1900]: Clock change detected. Flushing caches.
Jan 24 00:37:20.673402 sshd[2246]: Accepted publickey for core from 4.153.228.146 port 48054 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:37:20.675029 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:37:20.679888 systemd-logind[1958]: New session 4 of user core.
Jan 24 00:37:20.694684 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 24 00:37:21.027315 sshd[2246]: pam_unix(sshd:session): session closed for user core
Jan 24 00:37:21.030877 systemd[1]: sshd@3-172.31.16.201:22-4.153.228.146:48054.service: Deactivated successfully.
Jan 24 00:37:21.032650 systemd[1]: session-4.scope: Deactivated successfully.
Jan 24 00:37:21.033209 systemd-logind[1958]: Session 4 logged out. Waiting for processes to exit.
Jan 24 00:37:21.034373 systemd-logind[1958]: Removed session 4.
Jan 24 00:37:21.117176 systemd[1]: Started sshd@4-172.31.16.201:22-4.153.228.146:48068.service - OpenSSH per-connection server daemon (4.153.228.146:48068).
Jan 24 00:37:21.605500 sshd[2253]: Accepted publickey for core from 4.153.228.146 port 48068 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:37:21.607069 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:37:21.612437 systemd-logind[1958]: New session 5 of user core.
Jan 24 00:37:21.614481 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 24 00:37:21.896371 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 24 00:37:21.896679 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:37:21.908912 sudo[2256]: pam_unix(sudo:session): session closed for user root
Jan 24 00:37:21.987006 sshd[2253]: pam_unix(sshd:session): session closed for user core
Jan 24 00:37:21.990287 systemd[1]: sshd@4-172.31.16.201:22-4.153.228.146:48068.service: Deactivated successfully.
Jan 24 00:37:21.992111 systemd[1]: session-5.scope: Deactivated successfully.
Jan 24 00:37:21.993528 systemd-logind[1958]: Session 5 logged out. Waiting for processes to exit.
Jan 24 00:37:21.995289 systemd-logind[1958]: Removed session 5.
Jan 24 00:37:22.074585 systemd[1]: Started sshd@5-172.31.16.201:22-4.153.228.146:48072.service - OpenSSH per-connection server daemon (4.153.228.146:48072).
Jan 24 00:37:22.564323 sshd[2261]: Accepted publickey for core from 4.153.228.146 port 48072 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:37:22.566130 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:37:22.571482 systemd-logind[1958]: New session 6 of user core.
Jan 24 00:37:22.578516 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 24 00:37:22.842305 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 24 00:37:22.842597 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:37:22.846397 sudo[2265]: pam_unix(sudo:session): session closed for user root
Jan 24 00:37:22.852241 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 24 00:37:22.852691 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:37:22.866648 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 24 00:37:22.870616 auditctl[2268]: No rules
Jan 24 00:37:22.871052 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 24 00:37:22.871296 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 24 00:37:22.878944 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:37:22.904545 augenrules[2286]: No rules
Jan 24 00:37:22.906073 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:37:22.907455 sudo[2264]: pam_unix(sudo:session): session closed for user root
Jan 24 00:37:22.986007 sshd[2261]: pam_unix(sshd:session): session closed for user core
Jan 24 00:37:22.988968 systemd[1]: sshd@5-172.31.16.201:22-4.153.228.146:48072.service: Deactivated successfully.
Jan 24 00:37:22.990738 systemd[1]: session-6.scope: Deactivated successfully.
Jan 24 00:37:22.991939 systemd-logind[1958]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:37:22.992822 systemd-logind[1958]: Removed session 6. Jan 24 00:37:23.083221 systemd[1]: Started sshd@6-172.31.16.201:22-4.153.228.146:48082.service - OpenSSH per-connection server daemon (4.153.228.146:48082). Jan 24 00:37:23.610578 sshd[2294]: Accepted publickey for core from 4.153.228.146 port 48082 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:23.612841 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:23.617686 systemd-logind[1958]: New session 7 of user core. Jan 24 00:37:23.621488 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:37:23.901966 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:37:23.902292 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:37:24.267531 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:37:24.278715 (dockerd)[2313]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:37:24.663101 dockerd[2313]: time="2026-01-24T00:37:24.662766718Z" level=info msg="Starting up" Jan 24 00:37:24.816643 systemd[1]: var-lib-docker-metacopy\x2dcheck1430375506-merged.mount: Deactivated successfully. Jan 24 00:37:24.838676 dockerd[2313]: time="2026-01-24T00:37:24.838425572Z" level=info msg="Loading containers: start." Jan 24 00:37:24.973282 kernel: Initializing XFRM netlink socket Jan 24 00:37:25.004553 (udev-worker)[2335]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:37:25.070012 systemd-networkd[1899]: docker0: Link UP Jan 24 00:37:25.098279 dockerd[2313]: time="2026-01-24T00:37:25.098217824Z" level=info msg="Loading containers: done." Jan 24 00:37:25.128793 dockerd[2313]: time="2026-01-24T00:37:25.128719241Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:37:25.129000 dockerd[2313]: time="2026-01-24T00:37:25.128861737Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:37:25.129052 dockerd[2313]: time="2026-01-24T00:37:25.129014122Z" level=info msg="Daemon has completed initialization" Jan 24 00:37:25.179099 dockerd[2313]: time="2026-01-24T00:37:25.178962191Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:37:25.179202 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:37:26.199680 containerd[1986]: time="2026-01-24T00:37:26.199641401Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:37:26.753768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:37:26.760511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:37:26.780581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876728890.mount: Deactivated successfully. Jan 24 00:37:27.032280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
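dockerd above reports "API listen on /run/docker.sock"; the daemon speaks ordinary HTTP over that Unix socket. A stdlib-only sketch that queries the documented Engine API /version endpoint (the printed values should match the version and commit logged by the daemon):

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client only dials TCP by default, so point connect() at an AF_UNIX socket."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
info = json.loads(conn.getresponse().read())
print(info["Version"], info["ApiVersion"])  # the daemon above logged version=26.1.0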
Jan 24 00:37:27.042625 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:37:27.100139 kubelet[2474]: E0124 00:37:27.100053 2474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:37:27.104613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:37:27.104766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:37:28.416213 containerd[1986]: time="2026-01-24T00:37:28.416158878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:28.417222 containerd[1986]: time="2026-01-24T00:37:28.417176839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 24 00:37:28.420079 containerd[1986]: time="2026-01-24T00:37:28.418183150Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:28.422966 containerd[1986]: time="2026-01-24T00:37:28.421095725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:28.422966 containerd[1986]: time="2026-01-24T00:37:28.422493367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.222811012s" Jan 24 00:37:28.422966 containerd[1986]: time="2026-01-24T00:37:28.422535050Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:37:28.423531 containerd[1986]: time="2026-01-24T00:37:28.423313318Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:37:30.191566 containerd[1986]: time="2026-01-24T00:37:30.191514950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:30.192900 containerd[1986]: time="2026-01-24T00:37:30.192849075Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 24 00:37:30.195272 containerd[1986]: time="2026-01-24T00:37:30.194314922Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:30.198168 containerd[1986]: time="2026-01-24T00:37:30.198115977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 
00:37:30.202043 containerd[1986]: time="2026-01-24T00:37:30.201990764Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.778353758s" Jan 24 00:37:30.202043 containerd[1986]: time="2026-01-24T00:37:30.202040825Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 00:37:30.203466 containerd[1986]: time="2026-01-24T00:37:30.203434391Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:37:31.761151 containerd[1986]: time="2026-01-24T00:37:31.761087690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:31.762212 containerd[1986]: time="2026-01-24T00:37:31.762171410Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 24 00:37:31.763175 containerd[1986]: time="2026-01-24T00:37:31.762944750Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:31.765420 containerd[1986]: time="2026-01-24T00:37:31.765390122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:31.766684 containerd[1986]: time="2026-01-24T00:37:31.766657303Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.563189713s" Jan 24 00:37:31.766795 containerd[1986]: time="2026-01-24T00:37:31.766780319Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:37:31.767482 containerd[1986]: time="2026-01-24T00:37:31.767222504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:37:32.804887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030765965.mount: Deactivated successfully. 
Jan 24 00:37:33.367002 containerd[1986]: time="2026-01-24T00:37:33.366946094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:33.367986 containerd[1986]: time="2026-01-24T00:37:33.367838318Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 24 00:37:33.371267 containerd[1986]: time="2026-01-24T00:37:33.368938394Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:33.372385 containerd[1986]: time="2026-01-24T00:37:33.372350187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:33.373133 containerd[1986]: time="2026-01-24T00:37:33.373086525Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.605835455s" Jan 24 00:37:33.373205 containerd[1986]: time="2026-01-24T00:37:33.373131460Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:37:33.373906 containerd[1986]: time="2026-01-24T00:37:33.373884688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:37:33.876403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945818242.mount: Deactivated successfully. 
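Each "Pulled image ... in ..." line above pairs a byte count with a wall-clock duration, so pull throughput falls out directly; kube-proxy's 31160918 bytes in 1.605835455 s, for instance, is roughly 18.5 MiB/s. An illustrative calculation using the figures copied from the log:

# Byte counts and durations as reported by the "Pulled image" lines above.
pulls = {
    "kube-apiserver:v1.32.11":          (29067246, 2.222811012),
    "kube-controller-manager:v1.32.11": (26650388, 1.778353758),
    "kube-scheduler:v1.32.11":          (21062128, 1.563189713),
    "kube-proxy:v1.32.11":              (31160918, 1.605835455),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")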
Jan 24 00:37:34.975370 containerd[1986]: time="2026-01-24T00:37:34.975301223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:34.976729 containerd[1986]: time="2026-01-24T00:37:34.976460054Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 24 00:37:34.979275 containerd[1986]: time="2026-01-24T00:37:34.977975979Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:34.981127 containerd[1986]: time="2026-01-24T00:37:34.981084752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:34.982515 containerd[1986]: time="2026-01-24T00:37:34.982476466Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.608481136s" Jan 24 00:37:34.982659 containerd[1986]: time="2026-01-24T00:37:34.982635351Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:37:34.983596 containerd[1986]: time="2026-01-24T00:37:34.983565941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:37:35.416589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851751875.mount: Deactivated successfully. 
Jan 24 00:37:35.423002 containerd[1986]: time="2026-01-24T00:37:35.422957816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:35.423907 containerd[1986]: time="2026-01-24T00:37:35.423810621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:37:35.425001 containerd[1986]: time="2026-01-24T00:37:35.424952101Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:35.427798 containerd[1986]: time="2026-01-24T00:37:35.427742684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:35.429193 containerd[1986]: time="2026-01-24T00:37:35.428580366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 444.979236ms" Jan 24 00:37:35.429193 containerd[1986]: time="2026-01-24T00:37:35.428618825Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:37:35.429637 containerd[1986]: time="2026-01-24T00:37:35.429532741Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:37:35.922963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525582396.mount: Deactivated successfully. Jan 24 00:37:37.355154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:37:37.360543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:37:37.751569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:37:37.754493 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:37:37.876061 kubelet[2658]: E0124 00:37:37.875981 2658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:37:37.879926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:37:37.880213 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
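Each kubelet crash in this log has the same cause: /var/lib/kubelet/config.yaml does not exist until kubeadm init/join writes it, so systemd keeps scheduling restarts until the start below gets past config loading. For illustration only, a sketch that writes a minimal file of the expected kind and apiVersion; every field value here is an assumption (kubeadm generates the real file), though cgroupDriver: systemd is at least consistent with the SystemdCgroup:true runc option in the containerd config dump above:

import pathlib
import textwrap

# Illustrative minimal KubeletConfiguration; kubeadm writes the real file on init/join.
config = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # assumption, consistent with SystemdCgroup:true above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
""")

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config)
print("wrote", path)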
Jan 24 00:37:38.492379 containerd[1986]: time="2026-01-24T00:37:38.492297561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:37:38.494225 containerd[1986]: time="2026-01-24T00:37:38.494163230Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Jan 24 00:37:38.496595 containerd[1986]: time="2026-01-24T00:37:38.496531604Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:37:38.501354 containerd[1986]: time="2026-01-24T00:37:38.500919766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:37:38.502719 containerd[1986]: time="2026-01-24T00:37:38.502663402Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.073098942s"
Jan 24 00:37:38.502719 containerd[1986]: time="2026-01-24T00:37:38.502717141Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 24 00:37:41.264839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:37:41.273958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:37:41.309672 systemd[1]: Reloading requested from client PID 2695 ('systemctl') (unit session-7.scope)...
Jan 24 00:37:41.309695 systemd[1]: Reloading...
Jan 24 00:37:41.451273 zram_generator::config[2736]: No configuration found.
Jan 24 00:37:41.605422 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:37:41.692482 systemd[1]: Reloading finished in 381 ms.
Jan 24 00:37:41.764804 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:37:41.767236 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:37:41.768050 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 00:37:41.768318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:37:41.771705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:37:41.967618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:37:41.977761 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:37:42.027290 kubelet[2802]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:37:42.027290 kubelet[2802]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:37:42.027290 kubelet[2802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:37:42.027770 kubelet[2802]: I0124 00:37:42.027390 2802 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:37:42.557507 kubelet[2802]: I0124 00:37:42.556333 2802 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 00:37:42.557507 kubelet[2802]: I0124 00:37:42.556386 2802 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:37:42.557507 kubelet[2802]: I0124 00:37:42.557059 2802 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 00:37:42.611762 kubelet[2802]: E0124 00:37:42.611722 2802 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.201:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:42.613573 kubelet[2802]: I0124 00:37:42.613530 2802 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:37:42.633140 kubelet[2802]: E0124 00:37:42.633088 2802 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:37:42.633140 kubelet[2802]: I0124 00:37:42.633136 2802 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:37:42.638860 kubelet[2802]: I0124 00:37:42.638822 2802 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:37:42.641109 kubelet[2802]: I0124 00:37:42.641053 2802 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:37:42.641289 kubelet[2802]: I0124 00:37:42.641101 2802 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-201","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 00:37:42.644392 kubelet[2802]: I0124 00:37:42.644365 2802 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:37:42.644452 kubelet[2802]: I0124 00:37:42.644397 2802 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 00:37:42.646187 kubelet[2802]: I0124 00:37:42.646159 2802 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:37:42.656594 kubelet[2802]: I0124 00:37:42.656283 2802 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 00:37:42.656594 kubelet[2802]: I0124 00:37:42.656327 2802 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:37:42.656594 kubelet[2802]: I0124 00:37:42.656353 2802 kubelet.go:352] "Adding apiserver pod source"
Jan 24 00:37:42.656594 kubelet[2802]: I0124 00:37:42.656364 2802 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:37:42.663432 kubelet[2802]: W0124 00:37:42.663070 2802 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.201:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-201&limit=500&resourceVersion=0": dial tcp 172.31.16.201:6443: connect: connection refused
Jan 24 00:37:42.663432 kubelet[2802]: E0124 00:37:42.663135 2802 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.201:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-201&limit=500&resourceVersion=0\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:42.663588 kubelet[2802]: W0124 00:37:42.663522 2802 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.201:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.201:6443: connect: connection refused
Jan 24 00:37:42.663588 kubelet[2802]: E0124 00:37:42.663558 2802 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.201:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:42.665599 kubelet[2802]: I0124 00:37:42.665557 2802 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:37:42.669309 kubelet[2802]: I0124 00:37:42.669285 2802 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 00:37:42.669545 kubelet[2802]: W0124 00:37:42.669475 2802 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 24 00:37:42.670843 kubelet[2802]: I0124 00:37:42.670296 2802 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 00:37:42.670843 kubelet[2802]: I0124 00:37:42.670326 2802 server.go:1287] "Started kubelet"
Jan 24 00:37:42.671229 kubelet[2802]: I0124 00:37:42.671185 2802 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:37:42.678908 kubelet[2802]: I0124 00:37:42.678309 2802 server.go:479] "Adding debug handlers to kubelet server"
Jan 24 00:37:42.682430 kubelet[2802]: I0124 00:37:42.682360 2802 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:37:42.682675 kubelet[2802]: I0124 00:37:42.682660 2802 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:37:42.684867 kubelet[2802]: I0124 00:37:42.684847 2802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:37:42.687564 kubelet[2802]: E0124 00:37:42.684154 2802 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.201:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.201:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-201.188d83b9cdbbf495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-201,UID:ip-172-31-16-201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-201,},FirstTimestamp:2026-01-24 00:37:42.670308501 +0000 UTC m=+0.689011978,LastTimestamp:2026-01-24 00:37:42.670308501 +0000 UTC m=+0.689011978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-201,}"
Jan 24 00:37:42.696850 kubelet[2802]: I0124 00:37:42.696806 2802 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 00:37:42.699343 kubelet[2802]: I0124 00:37:42.698488 2802 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 24 00:37:42.699343 kubelet[2802]: E0124 00:37:42.698733 2802 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-201\" not found"
Jan 24 00:37:42.699343 kubelet[2802]: I0124 00:37:42.699180 2802 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 24 00:37:42.700839 kubelet[2802]: I0124 00:37:42.700808 2802 reconciler.go:26] "Reconciler: start to sync state"
Jan 24 00:37:42.701982 kubelet[2802]: W0124 00:37:42.701939 2802 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.201:6443: connect: connection refused
Jan 24 00:37:42.702051 kubelet[2802]: E0124 00:37:42.701992 2802 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:42.702080 kubelet[2802]: E0124 00:37:42.702052 2802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-201?timeout=10s\": dial tcp 172.31.16.201:6443: connect: connection refused" interval="200ms"
Jan 24 00:37:42.710692 kubelet[2802]: I0124 00:37:42.710655 2802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 24 00:37:42.714648 kubelet[2802]: I0124 00:37:42.714626 2802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 24 00:37:42.720301 kubelet[2802]: I0124 00:37:42.720280 2802 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 24 00:37:42.720439 kubelet[2802]: I0124 00:37:42.720428 2802 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 24 00:37:42.720480 kubelet[2802]: I0124 00:37:42.720475 2802 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 24 00:37:42.720616 kubelet[2802]: E0124 00:37:42.720577 2802 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 00:37:42.722454 kubelet[2802]: I0124 00:37:42.722414 2802 factory.go:221] Registration of the systemd container factory successfully
Jan 24 00:37:42.722563 kubelet[2802]: I0124 00:37:42.722502 2802 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 24 00:37:42.723497 kubelet[2802]: W0124 00:37:42.723439 2802 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.201:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.201:6443: connect: connection refused
Jan 24 00:37:42.723497 kubelet[2802]: E0124 00:37:42.723494 2802 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.201:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:42.724699 kubelet[2802]: E0124 00:37:42.724535 2802 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 00:37:42.725206 kubelet[2802]: I0124 00:37:42.725188 2802 factory.go:221] Registration of the containerd container factory successfully
Jan 24 00:37:42.747045 kubelet[2802]: I0124 00:37:42.746779 2802 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 00:37:42.747045 kubelet[2802]: I0124 00:37:42.746795 2802 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 00:37:42.747045 kubelet[2802]: I0124 00:37:42.746827 2802 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:37:42.751883 kubelet[2802]: I0124 00:37:42.751612 2802 policy_none.go:49] "None policy: Start"
Jan 24 00:37:42.751883 kubelet[2802]: I0124 00:37:42.751648 2802 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 24 00:37:42.751883 kubelet[2802]: I0124 00:37:42.751661 2802 state_mem.go:35] "Initializing new in-memory state store"
Jan 24 00:37:42.759368 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 24 00:37:42.772668 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 24 00:37:42.775635 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 24 00:37:42.787487 kubelet[2802]: I0124 00:37:42.787205 2802 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 24 00:37:42.790343 kubelet[2802]: I0124 00:37:42.790020 2802 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 00:37:42.790343 kubelet[2802]: I0124 00:37:42.790054 2802 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 00:37:42.790343 kubelet[2802]: I0124 00:37:42.790346 2802 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 00:37:42.791452 kubelet[2802]: E0124 00:37:42.791430 2802 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 00:37:42.791525 kubelet[2802]: E0124 00:37:42.791466 2802 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-201\" not found"
Jan 24 00:37:42.838627 systemd[1]: Created slice kubepods-burstable-podf19feed6af48288d71160cd89e332cb4.slice - libcontainer container kubepods-burstable-podf19feed6af48288d71160cd89e332cb4.slice.
Jan 24 00:37:42.859152 kubelet[2802]: E0124 00:37:42.859091 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:42.863168 systemd[1]: Created slice kubepods-burstable-pod34f089f3a1cbece89744b5ca7f165d37.slice - libcontainer container kubepods-burstable-pod34f089f3a1cbece89744b5ca7f165d37.slice.
Jan 24 00:37:42.866014 kubelet[2802]: E0124 00:37:42.865982 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:42.868814 systemd[1]: Created slice kubepods-burstable-pod0318eab7f1303fef5421df485567abb4.slice - libcontainer container kubepods-burstable-pod0318eab7f1303fef5421df485567abb4.slice.
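The "Created slice kubepods-burstable-pod<uid>.slice" records show the kubelet's systemd cgroup driver (CgroupDriver":"systemd" in the nodeConfig dump above) materializing one slice per pod under its QoS-class parent. A Go sketch of the naming scheme, assuming the usual kubelet convention of replacing dashes in the pod UID with underscores, since systemd reserves "-" to express slice nesting; static pods, as here, get a dash-free config-hash UID:

package main

import (
	"fmt"
	"strings"
)

// podSliceName sketches how a pod lands under kubepods.slice ->
// kubepods-<qos>.slice -> kubepods-<qos>-pod<uid>.slice.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the kube-apiserver static pod in the log above.
	fmt.Println(podSliceName("burstable", "f19feed6af48288d71160cd89e332cb4"))
	// A hypothetical API-assigned UID, for contrast:
	fmt.Println(podSliceName("besteffort", "0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0"))
}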
Jan 24 00:37:42.870864 kubelet[2802]: E0124 00:37:42.870833 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:42.892261 kubelet[2802]: I0124 00:37:42.892213 2802 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-201"
Jan 24 00:37:42.892568 kubelet[2802]: E0124 00:37:42.892541 2802 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.201:6443/api/v1/nodes\": dial tcp 172.31.16.201:6443: connect: connection refused" node="ip-172-31-16-201"
Jan 24 00:37:42.902678 kubelet[2802]: E0124 00:37:42.902626 2802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-201?timeout=10s\": dial tcp 172.31.16.201:6443: connect: connection refused" interval="400ms"
Jan 24 00:37:43.003132 kubelet[2802]: I0124 00:37:43.003065 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f19feed6af48288d71160cd89e332cb4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-201\" (UID: \"f19feed6af48288d71160cd89e332cb4\") " pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:43.003132 kubelet[2802]: I0124 00:37:43.003118 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:43.003335 kubelet[2802]: I0124 00:37:43.003148 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:43.003335 kubelet[2802]: I0124 00:37:43.003173 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:43.003335 kubelet[2802]: I0124 00:37:43.003197 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:43.003335 kubelet[2802]: I0124 00:37:43.003220 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0318eab7f1303fef5421df485567abb4-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-201\" (UID: \"0318eab7f1303fef5421df485567abb4\") " pod="kube-system/kube-scheduler-ip-172-31-16-201"
Jan 24 00:37:43.003335 kubelet[2802]: I0124 00:37:43.003240 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f19feed6af48288d71160cd89e332cb4-ca-certs\") pod \"kube-apiserver-ip-172-31-16-201\" (UID: \"f19feed6af48288d71160cd89e332cb4\") " pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:43.003587 kubelet[2802]: I0124 00:37:43.003276 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f19feed6af48288d71160cd89e332cb4-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-201\" (UID: \"f19feed6af48288d71160cd89e332cb4\") " pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:43.003842 kubelet[2802]: I0124 00:37:43.003799 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:43.095152 kubelet[2802]: I0124 00:37:43.094765 2802 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-201"
Jan 24 00:37:43.095152 kubelet[2802]: E0124 00:37:43.095054 2802 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.201:6443/api/v1/nodes\": dial tcp 172.31.16.201:6443: connect: connection refused" node="ip-172-31-16-201"
Jan 24 00:37:43.139971 kubelet[2802]: E0124 00:37:43.139866 2802 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.201:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.201:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-201.188d83b9cdbbf495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-201,UID:ip-172-31-16-201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-201,},FirstTimestamp:2026-01-24 00:37:42.670308501 +0000 UTC m=+0.689011978,LastTimestamp:2026-01-24 00:37:42.670308501 +0000 UTC m=+0.689011978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-201,}"
Jan 24 00:37:43.160923 containerd[1986]: time="2026-01-24T00:37:43.160880143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-201,Uid:f19feed6af48288d71160cd89e332cb4,Namespace:kube-system,Attempt:0,}"
Jan 24 00:37:43.175214 containerd[1986]: time="2026-01-24T00:37:43.175171919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-201,Uid:34f089f3a1cbece89744b5ca7f165d37,Namespace:kube-system,Attempt:0,}"
Jan 24 00:37:43.175719 containerd[1986]: time="2026-01-24T00:37:43.175171923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-201,Uid:0318eab7f1303fef5421df485567abb4,Namespace:kube-system,Attempt:0,}"
Jan 24 00:37:43.303980 kubelet[2802]: E0124 00:37:43.303933 2802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-201?timeout=10s\": dial tcp 172.31.16.201:6443: connect: connection refused" interval="800ms"
Jan 24 00:37:43.497284 kubelet[2802]: I0124 00:37:43.497168 2802 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-201"
Jan 24 00:37:43.497755 kubelet[2802]: E0124 00:37:43.497709 2802 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.201:6443/api/v1/nodes\": dial tcp 172.31.16.201:6443: connect: connection refused" node="ip-172-31-16-201"
Jan 24 00:37:43.603626 kubelet[2802]: W0124 00:37:43.603580 2802 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.201:6443: connect: connection refused
Jan 24 00:37:43.603626 kubelet[2802]: E0124 00:37:43.603627 2802 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:43.644017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002570356.mount: Deactivated successfully.
Jan 24 00:37:43.660327 containerd[1986]: time="2026-01-24T00:37:43.660276047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:37:43.662319 containerd[1986]: time="2026-01-24T00:37:43.662279588Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:37:43.664123 containerd[1986]: time="2026-01-24T00:37:43.664045345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 24 00:37:43.666220 containerd[1986]: time="2026-01-24T00:37:43.666164560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 24 00:37:43.668839 containerd[1986]: time="2026-01-24T00:37:43.668124314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:37:43.670685 containerd[1986]: time="2026-01-24T00:37:43.670651619Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:37:43.672847 containerd[1986]: time="2026-01-24T00:37:43.672667105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 24 00:37:43.676082 containerd[1986]: time="2026-01-24T00:37:43.676052856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:37:43.676915 containerd[1986]: time="2026-01-24T00:37:43.676712601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.266042ms"
Jan 24 00:37:43.677788 containerd[1986]: time="2026-01-24T00:37:43.677752858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 516.788118ms"
Jan 24 00:37:43.681705 containerd[1986]: time="2026-01-24T00:37:43.681652426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.290757ms"
Jan 24 00:37:43.728274 kubelet[2802]: W0124 00:37:43.725753 2802 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.201:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-201&limit=500&resourceVersion=0": dial tcp 172.31.16.201:6443: connect: connection refused
Jan 24 00:37:43.728274 kubelet[2802]: E0124 00:37:43.725825 2802 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.201:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-201&limit=500&resourceVersion=0\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:43.867416 containerd[1986]: time="2026-01-24T00:37:43.867098021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:37:43.867416 containerd[1986]: time="2026-01-24T00:37:43.867155681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:37:43.867416 containerd[1986]: time="2026-01-24T00:37:43.867172006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:37:43.867416 containerd[1986]: time="2026-01-24T00:37:43.867262167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:37:43.869074 containerd[1986]: time="2026-01-24T00:37:43.868673181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:37:43.869074 containerd[1986]: time="2026-01-24T00:37:43.868718000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:37:43.869074 containerd[1986]: time="2026-01-24T00:37:43.868744492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:37:43.869074 containerd[1986]: time="2026-01-24T00:37:43.868824431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:37:43.871958 containerd[1986]: time="2026-01-24T00:37:43.871749841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:37:43.871958 containerd[1986]: time="2026-01-24T00:37:43.871928337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:37:43.872177 containerd[1986]: time="2026-01-24T00:37:43.872070033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:37:43.875714 containerd[1986]: time="2026-01-24T00:37:43.875514907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:37:43.895565 systemd[1]: Started cri-containerd-97665fff1f6cbba940631683bdd31a4b1c3294dedb72a91df93ee8fd13460584.scope - libcontainer container 97665fff1f6cbba940631683bdd31a4b1c3294dedb72a91df93ee8fd13460584.
Jan 24 00:37:43.914460 systemd[1]: Started cri-containerd-323103ff556b3d5d7bdc62dcdda1c48946517a3b3abf4e630e956e502cfbf57f.scope - libcontainer container 323103ff556b3d5d7bdc62dcdda1c48946517a3b3abf4e630e956e502cfbf57f.
Jan 24 00:37:43.916833 systemd[1]: Started cri-containerd-91c9b265772f5952221811a38c769586d14591006fddc05cae96e6e1c159cf0f.scope - libcontainer container 91c9b265772f5952221811a38c769586d14591006fddc05cae96e6e1c159cf0f.
Jan 24 00:37:43.977162 containerd[1986]: time="2026-01-24T00:37:43.976878157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-201,Uid:f19feed6af48288d71160cd89e332cb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"97665fff1f6cbba940631683bdd31a4b1c3294dedb72a91df93ee8fd13460584\""
Jan 24 00:37:43.990231 containerd[1986]: time="2026-01-24T00:37:43.989289925Z" level=info msg="CreateContainer within sandbox \"97665fff1f6cbba940631683bdd31a4b1c3294dedb72a91df93ee8fd13460584\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 24 00:37:44.001405 containerd[1986]: time="2026-01-24T00:37:43.994027523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-201,Uid:0318eab7f1303fef5421df485567abb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"91c9b265772f5952221811a38c769586d14591006fddc05cae96e6e1c159cf0f\""
Jan 24 00:37:44.001405 containerd[1986]: time="2026-01-24T00:37:43.999688248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-201,Uid:34f089f3a1cbece89744b5ca7f165d37,Namespace:kube-system,Attempt:0,} returns sandbox id \"323103ff556b3d5d7bdc62dcdda1c48946517a3b3abf4e630e956e502cfbf57f\""
Jan 24 00:37:44.005769 containerd[1986]: time="2026-01-24T00:37:44.005729857Z" level=info msg="CreateContainer within sandbox \"323103ff556b3d5d7bdc62dcdda1c48946517a3b3abf4e630e956e502cfbf57f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 24 00:37:44.006219 containerd[1986]: time="2026-01-24T00:37:44.006196945Z" level=info msg="CreateContainer within sandbox \"91c9b265772f5952221811a38c769586d14591006fddc05cae96e6e1c159cf0f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 24 00:37:44.052052 containerd[1986]: time="2026-01-24T00:37:44.051998954Z" level=info msg="CreateContainer within sandbox \"97665fff1f6cbba940631683bdd31a4b1c3294dedb72a91df93ee8fd13460584\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c605560eea13822ab26b6befdaebaa49cba71cf7fc856726a660e1093c138d11\""
Jan 24 00:37:44.052722 kubelet[2802]: W0124 00:37:44.052670 2802 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.201:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.201:6443: connect: connection refused
Jan 24 00:37:44.053644 kubelet[2802]: E0124 00:37:44.052728 2802 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.201:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:44.055771 containerd[1986]: time="2026-01-24T00:37:44.054717854Z" level=info msg="CreateContainer within sandbox \"323103ff556b3d5d7bdc62dcdda1c48946517a3b3abf4e630e956e502cfbf57f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468\""
Jan 24 00:37:44.055771 containerd[1986]: time="2026-01-24T00:37:44.054958483Z" level=info msg="StartContainer for \"c605560eea13822ab26b6befdaebaa49cba71cf7fc856726a660e1093c138d11\""
Jan 24 00:37:44.059271 containerd[1986]: time="2026-01-24T00:37:44.057805222Z" level=info msg="CreateContainer within sandbox \"91c9b265772f5952221811a38c769586d14591006fddc05cae96e6e1c159cf0f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb\""
Jan 24 00:37:44.059271 containerd[1986]: time="2026-01-24T00:37:44.057960050Z" level=info msg="StartContainer for \"9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468\""
Jan 24 00:37:44.066912 containerd[1986]: time="2026-01-24T00:37:44.066876295Z" level=info msg="StartContainer for \"a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb\""
Jan 24 00:37:44.078587 kubelet[2802]: W0124 00:37:44.078243 2802 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.201:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.201:6443: connect: connection refused
Jan 24 00:37:44.078709 kubelet[2802]: E0124 00:37:44.078693 2802 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.201:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:44.092468 systemd[1]: Started cri-containerd-c605560eea13822ab26b6befdaebaa49cba71cf7fc856726a660e1093c138d11.scope - libcontainer container c605560eea13822ab26b6befdaebaa49cba71cf7fc856726a660e1093c138d11.
Jan 24 00:37:44.096849 systemd[1]: Started cri-containerd-9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468.scope - libcontainer container 9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468.
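The containerd records above trace the CRI call order for each static pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it, with systemd placing each container in a cri-containerd-<id>.scope. A stubbed Go sketch of that sequence; the interface below is invented for illustration, while the real CRI RuntimeService uses much richer request and response messages:

package main

import "fmt"

// Invented, minimal stand-in for the three CRI RuntimeService calls seen above.
type runtimeService interface {
	RunPodSandbox(podName string) (sandboxID string)
	CreateContainer(sandboxID, containerName string) (containerID string)
	StartContainer(containerID string)
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) string {
	f.n++
	return fmt.Sprintf("sandbox-%x", f.n)
}
func (f *fakeRuntime) CreateContainer(sandboxID, name string) string {
	f.n++
	return fmt.Sprintf("container-%x", f.n)
}
func (f *fakeRuntime) StartContainer(id string) { fmt.Println("started", id) }

func main() {
	var rs runtimeService = &fakeRuntime{}
	// One sandbox per pod, then the pod's container inside it -- the same
	// order the log shows for kube-apiserver, controller-manager, scheduler.
	sb := rs.RunPodSandbox("kube-apiserver-ip-172-31-16-201")
	ctr := rs.CreateContainer(sb, "kube-apiserver")
	rs.StartContainer(ctr)
}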
Jan 24 00:37:44.105497 kubelet[2802]: E0124 00:37:44.105457 2802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-201?timeout=10s\": dial tcp 172.31.16.201:6443: connect: connection refused" interval="1.6s"
Jan 24 00:37:44.118569 systemd[1]: Started cri-containerd-a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb.scope - libcontainer container a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb.
Jan 24 00:37:44.182201 containerd[1986]: time="2026-01-24T00:37:44.182119829Z" level=info msg="StartContainer for \"c605560eea13822ab26b6befdaebaa49cba71cf7fc856726a660e1093c138d11\" returns successfully"
Jan 24 00:37:44.184964 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 24 00:37:44.222387 containerd[1986]: time="2026-01-24T00:37:44.222344757Z" level=info msg="StartContainer for \"9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468\" returns successfully"
Jan 24 00:37:44.232452 containerd[1986]: time="2026-01-24T00:37:44.232373726Z" level=info msg="StartContainer for \"a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb\" returns successfully"
Jan 24 00:37:44.301172 kubelet[2802]: I0124 00:37:44.301146 2802 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-201"
Jan 24 00:37:44.302600 kubelet[2802]: E0124 00:37:44.302565 2802 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.201:6443/api/v1/nodes\": dial tcp 172.31.16.201:6443: connect: connection refused" node="ip-172-31-16-201"
Jan 24 00:37:44.656045 kubelet[2802]: E0124 00:37:44.656003 2802 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.201:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.201:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:37:44.753346 kubelet[2802]: E0124 00:37:44.752898 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:44.758037 kubelet[2802]: E0124 00:37:44.757949 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:44.759870 kubelet[2802]: E0124 00:37:44.759650 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:45.763175 kubelet[2802]: E0124 00:37:45.763141 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:45.766262 kubelet[2802]: E0124 00:37:45.765478 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:45.905120 kubelet[2802]: I0124 00:37:45.905090 2802 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-201"
Jan 24 00:37:46.935600 kubelet[2802]: E0124 00:37:46.935561 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:48.132506 kubelet[2802]: E0124 00:37:48.132444 2802 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:48.229931 kubelet[2802]: E0124 00:37:48.229734 2802 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-201\" not found" node="ip-172-31-16-201"
Jan 24 00:37:48.389145 kubelet[2802]: I0124 00:37:48.388993 2802 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-201"
Jan 24 00:37:48.389145 kubelet[2802]: E0124 00:37:48.389045 2802 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-201\": node \"ip-172-31-16-201\" not found"
Jan 24 00:37:48.399724 kubelet[2802]: I0124 00:37:48.399335 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:48.423782 kubelet[2802]: E0124 00:37:48.423598 2802 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-201\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:48.423782 kubelet[2802]: I0124 00:37:48.423647 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-201"
Jan 24 00:37:48.429279 kubelet[2802]: E0124 00:37:48.427186 2802 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-201\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-201"
Jan 24 00:37:48.429279 kubelet[2802]: I0124 00:37:48.427217 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:48.431547 kubelet[2802]: E0124 00:37:48.431510 2802 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-201\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:48.667192 kubelet[2802]: I0124 00:37:48.667061 2802 apiserver.go:52] "Watching apiserver"
Jan 24 00:37:48.700258 kubelet[2802]: I0124 00:37:48.700212 2802 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 00:37:50.441913 systemd[1]: Reloading requested from client PID 3075 ('systemctl') (unit session-7.scope)...
Jan 24 00:37:50.441934 systemd[1]: Reloading...
Jan 24 00:37:50.567288 zram_generator::config[3119]: No configuration found.
Jan 24 00:37:50.688703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:37:50.792203 systemd[1]: Reloading finished in 349 ms.
Jan 24 00:37:50.832541 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:37:50.842660 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 00:37:50.842926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:37:50.842993 systemd[1]: kubelet.service: Consumed 1.080s CPU time, 131.1M memory peak, 0B memory swap peak.
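Note how the lease controller's retry interval doubles across the attempts above: 200ms, 400ms, 800ms, then 1.6s, until the API server finally answers and the node registers at 00:37:48. A minimal Go sketch of that doubling backoff; the 7-second ceiling is an assumption for illustration, not something this log demonstrates:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry interval, matching controller.go:145 above
	// (200ms -> 400ms -> 800ms -> 1.6s). The ceiling here is assumed.
	const maxBackoff = 7 * time.Second
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry in %s\n", attempt, interval)
		interval *= 2
		if interval > maxBackoff {
			interval = maxBackoff
		}
	}
}

The "no PriorityClass with name system-node-critical" failures in the same window are the same class of transient: the built-in PriorityClasses only exist once the API server is serving, so mirror-pod creation succeeds on a later retry.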
Jan 24 00:37:50.852987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:37:51.088356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:37:51.094101 (kubelet)[3175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:37:51.202062 kubelet[3175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:37:51.202539 kubelet[3175]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:37:51.202617 kubelet[3175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:37:51.208877 kubelet[3175]: I0124 00:37:51.208803 3175 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:37:51.227852 kubelet[3175]: I0124 00:37:51.227809 3175 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 00:37:51.227852 kubelet[3175]: I0124 00:37:51.227837 3175 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:37:51.228530 kubelet[3175]: I0124 00:37:51.228495 3175 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 00:37:51.235921 kubelet[3175]: I0124 00:37:51.235883 3175 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 24 00:37:51.241800 kubelet[3175]: I0124 00:37:51.240878 3175 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:37:51.254128 kubelet[3175]: E0124 00:37:51.254053 3175 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:37:51.254128 kubelet[3175]: I0124 00:37:51.254127 3175 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:37:51.262194 kubelet[3175]: I0124 00:37:51.262163 3175 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:37:51.265404 kubelet[3175]: I0124 00:37:51.264936 3175 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:37:51.265595 kubelet[3175]: I0124 00:37:51.265410 3175 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-201","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 00:37:51.265699 kubelet[3175]: I0124 00:37:51.265598 3175 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:37:51.265699 kubelet[3175]: I0124 00:37:51.265609 3175 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 00:37:51.265699 kubelet[3175]: I0124 00:37:51.265653 3175 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:37:51.266139 kubelet[3175]: I0124 00:37:51.265789 3175 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 00:37:51.266139 kubelet[3175]: I0124 00:37:51.265810 3175 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:37:51.266139 kubelet[3175]: I0124 00:37:51.265827 3175 kubelet.go:352] "Adding apiserver pod source"
Jan 24 00:37:51.266139 kubelet[3175]: I0124 00:37:51.265837 3175 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:37:51.268467 kubelet[3175]: I0124 00:37:51.268432 3175 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:37:51.268814 kubelet[3175]: I0124 00:37:51.268795 3175 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 00:37:51.276269 kubelet[3175]: I0124 00:37:51.273513 3175 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 00:37:51.276269 kubelet[3175]: I0124 00:37:51.273557 3175 server.go:1287] "Started kubelet"
Jan 24 00:37:51.276269 kubelet[3175]: I0124 00:37:51.276046 3175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:37:51.276608 kubelet[3175]: I0124 00:37:51.276596 3175 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:37:51.279206 kubelet[3175]: I0124 00:37:51.279184 3175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:37:51.294527 kubelet[3175]: I0124 00:37:51.294476 3175 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:37:51.298928 kubelet[3175]: I0124 00:37:51.298898 3175 server.go:479] "Adding debug handlers to kubelet server"
Jan 24 00:37:51.303886 kubelet[3175]: I0124 00:37:51.303695 3175 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 00:37:51.307272 kubelet[3175]: I0124 00:37:51.306061 3175 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 24 00:37:51.309004 kubelet[3175]: I0124 00:37:51.308875 3175 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 24 00:37:51.309606 kubelet[3175]: I0124 00:37:51.309596 3175 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 24 00:37:51.311353 kubelet[3175]: E0124 00:37:51.311326 3175 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 00:37:51.311914 kubelet[3175]: I0124 00:37:51.311904 3175 reconciler.go:26] "Reconciler: start to sync state"
Jan 24 00:37:51.312145 kubelet[3175]: I0124 00:37:51.312134 3175 factory.go:221] Registration of the containerd container factory successfully
Jan 24 00:37:51.312282 kubelet[3175]: I0124 00:37:51.312224 3175 factory.go:221] Registration of the systemd container factory successfully
Jan 24 00:37:51.316535 kubelet[3175]: I0124 00:37:51.315396 3175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 24 00:37:51.317531 kubelet[3175]: I0124 00:37:51.317509 3175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 24 00:37:51.317531 kubelet[3175]: I0124 00:37:51.317535 3175 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 24 00:37:51.317661 kubelet[3175]: I0124 00:37:51.317553 3175 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
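The same three deprecation warnings reappear on every kubelet start because the flags stay on the unit's command line; two of them have homes in the config file. A sketch of the equivalent KubeletConfiguration stanza, emitted from Go to keep one language across these notes: the field names are from kubelet.config.k8s.io/v1beta1, the containerd socket path is the conventional default (assumed, not read from this log), volumePluginDir matches the Flexvolume path the kubelet probes above, and --pod-infra-container-image has no config-file field since the sandbox image now comes from the CRI runtime's own configuration.

package main

import "fmt"

func main() {
	// Minimal KubeletConfiguration fragment covering the two deprecated
	// flags that do have config-file equivalents.
	fmt.Print(`apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
`)
}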
Jan 24 00:37:51.317661 kubelet[3175]: I0124 00:37:51.317559 3175 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 24 00:37:51.319797 kubelet[3175]: E0124 00:37:51.319759 3175 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 00:37:51.394417 kubelet[3175]: I0124 00:37:51.394138 3175 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 00:37:51.394417 kubelet[3175]: I0124 00:37:51.394161 3175 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 00:37:51.394417 kubelet[3175]: I0124 00:37:51.394182 3175 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:37:51.394614 kubelet[3175]: I0124 00:37:51.394438 3175 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 24 00:37:51.394614 kubelet[3175]: I0124 00:37:51.394452 3175 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 24 00:37:51.394614 kubelet[3175]: I0124 00:37:51.394475 3175 policy_none.go:49] "None policy: Start"
Jan 24 00:37:51.394614 kubelet[3175]: I0124 00:37:51.394488 3175 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 24 00:37:51.394614 kubelet[3175]: I0124 00:37:51.394500 3175 state_mem.go:35] "Initializing new in-memory state store"
Jan 24 00:37:51.395500 kubelet[3175]: I0124 00:37:51.394641 3175 state_mem.go:75] "Updated machine memory state"
Jan 24 00:37:51.407953 kubelet[3175]: I0124 00:37:51.407146 3175 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 24 00:37:51.407953 kubelet[3175]: I0124 00:37:51.407892 3175 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 00:37:51.408132 kubelet[3175]: I0124 00:37:51.407910 3175 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 00:37:51.408345 kubelet[3175]: I0124 00:37:51.408319 3175 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 00:37:51.412342 kubelet[3175]: E0124 00:37:51.412013 3175 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 00:37:51.420897 kubelet[3175]: I0124 00:37:51.420863 3175 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:51.428780 kubelet[3175]: I0124 00:37:51.427883 3175 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-201"
Jan 24 00:37:51.428780 kubelet[3175]: I0124 00:37:51.427932 3175 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:51.473719 sudo[3208]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 24 00:37:51.474049 sudo[3208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 24 00:37:51.512693 kubelet[3175]: I0124 00:37:51.512660 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:51.513085 kubelet[3175]: I0124 00:37:51.512888 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:51.513085 kubelet[3175]: I0124 00:37:51.512913 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:51.513085 kubelet[3175]: I0124 00:37:51.512932 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f19feed6af48288d71160cd89e332cb4-ca-certs\") pod \"kube-apiserver-ip-172-31-16-201\" (UID: \"f19feed6af48288d71160cd89e332cb4\") " pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:51.513085 kubelet[3175]: I0124 00:37:51.512948 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f19feed6af48288d71160cd89e332cb4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-201\" (UID: \"f19feed6af48288d71160cd89e332cb4\") " pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:51.513085 kubelet[3175]: I0124 00:37:51.512964 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:51.513347 kubelet[3175]: I0124 00:37:51.512979 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0318eab7f1303fef5421df485567abb4-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-201\" (UID: \"0318eab7f1303fef5421df485567abb4\") " pod="kube-system/kube-scheduler-ip-172-31-16-201"
Jan 24 00:37:51.513347 kubelet[3175]: I0124 00:37:51.512995 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f19feed6af48288d71160cd89e332cb4-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-201\" (UID: \"f19feed6af48288d71160cd89e332cb4\") " pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:51.513347 kubelet[3175]: I0124 00:37:51.513026 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34f089f3a1cbece89744b5ca7f165d37-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-201\" (UID: \"34f089f3a1cbece89744b5ca7f165d37\") " pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:51.526357 kubelet[3175]: I0124 00:37:51.523921 3175 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-201"
Jan 24 00:37:51.532577 kubelet[3175]: I0124 00:37:51.532533 3175 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-201"
Jan 24 00:37:51.532782 kubelet[3175]: I0124 00:37:51.532628 3175 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-201"
Jan 24 00:37:52.146019 sudo[3208]: pam_unix(sudo:session): session closed for user root
Jan 24 00:37:52.273831 kubelet[3175]: I0124 00:37:52.273787 3175 apiserver.go:52] "Watching apiserver"
Jan 24 00:37:52.310689 kubelet[3175]: I0124 00:37:52.310652 3175 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 00:37:52.356973 kubelet[3175]: I0124 00:37:52.354529 3175 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-201"
Jan 24 00:37:52.356973 kubelet[3175]: I0124 00:37:52.354913 3175 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:52.356973 kubelet[3175]: I0124 00:37:52.355173 3175 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:52.368704 kubelet[3175]: E0124 00:37:52.368669 3175 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-201\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-201"
Jan 24 00:37:52.371402 kubelet[3175]: E0124 00:37:52.371372 3175 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-201\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-201"
Jan 24 00:37:52.373263 kubelet[3175]: E0124 00:37:52.373224 3175 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-201\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-201"
Jan 24 00:37:52.402188 kubelet[3175]: I0124 00:37:52.401068 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-201" podStartSLOduration=1.401026399 podStartE2EDuration="1.401026399s" podCreationTimestamp="2026-01-24 00:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:37:52.400050282 +0000 UTC m=+1.275450273" watchObservedRunningTime="2026-01-24
00:37:52.401026399 +0000 UTC m=+1.276426387" Jan 24 00:37:52.416484 kubelet[3175]: I0124 00:37:52.415788 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-201" podStartSLOduration=1.415767995 podStartE2EDuration="1.415767995s" podCreationTimestamp="2026-01-24 00:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:37:52.414344408 +0000 UTC m=+1.289744404" watchObservedRunningTime="2026-01-24 00:37:52.415767995 +0000 UTC m=+1.291167981" Jan 24 00:37:53.815961 sudo[2297]: pam_unix(sudo:session): session closed for user root Jan 24 00:37:53.899792 sshd[2294]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:53.903925 systemd-logind[1958]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:37:53.904916 systemd[1]: sshd@6-172.31.16.201:22-4.153.228.146:48082.service: Deactivated successfully. Jan 24 00:37:53.906974 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:37:53.907177 systemd[1]: session-7.scope: Consumed 4.772s CPU time, 142.3M memory peak, 0B memory swap peak. Jan 24 00:37:53.908113 systemd-logind[1958]: Removed session 7. Jan 24 00:37:54.961970 kubelet[3175]: I0124 00:37:54.961890 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-201" podStartSLOduration=3.961870169 podStartE2EDuration="3.961870169s" podCreationTimestamp="2026-01-24 00:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:37:52.426844912 +0000 UTC m=+1.302244907" watchObservedRunningTime="2026-01-24 00:37:54.961870169 +0000 UTC m=+3.837270164" Jan 24 00:37:55.905923 kubelet[3175]: I0124 00:37:55.905698 3175 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:37:55.906565 containerd[1986]: time="2026-01-24T00:37:55.906370178Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:37:55.907488 kubelet[3175]: I0124 00:37:55.906898 3175 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:37:56.622354 systemd[1]: Created slice kubepods-besteffort-pod8f4dd153_d9c5_46c2_8b22_3a5fe605123a.slice - libcontainer container kubepods-besteffort-pod8f4dd153_d9c5_46c2_8b22_3a5fe605123a.slice. Jan 24 00:37:56.635930 systemd[1]: Created slice kubepods-burstable-pod9a8bb0af_e9f5_40ee_9b0b_2efb099f3b53.slice - libcontainer container kubepods-burstable-pod9a8bb0af_e9f5_40ee_9b0b_2efb099f3b53.slice. 
Jan 24 00:37:56.645010 kubelet[3175]: I0124 00:37:56.644631 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-cgroup\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645010 kubelet[3175]: I0124 00:37:56.644661 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-host-proc-sys-kernel\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645010 kubelet[3175]: I0124 00:37:56.644678 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f4dd153-d9c5-46c2-8b22-3a5fe605123a-xtables-lock\") pod \"kube-proxy-j2ddt\" (UID: \"8f4dd153-d9c5-46c2-8b22-3a5fe605123a\") " pod="kube-system/kube-proxy-j2ddt" Jan 24 00:37:56.645010 kubelet[3175]: I0124 00:37:56.644693 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f4dd153-d9c5-46c2-8b22-3a5fe605123a-lib-modules\") pod \"kube-proxy-j2ddt\" (UID: \"8f4dd153-d9c5-46c2-8b22-3a5fe605123a\") " pod="kube-system/kube-proxy-j2ddt" Jan 24 00:37:56.645010 kubelet[3175]: I0124 00:37:56.644711 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-lib-modules\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645575 kubelet[3175]: I0124 00:37:56.644726 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzwd9\" (UniqueName: \"kubernetes.io/projected/8f4dd153-d9c5-46c2-8b22-3a5fe605123a-kube-api-access-lzwd9\") pod \"kube-proxy-j2ddt\" (UID: \"8f4dd153-d9c5-46c2-8b22-3a5fe605123a\") " pod="kube-system/kube-proxy-j2ddt" Jan 24 00:37:56.645575 kubelet[3175]: I0124 00:37:56.644742 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-hubble-tls\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645575 kubelet[3175]: I0124 00:37:56.644756 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-hostproc\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645575 kubelet[3175]: I0124 00:37:56.644771 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f4dd153-d9c5-46c2-8b22-3a5fe605123a-kube-proxy\") pod \"kube-proxy-j2ddt\" (UID: \"8f4dd153-d9c5-46c2-8b22-3a5fe605123a\") " pod="kube-system/kube-proxy-j2ddt" Jan 24 00:37:56.645575 kubelet[3175]: I0124 00:37:56.644788 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gqhhw\" (UniqueName: \"kubernetes.io/projected/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-kube-api-access-gqhhw\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645575 kubelet[3175]: I0124 00:37:56.644805 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-bpf-maps\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645731 kubelet[3175]: I0124 00:37:56.644821 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-etc-cni-netd\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645731 kubelet[3175]: I0124 00:37:56.644837 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-clustermesh-secrets\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645731 kubelet[3175]: I0124 00:37:56.644853 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-xtables-lock\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645731 kubelet[3175]: I0124 00:37:56.644868 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-config-path\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645731 kubelet[3175]: I0124 00:37:56.644881 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-host-proc-sys-net\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645731 kubelet[3175]: I0124 00:37:56.644898 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-run\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.645876 kubelet[3175]: I0124 00:37:56.644915 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cni-path\") pod \"cilium-bspgv\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " pod="kube-system/cilium-bspgv" Jan 24 00:37:56.933696 containerd[1986]: time="2026-01-24T00:37:56.933560599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2ddt,Uid:8f4dd153-d9c5-46c2-8b22-3a5fe605123a,Namespace:kube-system,Attempt:0,}" Jan 24 00:37:56.944161 containerd[1986]: time="2026-01-24T00:37:56.944117956Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-bspgv,Uid:9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53,Namespace:kube-system,Attempt:0,}" Jan 24 00:37:56.946290 kubelet[3175]: I0124 00:37:56.946202 3175 status_manager.go:890] "Failed to get status for pod" podUID="1cb6defc-5874-469b-82f8-9f943c362c4d" pod="kube-system/cilium-operator-6c4d7847fc-46s7k" err="pods \"cilium-operator-6c4d7847fc-46s7k\" is forbidden: User \"system:node:ip-172-31-16-201\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-201' and this object" Jan 24 00:37:56.949067 kubelet[3175]: I0124 00:37:56.948813 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7f54\" (UniqueName: \"kubernetes.io/projected/1cb6defc-5874-469b-82f8-9f943c362c4d-kube-api-access-q7f54\") pod \"cilium-operator-6c4d7847fc-46s7k\" (UID: \"1cb6defc-5874-469b-82f8-9f943c362c4d\") " pod="kube-system/cilium-operator-6c4d7847fc-46s7k" Jan 24 00:37:56.949067 kubelet[3175]: I0124 00:37:56.948877 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cb6defc-5874-469b-82f8-9f943c362c4d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-46s7k\" (UID: \"1cb6defc-5874-469b-82f8-9f943c362c4d\") " pod="kube-system/cilium-operator-6c4d7847fc-46s7k" Jan 24 00:37:56.955475 systemd[1]: Created slice kubepods-besteffort-pod1cb6defc_5874_469b_82f8_9f943c362c4d.slice - libcontainer container kubepods-besteffort-pod1cb6defc_5874_469b_82f8_9f943c362c4d.slice. Jan 24 00:37:57.010294 containerd[1986]: time="2026-01-24T00:37:57.008526289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:37:57.010605 containerd[1986]: time="2026-01-24T00:37:57.010564937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:37:57.010796 containerd[1986]: time="2026-01-24T00:37:57.010728610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:37:57.011159 containerd[1986]: time="2026-01-24T00:37:57.011023947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:37:57.037492 containerd[1986]: time="2026-01-24T00:37:57.036904404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:37:57.037492 containerd[1986]: time="2026-01-24T00:37:57.037182783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:37:57.037492 containerd[1986]: time="2026-01-24T00:37:57.037239742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:37:57.037737 containerd[1986]: time="2026-01-24T00:37:57.037556002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:37:57.046489 systemd[1]: Started cri-containerd-d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14.scope - libcontainer container d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14. 
Jan 24 00:37:57.082617 systemd[1]: Started cri-containerd-29ba57784586c5017d5a6206cfef9310e6d7a4647564573d141070bb99d77259.scope - libcontainer container 29ba57784586c5017d5a6206cfef9310e6d7a4647564573d141070bb99d77259. Jan 24 00:37:57.102580 containerd[1986]: time="2026-01-24T00:37:57.102530734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bspgv,Uid:9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53,Namespace:kube-system,Attempt:0,} returns sandbox id \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\"" Jan 24 00:37:57.115120 containerd[1986]: time="2026-01-24T00:37:57.115080592Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 24 00:37:57.128005 containerd[1986]: time="2026-01-24T00:37:57.127973388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2ddt,Uid:8f4dd153-d9c5-46c2-8b22-3a5fe605123a,Namespace:kube-system,Attempt:0,} returns sandbox id \"29ba57784586c5017d5a6206cfef9310e6d7a4647564573d141070bb99d77259\"" Jan 24 00:37:57.133016 containerd[1986]: time="2026-01-24T00:37:57.132967801Z" level=info msg="CreateContainer within sandbox \"29ba57784586c5017d5a6206cfef9310e6d7a4647564573d141070bb99d77259\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:37:57.160900 containerd[1986]: time="2026-01-24T00:37:57.160831973Z" level=info msg="CreateContainer within sandbox \"29ba57784586c5017d5a6206cfef9310e6d7a4647564573d141070bb99d77259\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"826f24806449137cbe61ee5d359fff46f14d1b4d95c2d5ccf82b0bf99c11700c\"" Jan 24 00:37:57.161685 containerd[1986]: time="2026-01-24T00:37:57.161646461Z" level=info msg="StartContainer for \"826f24806449137cbe61ee5d359fff46f14d1b4d95c2d5ccf82b0bf99c11700c\"" Jan 24 00:37:57.190457 systemd[1]: Started cri-containerd-826f24806449137cbe61ee5d359fff46f14d1b4d95c2d5ccf82b0bf99c11700c.scope - libcontainer container 826f24806449137cbe61ee5d359fff46f14d1b4d95c2d5ccf82b0bf99c11700c. Jan 24 00:37:57.226660 containerd[1986]: time="2026-01-24T00:37:57.226537174Z" level=info msg="StartContainer for \"826f24806449137cbe61ee5d359fff46f14d1b4d95c2d5ccf82b0bf99c11700c\" returns successfully" Jan 24 00:37:57.259503 containerd[1986]: time="2026-01-24T00:37:57.259458258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-46s7k,Uid:1cb6defc-5874-469b-82f8-9f943c362c4d,Namespace:kube-system,Attempt:0,}" Jan 24 00:37:57.292762 containerd[1986]: time="2026-01-24T00:37:57.292644185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:37:57.292937 containerd[1986]: time="2026-01-24T00:37:57.292698915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:37:57.292937 containerd[1986]: time="2026-01-24T00:37:57.292839597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:37:57.293107 containerd[1986]: time="2026-01-24T00:37:57.293062218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:37:57.312502 systemd[1]: Started cri-containerd-61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262.scope - libcontainer container 61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262. Jan 24 00:37:57.362545 containerd[1986]: time="2026-01-24T00:37:57.362473891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-46s7k,Uid:1cb6defc-5874-469b-82f8-9f943c362c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\"" Jan 24 00:37:57.388857 kubelet[3175]: I0124 00:37:57.387068 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j2ddt" podStartSLOduration=1.38705178 podStartE2EDuration="1.38705178s" podCreationTimestamp="2026-01-24 00:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:37:57.386898981 +0000 UTC m=+6.262298976" watchObservedRunningTime="2026-01-24 00:37:57.38705178 +0000 UTC m=+6.262451775" Jan 24 00:37:58.198429 update_engine[1959]: I20260124 00:37:58.198374 1959 update_attempter.cc:509] Updating boot flags... Jan 24 00:37:58.331163 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3556) Jan 24 00:38:02.169061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152321016.mount: Deactivated successfully. Jan 24 00:38:04.902733 containerd[1986]: time="2026-01-24T00:38:04.902676644Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:04.910942 containerd[1986]: time="2026-01-24T00:38:04.910870503Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 00:38:04.913531 containerd[1986]: time="2026-01-24T00:38:04.913457996Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:04.915026 containerd[1986]: time="2026-01-24T00:38:04.914797441Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.795787179s" Jan 24 00:38:04.915026 containerd[1986]: time="2026-01-24T00:38:04.914836412Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 00:38:04.928979 containerd[1986]: time="2026-01-24T00:38:04.928941532Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 00:38:04.962389 containerd[1986]: time="2026-01-24T00:38:04.962287273Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:38:05.083569 containerd[1986]: time="2026-01-24T00:38:05.083509725Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\"" Jan 24 00:38:05.114298 containerd[1986]: time="2026-01-24T00:38:05.113832887Z" level=info msg="StartContainer for \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\"" Jan 24 00:38:05.230500 systemd[1]: Started cri-containerd-8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c.scope - libcontainer container 8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c. Jan 24 00:38:05.298164 containerd[1986]: time="2026-01-24T00:38:05.297665841Z" level=info msg="StartContainer for \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\" returns successfully" Jan 24 00:38:05.309723 systemd[1]: cri-containerd-8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c.scope: Deactivated successfully. Jan 24 00:38:05.533760 containerd[1986]: time="2026-01-24T00:38:05.524467717Z" level=info msg="shim disconnected" id=8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c namespace=k8s.io Jan 24 00:38:05.533760 containerd[1986]: time="2026-01-24T00:38:05.533691841Z" level=warning msg="cleaning up after shim disconnected" id=8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c namespace=k8s.io Jan 24 00:38:05.533760 containerd[1986]: time="2026-01-24T00:38:05.533707583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:38:06.030169 systemd[1]: run-containerd-runc-k8s.io-8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c-runc.00upGz.mount: Deactivated successfully. Jan 24 00:38:06.031295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c-rootfs.mount: Deactivated successfully. Jan 24 00:38:06.086664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696495310.mount: Deactivated successfully. Jan 24 00:38:06.452725 containerd[1986]: time="2026-01-24T00:38:06.452682109Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:38:06.481580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620586099.mount: Deactivated successfully. Jan 24 00:38:06.483982 containerd[1986]: time="2026-01-24T00:38:06.482759010Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\"" Jan 24 00:38:06.487976 containerd[1986]: time="2026-01-24T00:38:06.486530527Z" level=info msg="StartContainer for \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\"" Jan 24 00:38:06.533707 systemd[1]: Started cri-containerd-dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31.scope - libcontainer container dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31. 
Jan 24 00:38:06.587699 containerd[1986]: time="2026-01-24T00:38:06.587615370Z" level=info msg="StartContainer for \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\" returns successfully" Jan 24 00:38:06.612217 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:38:06.612648 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:38:06.612739 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:38:06.620708 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:38:06.621004 systemd[1]: cri-containerd-dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31.scope: Deactivated successfully. Jan 24 00:38:06.708334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:38:06.770923 containerd[1986]: time="2026-01-24T00:38:06.770551224Z" level=info msg="shim disconnected" id=dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31 namespace=k8s.io Jan 24 00:38:06.770923 containerd[1986]: time="2026-01-24T00:38:06.770764953Z" level=warning msg="cleaning up after shim disconnected" id=dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31 namespace=k8s.io Jan 24 00:38:06.770923 containerd[1986]: time="2026-01-24T00:38:06.770777314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:38:06.839514 containerd[1986]: time="2026-01-24T00:38:06.839468946Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:06.840526 containerd[1986]: time="2026-01-24T00:38:06.840344849Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 24 00:38:06.841768 containerd[1986]: time="2026-01-24T00:38:06.841498367Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:06.843545 containerd[1986]: time="2026-01-24T00:38:06.843240597Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.914260648s" Jan 24 00:38:06.843633 containerd[1986]: time="2026-01-24T00:38:06.843547693Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 24 00:38:06.846377 containerd[1986]: time="2026-01-24T00:38:06.846319618Z" level=info msg="CreateContainer within sandbox \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 24 00:38:06.859006 containerd[1986]: time="2026-01-24T00:38:06.858963203Z" level=info msg="CreateContainer within sandbox \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\"" Jan 24 00:38:06.860580 containerd[1986]: time="2026-01-24T00:38:06.859594947Z" level=info msg="StartContainer for \"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\"" Jan 24 00:38:06.891504 systemd[1]: Started cri-containerd-b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba.scope - libcontainer container b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba. Jan 24 00:38:06.921648 containerd[1986]: time="2026-01-24T00:38:06.921602926Z" level=info msg="StartContainer for \"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\" returns successfully" Jan 24 00:38:07.467301 containerd[1986]: time="2026-01-24T00:38:07.467259056Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:38:07.508359 containerd[1986]: time="2026-01-24T00:38:07.508309478Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\"" Jan 24 00:38:07.508907 containerd[1986]: time="2026-01-24T00:38:07.508869938Z" level=info msg="StartContainer for \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\"" Jan 24 00:38:07.580133 systemd[1]: run-containerd-runc-k8s.io-5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1-runc.NuvrX3.mount: Deactivated successfully. Jan 24 00:38:07.593483 systemd[1]: Started cri-containerd-5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1.scope - libcontainer container 5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1. Jan 24 00:38:07.665729 containerd[1986]: time="2026-01-24T00:38:07.665358490Z" level=info msg="StartContainer for \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\" returns successfully" Jan 24 00:38:07.695353 systemd[1]: cri-containerd-5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1.scope: Deactivated successfully. 
Jan 24 00:38:07.757348 kubelet[3175]: I0124 00:38:07.755602 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-46s7k" podStartSLOduration=2.276337831 podStartE2EDuration="11.755570278s" podCreationTimestamp="2026-01-24 00:37:56 +0000 UTC" firstStartedPulling="2026-01-24 00:37:57.365034512 +0000 UTC m=+6.240434491" lastFinishedPulling="2026-01-24 00:38:06.844266952 +0000 UTC m=+15.719666938" observedRunningTime="2026-01-24 00:38:07.591204642 +0000 UTC m=+16.466604647" watchObservedRunningTime="2026-01-24 00:38:07.755570278 +0000 UTC m=+16.630970272" Jan 24 00:38:07.767550 containerd[1986]: time="2026-01-24T00:38:07.766637418Z" level=info msg="shim disconnected" id=5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1 namespace=k8s.io Jan 24 00:38:07.767550 containerd[1986]: time="2026-01-24T00:38:07.767325647Z" level=warning msg="cleaning up after shim disconnected" id=5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1 namespace=k8s.io Jan 24 00:38:07.767550 containerd[1986]: time="2026-01-24T00:38:07.767344722Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:38:08.028703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1-rootfs.mount: Deactivated successfully. Jan 24 00:38:08.466986 containerd[1986]: time="2026-01-24T00:38:08.466726015Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:38:08.497787 containerd[1986]: time="2026-01-24T00:38:08.497740276Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\"" Jan 24 00:38:08.499700 containerd[1986]: time="2026-01-24T00:38:08.498897477Z" level=info msg="StartContainer for \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\"" Jan 24 00:38:08.533793 systemd[1]: Started cri-containerd-6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15.scope - libcontainer container 6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15. Jan 24 00:38:08.568337 systemd[1]: cri-containerd-6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15.scope: Deactivated successfully. Jan 24 00:38:08.572495 containerd[1986]: time="2026-01-24T00:38:08.572455834Z" level=info msg="StartContainer for \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\" returns successfully" Jan 24 00:38:08.594751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15-rootfs.mount: Deactivated successfully. 
Jan 24 00:38:08.606616 containerd[1986]: time="2026-01-24T00:38:08.606560770Z" level=info msg="shim disconnected" id=6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15 namespace=k8s.io Jan 24 00:38:08.606616 containerd[1986]: time="2026-01-24T00:38:08.606612025Z" level=warning msg="cleaning up after shim disconnected" id=6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15 namespace=k8s.io Jan 24 00:38:08.606616 containerd[1986]: time="2026-01-24T00:38:08.606620140Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:38:09.473085 containerd[1986]: time="2026-01-24T00:38:09.473038175Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 00:38:09.527289 containerd[1986]: time="2026-01-24T00:38:09.527201262Z" level=info msg="CreateContainer within sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\"" Jan 24 00:38:09.528084 containerd[1986]: time="2026-01-24T00:38:09.528040987Z" level=info msg="StartContainer for \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\"" Jan 24 00:38:09.564597 systemd[1]: run-containerd-runc-k8s.io-162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150-runc.hcIs0H.mount: Deactivated successfully. Jan 24 00:38:09.576513 systemd[1]: Started cri-containerd-162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150.scope - libcontainer container 162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150. Jan 24 00:38:09.616988 containerd[1986]: time="2026-01-24T00:38:09.616935127Z" level=info msg="StartContainer for \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\" returns successfully" Jan 24 00:38:09.801834 kubelet[3175]: I0124 00:38:09.801707 3175 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:38:09.850603 systemd[1]: Created slice kubepods-burstable-pod7164acd8_8370_4df8_87cb_4a2fc6b50d45.slice - libcontainer container kubepods-burstable-pod7164acd8_8370_4df8_87cb_4a2fc6b50d45.slice. Jan 24 00:38:09.858945 systemd[1]: Created slice kubepods-burstable-pod9822af21_e5ad_41e4_9959_756eaf4e6bb8.slice - libcontainer container kubepods-burstable-pod9822af21_e5ad_41e4_9959_756eaf4e6bb8.slice. 
Jan 24 00:38:09.867333 kubelet[3175]: I0124 00:38:09.867295 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjclw\" (UniqueName: \"kubernetes.io/projected/9822af21-e5ad-41e4-9959-756eaf4e6bb8-kube-api-access-mjclw\") pod \"coredns-668d6bf9bc-2bg9w\" (UID: \"9822af21-e5ad-41e4-9959-756eaf4e6bb8\") " pod="kube-system/coredns-668d6bf9bc-2bg9w" Jan 24 00:38:09.867450 kubelet[3175]: I0124 00:38:09.867436 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7164acd8-8370-4df8-87cb-4a2fc6b50d45-config-volume\") pod \"coredns-668d6bf9bc-2mxzc\" (UID: \"7164acd8-8370-4df8-87cb-4a2fc6b50d45\") " pod="kube-system/coredns-668d6bf9bc-2mxzc" Jan 24 00:38:09.868326 kubelet[3175]: I0124 00:38:09.867479 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzv7x\" (UniqueName: \"kubernetes.io/projected/7164acd8-8370-4df8-87cb-4a2fc6b50d45-kube-api-access-rzv7x\") pod \"coredns-668d6bf9bc-2mxzc\" (UID: \"7164acd8-8370-4df8-87cb-4a2fc6b50d45\") " pod="kube-system/coredns-668d6bf9bc-2mxzc" Jan 24 00:38:09.868326 kubelet[3175]: I0124 00:38:09.868018 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9822af21-e5ad-41e4-9959-756eaf4e6bb8-config-volume\") pod \"coredns-668d6bf9bc-2bg9w\" (UID: \"9822af21-e5ad-41e4-9959-756eaf4e6bb8\") " pod="kube-system/coredns-668d6bf9bc-2bg9w" Jan 24 00:38:10.155688 containerd[1986]: time="2026-01-24T00:38:10.155581107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mxzc,Uid:7164acd8-8370-4df8-87cb-4a2fc6b50d45,Namespace:kube-system,Attempt:0,}" Jan 24 00:38:10.164654 containerd[1986]: time="2026-01-24T00:38:10.164213402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2bg9w,Uid:9822af21-e5ad-41e4-9959-756eaf4e6bb8,Namespace:kube-system,Attempt:0,}" Jan 24 00:38:10.501287 kubelet[3175]: I0124 00:38:10.499701 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bspgv" podStartSLOduration=6.684051406 podStartE2EDuration="14.499674167s" podCreationTimestamp="2026-01-24 00:37:56 +0000 UTC" firstStartedPulling="2026-01-24 00:37:57.113072881 +0000 UTC m=+5.988472860" lastFinishedPulling="2026-01-24 00:38:04.928695635 +0000 UTC m=+13.804095621" observedRunningTime="2026-01-24 00:38:10.499335773 +0000 UTC m=+19.374735768" watchObservedRunningTime="2026-01-24 00:38:10.499674167 +0000 UTC m=+19.375074164" Jan 24 00:38:12.171891 (udev-worker)[4064]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:38:12.173592 (udev-worker)[4098]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:38:12.175555 systemd-networkd[1899]: cilium_host: Link UP Jan 24 00:38:12.175807 systemd-networkd[1899]: cilium_net: Link UP Jan 24 00:38:12.176022 systemd-networkd[1899]: cilium_net: Gained carrier Jan 24 00:38:12.176342 systemd-networkd[1899]: cilium_host: Gained carrier Jan 24 00:38:12.303606 (udev-worker)[4115]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:38:12.317695 systemd-networkd[1899]: cilium_vxlan: Link UP Jan 24 00:38:12.317707 systemd-networkd[1899]: cilium_vxlan: Gained carrier Jan 24 00:38:12.808387 systemd-networkd[1899]: cilium_host: Gained IPv6LL Jan 24 00:38:12.850028 kernel: NET: Registered PF_ALG protocol family Jan 24 00:38:13.129350 systemd-networkd[1899]: cilium_net: Gained IPv6LL Jan 24 00:38:13.512604 systemd-networkd[1899]: cilium_vxlan: Gained IPv6LL Jan 24 00:38:13.637020 systemd-networkd[1899]: lxc_health: Link UP Jan 24 00:38:13.647947 systemd-networkd[1899]: lxc_health: Gained carrier Jan 24 00:38:13.798076 systemd-networkd[1899]: lxc46f7fedb6641: Link UP Jan 24 00:38:13.803346 kernel: eth0: renamed from tmp6904b Jan 24 00:38:13.812356 systemd-networkd[1899]: lxc46f7fedb6641: Gained carrier Jan 24 00:38:13.846410 systemd-networkd[1899]: lxce67969f22ee8: Link UP Jan 24 00:38:13.852295 kernel: eth0: renamed from tmpb127c Jan 24 00:38:13.860543 systemd-networkd[1899]: lxce67969f22ee8: Gained carrier Jan 24 00:38:15.368554 systemd-networkd[1899]: lxc46f7fedb6641: Gained IPv6LL Jan 24 00:38:15.496959 systemd-networkd[1899]: lxc_health: Gained IPv6LL Jan 24 00:38:15.816836 systemd-networkd[1899]: lxce67969f22ee8: Gained IPv6LL Jan 24 00:38:18.206108 ntpd[1951]: Listen normally on 7 cilium_host 192.168.0.3:123 Jan 24 00:38:18.206982 ntpd[1951]: 24 Jan 00:38:18 ntpd[1951]: Listen normally on 7 cilium_host 192.168.0.3:123 Jan 24 00:38:18.206982 ntpd[1951]: 24 Jan 00:38:18 ntpd[1951]: Listen normally on 8 cilium_net [fe80::548e:95ff:fe5a:22e8%4]:123 Jan 24 00:38:18.206982 ntpd[1951]: 24 Jan 00:38:18 ntpd[1951]: Listen normally on 9 cilium_host [fe80::33:a6ff:fe14:f7ff%5]:123 Jan 24 00:38:18.206982 ntpd[1951]: 24 Jan 00:38:18 ntpd[1951]: Listen normally on 10 cilium_vxlan [fe80::b428:a1ff:fe3d:de19%6]:123 Jan 24 00:38:18.206982 ntpd[1951]: 24 Jan 00:38:18 ntpd[1951]: Listen normally on 11 lxc_health [fe80::d064:c8ff:fe53:382a%8]:123 Jan 24 00:38:18.206982 ntpd[1951]: 24 Jan 00:38:18 ntpd[1951]: Listen normally on 12 lxc46f7fedb6641 [fe80::d018:ff:fe23:c768%10]:123 Jan 24 00:38:18.206982 ntpd[1951]: 24 Jan 00:38:18 ntpd[1951]: Listen normally on 13 lxce67969f22ee8 [fe80::50f8:51ff:fef6:c3c2%12]:123 Jan 24 00:38:18.206210 ntpd[1951]: Listen normally on 8 cilium_net [fe80::548e:95ff:fe5a:22e8%4]:123 Jan 24 00:38:18.206295 ntpd[1951]: Listen normally on 9 cilium_host [fe80::33:a6ff:fe14:f7ff%5]:123 Jan 24 00:38:18.206340 ntpd[1951]: Listen normally on 10 cilium_vxlan [fe80::b428:a1ff:fe3d:de19%6]:123 Jan 24 00:38:18.206380 ntpd[1951]: Listen normally on 11 lxc_health [fe80::d064:c8ff:fe53:382a%8]:123 Jan 24 00:38:18.206424 ntpd[1951]: Listen normally on 12 lxc46f7fedb6641 [fe80::d018:ff:fe23:c768%10]:123 Jan 24 00:38:18.206466 ntpd[1951]: Listen normally on 13 lxce67969f22ee8 [fe80::50f8:51ff:fef6:c3c2%12]:123 Jan 24 00:38:18.578180 containerd[1986]: time="2026-01-24T00:38:18.576572724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:38:18.578180 containerd[1986]: time="2026-01-24T00:38:18.576664135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:38:18.578180 containerd[1986]: time="2026-01-24T00:38:18.576694128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:18.578180 containerd[1986]: time="2026-01-24T00:38:18.576855918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:18.590338 containerd[1986]: time="2026-01-24T00:38:18.588787101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:38:18.590338 containerd[1986]: time="2026-01-24T00:38:18.588851572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:38:18.590338 containerd[1986]: time="2026-01-24T00:38:18.588869625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:18.590338 containerd[1986]: time="2026-01-24T00:38:18.588975294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:18.664597 systemd[1]: run-containerd-runc-k8s.io-6904b6d066bcd7137a509eb651ab1acc21b3f53d468053047d2371ccb1f352d3-runc.zBRhqZ.mount: Deactivated successfully. Jan 24 00:38:18.682225 systemd[1]: Started cri-containerd-6904b6d066bcd7137a509eb651ab1acc21b3f53d468053047d2371ccb1f352d3.scope - libcontainer container 6904b6d066bcd7137a509eb651ab1acc21b3f53d468053047d2371ccb1f352d3. Jan 24 00:38:18.683989 systemd[1]: Started cri-containerd-b127c6ffbdd9bc1fe696e1fdd587be9952672c5ad7d95eac7dd60a9898c899ff.scope - libcontainer container b127c6ffbdd9bc1fe696e1fdd587be9952672c5ad7d95eac7dd60a9898c899ff. Jan 24 00:38:18.814745 containerd[1986]: time="2026-01-24T00:38:18.814697803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mxzc,Uid:7164acd8-8370-4df8-87cb-4a2fc6b50d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"6904b6d066bcd7137a509eb651ab1acc21b3f53d468053047d2371ccb1f352d3\"" Jan 24 00:38:18.823036 containerd[1986]: time="2026-01-24T00:38:18.822992610Z" level=info msg="CreateContainer within sandbox \"6904b6d066bcd7137a509eb651ab1acc21b3f53d468053047d2371ccb1f352d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:38:18.834221 containerd[1986]: time="2026-01-24T00:38:18.834109027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2bg9w,Uid:9822af21-e5ad-41e4-9959-756eaf4e6bb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b127c6ffbdd9bc1fe696e1fdd587be9952672c5ad7d95eac7dd60a9898c899ff\"" Jan 24 00:38:18.848934 containerd[1986]: time="2026-01-24T00:38:18.848892590Z" level=info msg="CreateContainer within sandbox \"b127c6ffbdd9bc1fe696e1fdd587be9952672c5ad7d95eac7dd60a9898c899ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:38:18.886900 containerd[1986]: time="2026-01-24T00:38:18.886829035Z" level=info msg="CreateContainer within sandbox \"6904b6d066bcd7137a509eb651ab1acc21b3f53d468053047d2371ccb1f352d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8104887c806d762ebbdcfd28f2b56ee091bd6bd7f0d706af083e66e87f7d9178\"" Jan 24 00:38:18.887770 containerd[1986]: time="2026-01-24T00:38:18.887709816Z" level=info msg="StartContainer for \"8104887c806d762ebbdcfd28f2b56ee091bd6bd7f0d706af083e66e87f7d9178\"" Jan 24 00:38:18.893960 containerd[1986]: time="2026-01-24T00:38:18.893851448Z" level=info msg="CreateContainer within sandbox 
\"b127c6ffbdd9bc1fe696e1fdd587be9952672c5ad7d95eac7dd60a9898c899ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f67968feed90b3ab42b5d229d3b7e50a284cc96b360c9152e591847dc5d4512f\"" Jan 24 00:38:18.895579 containerd[1986]: time="2026-01-24T00:38:18.894624322Z" level=info msg="StartContainer for \"f67968feed90b3ab42b5d229d3b7e50a284cc96b360c9152e591847dc5d4512f\"" Jan 24 00:38:18.922517 systemd[1]: Started cri-containerd-8104887c806d762ebbdcfd28f2b56ee091bd6bd7f0d706af083e66e87f7d9178.scope - libcontainer container 8104887c806d762ebbdcfd28f2b56ee091bd6bd7f0d706af083e66e87f7d9178. Jan 24 00:38:18.934501 systemd[1]: Started cri-containerd-f67968feed90b3ab42b5d229d3b7e50a284cc96b360c9152e591847dc5d4512f.scope - libcontainer container f67968feed90b3ab42b5d229d3b7e50a284cc96b360c9152e591847dc5d4512f. Jan 24 00:38:18.982368 containerd[1986]: time="2026-01-24T00:38:18.982317979Z" level=info msg="StartContainer for \"8104887c806d762ebbdcfd28f2b56ee091bd6bd7f0d706af083e66e87f7d9178\" returns successfully" Jan 24 00:38:18.983075 containerd[1986]: time="2026-01-24T00:38:18.983029641Z" level=info msg="StartContainer for \"f67968feed90b3ab42b5d229d3b7e50a284cc96b360c9152e591847dc5d4512f\" returns successfully" Jan 24 00:38:19.534100 kubelet[3175]: I0124 00:38:19.533853 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2bg9w" podStartSLOduration=23.533831614 podStartE2EDuration="23.533831614s" podCreationTimestamp="2026-01-24 00:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:38:19.532667754 +0000 UTC m=+28.408067750" watchObservedRunningTime="2026-01-24 00:38:19.533831614 +0000 UTC m=+28.409231611" Jan 24 00:38:19.569358 kubelet[3175]: I0124 00:38:19.569293 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2mxzc" podStartSLOduration=23.569241508 podStartE2EDuration="23.569241508s" podCreationTimestamp="2026-01-24 00:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:38:19.549849925 +0000 UTC m=+28.425249919" watchObservedRunningTime="2026-01-24 00:38:19.569241508 +0000 UTC m=+28.444641507" Jan 24 00:38:19.595968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657102334.mount: Deactivated successfully. Jan 24 00:38:30.360555 systemd[1]: Started sshd@7-172.31.16.201:22-4.153.228.146:51868.service - OpenSSH per-connection server daemon (4.153.228.146:51868). Jan 24 00:38:30.876739 sshd[4643]: Accepted publickey for core from 4.153.228.146 port 51868 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:38:30.878954 sshd[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:30.885642 systemd-logind[1958]: New session 8 of user core. Jan 24 00:38:30.889477 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:38:31.903441 sshd[4643]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:31.909815 systemd-logind[1958]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:38:31.910681 systemd[1]: sshd@7-172.31.16.201:22-4.153.228.146:51868.service: Deactivated successfully. Jan 24 00:38:31.912614 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:38:31.913991 systemd-logind[1958]: Removed session 8. 
Jan 24 00:38:37.001839 systemd[1]: Started sshd@8-172.31.16.201:22-4.153.228.146:60192.service - OpenSSH per-connection server daemon (4.153.228.146:60192). Jan 24 00:38:37.522959 sshd[4657]: Accepted publickey for core from 4.153.228.146 port 60192 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:38:37.524605 sshd[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:37.530284 systemd-logind[1958]: New session 9 of user core. Jan 24 00:38:37.536506 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:38:37.959975 sshd[4657]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:37.963054 systemd[1]: sshd@8-172.31.16.201:22-4.153.228.146:60192.service: Deactivated successfully. Jan 24 00:38:37.965025 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:38:37.967396 systemd-logind[1958]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:38:37.968412 systemd-logind[1958]: Removed session 9. Jan 24 00:38:43.043568 systemd[1]: Started sshd@9-172.31.16.201:22-4.153.228.146:60206.service - OpenSSH per-connection server daemon (4.153.228.146:60206). Jan 24 00:38:43.533018 sshd[4671]: Accepted publickey for core from 4.153.228.146 port 60206 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:38:43.534592 sshd[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:43.539327 systemd-logind[1958]: New session 10 of user core. Jan 24 00:38:43.543468 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:38:43.955924 sshd[4671]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:43.959194 systemd[1]: sshd@9-172.31.16.201:22-4.153.228.146:60206.service: Deactivated successfully. Jan 24 00:38:43.961210 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:38:43.963094 systemd-logind[1958]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:38:43.964982 systemd-logind[1958]: Removed session 10. Jan 24 00:38:44.045704 systemd[1]: Started sshd@10-172.31.16.201:22-4.153.228.146:60222.service - OpenSSH per-connection server daemon (4.153.228.146:60222). Jan 24 00:38:44.526658 sshd[4685]: Accepted publickey for core from 4.153.228.146 port 60222 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:38:44.528142 sshd[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:44.533261 systemd-logind[1958]: New session 11 of user core. Jan 24 00:38:44.538480 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:38:45.005892 sshd[4685]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:45.009802 systemd[1]: sshd@10-172.31.16.201:22-4.153.228.146:60222.service: Deactivated successfully. Jan 24 00:38:45.011920 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:38:45.013716 systemd-logind[1958]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:38:45.015067 systemd-logind[1958]: Removed session 11. Jan 24 00:38:45.096673 systemd[1]: Started sshd@11-172.31.16.201:22-4.153.228.146:52842.service - OpenSSH per-connection server daemon (4.153.228.146:52842). 
Jan 24 00:38:45.591001 sshd[4696]: Accepted publickey for core from 4.153.228.146 port 52842 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:38:45.592442 sshd[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:45.597205 systemd-logind[1958]: New session 12 of user core. Jan 24 00:38:45.602468 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:38:46.005221 sshd[4696]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:46.008394 systemd[1]: sshd@11-172.31.16.201:22-4.153.228.146:52842.service: Deactivated successfully. Jan 24 00:38:46.010530 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:38:46.011571 systemd-logind[1958]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:38:46.012540 systemd-logind[1958]: Removed session 12. Jan 24 00:38:51.113680 systemd[1]: Started sshd@12-172.31.16.201:22-4.153.228.146:52844.service - OpenSSH per-connection server daemon (4.153.228.146:52844). Jan 24 00:38:51.639591 sshd[4710]: Accepted publickey for core from 4.153.228.146 port 52844 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:38:51.642000 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:51.649581 systemd-logind[1958]: New session 13 of user core. Jan 24 00:38:51.655440 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:38:52.087483 sshd[4710]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:52.090993 systemd[1]: sshd@12-172.31.16.201:22-4.153.228.146:52844.service: Deactivated successfully. Jan 24 00:38:52.093003 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:38:52.094598 systemd-logind[1958]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:38:52.095539 systemd-logind[1958]: Removed session 13. Jan 24 00:38:57.171610 systemd[1]: Started sshd@13-172.31.16.201:22-4.153.228.146:46024.service - OpenSSH per-connection server daemon (4.153.228.146:46024). Jan 24 00:38:57.648552 sshd[4726]: Accepted publickey for core from 4.153.228.146 port 46024 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:38:57.650115 sshd[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:57.655519 systemd-logind[1958]: New session 14 of user core. Jan 24 00:38:57.658451 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:38:58.102033 sshd[4726]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:58.105938 systemd-logind[1958]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:38:58.106674 systemd[1]: sshd@13-172.31.16.201:22-4.153.228.146:46024.service: Deactivated successfully. Jan 24 00:38:58.108626 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:38:58.109702 systemd-logind[1958]: Removed session 14. Jan 24 00:38:58.192466 systemd[1]: Started sshd@14-172.31.16.201:22-4.153.228.146:46040.service - OpenSSH per-connection server daemon (4.153.228.146:46040). Jan 24 00:38:58.678486 sshd[4741]: Accepted publickey for core from 4.153.228.146 port 46040 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:38:58.680013 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:58.685078 systemd-logind[1958]: New session 15 of user core. Jan 24 00:38:58.690451 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 00:38:59.510201 sshd[4741]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:59.515283 systemd[1]: sshd@14-172.31.16.201:22-4.153.228.146:46040.service: Deactivated successfully. Jan 24 00:38:59.517175 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:38:59.518326 systemd-logind[1958]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:38:59.519616 systemd-logind[1958]: Removed session 15. Jan 24 00:38:59.601664 systemd[1]: Started sshd@15-172.31.16.201:22-4.153.228.146:46042.service - OpenSSH per-connection server daemon (4.153.228.146:46042). Jan 24 00:39:00.152735 sshd[4752]: Accepted publickey for core from 4.153.228.146 port 46042 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:00.154833 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:00.161540 systemd-logind[1958]: New session 16 of user core. Jan 24 00:39:00.165794 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:39:01.213909 sshd[4752]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:01.300469 systemd[1]: sshd@15-172.31.16.201:22-4.153.228.146:46042.service: Deactivated successfully. Jan 24 00:39:01.303548 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:39:01.306289 systemd-logind[1958]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:39:01.313740 systemd[1]: Started sshd@16-172.31.16.201:22-4.153.228.146:46050.service - OpenSSH per-connection server daemon (4.153.228.146:46050). Jan 24 00:39:01.316494 systemd-logind[1958]: Removed session 16. Jan 24 00:39:01.876428 sshd[4769]: Accepted publickey for core from 4.153.228.146 port 46050 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:01.884394 sshd[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:01.932547 systemd-logind[1958]: New session 17 of user core. Jan 24 00:39:01.962022 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:39:02.644324 sshd[4769]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:02.649474 systemd[1]: sshd@16-172.31.16.201:22-4.153.228.146:46050.service: Deactivated successfully. Jan 24 00:39:02.652663 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:39:02.654958 systemd-logind[1958]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:39:02.657001 systemd-logind[1958]: Removed session 17. Jan 24 00:39:02.763116 systemd[1]: Started sshd@17-172.31.16.201:22-4.153.228.146:46054.service - OpenSSH per-connection server daemon (4.153.228.146:46054). Jan 24 00:39:03.279144 sshd[4780]: Accepted publickey for core from 4.153.228.146 port 46054 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:03.280831 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:03.287617 systemd-logind[1958]: New session 18 of user core. Jan 24 00:39:03.292443 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:39:03.694497 sshd[4780]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:03.699790 systemd[1]: sshd@17-172.31.16.201:22-4.153.228.146:46054.service: Deactivated successfully. Jan 24 00:39:03.702161 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:39:03.703943 systemd-logind[1958]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:39:03.705423 systemd-logind[1958]: Removed session 18. 
Jan 24 00:39:08.783589 systemd[1]: Started sshd@18-172.31.16.201:22-4.153.228.146:60906.service - OpenSSH per-connection server daemon (4.153.228.146:60906). Jan 24 00:39:09.280194 sshd[4794]: Accepted publickey for core from 4.153.228.146 port 60906 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:09.281819 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:09.286786 systemd-logind[1958]: New session 19 of user core. Jan 24 00:39:09.293434 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:39:09.702850 sshd[4794]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:09.707612 systemd[1]: sshd@18-172.31.16.201:22-4.153.228.146:60906.service: Deactivated successfully. Jan 24 00:39:09.709740 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:39:09.711782 systemd-logind[1958]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:39:09.713063 systemd-logind[1958]: Removed session 19. Jan 24 00:39:14.804328 systemd[1]: Started sshd@19-172.31.16.201:22-4.153.228.146:40524.service - OpenSSH per-connection server daemon (4.153.228.146:40524). Jan 24 00:39:15.326794 sshd[4807]: Accepted publickey for core from 4.153.228.146 port 40524 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:15.328347 sshd[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:15.333724 systemd-logind[1958]: New session 20 of user core. Jan 24 00:39:15.338487 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:39:15.767340 sshd[4807]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:15.770900 systemd[1]: sshd@19-172.31.16.201:22-4.153.228.146:40524.service: Deactivated successfully. Jan 24 00:39:15.772840 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:39:15.773587 systemd-logind[1958]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:39:15.774648 systemd-logind[1958]: Removed session 20. Jan 24 00:39:20.864865 systemd[1]: Started sshd@20-172.31.16.201:22-4.153.228.146:40532.service - OpenSSH per-connection server daemon (4.153.228.146:40532). Jan 24 00:39:21.382284 sshd[4821]: Accepted publickey for core from 4.153.228.146 port 40532 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:21.384458 sshd[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:21.393514 systemd-logind[1958]: New session 21 of user core. Jan 24 00:39:21.396562 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:39:21.820438 sshd[4821]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:21.824826 systemd[1]: sshd@20-172.31.16.201:22-4.153.228.146:40532.service: Deactivated successfully. Jan 24 00:39:21.826895 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:39:21.828018 systemd-logind[1958]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:39:21.829543 systemd-logind[1958]: Removed session 21. Jan 24 00:39:21.912925 systemd[1]: Started sshd@21-172.31.16.201:22-4.153.228.146:40536.service - OpenSSH per-connection server daemon (4.153.228.146:40536). 
Jan 24 00:39:22.434580 sshd[4834]: Accepted publickey for core from 4.153.228.146 port 40536 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:22.436047 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:22.442316 systemd-logind[1958]: New session 22 of user core. Jan 24 00:39:22.449482 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:39:24.520160 systemd[1]: run-containerd-runc-k8s.io-162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150-runc.UFYcMn.mount: Deactivated successfully. Jan 24 00:39:24.534188 containerd[1986]: time="2026-01-24T00:39:24.533723330Z" level=info msg="StopContainer for \"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\" with timeout 30 (s)" Jan 24 00:39:24.536853 containerd[1986]: time="2026-01-24T00:39:24.536809930Z" level=info msg="Stop container \"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\" with signal terminated" Jan 24 00:39:24.556404 containerd[1986]: time="2026-01-24T00:39:24.556171451Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:39:24.557484 systemd[1]: cri-containerd-b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba.scope: Deactivated successfully. Jan 24 00:39:24.587677 containerd[1986]: time="2026-01-24T00:39:24.587639906Z" level=info msg="StopContainer for \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\" with timeout 2 (s)" Jan 24 00:39:24.596838 containerd[1986]: time="2026-01-24T00:39:24.596797552Z" level=info msg="Stop container \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\" with signal terminated" Jan 24 00:39:24.599188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba-rootfs.mount: Deactivated successfully. Jan 24 00:39:24.608601 systemd-networkd[1899]: lxc_health: Link DOWN Jan 24 00:39:24.608613 systemd-networkd[1899]: lxc_health: Lost carrier Jan 24 00:39:24.628206 containerd[1986]: time="2026-01-24T00:39:24.628138992Z" level=info msg="shim disconnected" id=b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba namespace=k8s.io Jan 24 00:39:24.628206 containerd[1986]: time="2026-01-24T00:39:24.628197326Z" level=warning msg="cleaning up after shim disconnected" id=b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba namespace=k8s.io Jan 24 00:39:24.628206 containerd[1986]: time="2026-01-24T00:39:24.628206122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:24.632364 systemd[1]: cri-containerd-162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150.scope: Deactivated successfully. Jan 24 00:39:24.632707 systemd[1]: cri-containerd-162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150.scope: Consumed 8.351s CPU time. Jan 24 00:39:24.661048 containerd[1986]: time="2026-01-24T00:39:24.661009217Z" level=info msg="StopContainer for \"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\" returns successfully" Jan 24 00:39:24.672599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150-rootfs.mount: Deactivated successfully. 
Jan 24 00:39:24.673642 containerd[1986]: time="2026-01-24T00:39:24.673614763Z" level=info msg="StopPodSandbox for \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\"" Jan 24 00:39:24.678417 containerd[1986]: time="2026-01-24T00:39:24.678357563Z" level=info msg="Container to stop \"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:39:24.684397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262-shm.mount: Deactivated successfully. Jan 24 00:39:24.687096 systemd[1]: cri-containerd-61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262.scope: Deactivated successfully. Jan 24 00:39:24.696016 containerd[1986]: time="2026-01-24T00:39:24.695803042Z" level=info msg="shim disconnected" id=162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150 namespace=k8s.io Jan 24 00:39:24.696016 containerd[1986]: time="2026-01-24T00:39:24.695868251Z" level=warning msg="cleaning up after shim disconnected" id=162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150 namespace=k8s.io Jan 24 00:39:24.696016 containerd[1986]: time="2026-01-24T00:39:24.695876713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:24.716611 containerd[1986]: time="2026-01-24T00:39:24.716426360Z" level=info msg="StopContainer for \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\" returns successfully" Jan 24 00:39:24.717417 containerd[1986]: time="2026-01-24T00:39:24.717352255Z" level=info msg="StopPodSandbox for \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\"" Jan 24 00:39:24.717417 containerd[1986]: time="2026-01-24T00:39:24.717413389Z" level=info msg="Container to stop \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:39:24.718159 containerd[1986]: time="2026-01-24T00:39:24.717426301Z" level=info msg="Container to stop \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:39:24.718159 containerd[1986]: time="2026-01-24T00:39:24.717440321Z" level=info msg="Container to stop \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:39:24.718159 containerd[1986]: time="2026-01-24T00:39:24.717463074Z" level=info msg="Container to stop \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:39:24.718159 containerd[1986]: time="2026-01-24T00:39:24.717472141Z" level=info msg="Container to stop \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:39:24.724283 systemd[1]: cri-containerd-d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14.scope: Deactivated successfully. 
Jan 24 00:39:24.727413 containerd[1986]: time="2026-01-24T00:39:24.727224396Z" level=info msg="shim disconnected" id=61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262 namespace=k8s.io Jan 24 00:39:24.727534 containerd[1986]: time="2026-01-24T00:39:24.727512104Z" level=warning msg="cleaning up after shim disconnected" id=61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262 namespace=k8s.io Jan 24 00:39:24.727534 containerd[1986]: time="2026-01-24T00:39:24.727530708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:24.774121 containerd[1986]: time="2026-01-24T00:39:24.772088664Z" level=info msg="TearDown network for sandbox \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\" successfully" Jan 24 00:39:24.774121 containerd[1986]: time="2026-01-24T00:39:24.772131707Z" level=info msg="StopPodSandbox for \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\" returns successfully" Jan 24 00:39:24.786058 containerd[1986]: time="2026-01-24T00:39:24.785942493Z" level=info msg="shim disconnected" id=d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14 namespace=k8s.io Jan 24 00:39:24.786058 containerd[1986]: time="2026-01-24T00:39:24.786028375Z" level=warning msg="cleaning up after shim disconnected" id=d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14 namespace=k8s.io Jan 24 00:39:24.786600 containerd[1986]: time="2026-01-24T00:39:24.786375081Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:24.807228 containerd[1986]: time="2026-01-24T00:39:24.807183162Z" level=info msg="TearDown network for sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" successfully" Jan 24 00:39:24.807228 containerd[1986]: time="2026-01-24T00:39:24.807216706Z" level=info msg="StopPodSandbox for \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" returns successfully" Jan 24 00:39:24.849819 kubelet[3175]: I0124 00:39:24.849335 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cni-path\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.849819 kubelet[3175]: I0124 00:39:24.849425 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-xtables-lock\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.849819 kubelet[3175]: I0124 00:39:24.849452 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqhhw\" (UniqueName: \"kubernetes.io/projected/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-kube-api-access-gqhhw\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.849819 kubelet[3175]: I0124 00:39:24.849472 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-clustermesh-secrets\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.849819 kubelet[3175]: I0124 00:39:24.849489 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-hostproc\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.849819 kubelet[3175]: I0124 00:39:24.849508 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cb6defc-5874-469b-82f8-9f943c362c4d-cilium-config-path\") pod \"1cb6defc-5874-469b-82f8-9f943c362c4d\" (UID: \"1cb6defc-5874-469b-82f8-9f943c362c4d\") " Jan 24 00:39:24.850374 kubelet[3175]: I0124 00:39:24.849524 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-host-proc-sys-kernel\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850374 kubelet[3175]: I0124 00:39:24.849539 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-bpf-maps\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850374 kubelet[3175]: I0124 00:39:24.849555 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-lib-modules\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850374 kubelet[3175]: I0124 00:39:24.849569 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-host-proc-sys-net\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850374 kubelet[3175]: I0124 00:39:24.849584 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-config-path\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850374 kubelet[3175]: I0124 00:39:24.849598 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-cgroup\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850546 kubelet[3175]: I0124 00:39:24.849621 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-hubble-tls\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850546 kubelet[3175]: I0124 00:39:24.849636 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-etc-cni-netd\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850546 kubelet[3175]: I0124 00:39:24.849652 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-run\") pod \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\" (UID: \"9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53\") " Jan 24 00:39:24.850546 kubelet[3175]: I0124 00:39:24.849667 3175 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7f54\" (UniqueName: \"kubernetes.io/projected/1cb6defc-5874-469b-82f8-9f943c362c4d-kube-api-access-q7f54\") pod \"1cb6defc-5874-469b-82f8-9f943c362c4d\" (UID: \"1cb6defc-5874-469b-82f8-9f943c362c4d\") " Jan 24 00:39:24.869400 kubelet[3175]: I0124 00:39:24.866522 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.873307 kubelet[3175]: I0124 00:39:24.872648 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-hostproc" (OuterVolumeSpecName: "hostproc") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.874433 kubelet[3175]: I0124 00:39:24.874397 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-kube-api-access-gqhhw" (OuterVolumeSpecName: "kube-api-access-gqhhw") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "kube-api-access-gqhhw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:39:24.874527 kubelet[3175]: I0124 00:39:24.874500 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb6defc-5874-469b-82f8-9f943c362c4d-kube-api-access-q7f54" (OuterVolumeSpecName: "kube-api-access-q7f54") pod "1cb6defc-5874-469b-82f8-9f943c362c4d" (UID: "1cb6defc-5874-469b-82f8-9f943c362c4d"). InnerVolumeSpecName "kube-api-access-q7f54". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:39:24.874567 kubelet[3175]: I0124 00:39:24.874528 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cni-path" (OuterVolumeSpecName: "cni-path") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.874646 kubelet[3175]: I0124 00:39:24.874628 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.875368 kubelet[3175]: I0124 00:39:24.875344 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.875479 kubelet[3175]: I0124 00:39:24.875468 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.875546 kubelet[3175]: I0124 00:39:24.875537 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.875602 kubelet[3175]: I0124 00:39:24.875593 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.876108 kubelet[3175]: I0124 00:39:24.876092 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb6defc-5874-469b-82f8-9f943c362c4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1cb6defc-5874-469b-82f8-9f943c362c4d" (UID: "1cb6defc-5874-469b-82f8-9f943c362c4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:39:24.877336 kubelet[3175]: I0124 00:39:24.877310 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.877400 kubelet[3175]: I0124 00:39:24.877344 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:39:24.877435 kubelet[3175]: I0124 00:39:24.877412 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:39:24.878302 kubelet[3175]: I0124 00:39:24.878232 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:39:24.879033 kubelet[3175]: I0124 00:39:24.878989 3175 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" (UID: "9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:39:24.950605 kubelet[3175]: I0124 00:39:24.950556 3175 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-run\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950605 kubelet[3175]: I0124 00:39:24.950600 3175 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7f54\" (UniqueName: \"kubernetes.io/projected/1cb6defc-5874-469b-82f8-9f943c362c4d-kube-api-access-q7f54\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950605 kubelet[3175]: I0124 00:39:24.950613 3175 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cni-path\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950605 kubelet[3175]: I0124 00:39:24.950622 3175 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-xtables-lock\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950854 kubelet[3175]: I0124 00:39:24.950632 3175 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gqhhw\" (UniqueName: \"kubernetes.io/projected/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-kube-api-access-gqhhw\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950854 kubelet[3175]: I0124 00:39:24.950640 3175 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-clustermesh-secrets\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950854 kubelet[3175]: I0124 00:39:24.950649 3175 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-hostproc\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950854 kubelet[3175]: I0124 00:39:24.950657 3175 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cb6defc-5874-469b-82f8-9f943c362c4d-cilium-config-path\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950854 kubelet[3175]: I0124 00:39:24.950664 3175 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-host-proc-sys-kernel\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950854 kubelet[3175]: I0124 00:39:24.950672 3175 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-bpf-maps\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.950854 kubelet[3175]: I0124 00:39:24.950680 3175 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-lib-modules\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 
00:39:24.950854 kubelet[3175]: I0124 00:39:24.950687 3175 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-host-proc-sys-net\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.951049 kubelet[3175]: I0124 00:39:24.950694 3175 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-cgroup\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.951049 kubelet[3175]: I0124 00:39:24.950701 3175 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-hubble-tls\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.951049 kubelet[3175]: I0124 00:39:24.950710 3175 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-etc-cni-netd\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:24.951049 kubelet[3175]: I0124 00:39:24.950723 3175 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53-cilium-config-path\") on node \"ip-172-31-16-201\" DevicePath \"\"" Jan 24 00:39:25.334848 systemd[1]: Removed slice kubepods-besteffort-pod1cb6defc_5874_469b_82f8_9f943c362c4d.slice - libcontainer container kubepods-besteffort-pod1cb6defc_5874_469b_82f8_9f943c362c4d.slice. Jan 24 00:39:25.336015 systemd[1]: Removed slice kubepods-burstable-pod9a8bb0af_e9f5_40ee_9b0b_2efb099f3b53.slice - libcontainer container kubepods-burstable-pod9a8bb0af_e9f5_40ee_9b0b_2efb099f3b53.slice. Jan 24 00:39:25.336095 systemd[1]: kubepods-burstable-pod9a8bb0af_e9f5_40ee_9b0b_2efb099f3b53.slice: Consumed 8.455s CPU time. Jan 24 00:39:25.509164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262-rootfs.mount: Deactivated successfully. Jan 24 00:39:25.509297 systemd[1]: var-lib-kubelet-pods-1cb6defc\x2d5874\x2d469b\x2d82f8\x2d9f943c362c4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq7f54.mount: Deactivated successfully. Jan 24 00:39:25.509368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14-rootfs.mount: Deactivated successfully. Jan 24 00:39:25.509423 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14-shm.mount: Deactivated successfully. Jan 24 00:39:25.509490 systemd[1]: var-lib-kubelet-pods-9a8bb0af\x2de9f5\x2d40ee\x2d9b0b\x2d2efb099f3b53-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgqhhw.mount: Deactivated successfully. Jan 24 00:39:25.509559 systemd[1]: var-lib-kubelet-pods-9a8bb0af\x2de9f5\x2d40ee\x2d9b0b\x2d2efb099f3b53-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 24 00:39:25.509620 systemd[1]: var-lib-kubelet-pods-9a8bb0af\x2de9f5\x2d40ee\x2d9b0b\x2d2efb099f3b53-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 24 00:39:25.708641 kubelet[3175]: I0124 00:39:25.708578 3175 scope.go:117] "RemoveContainer" containerID="b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba" Jan 24 00:39:25.738748 containerd[1986]: time="2026-01-24T00:39:25.738605074Z" level=info msg="RemoveContainer for \"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\"" Jan 24 00:39:25.748136 containerd[1986]: time="2026-01-24T00:39:25.747964182Z" level=info msg="RemoveContainer for \"b491fcd1442455f1e740cb260b3095f3368fe7784684c166c39e81034c65a4ba\" returns successfully" Jan 24 00:39:25.748620 kubelet[3175]: I0124 00:39:25.748554 3175 scope.go:117] "RemoveContainer" containerID="162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150" Jan 24 00:39:25.750898 containerd[1986]: time="2026-01-24T00:39:25.750664689Z" level=info msg="RemoveContainer for \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\"" Jan 24 00:39:25.756718 containerd[1986]: time="2026-01-24T00:39:25.756672279Z" level=info msg="RemoveContainer for \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\" returns successfully" Jan 24 00:39:25.757600 kubelet[3175]: I0124 00:39:25.757146 3175 scope.go:117] "RemoveContainer" containerID="6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15" Jan 24 00:39:25.758723 containerd[1986]: time="2026-01-24T00:39:25.758687632Z" level=info msg="RemoveContainer for \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\"" Jan 24 00:39:25.765455 containerd[1986]: time="2026-01-24T00:39:25.765415904Z" level=info msg="RemoveContainer for \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\" returns successfully" Jan 24 00:39:25.765779 kubelet[3175]: I0124 00:39:25.765755 3175 scope.go:117] "RemoveContainer" containerID="5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1" Jan 24 00:39:25.766711 containerd[1986]: time="2026-01-24T00:39:25.766681780Z" level=info msg="RemoveContainer for \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\"" Jan 24 00:39:25.773701 containerd[1986]: time="2026-01-24T00:39:25.773663348Z" level=info msg="RemoveContainer for \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\" returns successfully" Jan 24 00:39:25.773911 kubelet[3175]: I0124 00:39:25.773888 3175 scope.go:117] "RemoveContainer" containerID="dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31" Jan 24 00:39:25.775102 containerd[1986]: time="2026-01-24T00:39:25.775068498Z" level=info msg="RemoveContainer for \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\"" Jan 24 00:39:25.780736 containerd[1986]: time="2026-01-24T00:39:25.780685730Z" level=info msg="RemoveContainer for \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\" returns successfully" Jan 24 00:39:25.780937 kubelet[3175]: I0124 00:39:25.780912 3175 scope.go:117] "RemoveContainer" containerID="8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c" Jan 24 00:39:25.782026 containerd[1986]: time="2026-01-24T00:39:25.781995610Z" level=info msg="RemoveContainer for \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\"" Jan 24 00:39:25.790327 containerd[1986]: time="2026-01-24T00:39:25.790287338Z" level=info msg="RemoveContainer for \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\" returns successfully" Jan 24 00:39:25.790533 kubelet[3175]: I0124 00:39:25.790508 3175 scope.go:117] "RemoveContainer" 
containerID="162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150" Jan 24 00:39:25.802756 containerd[1986]: time="2026-01-24T00:39:25.794420971Z" level=error msg="ContainerStatus for \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\": not found" Jan 24 00:39:25.803086 kubelet[3175]: E0124 00:39:25.803053 3175 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\": not found" containerID="162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150" Jan 24 00:39:25.808757 kubelet[3175]: I0124 00:39:25.808237 3175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150"} err="failed to get container status \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\": rpc error: code = NotFound desc = an error occurred when try to find container \"162a0cfe5727e570a55f4e9df95ad29a2d2af75ee30efd0bdfbb62ed02843150\": not found" Jan 24 00:39:25.810053 kubelet[3175]: I0124 00:39:25.808840 3175 scope.go:117] "RemoveContainer" containerID="6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15" Jan 24 00:39:25.810053 kubelet[3175]: E0124 00:39:25.809374 3175 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\": not found" containerID="6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15" Jan 24 00:39:25.810053 kubelet[3175]: I0124 00:39:25.809397 3175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15"} err="failed to get container status \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\": not found" Jan 24 00:39:25.810053 kubelet[3175]: I0124 00:39:25.809414 3175 scope.go:117] "RemoveContainer" containerID="5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1" Jan 24 00:39:25.810053 kubelet[3175]: E0124 00:39:25.809705 3175 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\": not found" containerID="5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1" Jan 24 00:39:25.810053 kubelet[3175]: I0124 00:39:25.809723 3175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1"} err="failed to get container status \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\": rpc error: code = NotFound desc = an error occurred when try to find container \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\": not found" Jan 24 00:39:25.810053 kubelet[3175]: I0124 00:39:25.809737 3175 scope.go:117] "RemoveContainer" 
containerID="dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31" Jan 24 00:39:25.810290 containerd[1986]: time="2026-01-24T00:39:25.809225244Z" level=error msg="ContainerStatus for \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a507f63d59db1573d7b47bd48a0065bcde2edc5e9ed1e3cebc71d3e056bee15\": not found" Jan 24 00:39:25.810290 containerd[1986]: time="2026-01-24T00:39:25.809562526Z" level=error msg="ContainerStatus for \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5194c06db1c126335a4074441dc666c1cc8e23efbe14e8ee9837b7e66ccceae1\": not found" Jan 24 00:39:25.810290 containerd[1986]: time="2026-01-24T00:39:25.809920126Z" level=error msg="ContainerStatus for \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\": not found" Jan 24 00:39:25.810376 kubelet[3175]: E0124 00:39:25.810071 3175 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\": not found" containerID="dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31" Jan 24 00:39:25.810376 kubelet[3175]: I0124 00:39:25.810107 3175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31"} err="failed to get container status \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\": rpc error: code = NotFound desc = an error occurred when try to find container \"dca67a54f503d04fe76a5e0c0500a89c9a1d349ab19890f9fee25d3a304c0c31\": not found" Jan 24 00:39:25.810376 kubelet[3175]: I0124 00:39:25.810120 3175 scope.go:117] "RemoveContainer" containerID="8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c" Jan 24 00:39:25.810585 containerd[1986]: time="2026-01-24T00:39:25.810560586Z" level=error msg="ContainerStatus for \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\": not found" Jan 24 00:39:25.810784 kubelet[3175]: E0124 00:39:25.810766 3175 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\": not found" containerID="8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c" Jan 24 00:39:25.810849 kubelet[3175]: I0124 00:39:25.810785 3175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c"} err="failed to get container status \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a153658e3b19fe68b487259e6bb32b7b97f6a920f2fd33852ad269f8b488b1c\": not found" Jan 24 00:39:26.452038 kubelet[3175]: E0124 00:39:26.451977 3175 kubelet.go:3002] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 00:39:26.467474 sshd[4834]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:26.471293 systemd[1]: sshd@21-172.31.16.201:22-4.153.228.146:40536.service: Deactivated successfully. Jan 24 00:39:26.473983 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:39:26.474194 systemd[1]: session-22.scope: Consumed 1.067s CPU time. Jan 24 00:39:26.474960 systemd-logind[1958]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:39:26.476640 systemd-logind[1958]: Removed session 22. Jan 24 00:39:26.554588 systemd[1]: Started sshd@22-172.31.16.201:22-4.153.228.146:35362.service - OpenSSH per-connection server daemon (4.153.228.146:35362). Jan 24 00:39:27.032611 sshd[4998]: Accepted publickey for core from 4.153.228.146 port 35362 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:27.033284 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:27.045564 systemd-logind[1958]: New session 23 of user core. Jan 24 00:39:27.055518 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:39:27.205983 ntpd[1951]: Deleting interface #11 lxc_health, fe80::d064:c8ff:fe53:382a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs Jan 24 00:39:27.206361 ntpd[1951]: 24 Jan 00:39:27 ntpd[1951]: Deleting interface #11 lxc_health, fe80::d064:c8ff:fe53:382a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs Jan 24 00:39:27.321000 kubelet[3175]: I0124 00:39:27.320906 3175 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb6defc-5874-469b-82f8-9f943c362c4d" path="/var/lib/kubelet/pods/1cb6defc-5874-469b-82f8-9f943c362c4d/volumes" Jan 24 00:39:27.321716 kubelet[3175]: I0124 00:39:27.321693 3175 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" path="/var/lib/kubelet/pods/9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53/volumes" Jan 24 00:39:28.444154 kubelet[3175]: I0124 00:39:28.444114 3175 memory_manager.go:355] "RemoveStaleState removing state" podUID="1cb6defc-5874-469b-82f8-9f943c362c4d" containerName="cilium-operator" Jan 24 00:39:28.444154 kubelet[3175]: I0124 00:39:28.444157 3175 memory_manager.go:355] "RemoveStaleState removing state" podUID="9a8bb0af-e9f5-40ee-9b0b-2efb099f3b53" containerName="cilium-agent" Jan 24 00:39:28.493065 sshd[4998]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:28.493962 systemd[1]: Created slice kubepods-burstable-pod17ca3a38_dd95_4b7c_9d51_61b4402216f0.slice - libcontainer container kubepods-burstable-pod17ca3a38_dd95_4b7c_9d51_61b4402216f0.slice. Jan 24 00:39:28.504311 systemd-logind[1958]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:39:28.505882 systemd[1]: sshd@22-172.31.16.201:22-4.153.228.146:35362.service: Deactivated successfully. Jan 24 00:39:28.507917 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:39:28.508116 systemd[1]: session-23.scope: Consumed 1.045s CPU time. Jan 24 00:39:28.509359 systemd-logind[1958]: Removed session 23. 
Jan 24 00:39:28.585964 kubelet[3175]: I0124 00:39:28.585859 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-cni-path\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.585964 kubelet[3175]: I0124 00:39:28.585973 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-lib-modules\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586319 kubelet[3175]: I0124 00:39:28.585995 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-cilium-run\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586319 kubelet[3175]: I0124 00:39:28.586011 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-etc-cni-netd\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586319 kubelet[3175]: I0124 00:39:28.586027 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17ca3a38-dd95-4b7c-9d51-61b4402216f0-cilium-config-path\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586319 kubelet[3175]: I0124 00:39:28.586045 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-cilium-cgroup\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586319 kubelet[3175]: I0124 00:39:28.586064 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-bpf-maps\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586319 kubelet[3175]: I0124 00:39:28.586080 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-host-proc-sys-net\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586588 kubelet[3175]: I0124 00:39:28.586095 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-host-proc-sys-kernel\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586588 kubelet[3175]: I0124 00:39:28.586111 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/17ca3a38-dd95-4b7c-9d51-61b4402216f0-hubble-tls\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586588 kubelet[3175]: I0124 00:39:28.586128 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17ca3a38-dd95-4b7c-9d51-61b4402216f0-cilium-ipsec-secrets\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586588 kubelet[3175]: I0124 00:39:28.586148 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf9n7\" (UniqueName: \"kubernetes.io/projected/17ca3a38-dd95-4b7c-9d51-61b4402216f0-kube-api-access-qf9n7\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586588 kubelet[3175]: I0124 00:39:28.586169 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-hostproc\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586588 kubelet[3175]: I0124 00:39:28.586186 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17ca3a38-dd95-4b7c-9d51-61b4402216f0-xtables-lock\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.586749 kubelet[3175]: I0124 00:39:28.586203 3175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17ca3a38-dd95-4b7c-9d51-61b4402216f0-clustermesh-secrets\") pod \"cilium-rqcr8\" (UID: \"17ca3a38-dd95-4b7c-9d51-61b4402216f0\") " pod="kube-system/cilium-rqcr8" Jan 24 00:39:28.598577 systemd[1]: Started sshd@23-172.31.16.201:22-4.153.228.146:35364.service - OpenSSH per-connection server daemon (4.153.228.146:35364). Jan 24 00:39:28.801774 containerd[1986]: time="2026-01-24T00:39:28.801661453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqcr8,Uid:17ca3a38-dd95-4b7c-9d51-61b4402216f0,Namespace:kube-system,Attempt:0,}" Jan 24 00:39:28.831755 containerd[1986]: time="2026-01-24T00:39:28.831569413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:28.832519 containerd[1986]: time="2026-01-24T00:39:28.831738707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:28.832519 containerd[1986]: time="2026-01-24T00:39:28.832274615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:28.832628 containerd[1986]: time="2026-01-24T00:39:28.832358758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:28.853469 systemd[1]: Started cri-containerd-9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d.scope - libcontainer container 9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d. 
Jan 24 00:39:28.880686 containerd[1986]: time="2026-01-24T00:39:28.880643476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqcr8,Uid:17ca3a38-dd95-4b7c-9d51-61b4402216f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\"" Jan 24 00:39:28.885046 containerd[1986]: time="2026-01-24T00:39:28.885003694Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:39:28.908003 containerd[1986]: time="2026-01-24T00:39:28.907950056Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eac1d8191bf9a8c09b255378eba90bd30034d0ada2320e2aa4bbaff08ce0b179\"" Jan 24 00:39:28.912269 containerd[1986]: time="2026-01-24T00:39:28.912197906Z" level=info msg="StartContainer for \"eac1d8191bf9a8c09b255378eba90bd30034d0ada2320e2aa4bbaff08ce0b179\"" Jan 24 00:39:28.941800 systemd[1]: Started cri-containerd-eac1d8191bf9a8c09b255378eba90bd30034d0ada2320e2aa4bbaff08ce0b179.scope - libcontainer container eac1d8191bf9a8c09b255378eba90bd30034d0ada2320e2aa4bbaff08ce0b179. Jan 24 00:39:28.971631 containerd[1986]: time="2026-01-24T00:39:28.971510890Z" level=info msg="StartContainer for \"eac1d8191bf9a8c09b255378eba90bd30034d0ada2320e2aa4bbaff08ce0b179\" returns successfully" Jan 24 00:39:28.991753 systemd[1]: cri-containerd-eac1d8191bf9a8c09b255378eba90bd30034d0ada2320e2aa4bbaff08ce0b179.scope: Deactivated successfully. Jan 24 00:39:29.053140 containerd[1986]: time="2026-01-24T00:39:29.052975633Z" level=info msg="shim disconnected" id=eac1d8191bf9a8c09b255378eba90bd30034d0ada2320e2aa4bbaff08ce0b179 namespace=k8s.io Jan 24 00:39:29.053140 containerd[1986]: time="2026-01-24T00:39:29.053034394Z" level=warning msg="cleaning up after shim disconnected" id=eac1d8191bf9a8c09b255378eba90bd30034d0ada2320e2aa4bbaff08ce0b179 namespace=k8s.io Jan 24 00:39:29.053140 containerd[1986]: time="2026-01-24T00:39:29.053042581Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:29.118481 sshd[5012]: Accepted publickey for core from 4.153.228.146 port 35364 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:29.120054 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:29.125090 systemd-logind[1958]: New session 24 of user core. Jan 24 00:39:29.132671 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 00:39:29.491556 sshd[5012]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:29.494758 systemd[1]: sshd@23-172.31.16.201:22-4.153.228.146:35364.service: Deactivated successfully. Jan 24 00:39:29.497080 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:39:29.498700 systemd-logind[1958]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:39:29.500168 systemd-logind[1958]: Removed session 24. Jan 24 00:39:29.574606 systemd[1]: Started sshd@24-172.31.16.201:22-4.153.228.146:35380.service - OpenSSH per-connection server daemon (4.153.228.146:35380). 
Jan 24 00:39:29.705777 containerd[1986]: time="2026-01-24T00:39:29.705622326Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:39:29.728027 containerd[1986]: time="2026-01-24T00:39:29.727982096Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625\"" Jan 24 00:39:29.729408 containerd[1986]: time="2026-01-24T00:39:29.728630817Z" level=info msg="StartContainer for \"70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625\"" Jan 24 00:39:29.766559 systemd[1]: Started cri-containerd-70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625.scope - libcontainer container 70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625. Jan 24 00:39:29.804231 containerd[1986]: time="2026-01-24T00:39:29.804190786Z" level=info msg="StartContainer for \"70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625\" returns successfully" Jan 24 00:39:29.813118 systemd[1]: cri-containerd-70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625.scope: Deactivated successfully. Jan 24 00:39:29.852329 containerd[1986]: time="2026-01-24T00:39:29.852236698Z" level=info msg="shim disconnected" id=70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625 namespace=k8s.io Jan 24 00:39:29.852329 containerd[1986]: time="2026-01-24T00:39:29.852304645Z" level=warning msg="cleaning up after shim disconnected" id=70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625 namespace=k8s.io Jan 24 00:39:29.852329 containerd[1986]: time="2026-01-24T00:39:29.852313723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:30.053299 sshd[5127]: Accepted publickey for core from 4.153.228.146 port 35380 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:30.055081 sshd[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:30.063540 systemd-logind[1958]: New session 25 of user core. Jan 24 00:39:30.068525 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 00:39:30.698549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70c2b1ff95649b623123db3d65839149250b07c3b6f00fd44cb51960d707a625-rootfs.mount: Deactivated successfully. Jan 24 00:39:30.708836 containerd[1986]: time="2026-01-24T00:39:30.708784806Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:39:30.734005 containerd[1986]: time="2026-01-24T00:39:30.733959957Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712\"" Jan 24 00:39:30.734607 containerd[1986]: time="2026-01-24T00:39:30.734562105Z" level=info msg="StartContainer for \"352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712\"" Jan 24 00:39:30.775449 systemd[1]: Started cri-containerd-352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712.scope - libcontainer container 352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712. 
Jan 24 00:39:30.810923 containerd[1986]: time="2026-01-24T00:39:30.810813022Z" level=info msg="StartContainer for \"352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712\" returns successfully" Jan 24 00:39:30.820185 systemd[1]: cri-containerd-352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712.scope: Deactivated successfully. Jan 24 00:39:30.861332 containerd[1986]: time="2026-01-24T00:39:30.861276571Z" level=info msg="shim disconnected" id=352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712 namespace=k8s.io Jan 24 00:39:30.861332 containerd[1986]: time="2026-01-24T00:39:30.861324343Z" level=warning msg="cleaning up after shim disconnected" id=352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712 namespace=k8s.io Jan 24 00:39:30.861332 containerd[1986]: time="2026-01-24T00:39:30.861332611Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:31.453690 kubelet[3175]: E0124 00:39:31.453640 3175 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 00:39:31.698700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-352edf4dec509680f504428a8de7fdf4d4025fbc21ff42338cc3e1a337583712-rootfs.mount: Deactivated successfully. Jan 24 00:39:31.713142 containerd[1986]: time="2026-01-24T00:39:31.712997077Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:39:31.737786 containerd[1986]: time="2026-01-24T00:39:31.737734205Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373\"" Jan 24 00:39:31.739439 containerd[1986]: time="2026-01-24T00:39:31.738237996Z" level=info msg="StartContainer for \"7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373\"" Jan 24 00:39:31.779512 systemd[1]: Started cri-containerd-7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373.scope - libcontainer container 7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373. Jan 24 00:39:31.813781 systemd[1]: cri-containerd-7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373.scope: Deactivated successfully. 
Jan 24 00:39:31.822783 containerd[1986]: time="2026-01-24T00:39:31.822025874Z" level=info msg="StartContainer for \"7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373\" returns successfully" Jan 24 00:39:31.831282 containerd[1986]: time="2026-01-24T00:39:31.817376795Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17ca3a38_dd95_4b7c_9d51_61b4402216f0.slice/cri-containerd-7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373.scope/memory.events\": no such file or directory" Jan 24 00:39:31.869368 containerd[1986]: time="2026-01-24T00:39:31.869316366Z" level=info msg="shim disconnected" id=7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373 namespace=k8s.io Jan 24 00:39:31.869655 containerd[1986]: time="2026-01-24T00:39:31.869571954Z" level=warning msg="cleaning up after shim disconnected" id=7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373 namespace=k8s.io Jan 24 00:39:31.869655 containerd[1986]: time="2026-01-24T00:39:31.869594873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:32.697905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c4102feaa3a6b72640a39a061b4991310c9cda5d62bbffaf514170dae8e0373-rootfs.mount: Deactivated successfully. Jan 24 00:39:32.717268 containerd[1986]: time="2026-01-24T00:39:32.716564375Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 00:39:32.747434 containerd[1986]: time="2026-01-24T00:39:32.747373140Z" level=info msg="CreateContainer within sandbox \"9865327a024852a9b748669ffd460ecaf7b9e8b2615407ab701f58501956ef1d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34f55a78be963b54ec96058ad7494165d909c9b0045f8dc6c3aa603a3a8ecf0d\"" Jan 24 00:39:32.748193 containerd[1986]: time="2026-01-24T00:39:32.748162379Z" level=info msg="StartContainer for \"34f55a78be963b54ec96058ad7494165d909c9b0045f8dc6c3aa603a3a8ecf0d\"" Jan 24 00:39:32.778689 systemd[1]: Started cri-containerd-34f55a78be963b54ec96058ad7494165d909c9b0045f8dc6c3aa603a3a8ecf0d.scope - libcontainer container 34f55a78be963b54ec96058ad7494165d909c9b0045f8dc6c3aa603a3a8ecf0d. 
Jan 24 00:39:32.822524 containerd[1986]: time="2026-01-24T00:39:32.821769850Z" level=info msg="StartContainer for \"34f55a78be963b54ec96058ad7494165d909c9b0045f8dc6c3aa603a3a8ecf0d\" returns successfully" Jan 24 00:39:33.526985 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 24 00:39:33.758398 kubelet[3175]: I0124 00:39:33.757199 3175 setters.go:602] "Node became not ready" node="ip-172-31-16-201" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:39:33Z","lastTransitionTime":"2026-01-24T00:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 24 00:39:34.318205 kubelet[3175]: E0124 00:39:34.318162 3175 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-2bg9w" podUID="9822af21-e5ad-41e4-9959-756eaf4e6bb8" Jan 24 00:39:36.318054 kubelet[3175]: E0124 00:39:36.317994 3175 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-2bg9w" podUID="9822af21-e5ad-41e4-9959-756eaf4e6bb8" Jan 24 00:39:36.455898 systemd-networkd[1899]: lxc_health: Link UP Jan 24 00:39:36.459066 (udev-worker)[5872]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:39:36.462483 systemd-networkd[1899]: lxc_health: Gained carrier Jan 24 00:39:36.857305 kubelet[3175]: I0124 00:39:36.857184 3175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rqcr8" podStartSLOduration=8.857161348 podStartE2EDuration="8.857161348s" podCreationTimestamp="2026-01-24 00:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:33.740026371 +0000 UTC m=+102.615426366" watchObservedRunningTime="2026-01-24 00:39:36.857161348 +0000 UTC m=+105.732561340" Jan 24 00:39:37.103884 systemd[1]: run-containerd-runc-k8s.io-34f55a78be963b54ec96058ad7494165d909c9b0045f8dc6c3aa603a3a8ecf0d-runc.MX70b6.mount: Deactivated successfully. Jan 24 00:39:38.441444 systemd-networkd[1899]: lxc_health: Gained IPv6LL Jan 24 00:39:39.360222 systemd[1]: run-containerd-runc-k8s.io-34f55a78be963b54ec96058ad7494165d909c9b0045f8dc6c3aa603a3a8ecf0d-runc.LiOVg7.mount: Deactivated successfully. Jan 24 00:39:41.206118 ntpd[1951]: Listen normally on 14 lxc_health [fe80::f066:2ff:fe29:6c37%14]:123 Jan 24 00:39:41.206769 ntpd[1951]: 24 Jan 00:39:41 ntpd[1951]: Listen normally on 14 lxc_health [fe80::f066:2ff:fe29:6c37%14]:123 Jan 24 00:39:41.614569 systemd[1]: run-containerd-runc-k8s.io-34f55a78be963b54ec96058ad7494165d909c9b0045f8dc6c3aa603a3a8ecf0d-runc.4129Wh.mount: Deactivated successfully. Jan 24 00:39:41.829573 sshd[5127]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:41.835055 systemd[1]: sshd@24-172.31.16.201:22-4.153.228.146:35380.service: Deactivated successfully. Jan 24 00:39:41.837326 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:39:41.839436 systemd-logind[1958]: Session 25 logged out. Waiting for processes to exit. 
Jan 24 00:39:41.840926 systemd-logind[1958]: Removed session 25. Jan 24 00:39:51.341537 containerd[1986]: time="2026-01-24T00:39:51.341383377Z" level=info msg="StopPodSandbox for \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\"" Jan 24 00:39:51.341537 containerd[1986]: time="2026-01-24T00:39:51.341473911Z" level=info msg="TearDown network for sandbox \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\" successfully" Jan 24 00:39:51.341537 containerd[1986]: time="2026-01-24T00:39:51.341484207Z" level=info msg="StopPodSandbox for \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\" returns successfully" Jan 24 00:39:51.342009 containerd[1986]: time="2026-01-24T00:39:51.341905623Z" level=info msg="RemovePodSandbox for \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\"" Jan 24 00:39:51.342009 containerd[1986]: time="2026-01-24T00:39:51.341930020Z" level=info msg="Forcibly stopping sandbox \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\"" Jan 24 00:39:51.342009 containerd[1986]: time="2026-01-24T00:39:51.341985011Z" level=info msg="TearDown network for sandbox \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\" successfully" Jan 24 00:39:51.347032 containerd[1986]: time="2026-01-24T00:39:51.346841159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:51.347032 containerd[1986]: time="2026-01-24T00:39:51.346919139Z" level=info msg="RemovePodSandbox \"61e197a8e44a3c22ac66b6660739015298a45917aa05b96b588b3565f7ab0262\" returns successfully" Jan 24 00:39:51.347482 containerd[1986]: time="2026-01-24T00:39:51.347452586Z" level=info msg="StopPodSandbox for \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\"" Jan 24 00:39:51.347589 containerd[1986]: time="2026-01-24T00:39:51.347552588Z" level=info msg="TearDown network for sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" successfully" Jan 24 00:39:51.347589 containerd[1986]: time="2026-01-24T00:39:51.347571165Z" level=info msg="StopPodSandbox for \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" returns successfully" Jan 24 00:39:51.347983 containerd[1986]: time="2026-01-24T00:39:51.347932802Z" level=info msg="RemovePodSandbox for \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\"" Jan 24 00:39:51.347983 containerd[1986]: time="2026-01-24T00:39:51.347963890Z" level=info msg="Forcibly stopping sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\"" Jan 24 00:39:51.348163 containerd[1986]: time="2026-01-24T00:39:51.348028944Z" level=info msg="TearDown network for sandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" successfully" Jan 24 00:39:51.353661 containerd[1986]: time="2026-01-24T00:39:51.353509192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 00:39:51.353661 containerd[1986]: time="2026-01-24T00:39:51.353571181Z" level=info msg="RemovePodSandbox \"d31d0fccdd93aa7168804b807ce99f77f751feb9d431f08c9fcc551aa9230f14\" returns successfully" Jan 24 00:39:55.533915 systemd[1]: cri-containerd-9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468.scope: Deactivated successfully. Jan 24 00:39:55.535396 systemd[1]: cri-containerd-9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468.scope: Consumed 4.475s CPU time, 42.5M memory peak, 0B memory swap peak. Jan 24 00:39:55.563242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468-rootfs.mount: Deactivated successfully. Jan 24 00:39:55.589108 containerd[1986]: time="2026-01-24T00:39:55.589057223Z" level=info msg="shim disconnected" id=9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468 namespace=k8s.io Jan 24 00:39:55.589777 containerd[1986]: time="2026-01-24T00:39:55.589148806Z" level=warning msg="cleaning up after shim disconnected" id=9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468 namespace=k8s.io Jan 24 00:39:55.589777 containerd[1986]: time="2026-01-24T00:39:55.589158174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:39:55.767144 kubelet[3175]: I0124 00:39:55.766912 3175 scope.go:117] "RemoveContainer" containerID="9583fe07cb9db0883485c8a48e01080405a515308ace885294dd0041febbc468" Jan 24 00:39:55.782084 containerd[1986]: time="2026-01-24T00:39:55.781855583Z" level=info msg="CreateContainer within sandbox \"323103ff556b3d5d7bdc62dcdda1c48946517a3b3abf4e630e956e502cfbf57f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 24 00:39:55.807264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715052741.mount: Deactivated successfully. Jan 24 00:39:55.812665 containerd[1986]: time="2026-01-24T00:39:55.812623876Z" level=info msg="CreateContainer within sandbox \"323103ff556b3d5d7bdc62dcdda1c48946517a3b3abf4e630e956e502cfbf57f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cb2f9c7fe818e616c1a1a83ec7449772a973e64a02094f7a187b909a6a1caed0\"" Jan 24 00:39:55.813278 containerd[1986]: time="2026-01-24T00:39:55.813218838Z" level=info msg="StartContainer for \"cb2f9c7fe818e616c1a1a83ec7449772a973e64a02094f7a187b909a6a1caed0\"" Jan 24 00:39:55.855511 systemd[1]: Started cri-containerd-cb2f9c7fe818e616c1a1a83ec7449772a973e64a02094f7a187b909a6a1caed0.scope - libcontainer container cb2f9c7fe818e616c1a1a83ec7449772a973e64a02094f7a187b909a6a1caed0. Jan 24 00:39:55.904876 containerd[1986]: time="2026-01-24T00:39:55.904823015Z" level=info msg="StartContainer for \"cb2f9c7fe818e616c1a1a83ec7449772a973e64a02094f7a187b909a6a1caed0\" returns successfully" Jan 24 00:39:56.563885 systemd[1]: run-containerd-runc-k8s.io-cb2f9c7fe818e616c1a1a83ec7449772a973e64a02094f7a187b909a6a1caed0-runc.9kkJIR.mount: Deactivated successfully. Jan 24 00:40:01.275581 systemd[1]: cri-containerd-a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb.scope: Deactivated successfully. Jan 24 00:40:01.276923 systemd[1]: cri-containerd-a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb.scope: Consumed 3.044s CPU time, 23.3M memory peak, 0B memory swap peak. Jan 24 00:40:01.342309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb-rootfs.mount: Deactivated successfully. 
Jan 24 00:40:01.499029 containerd[1986]: time="2026-01-24T00:40:01.483423452Z" level=info msg="shim disconnected" id=a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb namespace=k8s.io Jan 24 00:40:01.499029 containerd[1986]: time="2026-01-24T00:40:01.483502292Z" level=warning msg="cleaning up after shim disconnected" id=a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb namespace=k8s.io Jan 24 00:40:01.499029 containerd[1986]: time="2026-01-24T00:40:01.483515897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:40:01.561372 containerd[1986]: time="2026-01-24T00:40:01.561019333Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:40:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:40:01.813796 kubelet[3175]: I0124 00:40:01.811830 3175 scope.go:117] "RemoveContainer" containerID="a5b21d01f2c8da37c4a3d4c2b42017d844c4ef9c8354432fd9540d293805d1bb" Jan 24 00:40:01.859016 containerd[1986]: time="2026-01-24T00:40:01.858054348Z" level=info msg="CreateContainer within sandbox \"91c9b265772f5952221811a38c769586d14591006fddc05cae96e6e1c159cf0f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 24 00:40:02.077659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038042585.mount: Deactivated successfully. Jan 24 00:40:02.186654 containerd[1986]: time="2026-01-24T00:40:02.186610022Z" level=info msg="CreateContainer within sandbox \"91c9b265772f5952221811a38c769586d14591006fddc05cae96e6e1c159cf0f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8bbaa2f4d04d3622ed5c372fda56223a0b559533fbfa052b4d0afe43b4a00cfd\"" Jan 24 00:40:02.187897 containerd[1986]: time="2026-01-24T00:40:02.187749718Z" level=info msg="StartContainer for \"8bbaa2f4d04d3622ed5c372fda56223a0b559533fbfa052b4d0afe43b4a00cfd\"" Jan 24 00:40:02.390933 systemd[1]: run-containerd-runc-k8s.io-8bbaa2f4d04d3622ed5c372fda56223a0b559533fbfa052b4d0afe43b4a00cfd-runc.ztsuqZ.mount: Deactivated successfully. Jan 24 00:40:02.397895 systemd[1]: Started cri-containerd-8bbaa2f4d04d3622ed5c372fda56223a0b559533fbfa052b4d0afe43b4a00cfd.scope - libcontainer container 8bbaa2f4d04d3622ed5c372fda56223a0b559533fbfa052b4d0afe43b4a00cfd. Jan 24 00:40:02.583220 containerd[1986]: time="2026-01-24T00:40:02.583166886Z" level=info msg="StartContainer for \"8bbaa2f4d04d3622ed5c372fda56223a0b559533fbfa052b4d0afe43b4a00cfd\" returns successfully" Jan 24 00:40:03.306967 kubelet[3175]: E0124 00:40:03.306130 3175 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-201?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:40:13.307561 kubelet[3175]: E0124 00:40:13.307156 3175 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-201?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"